{{short description|The process of analyzing data to discover useful information and support decision-making}}
{{Data Visualization}}
{{Computational physics}}

'''Data analysis''' is the process of inspecting, [[Data cleansing|cleansing]], [[Data transformation|transforming]], and [[Data modeling|modeling]] [[data]] with the goal of discovering useful information, informing conclusions, and supporting [[decision-making]].<ref name="Auerbach Publications">{{Citation|title=Transforming Unstructured Data into Useful Information|date=2014-03-12|url=http://dx.doi.org/10.1201/b16666-14|work=Big Data, Mining, and Analytics|pages=227–246|publisher=Auerbach Publications|doi=10.1201/b16666-14|isbn=978-0-429-09529-0|access-date=2021-05-29}}</ref> Data analysis has multiple facets and approaches, encompassing diverse techniques under a variety of names, and is used in different business, science, and social science domains.<ref>{{Citation|title=The Multiple Facets of Correlation Functions|url=http://dx.doi.org/10.1017/9781108241922.013|work=Data Analysis Techniques for Physical Scientists|year=2017|pages=526–576|publisher=Cambridge University Press|doi=10.1017/9781108241922.013|isbn=978-1-108-41678-8|access-date=2021-05-29}}</ref> In today's business world, data analysis plays a role in making decisions more scientific and helping businesses operate more effectively.<ref>Xia, B. S., & Gong, P. (2015). Review of business intelligence through data analysis. ''Benchmarking'', ''21''(2), 300-311. {{doi|10.1108/BIJ-08-2012-0050}}</ref>

[[Data mining]] is a particular data analysis technique that focuses on statistical modeling and knowledge discovery for predictive rather than purely descriptive purposes, while [[business intelligence]] covers data analysis that relies heavily on aggregation, focusing mainly on business information.<ref>[https://web.archive.org/web/20171018181046/https://spotlessdata.com/blog/exploring-data-analysis Exploring Data Analysis]</ref> In statistical applications, data analysis can be divided into [[descriptive statistics]], [[exploratory data analysis]] (EDA), and [[Statistical hypothesis testing|confirmatory data analysis]] (CDA).<ref>{{Citation|title=Data Coding and Exploratory Analysis (EDA) Rules for Data Coding Exploratory Data Analysis (EDA) Statistical Assumptions|date=2004-08-16|url=http://dx.doi.org/10.4324/9781410611420-6|work=SPSS for Intermediate Statistics|pages=42–67|publisher=Routledge|doi=10.4324/9781410611420-6|isbn=978-1-4106-1142-0|access-date=2021-05-29}}</ref> EDA focuses on discovering new features in the data while CDA focuses on confirming or falsifying existing [[hypotheses]].<ref>{{Cite journal|date=2014-10-01|title=New European ICT call focuses on PICs, lasers, data transfer|url=http://dx.doi.org/10.1117/2.4201410.10|journal=SPIE Professional|doi=10.1117/2.4201410.10|issn=1994-4403|last1=Spie }}</ref><ref>{{Cite book|last1=Samandar|first1=Petersson|first2=Sofia|last2=Svantesson|title=Skapandet av förtroende inom eWOM : En studie av profilbildens effekt ur ett könsperspektiv|date=2017|publisher=Högskolan i Gävle, Företagsekonomi|oclc=1233454128}}</ref> [[Predictive analytics]] focuses on the application of statistical models for predictive forecasting or classification, while [[text analytics]] applies statistical, linguistic, and structural techniques to extract and classify information from textual sources, a species of [[unstructured data]]. All of the above are varieties of data analysis.<ref>{{Cite journal|last=Goodnight|first=James|date=2011-01-13|title=The forecast for predictive analytics: hot and getting hotter|url=http://dx.doi.org/10.1002/sam.10106|journal=Statistical Analysis and Data Mining: The ASA Data Science Journal|volume=4|issue=1|pages=9–10|doi=10.1002/sam.10106|s2cid=38571193 |issn=1932-1864}}</ref>

[[Data integration]] is a precursor to data analysis, and data analysis is closely linked to [[Data and information visualization|data visualization]] and data dissemination.<ref>{{Cite book|last=Sherman|first=Rick|url=https://www.worldcat.org/oclc/894555128|title=Business intelligence guidebook: from data integration to analytics|date=4 November 2014|isbn=978-0-12-411528-6|location=Amsterdam|oclc=894555128}}</ref>

==The process of data analysis==
[[File:Data visualization process v1.png|right|350px|thumb|Data science process flowchart from ''Doing Data Science'', by Schutt & O'Neil (2013)]]
''Analysis'' refers to dividing a whole into its separate components for individual examination.<ref>{{Citation|last=Field|first=John|title=Dividing listening into its components|url=http://dx.doi.org/10.1017/cbo9780511575945.008|work=Listening in the Language Classroom|year=2009|pages=96–109|place=Cambridge|publisher=Cambridge University Press|doi=10.1017/cbo9780511575945.008|isbn=978-0-511-57594-5|access-date=2021-05-29}}</ref> ''Data analysis'' is a [[Process theory|process]] for obtaining [[raw data]], and subsequently converting it into information useful for decision-making by users.<ref name="Auerbach Publications"/> ''Data'' is collected and analyzed to answer questions, test hypotheses, or disprove theories.<ref name="Judd and McClelland 1989">{{cite book
| last1 = Judd | first1 = Charles
| last2 = McCleland | first2 = Gary
| year = 1989
| title = Data Analysis | publisher = Harcourt Brace Jovanovich
| isbn = 0-15-516765-0
}}</ref>

Statistician [[John Tukey]] defined data analysis in 1961 as:<blockquote>"Procedures for analyzing data, techniques for interpreting the results of such procedures, ways of planning the gathering of data to make its analysis easier, more precise or more accurate, and all the machinery and results of (mathematical) statistics which apply to analyzing data."<ref>{{Cite journal |url=http://projecteuclid.org/download/pdf_1/euclid.aoms/1177704711 |title=The Future of Data Analysis |journal=The Annals of Mathematical Statistics |date=March 1962 |volume=33 |issue=1 |pages=1–67 |doi=10.1214/aoms/1177704711 |access-date=2015-01-01 |archive-date=2020-01-26 |archive-url=https://web.archive.org/web/20200126232007/https://projecteuclid.org/download/pdf_1/euclid.aoms/1177704711 |url-status=live |last1=Tukey |first1=John W. }}</ref></blockquote>There are several phases that can be distinguished, described below. The phases are [[Iteration|iterative]], in that feedback from later phases may result in additional work in earlier phases.<ref name="Schutt & O'Neil">{{cite book
| author1-last = Schutt | author1-first = Rachel
| author2-last = O'Neil | author2-first = Cathy | author2-link = Cathy O'Neil
| year = 2013
| title = Doing Data Science | publisher = [[O'Reilly Media]]
| isbn = 978-1-449-35865-5}}</ref> The [[Cross-industry standard process for data mining|CRISP framework]], used in [[data mining]], has similar steps.

===Data requirements===
The data necessary as inputs to the analysis are specified based upon the requirements of those directing the analysis (or the customers who will use the finished product of the analysis).<ref>{{Citation|title=USE OF THE DATA|date=2015-02-06|url=http://dx.doi.org/10.1002/9781118986370.ch18|work=Handbook of Petroleum Product Analysis|pages=296–303|place=Hoboken, NJ|publisher=John Wiley & Sons, Inc|doi=10.1002/9781118986370.ch18|isbn=978-1-118-98637-0|access-date=2021-05-29}}</ref><ref>{{Cite book|last=Ainsworth|first=Penne|title=Introduction to accounting : an integrated approach|date=20 May 2019|publisher=John Wiley & Sons |isbn=978-1-119-60014-5|oclc=1097366032}}</ref> The general type of entity upon which the data will be collected is referred to as an [[Statistical unit|experimental unit]] (e.g., a person or population of people). Specific variables regarding a population (e.g., age and income) may be specified and obtained. Data may be numerical or categorical (i.e., a text label for numbers).<ref name="Schutt & O'Neil"/>

===Data collection===
Data is collected from a variety of sources.<ref>{{Cite book|last=Margo|first=Robert A.|title=Wages and labor markets in the United States, 1820-1860|date=2000|publisher=University of Chicago Press|isbn=0-226-50507-3|oclc=41285104}}</ref><ref>{{Cite journal|title=Table 1: Data type and sources of data collected for this research.|journal=PeerJ|date=7 May 2021|volume=9|pages=e11387|doi=10.7717/peerj.11387/table-1|last1=Olusola|first1=Johnson Adedeji|last2=Shote|first2=Adebola Adekunle|last3=Ouigmane|first3=Abdellah|last4=Isaifan|first4=Rima J. |doi-access=free }}</ref> A [[List of datasets for machine-learning research|list of data sources]] is available for study and research. The requirements may be communicated by analysts to [[Data custodian|custodians]] of the data, such as [[Information systems technician|information technology personnel]] within an organization.<ref>{{Citation|last=MacPherson|first=Derek|title=Information Technology Analysts' Perspectives|date=2019-10-16|url=http://dx.doi.org/10.4324/9780429437564-12|work=Data Strategy in Colleges and Universities|pages=168–183|publisher=Routledge|doi=10.4324/9780429437564-12|isbn=978-0-429-43756-4|s2cid=211738958|access-date=2021-05-29}}</ref> '''Data collection''' or '''data gathering''' is the process of gathering and [[measuring]] [[information]] on targeted variables in an established system, which then enables one to answer relevant questions and evaluate outcomes. The data may also be collected from sensors in the environment, including traffic cameras, satellites, recording devices, etc. It may also be obtained through interviews, downloads from online sources, or by reading documentation.<ref name="Schutt & O'Neil"/>


===Data processing===
[[File:Relationship of data, information and intelligence.png|thumb|350px|The phases of the [[intelligence cycle]] used to convert raw information into actionable intelligence or knowledge are conceptually similar to the phases in data analysis.]]
Data, when initially obtained, must be processed or organized for analysis.<ref>{{Cite book|last=Nelson|first=Stephen L.|title=Excel data analysis for dummies|date=2014|publisher=Wiley|isbn=978-1-118-89810-9|oclc=877772392}}</ref><ref>{{Cite journal|title=Figure 3—source data 1. Raw and processed values obtained through qPCR.|date=30 August 2017|doi=10.7554/elife.28468.029 |doi-access=free }}</ref> For instance, this may involve placing data into rows and columns in a table format (known as [[data model|structured data]]) for further analysis, often through the use of spreadsheet or statistical software.<ref name="Schutt & O'Neil"/>
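
To illustrate this step, the following is a minimal sketch in Python using the [[pandas (software)|pandas]] library; the records and field names are invented for illustration, and any spreadsheet or statistical package could serve the same purpose.

<syntaxhighlight lang="python">
import pandas as pd

# Hypothetical raw records, e.g. transcribed from interviews or log files.
raw_records = [
    {"name": "Ana", "age": 34, "income": 52000},
    {"name": "Ben", "age": 29, "income": 48000},
    {"name": "Chi", "age": 41, "income": 61000},
]

# Organize the records into rows and columns (structured data)
# so they can be analyzed with spreadsheet or statistical software.
df = pd.DataFrame(raw_records)
print(df.dtypes)  # numerical vs. categorical (text) column types
print(df)
</syntaxhighlight>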


===Data cleaning===
{{Main|Data cleansing}}
Once processed and organized, the data may be incomplete, contain duplicates, or contain errors.<ref name="Bohannon">{{Cite journal|last=Bohannon|first=John|date=2016-02-24|title=Many surveys, about one in five, may contain fraudulent data|journal=Science|doi=10.1126/science.aaf4104|issn=0036-8075|doi-access=free}}</ref><ref>{{Cite book|first1=Garber|last1=Jeannie Scruggs|last2=Gross|first2=Monty|last3=Slonim|first3=Anthony D.|title=Avoiding common nursing errors|date=2010|publisher=Wolters Kluwer Health/Lippincott Williams & Wilkins|isbn=978-1-60547-087-0|oclc=338288678}}</ref> The need for ''data cleaning'' arises from problems in the way that data is entered and stored.<ref name="Bohannon"/> Data cleaning is the process of preventing and correcting these errors. Common tasks include record matching, identifying inaccuracy of data, assessing the overall quality of existing data, deduplication, and column segmentation.<ref>{{cite web|title=Data Cleaning|url=http://research.microsoft.com/en-us/projects/datacleaning/|publisher=Microsoft Research|access-date=26 October 2013|archive-date=29 October 2013|archive-url=https://web.archive.org/web/20131029200356/http://research.microsoft.com/en-us/projects/datacleaning/|url-status=live}}</ref> Such data problems can also be identified through a variety of analytical techniques. For example, with financial information, the totals for particular variables may be compared against separately published numbers that are believed to be reliable.<ref>{{Cite journal|last1=Hancock|first1=R.G.V.|last2=Carter|first2=Tristan|date=February 2010|title=How reliable are our published archaeometric analyses? Effects of analytical techniques through time on the elemental analysis of obsidians|url=http://dx.doi.org/10.1016/j.jas.2009.10.004|journal=Journal of Archaeological Science|volume=37|issue=2|pages=243–250|doi=10.1016/j.jas.2009.10.004|bibcode=2010JArSc..37..243H |issn=0305-4403}}</ref><ref name="Koomey1">{{Cite web |url=http://www.perceptualedge.com/articles/b-eye/quantitative_data.pdf |title=Perceptual Edge-Jonathan Koomey-Best practices for understanding quantitative data-February 14, 2006 |access-date=November 12, 2014 |archive-date=October 5, 2014 |archive-url=https://web.archive.org/web/20141005075112/http://www.perceptualedge.com/articles/b-eye/quantitative_data.pdf |url-status=live }}</ref> Unusual amounts, above or below predetermined thresholds, may also be reviewed.
There are several types of data cleaning, depending on the type of data in the set; this could be phone numbers, email addresses, employers, or other values.<ref>{{Cite journal|last1=Peleg|first1=Roni|last2=Avdalimov|first2=Angelika|last3=Freud|first3=Tamar|date=2011-03-23|title=Providing cell phone numbers and email addresses to Patients: the physician's perspective|journal=BMC Research Notes|volume=4|issue=1|page=76|doi=10.1186/1756-0500-4-76|pmid=21426591|issn=1756-0500|pmc=3076270 |doi-access=free }}</ref><ref>{{Cite book|last=Goodman|first=Lenn Evan|title=Judaism, human rights, and human values|date=1998|publisher=Oxford University Press|isbn=0-585-24568-1|oclc=45733915}}</ref> Quantitative data methods for outlier detection can be used to get rid of data that appears to have a higher likelihood of being input incorrectly.<ref>{{Cite journal|title=Blind joint maximum likelihood channel estimation and data detection for single-input multiple-output systems|last=Hanzo|first=Lajos|url=http://dx.doi.org/10.1049/iet-tv.44.786|access-date=2021-05-29|doi=10.1049/iet-tv.44.786|url-access=subscription}}</ref> Textual data spell checkers can be used to lessen the number of mistyped words, but it is harder to tell if the words themselves are correct.<ref>{{cite journal|last=Hellerstein|first=Joseph|title=Quantitative Data Cleaning for Large Databases|journal=EECS Computer Science Division|date=27 February 2008|page=3|url=http://db.cs.berkeley.edu/jmh/papers/cleaning-unece.pdf|access-date=26 October 2013|archive-date=13 October 2013|archive-url=https://web.archive.org/web/20131013011223/http://db.cs.berkeley.edu/jmh/papers/cleaning-unece.pdf|url-status=live}}</ref>
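
A minimal sketch of two of the tasks named above, deduplication and quantitative outlier screening, again using pandas; the dataset and the ten-times-the-median threshold are illustrative assumptions rather than recommendations from the cited sources.

<syntaxhighlight lang="python">
import pandas as pd

# Hypothetical dataset with an exact duplicate row and a value that
# looks like a data-entry error (9500.0 among double-digit amounts).
df = pd.DataFrame({
    "customer": ["Ana", "Ben", "Ben", "Chi", "Dee"],
    "amount": [120.0, 80.0, 80.0, 95.0, 9500.0],
})

# Deduplication: drop exact duplicate records.
df = df.drop_duplicates()

# Quantitative outlier screening: flag amounts far from the median as
# likely incorrectly entered data (the 10x threshold is illustrative).
median = df["amount"].median()
suspect = df[df["amount"] > 10 * median]
print(suspect)  # the 9500.0 row is flagged for manual review
</syntaxhighlight>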


===Exploratory data analysis===
Once the datasets are cleaned, they can then be analyzed. Analysts may apply a variety of techniques, referred to as [[exploratory data analysis]], to begin understanding the messages contained within the obtained data.<ref>{{Cite journal|journal=PeerJ Computer Science|doi=10.7717/peerj-cs.20/supp-1|title=CFSAN SNP Pipeline: An automated method for constructing SNP matrices from next-generation sequence data|date=26 August 2015|volume=1|pages=e20|last1=Davis|first1=Steve|last2=Pettengill|first2=James B.|last3=Luo|first3=Yan|last4=Payne|first4=Justin|last5=Shpuntoff|first5=Al|last6=Rand|first6=Hugh|last7=Strain|first7=Errol |doi-access=free }}</ref> The process of data exploration may result in additional data cleaning or additional requests for data, thus initiating the iterative phases mentioned earlier in this section.<ref>{{Cite journal|date=December 1999|title=FTC requests additional data|url=http://dx.doi.org/10.1016/s1359-6128(99)90509-8|journal=Pump Industry Analyst|volume=1999|issue=48|pages=12|doi=10.1016/s1359-6128(99)90509-8|issn=1359-6128}}</ref> [[Descriptive statistics]], such as the average or median, can be generated to aid in understanding the data.<ref>{{Cite journal|date=2017|title=Exploring your Data with Data Visualization & Descriptive Statistics: Common Descriptive Statistics for Quantitative Data|url=http://dx.doi.org/10.4135/9781529732795|doi=10.4135/9781529732795}}</ref><ref>{{Cite book|last=Murray|first=Daniel G.|title=Tableau your data! : fast and easy visual analysis with Tableau Software|date=2013|publisher=J. Wiley & Sons|isbn=978-1-118-61204-0|oclc=873810654}}</ref> [[Data visualization]] may also be used, in which the analyst examines the data in a graphical format in order to obtain additional insights regarding the messages within the data.<ref name="Schutt & O'Neil"/>
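
A minimal sketch of this phase, assuming a small invented dataset: pandas' <code>describe()</code> returns descriptive statistics such as the mean and median (the 50% quantile), and a quick histogram gives a first graphical view of the data.

<syntaxhighlight lang="python">
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical cleaned dataset.
df = pd.DataFrame({
    "age": [34, 29, 41, 37, 52, 45],
    "income": [52000, 48000, 61000, 58000, 70000, 64000],
})

# Descriptive statistics: count, mean, std, min/max and quartiles
# (the 50% row is the median).
print(df.describe())

# A first graphical look at how the values are distributed.
df["income"].plot(kind="hist", bins=5, title="Income distribution")
plt.show()
</syntaxhighlight>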


===Modeling and algorithms===
Mathematical formulas or models (also known as [[algorithms]]) may be applied to the data in order to identify relationships among the variables; for example, using [[Correlation and dependence|correlation]] or [[causality|causation]].<ref>{{Citation|last=Ben-Ari|first=Mordechai|title=First-Order Logic: Formulas, Models, Tableaux|date=2012|url=http://dx.doi.org/10.1007/978-1-4471-4129-7_7|work=Mathematical Logic for Computer Science|pages=131–154|place=London|publisher=Springer London|doi=10.1007/978-1-4471-4129-7_7|isbn=978-1-4471-4128-0|access-date=2021-05-31}}</ref><ref>{{Cite book|first=Ernest|last=Sosa|title=Causation|date=2011|publisher=Oxford Univ. Press|isbn=978-0-19-875094-9|oclc=767569031}}</ref> In general terms, models may be developed to evaluate a specific variable based on other variable(s) contained within the dataset, with some ''[[Errors and residuals|residual error]]'' depending on the implemented model's accuracy (i.e., Data = Model + Error).<ref>{{Cite journal|title=Figure 2. Variable importance by permutation, averaged over 25 models.|journal=eLife|date=28 February 2017|volume=6|pages=e22053|doi=10.7554/elife.22053.004|last1=Evans|first1=Michelle V.|last2=Dallas|first2=Tad A.|last3=Han|first3=Barbara A.|last4=Murdock|first4=Courtney C.|last5=Drake|first5=John M.|editor1=Brady, Oliver |doi-access=free }}</ref><ref name="Judd and McClelland 1989"/>


[[Inferential statistics]] includes techniques to measure the relationships between particular variables.<ref>{{Cite journal|title=Table 3: Descriptive (mean ± SD), inferential (95% CI) and qualitative statistics (ES) of all variables between self-selected and predetermined conditions.|journal=PeerJ|date=12 November 2020|volume=8|pages=e10361|doi=10.7717/peerj.10361/table-3|last1=Watson|first1=Kevin|last2=Halperin|first2=Israel|last3=Aguilera-Castells|first3=Joan|last4=Iacono|first4=Antonio Dello |doi-access=free }}</ref> For example, [[regression analysis]] may be used to model whether a change in advertising (''independent variable X'') provides an explanation for the variation in sales (''dependent variable Y'').<ref>{{Cite journal|title=Table 3: Best regression models between LIDAR data (independent variable) and field-based Forestereo data (dependent variable), used to map spatial distribution of the main forest structure variables.|journal=PeerJ|date=22 October 2020|volume=8|pages=e10158|doi=10.7717/peerj.10158/table-3|last1=Cortés-Molino|first1=Álvaro|last2=Aulló-Maestro|first2=Isabel|last3=Fernandez-Luque|first3=Ismael|last4=Flores-Moya|first4=Antonio|last5=Carreira|first5=José A.|last6=Salvo|first6=A. Enrique |doi-access=free }}</ref> In mathematical terms, ''Y'' (sales) is a function of ''X'' (advertising).<ref>{{Citation|url=http://dx.doi.org/10.5040/9781472561671.ch-003|publisher=Beck/Hart|doi=10.5040/9781472561671.ch-003|isbn=978-1-4725-6167-1|access-date=2021-05-31|title=International Sales Terms|year=2014}}</ref> It may be described as ''Y'' = ''aX'' + ''b'' + error, where the model is designed such that ''a'' and ''b'' minimize the error when the model predicts ''Y'' for a given range of values of ''X''.<ref>{{Cite journal|last=Nwabueze|first=JC|date=2008-05-21|title=Performances of estimators of linear model with auto-correlated error terms when the independent variable is normal|url=http://dx.doi.org/10.4314/jonamp.v9i1.40071|journal=Journal of the Nigerian Association of Mathematical Physics|volume=9|issue=1|doi=10.4314/jonamp.v9i1.40071|issn=1116-4336}}</ref> Analysts may also attempt to build models that are descriptive of the data, in an aim to simplify analysis and communicate results.<ref name="Judd and McClelland 1989"/>
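
A minimal sketch of the advertising-and-sales example, with invented observations: NumPy's <code>polyfit</code> performs an ordinary least-squares fit, choosing ''a'' and ''b'' so as to minimize the squared error of ''Y'' = ''aX'' + ''b''.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical advertising spend (X) and sales (Y) observations.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # advertising
Y = np.array([2.1, 4.3, 5.9, 8.2, 9.8])   # sales

# Least-squares fit of Y = a*X + b: polyfit picks a and b so that the
# squared error between the model's predictions and Y is minimized.
a, b = np.polyfit(X, Y, deg=1)
predicted = a * X + b
error = Y - predicted  # residual error, i.e. Data = Model + Error

print(f"Y = {a:.2f}*X + {b:.2f}; residuals: {error.round(2)}")
</syntaxhighlight>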


===Data product===
A '''data product''' is a computer application that takes ''data inputs'' and generates ''outputs'', feeding them back into the environment.<ref>{{Cite journal|last=Conway|first=Steve|date=2012-07-04|title=A Cautionary Note on Data Inputs and Visual Outputs in Social Network Analysis|url=http://dx.doi.org/10.1111/j.1467-8551.2012.00835.x|journal=British Journal of Management|volume=25|issue=1|pages=102–117|doi=10.1111/j.1467-8551.2012.00835.x|hdl=2381/36068|s2cid=154347514|issn=1045-3172}}</ref> It may be based on a model or algorithm. For instance, an application that analyzes data about customer purchase history may use the results to recommend other purchases the customer might enjoy.<ref>{{Citation|title=Customer Purchases and Other Repeated Events|date=2016-01-29|url=http://dx.doi.org/10.1002/9781119183419.ch8|work=Data Analysis Using SQL and Excel®|pages=367–420|place=Indianapolis, Indiana|publisher=John Wiley & Sons, Inc.|doi=10.1002/9781119183419.ch8|isbn=978-1-119-18341-9|access-date=2021-05-31}}</ref><ref name="Schutt & O'Neil"/>
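
A minimal sketch of such a data product, with an invented purchase-history table: for a given customer, it recommends the item most often bought together with the items that customer already owns. The co-occurrence heuristic is purely illustrative; real recommenders are considerably more sophisticated.

<syntaxhighlight lang="python">
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories (customer -> set of items bought).
history = {
    "ana": {"book", "lamp"},
    "ben": {"book", "lamp", "desk"},
    "chi": {"lamp", "desk"},
}

# Count how often each pair of items is bought together.
pair_counts = Counter()
for items in history.values():
    for pair in combinations(sorted(items), 2):
        pair_counts[pair] += 1

def recommend(customer, top=1):
    """Suggest items frequently co-purchased with the customer's items."""
    owned = history[customer]
    scores = Counter()
    for (x, y), n in pair_counts.items():
        if x in owned and y not in owned:
            scores[y] += n
        if y in owned and x not in owned:
            scores[x] += n
    return [item for item, _ in scores.most_common(top)]

print(recommend("ana"))  # ['desk'], co-purchased with book and lamp
</syntaxhighlight>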


===Communication===
[[File:Social Network Analysis Visualization.png|thumb|250px|[[Data visualization]] is used to help understand the results after data is analyzed.<ref>{{Cite journal| volume = 10| issue = 3| last = Grandjean| first = Martin| title = La connaissance est un réseau| journal = Les Cahiers du Numérique| date = 2014| pages = 37–54| url = http://www.martingrandjean.ch/wp-content/uploads/2015/02/Grandjean-2014-Connaissance-reseau.pdf| doi = 10.3166/lcn.10.3.37-54| access-date = 2015-05-05| archive-date = 2015-09-27| archive-url = https://web.archive.org/web/20150927170721/http://www.martingrandjean.ch/wp-content/uploads/2015/02/Grandjean-2014-Connaissance-reseau.pdf| url-status = live}}</ref>]]


{{Main|Data and information visualization}}
Once the data is analyzed, it may be reported in many formats to the users of the analysis to support their requirements.<ref>{{Citation|title=Data requirements for semiconductor die. Exchange data formats and data dictionary|url=http://dx.doi.org/10.3403/02271298|publisher=BSI British Standards|doi=10.3403/02271298|access-date=2021-05-31}}</ref> The users may have feedback, which results in additional analysis. As such, much of the analytical cycle is iterative.<ref name="Schutt & O'Neil"/>


When determining how to communicate the results, the analyst may consider implementing a variety of data visualization techniques to help communicate the message more clearly and efficiently to the audience.<ref>{{Cite journal|last=Yee|first=D.|date=1985-04-01|title=How to Communicate Your Message to an Audience Effectively|url=http://dx.doi.org/10.1093/geront/25.2.209|journal=The Gerontologist|volume=25|issue=2|pages=209|doi=10.1093/geront/25.2.209|issn=0016-9013}}</ref> Data visualization uses [[information displays]] (graphics such as tables and charts) to help communicate key messages contained in the data.<ref>{{Cite journal|title=Supplemental Information 1: Raw data for charts and tables|date=11 June 2019|doi=10.7287/peerj.preprints.27793v1/supp-1|last1=Bemowska-Kałabun|first1=Olga|last2=Wąsowicz|first2=Paweł|last3=Napora-Rutkowski|first3=Łukasz|last4=Nowak-Życzyńska|first4=Zuzanna|last5=Wierzbicka|first5=Małgorzata |doi-access=free }}</ref> [[Table (information)|Tables]] are valuable because they enable a user to look up and focus on specific numbers, while charts (e.g., bar charts or line charts) may help explain the quantitative messages contained in the data.<ref>{{Cite book|date=2021|title=Visualizing Data About UK Museums: Bar Charts, Line Charts and Heat Maps|url=http://dx.doi.org/10.4135/9781529768749|doi=10.4135/9781529768749|isbn=9781529768749|s2cid=240967380}}</ref>


==Quantitative messages==
{{Main|Data and information visualization}}
[[File:Total Revenues and Outlays as Percent GDP 2013.png|thumb|right|250px|A time series illustrated with a line chart demonstrating trends in U.S. federal spending and revenue over time.]]
[[File:U.S. Phillips Curve 2000 to 2013.png|thumb|right|250px|A scatterplot illustrating the correlation between two variables (inflation and unemployment) measured at points in time.]]
Stephen Few described eight types of quantitative messages that users may attempt to understand or communicate from a set of data, along with the associated graphs used to help communicate the message.<ref>{{Cite journal|last=Tunqui Neira|first=José Manuel|date=2019-09-19|title=Thank you for your review. Please find in the attached pdf file a detailed response to the points you raised.|doi=10.5194/hess-2019-325-ac2|s2cid=241041810 |doi-access=free }}</ref> Customers specifying requirements and analysts performing the data analysis may consider these messages during the course of the process; a charting sketch of the first message type appears after the list.<ref>{{Citation|last=Brackett|first=John W.|title=Performing Requirements Analysis Project Courses for External Customers|date=1989|url=http://dx.doi.org/10.1007/978-1-4613-9614-7_20|work=Issues in Software Engineering Education|pages=276–285|place=New York, NY|publisher=Springer New York|doi=10.1007/978-1-4613-9614-7_20|isbn=978-1-4613-9616-1|access-date=2021-06-03}}</ref>
#Time-series: A single variable is captured over a period of time, such as the unemployment rate over a 10-year period. A [[line chart]] may be used to demonstrate the trend.<ref>{{Cite journal|title=Figure 2: Bi-monthly mealybug population fluctuations in southern Vietnam, over a 2-year time period.|journal=PeerJ|date=19 October 2018|volume=6|pages=e5796|doi=10.7717/peerj.5796/fig-2|last1=Wyckhuys|first1=Kris A. G.|last2=Wongtiem|first2=Prapit|last3=Rauf|first3=Aunu|last4=Thancharoen|first4=Anchana|last5=Heimpel|first5=George E.|last6=Le|first6=Nhung T. T.|last7=Fanani|first7=Muhammad Zainal|last8=Gurr|first8=Geoff M.|last9=Lundgren|first9=Jonathan G.|last10=Burra|first10=Dharani D.|last11=Palao|first11=Leo K.|last12=Hyman|first12=Glenn|last13=Graziosi|first13=Ignazio|last14=Le|first14=Vi X.|last15=Cock|first15=Matthew J. W.|last16=Tscharntke|first16=Teja|last17=Wratten|first17=Steve D.|last18=Nguyen|first18=Liem V.|last19=You|first19=Minsheng|last20=Lu|first20=Yanhui|last21=Ketelaar|first21=Johannes W.|last22=Goergen|first22=Georg|last23=Neuenschwander|first23=Peter |doi-access=free }}</ref>
#Ranking: Categorical subdivisions are ranked in ascending or descending order, such as a ranking of sales performance (the ''measure'') by salespersons (the ''category'', with each salesperson a ''categorical subdivision'') during a single period.<ref>{{Citation|last=Riehl|first=Emily|title=A sampling of 2-categorical aspects of quasi-category theory|url=http://dx.doi.org/10.1017/cbo9781107261457.019|work=Categorical Homotopy Theory|year=2014|pages=318–336|place=Cambridge|publisher=Cambridge University Press|doi=10.1017/cbo9781107261457.019|isbn=978-1-107-26145-7|access-date=2021-06-03}}</ref> A [[bar chart]] may be used to show the comparison across the salespersons.<ref>{{cite book | doi=10.1007/1-4020-0612-8_1063 | chapter=X-Bar Chart | title=Encyclopedia of Production and Manufacturing Management | date=2000 | page=841 | isbn=978-0-7923-8630-8 | last1=Swamidass | first1=P. M. }}</ref>
#Part-to-whole: Categorical subdivisions are measured as a ratio to the whole (i.e., a percentage out of 100%). A [[pie chart]] or bar chart can show the comparison of ratios, such as the market share represented by competitors in a market.<ref>{{Cite journal|title=Chart C5.3. Percentage of 15-19 year-olds not in education, by labour market status (2012)|url=http://dx.doi.org/10.1787/888933119055|access-date=2021-06-03|doi=10.1787/888933119055}}</ref>
#Deviation: Categorical subdivisions are compared against a reference, such as a comparison of actual vs. budget expenses for several departments of a business for a given time period. A bar chart can show the comparison of the actual versus the reference amount.<ref>{{Cite journal|title=Chart 7: Households: final consumption expenditure versus actual individual consumption|url=http://dx.doi.org/10.1787/665527077310|access-date=2021-06-03|doi=10.1787/665527077310}}</ref>
#Frequency distribution: Shows the number of observations of a particular variable for a given interval, such as the number of years in which the stock market return is between intervals such as 0–10%, 11–20%, etc. A [[histogram]], a type of bar chart, may be used for this analysis.<ref>{{Cite journal|title=Figure 4. Frequency of hemifusion (measured as DiD fluorescence dequenching) as a function of number of bound Alexa-fluor-555/3-110-22 molecules.|journal=eLife|date=12 July 2018|volume=7|pages=e36461|doi=10.7554/elife.36461.006|last1=Chao|first1=Luke H.|last2=Jang|first2=Jaebong|last3=Johnson|first3=Adam|last4=Nguyen|first4=Anthony|last5=Gray|first5=Nathanael S.|last6=Yang|first6=Priscilla L.|last7=Harrison|first7=Stephen C.|editor1=Jahn, Reinhard|editor2=Schekman, Randy |doi-access=free }}</ref>
#Correlation: Comparison between observations represented by two variables (X,Y) to determine if they tend to move in the same or opposite directions. For example, plotting unemployment (X) and inflation (Y) for a sample of months. A [[scatter plot]] is typically used for this message.<ref>{{Cite journal|title=Table 2: Graph comparison between Scatter plot, Violin + Scatter plot, Heatmap and ViSiElse graph.|journal=PeerJ|date=3 February 2020|volume=8|pages=e8341|doi=10.7717/peerj.8341/table-2|last1=Garnier|first1=Elodie M.|last2=Fouret|first2=Nastasia|last3=Descoins|first3=Médéric |doi-access=free }}</ref>
#Nominal comparison: Comparing categorical subdivisions in no particular order, such as the sales volume by product code. A bar chart may be used for this comparison.<ref>{{Cite journal|date=2009|title=Product comparison chart: Wearables|url=http://dx.doi.org/10.1037/e539162010-006|access-date=2021-06-03|website=PsycEXTRA Dataset|doi=10.1037/e539162010-006}}</ref>
#Geographic or geospatial: Comparison of a variable across a map or layout, such as the unemployment rate by state or the number of persons on the various floors of a building. A [[cartogram]] is a typical graphic used.<ref>{{Cite web |url=http://www.perceptualedge.com/articles/ie/the_right_graph.pdf |title=Stephen Few-Perceptual Edge-Selecting the Right Graph for Your Message-2004 |access-date=2014-10-29 |archive-date=2014-10-05 |archive-url=https://web.archive.org/web/20141005080924/http://www.perceptualedge.com/articles/ie/the_right_graph.pdf |url-status=live }}</ref><ref>{{Cite web |url=http://www.perceptualedge.com/articles/misc/Graph_Selection_Matrix.pdf |title=Stephen Few-Perceptual Edge-Graph Selection Matrix |access-date=2014-10-29 |archive-date=2014-10-05 |archive-url=https://web.archive.org/web/20141005080945/http://www.perceptualedge.com/articles/misc/Graph_Selection_Matrix.pdf |url-status=live }}</ref>
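
The following is a minimal sketch, with invented figures, of the first message type above: a single variable captured over a period of time, drawn as a line chart with the Python matplotlib library.

<syntaxhighlight lang="python">
import matplotlib.pyplot as plt

# Hypothetical unemployment rate (%) over a 10-year period:
# a time-series message, typically shown with a line chart.
years = list(range(2005, 2015))
unemployment = [5.1, 4.6, 4.6, 5.8, 9.3, 9.6, 8.9, 8.1, 7.4, 6.2]

plt.plot(years, unemployment, marker="o")
plt.xlabel("Year")
plt.ylabel("Unemployment rate (%)")
plt.title("Time-series message: a single variable over time")
plt.show()
</syntaxhighlight>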


==Analyzing quantitative data==
{{See also|Problem solving}}
Author [[Jonathan Koomey]] has recommended a series of best practices for understanding quantitative data.<ref>{{Cite journal|date=2008-10-01|title=Recommended Best Practices|url=http://dx.doi.org/10.14217/9781848590151-8-en|access-date=2021-06-03|doi=10.14217/9781848590151-8-en}}</ref> These include the following (two of these checks are sketched in code after the list):
*Check raw data for anomalies prior to performing an analysis;
*Re-perform important calculations, such as verifying columns of data that are formula driven;
*Confirm main totals are the sum of subtotals;
*Check relationships between numbers that should be related in a predictable way, such as ratios over time;
*Normalize numbers to make comparisons easier, such as analyzing amounts per person or relative to GDP or as an index value relative to a base year;
*Break problems into component parts by analyzing factors that led to the results, such as [[DuPont analysis]] of return on equity.<ref name="Koomey1"/>
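
A minimal sketch, with invented figures, of two of the checks above: confirming that a main total equals the sum of its subtotals, and normalizing amounts per person so that differently sized groups can be compared.

<syntaxhighlight lang="python">
# Confirm that the main total is the sum of the subtotals.
subtotals = {"division_a": 120.0, "division_b": 75.0, "division_c": 55.0}
reported_total = 250.0

assert abs(sum(subtotals.values()) - reported_total) < 1e-9, \
    "Main total does not match the sum of subtotals"

# Normalize numbers to make comparisons easier: per-person spending
# makes regions of very different sizes directly comparable.
population = {"region_x": 1_000_000, "region_y": 250_000}
spending = {"region_x": 50_000_000.0, "region_y": 20_000_000.0}

per_person = {r: spending[r] / population[r] for r in population}
print(per_person)  # {'region_x': 50.0, 'region_y': 80.0}
</syntaxhighlight>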


For the variables under examination, analysts typically obtain [[descriptive statistics]] for them, such as the mean (average), [[median]], and [[standard deviation]].<ref>{{Cite journal|title=Table 1: Descriptive statistics (mean ± standard-deviation) for somatic variables and physical fitness ítems for males and females.|journal=PeerJ|date=30 November 2017|volume=5|pages=e4032|doi=10.7717/peerj.4032/table-1|last1=Hobold|first1=Edilson|last2=Pires-Lopes|first2=Vitor|last3=Gómez-Campos|first3=Rossana|last4=Arruda|first4=Miguel de|last5=Andruske|first5=Cynthia Lee|last6=Pacheco-Carrillo|first6=Jaime|last7=Cossio-Bolaños|first7=Marco Antonio |doi-access=free }}</ref> They may also analyze the [[probability distribution|distribution]] of the key variables to see how the individual values cluster around the mean.<ref>{{Cite journal|title=Table 2: Cluster analysis presenting mean values of psychological variables per cluster group.|journal=PeerJ|date=13 September 2016|volume=4|pages=e2421|doi=10.7717/peerj.2421/table-2|last1=Ablin|first1=Jacob N.|last2=Zohar|first2=Ada H.|last3=Zaraya-Blum|first3=Reut|last4=Buskila|first4=Dan |doi-access=free }}</ref>
[[File:US_Employment_Statistics_-_March_2015.png|thumb|250px|right|An illustration of the [[MECE principle]] used for data analysis.]] The consultants at [[McKinsey and Company]] named a technique for breaking a quantitative problem down into its component parts called the [[MECE principle]].<ref>{{Citation|title=Consultants Employed by McKinsey & Company|date=2008-07-30|url=http://dx.doi.org/10.4324/9781315701974-15|work=Organizational Behavior 5|pages=77–82|publisher=Routledge|doi=10.4324/9781315701974-15|isbn=978-1-315-70197-4|access-date=2021-06-03}}</ref> Each layer can be broken down into its components; each of the sub-components must be [[Mutually exclusive events|mutually exclusive]] of each other and [[Collectively exhaustive events|collectively]] add up to the layer above them.<ref>{{Citation|last=Antiphanes|editor1-first=S. Douglas|editor1-last=Olson|title=H6 Antiphanes fr.172.1-4, from Women Who Looked Like Each Other or Men Who Looked Like Each Other|url=http://dx.doi.org/10.1093/oseo/instance.00232915|work=Broken Laughter: Select Fragments of Greek Comedy|year=2007|publisher=Oxford University Press|doi=10.1093/oseo/instance.00232915|isbn=978-0-19-928785-7|access-date=2021-06-03}}</ref> The relationship is referred to as "Mutually Exclusive and Collectively Exhaustive" or MECE. For example, profit by definition can be broken down into total revenue and total cost.<ref>{{Cite journal|last=Carey|first=Malachy|date=November 1981|title=On Mutually Exclusive and Collectively Exhaustive Properties of Demand Functions|url=http://dx.doi.org/10.2307/2553697|journal=Economica|volume=48|issue=192|pages=407–415|doi=10.2307/2553697|jstor=2553697|issn=0013-0427}}</ref> In turn, total revenue can be analyzed by its components, such as the revenue of divisions A, B, and C (which are mutually exclusive of each other) and should add to the total revenue (collectively exhaustive).<ref>{{Cite journal|title=Total tax revenue|url=http://dx.doi.org/10.1787/352874835867|access-date=2021-06-03|doi=10.1787/352874835867}}</ref>
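The decomposition can be verified mechanically: at each layer, the parts must not overlap and must sum to the layer above. A toy sketch (division names and figures are hypothetical):

<syntaxhighlight lang="python">
# Hypothetical MECE breakdown of profit.
total_revenue = 250.0
total_cost = 180.0
division_revenue = {"A": 120.0, "B": 75.0, "C": 55.0}  # mutually exclusive parts

# Collectively exhaustive: division revenues must sum to total revenue.
assert sum(division_revenue.values()) == total_revenue

profit = total_revenue - total_cost  # profit = revenue - cost by definition
print(profit)  # 70.0
</syntaxhighlight>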


Analysts may use robust statistical measurements to solve certain analytical problems.<ref>{{Cite journal|date=1968-06-03|title=Dual-use car may solve transportation problems|url=http://dx.doi.org/10.1021/cen-v046n024.p044|journal=Chemical & Engineering News Archive|volume=46|issue=24|pages=44|doi=10.1021/cen-v046n024.p044|issn=0009-2347}}</ref> [[Hypothesis testing]] is used when a particular hypothesis about the true state of affairs is made by the analyst and data is gathered to determine whether that state of affairs is true or false.<ref>{{Cite journal|last=Heckman|date=1978|title=Simple Statistical Models for Discrete Panel Data Developed and Applied to Test the Hypothesis of True State Dependence against the Hypothesis of Spurious State Dependence|url=http://dx.doi.org/10.2307/20075292|journal=Annales de l'inséé|issue=30/31|pages=227–269|doi=10.2307/20075292|jstor=20075292|issn=0019-0209}}</ref><ref>{{Cite book|first=Dean|last=Koontz|title=False Memory|date=2017|publisher=Headline Book Publishing|isbn=978-1-4722-4830-5|oclc=966253202}}</ref> For example, the hypothesis might be that "Unemployment has no effect on inflation", which relates to an economics concept called the [[Phillips Curve]].<ref>{{Citation|last=Munday|first=Stephen C. R.|title=Unemployment, Inflation and the Phillips Curve|date=1996|url=http://dx.doi.org/10.1007/978-1-349-24986-2_11|work=Current Developments in Economics|pages=186–218|place=London|publisher=Macmillan Education UK|doi=10.1007/978-1-349-24986-2_11|isbn=978-0-333-64444-7|access-date=2021-06-03}}</ref> Hypothesis testing involves considering the likelihood of [[Type I and type II errors]], which relate to whether the data supports accepting or rejecting the hypothesis.<ref>{{Cite journal|last=Louangrath|first=Paul I.|date=2013|title=Alpha and Beta Tests for Type I and Type II Inferential Errors Determination in Hypothesis Testing|url=http://dx.doi.org/10.2139/ssrn.2332756|journal=SSRN Electronic Journal|doi=10.2139/ssrn.2332756|issn=1556-5068}}</ref><ref>{{Cite book|first=Ann M.|last=Walko|title=Rejecting the second generation hypothesis : maintaining Estonian ethnicity in Lakewood, New Jersey|date=2006|publisher=AMS Press|isbn=0-404-19454-0|oclc=467107876}}</ref>
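As an illustration, a two-sample ''t''-test compares the means of two groups against the null hypothesis that they are equal. A minimal sketch using the [[SciPy]] library (assumed available; the data are invented):

<syntaxhighlight lang="python">
from scipy import stats

# Hypothetical quarterly inflation rates under low and high unemployment.
low_unemployment = [3.1, 2.8, 3.4, 3.0, 3.3, 2.9]
high_unemployment = [2.2, 2.5, 2.1, 2.6, 2.3, 2.4]

t_stat, p_value = stats.ttest_ind(low_unemployment, high_unemployment)
# A small p-value is evidence against the null hypothesis that unemployment
# has no effect on inflation; a Type I error nonetheless remains possible.
print(t_stat, p_value)
</syntaxhighlight>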


[[Regression analysis]] may be used when the analyst is trying to determine the extent to which independent variable X affects dependent variable Y (e.g., "To what extent do changes in the unemployment rate (X) affect the inflation rate (Y)?").<ref name="Yanamandra 57–68">{{Cite journal|last=Yanamandra|first=Venkataramana|date=September 2015|title=Exchange rate changes and inflation in India: What is the extent of exchange rate pass-through to imports?|url=http://dx.doi.org/10.1016/j.eap.2015.07.004|journal=Economic Analysis and Policy|volume=47|pages=57–68|doi=10.1016/j.eap.2015.07.004|issn=0313-5926}}</ref> This is an attempt to model or fit an equation line or curve to the data, such that Y is a function of X.<ref>{{Cite book|first1=Nawarathna|last1=Mudiyanselage|first2=Pubudu Manoj|last2=Nawarathna|title=Characterization of epigenetic changes and their connection to gene expression abnormalities in clear cell renal cell carcinoma|oclc=1190697848}}</ref><ref>{{Cite journal|title=Appendix 1—figure 5. Curve data included in Appendix 1—table 4 (solid points) and the theoretical curve by using the Hill equation parameters of Appendix 1—table 5 (curve line).|journal=eLife|date=29 June 2017|volume=6|pages=e25233|doi=10.7554/elife.25233.027|last1=Moreno Delgado|first1=David|last2=Møller|first2=Thor C.|last3=Ster|first3=Jeanne|last4=Giraldo|first4=Jesús|last5=Maurel|first5=Damien|last6=Rovira|first6=Xavier|last7=Scholler|first7=Pauline|last8=Zwier|first8=Jurriaan M.|last9=Perroy|first9=Julie|last10=Durroux|first10=Thierry|last11=Trinquet|first11=Eric|last12=Prezeau|first12=Laurent|last13=Rondard|first13=Philippe|last14=Pin|first14=Jean-Philippe|editor1=Chao, Moses V |doi-access=free }}</ref>
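A minimal sketch of fitting such a line by ordinary least squares, using Python's standard library (version 3.10 or later; the data are invented):

<syntaxhighlight lang="python">
from statistics import linear_regression  # Python 3.10+

# Hypothetical unemployment rate X (%) and inflation rate Y (%).
x = [3.5, 4.0, 4.6, 5.2, 6.1, 7.3]
y = [3.2, 2.9, 2.6, 2.3, 1.9, 1.4]

fit = linear_regression(x, y)  # least-squares fit of Y = slope*X + intercept
print(fit.slope, fit.intercept)  # a negative slope is consistent with a Phillips curve
</syntaxhighlight>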


[[Necessary condition analysis]] (NCA) may be used when the analyst is trying to determine the extent to which independent variable X allows variable Y (e.g., "To what extent is a certain unemployment rate (X) necessary for a certain inflation rate (Y)?").<ref name="Yanamandra 57–68"/> Whereas (multiple) regression analysis uses additive logic where each X-variable can produce the outcome and the X's can compensate for each other (they are sufficient but not necessary),<ref>{{Cite web|url=https://doi.org/10.1049%2Fiet-tv.48.859|last=Feinmann|first=Jane|title=How Can Engineers and Journalists Help Each Other?|access-date=2021-06-03|doi=10.1049/iet-tv.48.859|url-access=subscription|type=Video|publisher=The Institute of Engineering & Technology}}</ref> necessary condition analysis (NCA) uses necessity logic, where one or more X-variables allow the outcome to exist, but may not produce it (they are necessary but not sufficient). Each single necessary condition must be present and compensation is not possible.<ref>{{Cite journal|last=Dul|first=Jan|date=2015|title=Necessary Condition Analysis (NCA): Logic and Methodology of 'Necessary But Not Sufficient' Causality|url=http://dx.doi.org/10.2139/ssrn.2588480|journal=SSRN Electronic Journal|doi=10.2139/ssrn.2588480|hdl=1765/77890|s2cid=219380122|issn=1556-5068}}</ref>


==Analytical activities of data users==
[[File:User-activities.png|Analytic activities of data visualization users|thumb|right|350px]]
Users may have particular data points of interest within a data set, as opposed to the general messaging outlined above. Such low-level user analytic activities are presented in the following table. The taxonomy can also be organized by three poles of activities: retrieving values, finding data points, and arranging data points.<ref>Robert Amar, James Eagan, and John Stasko (2005) [http://www.cc.gatech.edu/~stasko/papers/infovis05.pdf "Low-Level Components of Analytic Activity in Information Visualization"] {{Webarchive|url=https://web.archive.org/web/20150213074349/http://www.cc.gatech.edu/~stasko/papers/infovis05.pdf |date=2015-02-13 }}</ref><ref>William Newman (1994) [http://www.mdnpress.com/wmn/pdfs/chi94-pro-formas-2.pdf "A Preliminary Analysis of the Products of HCI Research, Using Pro Forma Abstracts"] {{Webarchive|url=https://web.archive.org/web/20160303212019/http://www.mdnpress.com/wmn/pdfs/chi94-pro-formas-2.pdf |date=2016-03-03 }}</ref><ref>Mary Shaw (2002) [https://www.cs.cmu.edu/~Compose/ftp/shaw-fin-etaps.pdf "What Makes Good Research in Software Engineering?"] {{Webarchive|url=https://web.archive.org/web/20181105042928/http://www.cs.cmu.edu/~Compose/ftp/shaw-fin-etaps.pdf |date=2018-11-05 }}</ref><ref name="ConTaaS">{{cite conference|title=ConTaaS: An Approach to Internet-Scale Contextualisation for Developing Efficient Internet of Things Applications|conference=Proceedings of the 50th Hawaii International Conference on System Sciences (HICSS50 2017)|year=2017|publisher=University of Hawaiʻi at Mānoa|doi=10.24251/HICSS.2017.715|hdl=10125/41879|last1=Yavari|first1=Ali|last2=Jayaraman|first2=Prem Prakash|last3=Georgakopoulos|first3=Dimitrios|last4=Nepal|first4=Surya|isbn=9780998133102}}</ref>


{| class="wikitable" border="1"
{| class="wikitable"
! align="center" | # !! width="160pt" | Task !! General<br>Description !! Pro Forma<br>Abstract !! width="35%" | Examples
! align="center" | # !! width="160" | Task !! General<br />Description !! Pro Forma<br />Abstract !! width="35%" | Examples
|-
|-
| align="center" | 1
| align="center" | 1
Line 175: Line 132:
''- What director/film has won the most awards?''
''- What director/film has won the most awards?''


''- What Robin Williams film has the most recent release date?''
''- What Marvel Studios film has the most recent release date?''
|-
|-
| align="center" | 5
| align="center" | 5
Line 195: Line 152:
| align="center" | 7
| align="center" | 7
| '''Characterize Distribution'''
| '''Characterize Distribution'''
| Given a set of data cases and a quantitative attribute of interest, characterize the distribution of that attribute’s values over the set.
| Given a set of data cases and a quantitative attribute of interest, characterize the distribution of that attribute's values over the set.
| What is the distribution of values of attribute A in a set S of data cases?
| What is the distribution of values of attribute A in a set S of data cases?
| ''- What is the distribution of carbohydrates in cereals?''
| ''- What is the distribution of carbohydrates in cereals?''
Line 224: Line 181:


''- Is there a trend of increasing film length over the years?''
''- Is there a trend of increasing film length over the years?''
|-
| align="center" | 11
| ''' [[Contextualization (computer science)|Contextualization]]<ref name="ConTaaS"/>'''
| Given a set of data cases, find contextual relevancy of the data to the users.
| Which data cases in a set S of data cases are relevant to the current users' context?
| ''- Are there groups of restaurants that have foods based on my current caloric intake?''
|-
|-
|}
|}


==Barriers to effective analysis==
Barriers to effective analysis may exist among the analysts performing the data analysis or among the audience. Distinguishing fact from opinion, cognitive biases, and innumeracy are all challenges to sound data analysis.<ref>{{Cite journal|date=July 1989|title=Connectivity tool transfers data among database and statistical products|url=http://dx.doi.org/10.1016/0167-9473(89)90021-2|journal=Computational Statistics & Data Analysis|volume=8|issue=2|pages=224|doi=10.1016/0167-9473(89)90021-2|issn=0167-9473}}</ref>


===Confusing fact and opinion===
{{quote box|quote=You are entitled to your own opinion, but you are not entitled to your own facts.|source=[[Daniel Patrick Moynihan]]|width = 250px}}


Effective analysis requires obtaining relevant [[fact]]s to answer questions, support a conclusion or formal [[opinion]], or test [[hypotheses]].<ref>{{Citation|title=Information relevant to your job|date=2007-07-11|url=http://dx.doi.org/10.4324/9780080544304-16|work=Obtaining Information for Effective Management|pages=48–54|publisher=Routledge|doi=10.4324/9780080544304-16|isbn=978-0-08-054430-4|access-date=2021-06-03}}</ref><ref>{{Cite book|last=Lehmann|first=E. L.|title=Testing statistical hypotheses|date=2010|publisher=Springer|isbn=978-1-4419-3178-8|oclc=757477004}}</ref> Facts by definition are irrefutable, meaning that any person involved in the analysis should be able to agree upon them.<ref>{{Citation|last=Fielding|first=Henry|title=Consisting partly of facts, and partly of observations upon them|date=2008-08-14|url=http://dx.doi.org/10.1093/owc/9780199536993.003.0193|work=Tom Jones|publisher=Oxford University Press|doi=10.1093/owc/9780199536993.003.0193|isbn=978-0-19-953699-3|access-date=2021-06-03}}</ref> For example, in August 2010, the [[Congressional Budget Office]] (CBO) estimated that extending the [[Bush tax cuts]] of 2001 and 2003 for the 2011–2020 time period would add approximately $3.3 trillion to the national debt.<ref>{{cite web |url=http://www.cbo.gov/publication/21670 |title=Congressional Budget Office-The Budget and Economic Outlook-August 2010-Table 1.7 on Page 24 |date=18 August 2010 |access-date=2011-03-31 |archive-date=2012-02-27 |archive-url=https://web.archive.org/web/20120227065029/http://cbo.gov/publication/21670 |url-status=live }}</ref> Everyone should be able to agree that indeed this is what CBO reported; they can all examine the report. This makes it a fact. Whether persons agree or disagree with the CBO is their own opinion.<ref>{{Cite journal|date=2017-04-19|title=Students' sense of belonging, by immigrant background|url=http://dx.doi.org/10.1787/9789264273856-table125-en|journal=PISA 2015 Results (Volume III)|series=PISA|doi=10.1787/9789264273856-table125-en|isbn=9789264273818|issn=1996-3777}}</ref>


As another example, the auditor of a public company must arrive at a formal opinion on whether financial statements of publicly traded corporations are "fairly stated, in all material respects".<ref>{{Cite journal|last=Gordon|first=Roger|date=March 1990|title=Do Publicly Traded Corporations Act in the Public Interest?|url=http://dx.doi.org/10.3386/w3303|location=Cambridge, MA|doi=10.3386/w3303 |journal=National Bureau of Economic Research Working Papers}}</ref> This requires extensive analysis of factual data and evidence to support their opinion. When making the leap from facts to opinions, there is always the possibility that the opinion is [[Type I and type II errors|erroneous]].<ref>{{Citation|last=Minardi|first=Margot|title=Facts and Opinion|date=2010-09-24|url=http://dx.doi.org/10.1093/acprof:oso/9780195379372.003.0003|work=Making Slavery History|pages=13–42|publisher=Oxford University Press|doi=10.1093/acprof:oso/9780195379372.003.0003|isbn=978-0-19-537937-2|access-date=2021-06-03}}</ref>


===Cognitive biases===
There are a variety of [[cognitive bias]]es that can adversely affect analysis. For example, [[confirmation bias]] is the tendency to search for or interpret information in a way that confirms one's preconceptions.<ref>{{Cite thesis|title=Confirmation bias in witness interviewing: Can interviewers ignore their preconceptions?|url=http://dx.doi.org/10.25148/etd.fi14071109|publisher=Florida International University|first=Jillian R|last=Rivard|year=2014 |doi=10.25148/etd.fi14071109}}</ref> In addition, individuals may discredit information that does not support their views.<ref>{{Citation|last=Papineau|first=David|title=Does the Sociology of Science Discredit Science?|date=1988|url=http://dx.doi.org/10.1007/978-94-009-2877-0_2|work=Relativism and Realism in Science|pages=37–57|place=Dordrecht|publisher=Springer Netherlands|doi=10.1007/978-94-009-2877-0_2|isbn=978-94-010-7795-8|access-date=2021-06-03}}</ref>


Analysts may be trained specifically to be aware of these biases and how to overcome them.<ref>{{Cite book|date=2005|editor-last=Bromme|editor-first=Rainer|editor2-last=Hesse|editor2-first=Friedrich W.|editor3-last=Spada|editor3-first=Hans|title=Barriers and Biases in Computer-Mediated Knowledge Communication|url=http://dx.doi.org/10.1007/b105100|doi=10.1007/b105100|isbn=978-0-387-24317-7}}</ref> In his book ''Psychology of Intelligence Analysis'', retired CIA analyst [[Richards Heuer]] wrote that analysts should clearly delineate their assumptions and chains of inference and specify the degree and source of the uncertainty involved in the conclusions.<ref>{{Cite book|last=Heuer|first=Richards|editor1-first=Richards J|editor1-last=Heuer|date=2019-06-10|title=Quantitative Approaches to Political Intelligence|url=http://dx.doi.org/10.4324/9780429303647|doi=10.4324/9780429303647|isbn=9780429303647|s2cid=145675822}}</ref> He emphasized procedures to help surface and debate alternative points of view.<ref name="Heuer1">{{cite web|url=https://www.cia.gov/static/9a5f1162fd0932c29bfed1c030edf4ae/Pyschology-of-Intelligence-Analysis.pdf|title=Introduction|publisher=Central Intelligence Agency|access-date=2021-10-25|archive-date=2021-10-25|archive-url=https://web.archive.org/web/20211025160526/https://www.cia.gov/enwiki/static/9a5f1162fd0932c29bfed1c030edf4ae/Pyschology-of-Intelligence-Analysis.pdf|url-status=live}}</ref>


===Innumeracy===
Effective analysts are generally adept with a variety of numerical techniques. However, audiences may not have such literacy with numbers or [[numeracy]]; they are said to be innumerate.<ref>{{Cite journal|title=Figure 6.7. Differences in literacy scores across OECD countries generally mirror those in numeracy|url=http://dx.doi.org/10.1787/888934081549|access-date=2021-06-03|doi=10.1787/888934081549}}</ref> Persons communicating the data may also be attempting to mislead or misinform, deliberately using bad numerical techniques.<ref>{{Cite web |url=http://www.bloombergview.com/articles/2014-10-28/bad-math-that-passes-for-insight |title=Bloomberg-Barry Ritholz-Bad Math that Passes for Insight-October 28, 2014 |access-date=2014-10-29 |archive-date=2014-10-29 |archive-url=https://web.archive.org/web/20141029014527/http://www.bloombergview.com/articles/2014-10-28/bad-math-that-passes-for-insight |url-status=dead }}</ref>


For example, whether a number is rising or falling may not be the key factor. More important may be the number relative to another number, such as the size of government revenue or spending relative to the size of the economy (GDP) or the amount of cost relative to revenue in corporate financial statements.<ref>{{Cite journal|last1=Gusnaini|first1=Nuriska|last2=Andesto|first2=Rony|last3=Ermawati|date=2020-12-15|title=The Effect of Regional Government Size, Legislative Size, Number of Population, and Intergovernmental Revenue on The Financial Statements Disclosure|url=http://dx.doi.org/10.24018/ejbmr.2020.5.6.651|journal=European Journal of Business and Management Research|volume=5|issue=6|doi=10.24018/ejbmr.2020.5.6.651|s2cid=231675715|issn=2507-1076}}</ref> This numerical technique is referred to as normalization<ref name="Koomey1"/> or common-sizing. There are many such techniques employed by analysts, whether adjusting for inflation (i.e., comparing real vs. nominal data) or considering population increases, demographics, etc.<ref>{{Citation|last1=Linsey|first1=Julie S.|author1-link=Julie Linsey|title=Effectiveness of Brainwriting Techniques: Comparing Nominal Groups to Real Teams|date=2011|url=http://dx.doi.org/10.1007/978-0-85729-224-7_22|work=Design Creativity 2010|pages=165–171|place=London|publisher=Springer London|isbn=978-0-85729-223-0|access-date=2021-06-03|last2=Becker|first2=Blake|doi=10.1007/978-0-85729-224-7_22}}</ref> Analysts apply a variety of techniques to address the various quantitative messages described in the section above.<ref>{{Cite journal|last=Lyon|first=J.|date=April 2006|title=Purported Responsible Address in E-Mail Messages|doi=10.17487/rfc4407|url=http://dx.doi.org/10.17487/rfc4407}}</ref>
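For instance, adjusting for inflation deflates a nominal series by a price index so that values are compared in constant dollars. A minimal sketch (the wage and index values are assumed for the example):

<syntaxhighlight lang="python">
# Hypothetical nominal wages and consumer price index (CPI) levels.
nominal_wage = {2000: 30000, 2020: 52000}
cpi = {2000: 172.2, 2020: 258.8}  # assumed index levels

# Real (year-2000 dollars): nominal * (base-year CPI / current CPI).
real_wage_2020 = nominal_wage[2020] * cpi[2000] / cpi[2020]
print(round(real_wage_2020))  # ~34600: much smaller than the nominal rise
</syntaxhighlight>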


Analysts may also analyze data under different assumptions or scenarios. For example, when analysts perform [[financial statement analysis]], they will often recast the financial statements under different assumptions to help arrive at an estimate of future cash flow, which they then discount to present value based on some interest rate, to determine the valuation of the company or its stock.<ref>{{Cite book|last=Stock|first=Eugene|title=The History of the Church Missionary Society Its Environment, its Men and its Work|date=10 June 2017|publisher=Hansebooks GmbH |isbn=978-3-337-18120-8|oclc=1189626777}}</ref><ref>{{Cite journal|last=Gross|first=William H.|date=July 1979|title=Coupon Valuation and Interest Rate Cycles|url=http://dx.doi.org/10.2469/faj.v35.n4.68|journal=Financial Analysts Journal|volume=35|issue=4|pages=68–71|doi=10.2469/faj.v35.n4.68|issn=0015-198X}}</ref> Similarly, the CBO analyzes the effects of various policy options on the government's revenue, outlays and deficits, creating alternative future scenarios for key measures.<ref>{{Cite journal|title=25. General government total outlays|url=http://dx.doi.org/10.1787/888932348795|access-date=2021-06-03|doi=10.1787/888932348795}}</ref>
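The discounting step itself is simple arithmetic: each future cash flow is divided by (1 + r)^t. A minimal sketch with invented figures:

<syntaxhighlight lang="python">
# Hypothetical future cash flows for years 1..3 and an assumed discount rate.
cash_flows = [100.0, 110.0, 120.0]
rate = 0.08

# Present value: PV = sum of CF_t / (1 + r)^t.
pv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
print(round(pv, 2))  # 282.16
</syntaxhighlight>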


==Other topics==

===Smart buildings===
A data analytics approach can be used to predict energy consumption in buildings.<ref name="Towards energy efficiency smart buildings models based on intelligent data analytics">{{cite journal
| last1 = González-Vidal
| first1 = Aurora
| last2 = Moreno-Cano
| first2 = Victoria
| date = 2016
| title = Towards energy efficiency smart buildings models based on intelligent data analytics
| journal = Procedia Computer Science
| volume = 83
| issue = Elsevier
| pages = 994–999
| doi = 10.1016/j.procs.2016.04.213| doi-access= free
}}
</ref> The different steps of the data analysis process are carried out to realise smart buildings, where building management and control operations, including heating, ventilation, air conditioning, lighting and security, are realised automatically by mimicking the needs of the building users and optimising resources like energy and time.<ref>{{Citation|title=Low-Energy Air Conditioning and Lighting Control|date=2013-07-04|url=http://dx.doi.org/10.4324/9780203477342-18|work=Building Energy Management Systems|pages=406–439|publisher=Routledge|doi=10.4324/9780203477342-18|isbn=978-0-203-47734-2|access-date=2021-06-03}}</ref>
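As a toy illustration of the modeling step in this setting, one might relate energy use to outdoor temperature (the data are invented; real smart-building models are far richer):

<syntaxhighlight lang="python">
from statistics import correlation  # Python 3.10+

# Hypothetical daily records: outdoor temperature (°C) and energy use (kWh).
temperature = [2, 5, 9, 14, 18, 22, 27, 30]
energy_kwh = [55, 48, 40, 33, 30, 34, 44, 52]

# A single correlation coefficient hides the U-shape here (heating in
# winter, cooling in summer), which is why model choice matters.
print(correlation(temperature, energy_kwh))
</syntaxhighlight>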


===Analytics and business intelligence===
{{Main|Analytics}}
Analytics is the "extensive use of data, statistical and quantitative analysis, explanatory and predictive models, and fact-based management to drive decisions and actions." It is a subset of [[business intelligence]], which is a set of technologies and processes that uses data to understand and analyze business performance to drive decision-making.<ref name="Competing on Analytics 2007">{{cite book|last1=Davenport|last2=Harris|first1=Thomas|first2=Jeanne|year=2007|title=Competing on Analytics|publisher=O'Reilly|isbn=978-1-4221-0332-6}}</ref>


===Education===
In [[education]], most educators have access to a [[data system]] for the purpose of analyzing student data.<ref>Aarons, D. (2009). [https://search.proquest.com/docview/202710770 Report finds states on course to build pupil-data systems.] ''Education Week, 29''(13), 6.</ref> These data systems present data to educators in an [[over-the-counter data]] format (embedding labels, supplemental documentation, and a help system and making key package/display and content decisions) to improve the accuracy of educators' data analyses.<ref>Rankin, J. (2013, March 28). [https://sas.elluminate.com/site/external/recording/playback/link/table/dropin?sid=2008350&suid=D.4DF60C7117D5A77FE3AED546909ED2 How data Systems & reports can either fight or propagate the data analysis error epidemic, and how educator leaders can help.] {{Webarchive|url=https://web.archive.org/web/20190326201414/https://sas.elluminate.com/site/external/recording/playback/link/table/dropin?sid=2008350&suid=D.4DF60C7117D5A77FE3AED546909ED2 |date=2019-03-26 }} ''Presentation conducted from Technology Information Center for Administrative Leadership (TICAL) School Leadership Summit.''</ref>


==Practitioner notes==
This section contains rather technical explanations that may assist practitioners but are beyond the typical scope of a Wikipedia article.<ref>{{Citation|last=Brödermann|first=Eckart J.|title=Article 2.2.1 (Scope of the Section)|date=2018|url=http://dx.doi.org/10.5771/9783845276564-525|work=Commercial Law|pages=525|publisher=Nomos Verlagsgesellschaft mbH & Co. KG|doi=10.5771/9783845276564-525|isbn=978-3-8452-7656-4|access-date=2021-06-03}}</ref>


===Initial data analysis===
The most important distinction between the initial data analysis phase and the main analysis phase, is that during initial data analysis one refrains from any analysis that is aimed at answering the original research question.<ref>{{Cite journal|last=Jaech|first=J.L.|date=1960-04-21|title=Analysis of dimensional distortion data from initial 24 quality certification tubes|doi=10.2172/10170345|s2cid=110058009 |url=http://dx.doi.org/10.2172/10170345}}</ref> The initial data analysis phase is guided by the following four questions:{{sfn|Adèr|2008a|p=337}}


====Quality of data====
The quality of the data should be checked as early as possible. Data quality can be assessed in several ways, using different types of analysis: frequency counts, descriptive statistics (mean, standard deviation, median), and normality (skewness, kurtosis, frequency histograms); where observations are missing, [[imputation (statistics)|imputation]] may be needed.<ref>{{Cite journal|title=Descriptive statistics indicating the mean, standard deviation and frequency of missing values for each condition (N = number of participants), and for the dependent variables (DV)|journal=PeerJ|date = 19 December 2013|volume = 1|pages = e231|doi=10.7717/peerj.231/table-1|last1 = Kjell|first1 = Oscar N. E.|last2 = Thompson|first2 = Sam | doi-access=free }}</ref> Other initial data quality checks are:
*Analysis of [[Outlier|extreme observations]]: outlying observations in the data are analyzed to see if they seem to disturb the distribution.<ref>{{Citation|title=Practice for Dealing With Outlying Observations|url=http://dx.doi.org/10.1520/e0178-16a|publisher=ASTM International|doi=10.1520/e0178-16a|access-date=2021-06-03}}</ref>
*Comparison and correction of differences in coding schemes: variables are compared with coding schemes of variables external to the data set, and possibly corrected if coding schemes are not comparable.<ref>{{Citation|title=Alternative Coding Schemes for Dummy Variables|url=http://dx.doi.org/10.4135/9781412985628.n5|work=Regression with Dummy Variables|year=1993|pages=64–75|location=Newbury Park, CA|publisher=SAGE Publications, Inc.|doi=10.4135/9781412985628.n5|isbn=978-0-8039-5128-0|access-date=2021-06-03}}</ref>
*Test for [[common-method variance]].
The choice of analyses to assess the data quality during the initial data analysis phase depends on the analyses that will be conducted in the main analysis phase.{{sfn|Adèr|2008a|pp=338-341}}
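A minimal sketch of such checks with Python's standard library (the responses and the missing-value code are invented):

<syntaxhighlight lang="python">
from collections import Counter
from statistics import mean, median, stdev

# Hypothetical survey responses on a 1-5 scale; 999 is an assumed missing code.
responses = [1, 2, 2, 3, 3, 3, 4, 4, 5, 999]

print(Counter(responses))  # frequency counts expose the anomalous 999

observed = [r for r in responses if r != 999]  # treat 999 as missing
print(mean(observed), median(observed), stdev(observed))
</syntaxhighlight>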


====Quality of measurements====
The quality of the [[measuring instrument|measurement instruments]] should only be checked during the initial data analysis phase when this is not the focus or research question of the study.<ref>{{Cite journal|last=Danilyuk|first=P. M.|date=July 1960|title=Computing the displacement of the initial contour of gears when they are checked by means of balls|url=http://dx.doi.org/10.1007/bf00977716|journal=Measurement Techniques|volume=3|issue=7|pages=585–587|doi=10.1007/bf00977716|bibcode=1960MeasT...3..585D |s2cid=121058145|issn=0543-1972}}</ref><ref>{{Cite book|first=Isadore|last=Newman|title=Qualitative-quantitative research methodology : exploring the interactive continuum|date=1998|publisher=Southern Illinois University Press|isbn=0-585-17889-5|oclc=44962443}}</ref> One should check whether the structure of the measurement instruments corresponds to the structure reported in the literature.

There are two ways to assess measurement quality:
*Confirmatory factor analysis
*Analysis of homogeneity ([[internal consistency]]), which gives an indication of the [[Reliability (statistics)|reliability]] of a measurement instrument.<ref>{{Cite journal|last1=Terwilliger|first1=James S.|last2=Lele|first2=Kaustubh|title=Some Relationships Among Internal Consistency, Reproducibility, and Homogeneity|date=June 1979|url=http://dx.doi.org/10.1111/j.1745-3984.1979.tb00091.x|journal=Journal of Educational Measurement|volume=16|issue=2|pages=101–108|doi=10.1111/j.1745-3984.1979.tb00091.x|issn=0022-0655}}</ref> During this analysis, one inspects the variances of the items and the scales, the [[Cronbach's alpha|Cronbach's α]] of the scales, and the change in the Cronbach's alpha when an item would be deleted from a scale.{{sfn|Adèr|2008a|pp=341-342}}
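Cronbach's α can be computed from the item variances and the variance of the total scores. A minimal sketch with invented item scores:

<syntaxhighlight lang="python">
from statistics import pvariance

# Hypothetical answers of five respondents to a three-item scale.
data = [[3, 4, 3], [2, 2, 3], [4, 5, 4], [3, 3, 2], [5, 4, 4]]

k = len(data[0])  # number of items
item_var = [pvariance([row[i] for row in data]) for i in range(k)]
total_var = pvariance([sum(row) for row in data])

# Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)
alpha = k / (k - 1) * (1 - sum(item_var) / total_var)
print(round(alpha, 2))  # 0.86 for this toy data
</syntaxhighlight>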


====Initial transformations====
After assessing the quality of the data and of the measurements, one might decide to impute missing data, or to perform initial transformations of one or more variables, although this can also be done during the main analysis phase.{{sfn|Adèr|2008a|p=344}}<br />
Possible transformations of variables are:<ref>Tabachnick & Fidell, 2007, p. 87-88.</ref>
*Square root transformation (if the distribution differs moderately from normal)
*Log transformation (if the distribution differs substantially from normal)
*Inverse transformation (if the distribution differs severely from normal)
*Make categorical (ordinal/dichotomous) (if the distribution differs severely from normal, and no transformations help)
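Such transformations are one-liners in most environments; a sketch in Python with invented right-skewed data:

<syntaxhighlight lang="python">
import math

# Hypothetical right-skewed measurements (e.g., reaction times in ms).
raw = [180, 210, 230, 260, 320, 410, 650, 1200]

sqrt_t = [math.sqrt(x) for x in raw]  # for moderate departures from normality
log_t = [math.log(x) for x in raw]    # for substantial departures
inv_t = [1 / x for x in raw]          # for severe departures
print([round(v, 2) for v in log_t])
</syntaxhighlight>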


====Did the implementation of the study fulfill the intentions of the research design?====
One should check the success of the [[randomization]] procedure, for instance by checking whether background and substantive variables are equally distributed within and across groups.<ref>{{Cite journal|last=Tchakarova|first=Kalina|date=October 2020|title=2020/31 Comparing job descriptions is insufficient for checking whether work is equally valuable (BG)|url=http://dx.doi.org/10.5553/eelc/187791072020005003006|journal=European Employment Law Cases|volume=5|issue=3|pages=168–170|doi=10.5553/eelc/187791072020005003006|s2cid=229008899|issn=1877-9107}}</ref> <br />If the study did not need or use a randomization procedure, one should check the success of the non-random sampling, for instance by checking whether all subgroups of the population of interest are represented in the sample.<ref>{{Citation|title=Random sampling and randomization procedures|url=http://dx.doi.org/10.3403/30137438|publisher=BSI British Standards|doi=10.3403/30137438|access-date=2021-06-03}}</ref><br />Other possible data distortions that should be checked are:
*[[Dropout (electronics)|dropout]] (this should be identified during the initial data analysis phase)
*Item [[Response rate (survey)|non-response]] (whether this is random or not should be assessed during the initial data analysis phase)
*Treatment quality (using [[manipulation check]]s).{{sfn|Adèr|2008a|pp=344-345}}


====Characteristics of data sample====
In any report or article, the structure of the sample must be accurately described.<ref>{{Cite journal|last=Sandberg|first=Margareta|date=June 2006|title=Acupuncture Procedures Must be Accurately Described|url=http://dx.doi.org/10.1136/aim.24.2.92|journal=Acupuncture in Medicine|volume=24|issue=2|pages=92–94|doi=10.1136/aim.24.2.92|pmid=16783285|s2cid=30286074|issn=0964-5284}}</ref><ref>{{Cite book|last=Jaarsma|first=C.F.|title=Verkeer in een landelijk gebied: waarnemingen en analyse van het verkeer in zuidwest Friesland en ontwikkeling van een verkeersmodel|oclc=1016575584}}</ref> It is especially important to exactly determine the structure of the sample (and specifically the size of the subgroups) when subgroup analyses will be performed during the main analysis phase.<ref>{{Cite journal|title=Figure 4: Centroid size regression analyses for the main sample.|journal=PeerJ|date=18 January 2016|volume=4|pages=e1589|doi=10.7717/peerj.1589/fig-4|last1=Foth|first1=Christian|last2=Hedrick|first2=Brandon P.|last3=Ezcurra|first3=Martin D. |doi-access=free }}</ref><br />The characteristics of the data sample can be assessed by looking at:
*Basic statistics of important variables
*Scatter plots
*Correlations and associations
*Cross-tabulations{{sfn|Adèr|2008a|p=345}}


====Final stage of the initial data analysis====
During the final stage, the findings of the initial data analysis are documented, and necessary, preferable, and possible corrective actions are taken.<ref>{{Citation|title=The Final Years (1975-84)|date=2018-06-18|url=http://dx.doi.org/10.2307/j.ctv6cfncp.26|work=The Road Not Taken|pages=853–922|publisher=Boydell & Brewer|doi=10.2307/j.ctv6cfncp.26|isbn=978-1-57647-332-0|s2cid=242072487|access-date=2021-06-03}}</ref><br />Also, the original plan for the main data analyses can and should be specified in more detail or rewritten.<ref>{{Cite book|first=Kathryn|last=Fitzmaurice|title=Destiny, rewritten|date=17 March 2015|publisher=HarperCollins |isbn=978-0-06-162503-9|oclc=905090570}}</ref> In order to do this, several decisions about the main data analyses can and should be made:
*In the case of non-[[Normal distribution|normal]]s: should one [[Data transformation (statistics)|transform]] variables; make variables categorical (ordinal/dichotomous); adapt the analysis method?
*In the case of [[missing data]]: should one neglect or impute the missing data; which imputation technique should be used?
*In case items do not fit the scale: should one adapt the measurement instrument by omitting items, or rather ensure comparability with other (uses of the) measurement instrument(s)?
*In the case of (too) small subgroups: should one drop the hypothesis about inter-group differences, or use small sample techniques, like exact tests or [[bootstrapping (statistics)|bootstrapping]]?
*In case the [[randomization]] procedure seems to be defective: can and should one calculate [[Propensity score matching|propensity scores]] and include them as covariates in the main analyses?{{sfn|Adèr|2008a|pp=345-346}}


====Analysis====
Several analyses can be used during the initial data analysis phase:{{sfn|Adèr|2008a|pp=346-347}}
*Univariate statistics (single variable)
*Bivariate associations (correlations)
*Graphical techniques (scatter plots)


It is important to take the measurement levels of the variables into account for the analyses, as special statistical techniques are available for each level:{{sfn|Adèr|2008a|pp=349-353}}

*Nominal and ordinal variables


====Nonlinear analysis====
Nonlinear analysis is often necessary when the data is recorded from a [[nonlinear system]]. Nonlinear systems can exhibit complex dynamic effects including [[bifurcation theory|bifurcations]], [[chaos theory|chaos]], [[harmonics]] and [[subharmonics]] that cannot be analyzed using simple linear methods. Nonlinear data analysis is closely related to [[nonlinear system identification]].<ref name="SAB1">Billings S.A. "Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains". Wiley, 2013</ref>


===Main data analysis===
In the main analysis phase, analyses aimed at answering the research question are performed as well as any other relevant analysis needed to write the first draft of the research report.{{sfn|Adèr|2008b|p=363}}


====Exploratory and confirmatory approaches====
In the main analysis phase, either an exploratory or confirmatory approach can be adopted. Usually the approach is decided before data is collected.<ref>{{Citation|title=Exploratory Data Analysis|date=2017-10-13|url=http://dx.doi.org/10.1002/9781119126805.ch4|work=Python® for R Users|pages=119–138|place=Hoboken, NJ, USA|publisher=John Wiley & Sons, Inc.|doi=10.1002/9781119126805.ch4|hdl=11380/971504|isbn=978-1-119-12680-5|access-date=2021-06-03}}</ref> In an exploratory analysis no clear hypothesis is stated before analysing the data, and the data is searched for models that describe the data well.<ref>{{Citation|title=Engaging in Exploratory Data Analysis, Visualization, and Hypothesis Testing – Exploratory Data Analysis, Geovisualization, and Data|date=2015-07-28|url=http://dx.doi.org/10.1201/b18808-8|work=Spatial Analysis|pages=106–139|publisher=CRC Press|doi=10.1201/b18808-8|isbn=978-0-429-06936-9|s2cid=133412598 |access-date=2021-06-03}}</ref> In a confirmatory analysis clear hypotheses about the data are tested.<ref>{{Citation|title=Hypotheses About Categories|url=http://dx.doi.org/10.4135/9781446287873.n14|work=Starting Statistics: A Short, Clear Guide|year=2010|pages=138–151|location=London|publisher=SAGE Publications Ltd|doi=10.4135/9781446287873.n14|isbn=978-1-84920-098-1|access-date=2021-06-03}}</ref>


[[Exploratory data analysis]] should be interpreted carefully. When testing multiple models at once there is a high chance of finding at least one of them to be significant, but this can be due to a [[type 1 error]].<ref>{{Cite journal|last1=Sordo|first1=Rachele Del|last2=Sidoni|first2=Angelo|date=December 2008|title=MIB-1 Cell Membrane Reactivity: A Finding That Should be Interpreted Carefully|url=http://dx.doi.org/10.1097/pai.0b013e31817af2cf|journal=Applied Immunohistochemistry & Molecular Morphology|volume=16|issue=6|pages=568|doi=10.1097/pai.0b013e31817af2cf|pmid=18800001|issn=1541-2016}}</ref> It is important to always adjust the significance level when testing multiple models with, for example, a [[Bonferroni correction]].<ref>{{Cite journal|last1=Liquet|first1=Benoit|last2=Riou|first2=Jérémie|date=2013-06-08|title=Correction of the significance level when attempting multiple transformations of an explanatory variable in generalized linear models|journal=BMC Medical Research Methodology|volume=13|issue=1|page=75|doi=10.1186/1471-2288-13-75|pmid=23758852|pmc=3699399|issn=1471-2288 |doi-access=free }}</ref> Also, one should not follow up an exploratory analysis with a confirmatory analysis in the same dataset.<ref name="Mcardle 2008">{{Cite journal|last=Mcardle|first=John J.|date=2008|title=Some ethical issues in confirmatory versus exploratory analysis|url=http://dx.doi.org/10.1037/e503312008-001|access-date=2021-06-03|website=PsycEXTRA Dataset|doi=10.1037/e503312008-001}}</ref> An exploratory analysis is used to find ideas for a theory, but not to test that theory as well.<ref name="Mcardle 2008"/> When a model is found through exploratory analysis in a dataset, following up that analysis with a confirmatory analysis in the same dataset could simply mean that the results of the confirmatory analysis are due to the same [[type 1 error]] that resulted in the exploratory model in the first place.<ref name="Mcardle 2008"/> The confirmatory analysis therefore will not be more informative than the original exploratory analysis.{{sfn|Adèr|2008b|pp=361-362}}
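As a worked illustration of the Bonferroni correction: with m tests and overall level α, each individual test is evaluated at α/m (the p-values below are invented):

<syntaxhighlight lang="python">
# Bonferroni correction for multiple hypothesis tests.
alpha = 0.05
p_values = [0.004, 0.020, 0.041, 0.300]  # hypothetical results of 4 tests

threshold = alpha / len(p_values)  # 0.0125
significant = [p for p in p_values if p < threshold]
print(significant)  # [0.004]: 0.020 and 0.041 no longer count as significant
</syntaxhighlight>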


====Stability of results====
====Stability of results====
It is important to obtain some indication about how generalizable the results are.<ref>Adèr, 2008, p. 368-371.</ref> While this is hard to check, one can look at the stability of the results. Are the results reliable and reproducible? There are two main ways of doing this:
It is important to obtain some indication about how generalizable the results are.{{sfn|Adèr|2008b|pp=361-371}} While this is often difficult to check, one can look at the stability of the results. Are the results reliable and reproducible? There are two main ways of doing that.<ref>{{Citation|title=3 The Facelift: A Guide for Safe, Reliable, and Reproducible Results|date=2009|url=http://dx.doi.org/10.1055/b-0034-73436|work=Surgical Facial Rejuvenation|editor-last=Truswell IV|editor-first=William H.|place=Stuttgart|publisher=Georg Thieme Verlag|doi=10.1055/b-0034-73436|isbn=978-1-58890-491-1|access-date=2021-06-03}}</ref>
* ''[[Cross-validation (statistics)|Cross-validation]]''. By splitting the data into multiple parts, we can check if an analysis (like a fitted model) based on one part of the data generalizes to another part of the data as well.<ref>{{cite journal |last1=Benson |first1=Noah C |last2=Winawer |first2=Jonathan |date=December 2018 |title=Bayesian analysis of retinotopic maps |journal=eLife |volume=7 |doi=10.7554/elife.40224 |doi-access=free |pmid=30520736 |pmc=6340702}} Supplementary file 1. Cross-validation schema. {{doi|10.7554/elife.40224.014}}</ref> Cross-validation is generally inappropriate, though, if there are correlations within the data, e.g. with [[panel data]].<ref>{{Citation|last=Hsiao|first=Cheng|title=Cross-Sectionally Dependent Panel Data|url=http://dx.doi.org/10.1017/cbo9781139839327.012|work=Analysis of Panel Data|year=2014|pages=327–368|place=Cambridge|publisher=Cambridge University Press|doi=10.1017/cbo9781139839327.012|isbn=978-1-139-83932-7|access-date=2021-06-03}}</ref> Hence other methods of validation sometimes need to be used. For more on this topic, see [[statistical model validation]].<ref>{{Citation|last=Hjorth|first=J.S. Urban|title=Cross validation|date=2017-10-19|url=http://dx.doi.org/10.1201/9781315140056-3|work=Computer Intensive Statistical Methods|pages=24–56|publisher=Chapman and Hall/CRC|doi=10.1201/9781315140056-3|isbn=978-1-315-14005-6|access-date=2021-06-03}}</ref>
* ''[[Sensitivity analysis]]''. A procedure to study the behavior of a system or model when global parameters are (systematically) varied. One way to do that is via [[Bootstrapping (statistics)|bootstrapping]].<ref>{{Cite journal|last1=Sheikholeslami|first1=Razi|last2=Razavi|first2=Saman|last3=Haghnegahdar|first3=Amin|date=2019-10-10|title=What should we do when a model crashes? Recommendations for global sensitivity analysis of Earth and environmental systems models|journal=Geoscientific Model Development|volume=12|issue=10|pages=4275–4296|doi=10.5194/gmd-12-4275-2019|bibcode=2019GMD....12.4275S|s2cid=204900339|issn=1991-9603 |doi-access=free }}</ref>
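The following minimal sketch illustrates both checks on synthetic data (the linear model and all names are invented for the example): a line is fitted to a random half of the data and its error is measured on the held-out half, and bootstrapping shows how much the fitted slope moves when the sample is resampled with replacement:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# Synthetic dataset: y depends linearly on x, plus noise.
x = rng.uniform(0, 10, size=200)
y = 2.0 * x + 1.0 + rng.normal(scale=2.0, size=200)

# Cross-validation: fit on a random half, measure error on the other half.
idx = rng.permutation(len(x))
train, test = idx[:100], idx[100:]
slope, intercept = np.polyfit(x[train], y[train], deg=1)
pred = slope * x[test] + intercept
rmse = np.sqrt(np.mean((y[test] - pred) ** 2))
print(f"held-out RMSE: {rmse:.2f}")  # close to the noise level if the fit generalizes

# Sensitivity via bootstrapping: how stable is the fitted slope when
# the sample itself is perturbed by resampling with replacement?
slopes = []
for _ in range(1000):
    b = rng.integers(0, len(x), size=len(x))
    s, _ = np.polyfit(x[b], y[b], deg=1)
    slopes.append(s)
lo, hi = np.percentile(slopes, [2.5, 97.5])
print(f"slope 95% bootstrap interval: [{lo:.2f}, {hi:.2f}]")
</syntaxhighlight>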


==Free software for data analysis==
<!--Free software in this list should be "notable" with a sourced Wikipedia article (see WP:GNG, WP:WTAF).-->
Notable free software for data analysis include:
* [[DevInfo]] – A database system endorsed by the [[United Nations Development Group]] for monitoring and analyzing human development.<ref>{{Cite book|chapter=Human development composite indices|title= Human Development Indices and Indicators 2018|pages=21–41|doi=10.18356/ce6f8e92-en|s2cid=240207510|author=United Nations Development Programme|date= 2018|publisher=United Nations }}</ref>
* [[ELKI]] – Data mining framework in Java with data mining oriented visualization functions.
* [[Julia (programming language)|Julia]] – A programming language well-suited for numerical analysis and computational science.
* [[KNIME]] – The Konstanz Information Miner, a user-friendly and comprehensive data analytics framework.
* [[Orange (software)|Orange]] – A visual programming tool featuring [[interactive data visualization]] and methods for statistical data analysis, [[data mining]], and [[machine learning]].
* [[Pandas (software)|Pandas]] – Python library for data analysis.
* [[Physics Analysis Workstation|PAW]] – FORTRAN/C data analysis framework developed at [[CERN]].
* [[R (programming language)|R]] – A programming language and software environment for statistical computing and graphics.<ref>{{Citation|last1=Wiley|first1=Matt|title=Multivariate Data Visualization|date=2019|url=http://dx.doi.org/10.1007/978-1-4842-2872-2_2|work=Advanced R Statistical Programming and Data Models|pages=33–59|place=Berkeley, CA|publisher=Apress|isbn=978-1-4842-2871-5|access-date=2021-06-03|last2=Wiley|first2=Joshua F.|doi=10.1007/978-1-4842-2872-2_2|s2cid=86629516}}</ref>
* [[ROOT]] – C++ data analysis framework developed at [[CERN]].
* [[SciPy]] – Python library for scientific computing.

== Reproducible analysis ==
The typical data analysis workflow involves collecting data, running analyses through various scripts, creating visualizations, and writing reports. However, this workflow presents challenges, including a separation between analysis scripts and data, as well as a gap between analysis and documentation. Often, the correct order of running scripts is only described informally or resides in the data scientist's memory. The potential for losing this information creates issues for reproducibility. To address these challenges, it is essential to have analysis scripts written for automated, reproducible workflows. Additionally, dynamic documentation is crucial, providing reports that are understandable by both machines and humans, ensuring accurate representation of the analysis workflow even as scripts evolve.<ref>{{Cite book |last=Mailund |first=Thomas |title=Beginning Data Science in R 4: Data Analysis, Visualization, and Modelling for the Data Scientist |year=2022 |isbn=978-148428155-0 |edition=2nd}}</ref>
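As a minimal sketch of such a workflow (the file names, column name, and summary statistics are hypothetical), a single driver script can fix the order of the steps, pin the random seed, and regenerate a report that is readable by both machines and humans and records exactly which data and software versions produced it:

<syntaxhighlight lang="python">
# run_analysis.py - a single entry point that makes the execution order
# of the analysis explicit instead of leaving it to memory or convention.
import hashlib
import json
import platform

import pandas as pd

SEED = 42  # fixed seed so stochastic steps are repeatable


def load_data(path: str) -> pd.DataFrame:
    # Record a checksum of the raw input so the report states exactly
    # which version of the data was analyzed.
    with open(path, "rb") as f:
        checksum = hashlib.sha256(f.read()).hexdigest()
    df = pd.read_csv(path)
    df.attrs["sha256"] = checksum
    return df


def analyze(df: pd.DataFrame) -> dict:
    # "value" is a hypothetical column in the example data.
    sample = df.sample(n=min(100, len(df)), random_state=SEED)
    return {"n_rows": len(df), "mean_value": float(sample["value"].mean())}


def write_report(df: pd.DataFrame, results: dict) -> None:
    # Dynamic documentation: the report is regenerated from the data on
    # every run, so it cannot drift out of sync with the scripts.
    report = {
        "input_sha256": df.attrs["sha256"],
        "python": platform.python_version(),
        "pandas": pd.__version__,
        "results": results,
    }
    with open("report.json", "w") as f:
        json.dump(report, f, indent=2)


if __name__ == "__main__":
    data = load_data("measurements.csv")  # hypothetical input file
    write_report(data, analyze(data))
</syntaxhighlight>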

==International data analysis contests==
Different companies or organizations hold data analysis contests to encourage researchers to utilize their data or to solve a particular question using data analysis.<ref>{{Citation|last1=Orduna-Malea|first1=Enrique|title=A cybermetric analysis model to measure private companies|date=2018|url=http://dx.doi.org/10.1016/b978-0-08-101877-4.00003-x|work=Cybermetric Techniques to Evaluate Organizations Using Web-Based Data|pages=63–76|publisher=Elsevier|isbn=978-0-08-101877-4|access-date=2021-06-03|last2=Alonso-Arroyo|first2=Adolfo|doi=10.1016/b978-0-08-101877-4.00003-x}}</ref><ref>{{Cite book|last=Leen|first=A.R.|title=The consumer in Austrian economics and the Austrian perspective on consumer policy|publisher=Wageningen Universiteit |isbn=90-5808-102-8|oclc=1016689036}}</ref> A few examples of well-known international data analysis contests are as follows:<ref>{{Citation|chapter=Examples of Survival Data Analysis|date=2003-06-30|chapter-url=http://dx.doi.org/10.1002/0471458546.ch3|series=Wiley Series in Probability and Statistics|pages=19–63|place=Hoboken, NJ, USA|publisher=John Wiley & Sons, Inc.|doi=10.1002/0471458546.ch3|isbn=978-0-471-45854-8|access-date=2021-06-03|title=Statistical Methods for Survival Data Analysis}}</ref>
* Kaggle competition, which is held by [[Kaggle]].<ref>{{cite news|title=The machine learning community takes on the Higgs|url=http://www.symmetrymagazine.org/article/july-2014/the-machine-learning-community-takes-on-the-higgs/|access-date=14 January 2015|newspaper=Symmetry Magazine|date=July 15, 2014|archive-date=16 April 2021|archive-url=https://web.archive.org/web/20210416100455/https://www.symmetrymagazine.org/article/july-2014/the-machine-learning-community-takes-on-the-higgs|url-status=live}}</ref>
* [[LTPP International Data Analysis Contest|LTPP data analysis contest]] held by [[FHWA]] and [[ASCE]].<ref name="Nehme 2016-09-29">{{cite web |first = Jean |last = Nehme |date = September 29, 2016 |url = https://www.fhwa.dot.gov/research/tfhrc/programs/infrastructure/pavements/ltpp/2016_2017_asce_ltpp_contest_guidelines.cfm |title = LTPP International Data Analysis Contest |publisher = Federal Highway Administration |access-date = October 22, 2017 |archive-date = October 21, 2017 |archive-url = https://web.archive.org/web/20171021010012/https://www.fhwa.dot.gov/research/tfhrc/programs/infrastructure/pavements/ltpp/2016_2017_asce_ltpp_contest_guidelines.cfm |url-status = live }}</ref><ref>{{cite web |date = May 26, 2016 |url = https://www.fhwa.dot.gov/research/tfhrc/programs/infrastructure/pavements/ltpp/ |title = Data.Gov:Long-Term Pavement Performance (LTPP) |access-date = November 10, 2017 |archive-date = November 1, 2017 |archive-url = https://web.archive.org/web/20171101191727/https://www.fhwa.dot.gov/research/tfhrc/programs/infrastructure/pavements/ltpp/ |url-status = live }}</ref>


==See also==
{{Portal|statistics}}
{{Div col|colwidth=20em}}
*[[Actuarial science]]
*[[Analytics]]
*[[Augmented Analytics]]
*[[Big data]]
*[[Business intelligence]]
*[[Censoring (statistics)]]
*[[Computational biology]]
*[[Computational physics]]
*[[Computational science]]
*[[Cross-industry standard process for data mining]]
*[[Data acquisition]]
*[[Data blending]]
*[[Data governance]]
*[[Data mining]]
*[[Data presentation architecture]]
*[[Data science]]
*[[Digital signal processing]]
*[[Dimensionality reduction]]
*[[Early case assessment]]
*[[Exploratory data analysis]]
*[[Multiway data analysis]]
*[[Nearest neighbor search]]
*[[Nonlinear system identification]]
*[[Predictive analytics]]
*[[Principal component analysis]]
*[[Qualitative research]]
*[[Scientific computing]]
*[[Structured data analysis (statistics)]]
*[[System identification]]
*[[Test method]]
*[[Text mining]]
*[[Unstructured data]]
*[[Wavelet]]
*[[List of big data companies]]
*[[List of datasets for machine-learning research]]
{{Div col end}}


==References==

===Citations===
{{Reflist}}


===Bibliography===
*{{cite book |author-link1=Herman J. Adèr |first1=Herman J. |last1=Adèr |editor-first1=Herman J. |editor-last1=Adèr |editor-link2=Gideon J. Mellenbergh |editor-first2=Gideon J. |editor-last2=Mellenbergh |editor-link3=David Hand (statistician) |editor-first3=David J |editor-last3=Hand |title=Advising on research methods : a consultant's companion |publisher=Johannes van Kessel Pub |location=Huizen, Netherlands |year=2008a |isbn=9789079418015 |oclc=905799857 |chapter=Chapter 14: Phases and initial steps in data analysis |pages=333–356 }}
*{{cite book |author-link1=Herman J. Adèr |first1=Herman J. |last1=Adèr |editor-first1=Herman J. |editor-last1=Adèr |editor-link2=Gideon J. Mellenbergh |editor-first2=Gideon J. |editor-last2=Mellenbergh |editor-link3=David Hand (statistician) |editor-first3=David J |editor-last3=Hand |title=Advising on research methods : a consultant's companion |publisher=Johannes van Kessel Pub |location=Huizen, Netherlands |year=2008b |isbn=9789079418015 |oclc=905799857 |chapter=Chapter 15: The main analysis phase |pages=357–386 }}
*Tabachnick, B.G. & Fidell, L.S. (2007). Chapter 4: Cleaning up your act. Screening data prior to analysis. In B.G. Tabachnick & L.S. Fidell (Eds.), Using Multivariate Statistics, Fifth Edition (pp.&nbsp;60–116). Boston: Pearson Education, Inc. / Allyn and Bacon.


==Further reading==
{{wikiversity}}
* [[Adèr, H.J.]] & [[Gideon J. Mellenbergh|Mellenbergh, G.J.]] (with contributions by D.J. Hand) (2008). ''Advising on Research Methods: A Consultant's Companion''. Huizen, the Netherlands: Johannes van Kessel Publishing. {{ISBN|978-90-79418-01-5}}
* Chambers, John M.; Cleveland, William S.; Kleiner, Beat; Tukey, Paul A. (1983). ''Graphical Methods for Data Analysis'', Wadsworth/Duxbury Press. {{ISBN|0-534-98052-X}}
* [[ASTM International]] (2002). ''Manual on Presentation of Data and Control Chart Analysis'', MNL 7A. {{ISBN|0-8031-2093-1}}
* Chekanov, S. (2016). ''Numeric Computation and Statistical Data Analysis on the Java Platform''. Springer. {{ISBN|978-3-319-28531-3}}
* Fandango, Armando (2017). ''Python Data Analysis, 2nd Edition''. Packt Publishers. {{ISBN|978-1787127487}}
* Juran, Joseph M.; Godfrey, A. Blanton (1999). ''Juran's Quality Handbook, 5th Edition''. New York: McGraw Hill. {{ISBN|0-07-034003-X}}
* Lewis-Beck, Michael S. (1995). ''Data Analysis: An Introduction''. Sage Publications Inc. {{ISBN|0-8039-5772-6}}
* NIST/SEMATECH (2008). [http://www.itl.nist.gov/div898/handbook/ ''Handbook of Statistical Methods'']
* Pyzdek, T. (2003). ''Quality Engineering Handbook''. {{ISBN|0-8247-4614-7}}
* [[Richard Veryard]] (1984). ''Pragmatic Data Analysis''. Oxford: Blackwell Scientific Publications. {{ISBN|0-632-01311-7}}
* Tabachnick, B.G.; Fidell, L.S. (2007). ''Using Multivariate Statistics, 5th Edition''. Boston: Pearson Education, Inc. / Allyn and Bacon. {{ISBN|978-0-205-45938-4}}
* {{cite news|url=http://www.businessweek.com/magazine/data-analytics-crunching-the-future-09082011.html|title=Data Analytics: Crunching the Future|last=Vance|date=September 8, 2011|publisher=Bloomberg Businessweek|access-date=26 September 2011}}
* Hair, Joseph (2008). ''Marketing Research, 4th ed.'' McGraw Hill. [http://answers.mheducation.com/marketing/marketing-research/data-analysis-testing-association ''Data Analysis: Testing for Association''] {{ISBN|0-07-340470-5}}


{{data}}
{{Authority control}}


[[Category:Data analysis| ]]
[[Category:Data processing]]
[[Category:Scientific method]]
[[Category:Computational fields of study]]
[[Category:Big data]]
[[Category:Data management]]

Latest revision as of 06:42, 11 December 2024

Data analysis is the process of inspecting, cleansing, transforming, and modeling data with the goal of discovering useful information, informing conclusions, and supporting decision-making.[1] Data analysis has multiple facets and approaches, encompassing diverse techniques under a variety of names, and is used in different business, science, and social science domains.[2] In today's business world, data analysis plays a role in making decisions more scientific and helping businesses operate more effectively.[3]

Data mining is a particular data analysis technique that focuses on statistical modeling and knowledge discovery for predictive rather than purely descriptive purposes, while business intelligence covers data analysis that relies heavily on aggregation, focusing mainly on business information.[4] In statistical applications, data analysis can be divided into descriptive statistics, exploratory data analysis (EDA), and confirmatory data analysis (CDA).[5] EDA focuses on discovering new features in the data while CDA focuses on confirming or falsifying existing hypotheses.[6][7] Predictive analytics focuses on the application of statistical models for predictive forecasting or classification, while text analytics applies statistical, linguistic, and structural techniques to extract and classify information from textual sources, a species of unstructured data. All of the above are varieties of data analysis.[8]

Data integration is a precursor to data analysis, and data analysis is closely linked to data visualization and data dissemination.[9]

Data analysis process

[edit]
Data science process flowchart from Doing Data Science, by Schutt & O'Neil (2013)

Analysis refers to dividing a whole into its separate components for individual examination.[10] Data analysis is a process for obtaining raw data, and subsequently converting it into information useful for decision-making by users.[1] Data is collected and analyzed to answer questions, test hypotheses, or disprove theories.[11]

Statistician John Tukey, defined data analysis in 1961, as:

"Procedures for analyzing data, techniques for interpreting the results of such procedures, ways of planning the gathering of data to make its analysis easier, more precise or more accurate, and all the machinery and results of (mathematical) statistics which apply to analyzing data."[12]

There are several phases that can be distinguished, described below. The phases are iterative, in that feedback from later phases may result in additional work in earlier phases.[13] The CRISP framework, used in data mining, has similar steps.

Data requirements

[edit]

The data is necessary as inputs to the analysis, which is specified based upon the requirements of those directing the analytics (or customers, who will use the finished product of the analysis).[14][15] The general type of entity upon which the data will be collected is referred to as an experimental unit (e.g., a person or population of people). Specific variables regarding a population (e.g., age and income) may be specified and obtained. Data may be numerical or categorical (i.e., a text label for numbers).[13]

Data collection

[edit]

Data is collected from a variety of sources.[16][17] A list of data sources are available for study & research. The requirements may be communicated by analysts to custodians of the data; such as, Information Technology personnel within an organization.[18] Data collection or data gathering is the process of gathering and measuring information on targeted variables in an established system, which then enables one to answer relevant questions and evaluate outcomes. The data may also be collected from sensors in the environment, including traffic cameras, satellites, recording devices, etc. It may also be obtained through interviews, downloads from online sources, or reading documentation.[13]

Data processing

[edit]
The phases of the intelligence cycle used to convert raw information into actionable intelligence or knowledge are conceptually similar to the phases in data analysis.

Data, when initially obtained, must be processed or organized for analysis.[19][20] For instance, these may involve placing data into rows and columns in a table format (known as structured data) for further analysis, often through the use of spreadsheet(excel) or statistical software.[13]

Data cleaning

[edit]

Once processed and organized, the data may be incomplete, contain duplicates, or contain errors.[21][22] The need for data cleaning will arise from problems in the way that the datum are entered and stored.[21] Data cleaning is the process of preventing and correcting these errors. Common tasks include record matching, identifying inaccuracy of data, overall quality of existing data, deduplication, and column segmentation.[23] Such data problems can also be identified through a variety of analytical techniques. For example; with financial information, the totals for particular variables may be compared against separately published numbers that are believed to be reliable.[24][25] Unusual amounts, above or below predetermined thresholds, may also be reviewed. There are several types of data cleaning, that are dependent upon the type of data in the set; this could be phone numbers, email addresses, employers, or other values.[26][27] Quantitative data methods for outlier detection, can be used to get rid of data that appears to have a higher likelihood of being input incorrectly.[28] Textual data spell checkers can be used to lessen the amount of mistyped words. However, it is harder to tell if the words themselves are correct.[29]

Exploratory data analysis

[edit]

Once the datasets are cleaned, they can then be analyzed. Analysts may apply a variety of techniques, referred to as exploratory data analysis, to begin understanding the messages contained within the obtained data.[30] The process of data exploration may result in additional data cleaning or additional requests for data; thus, the initialization of the iterative phases mentioned in the lead paragraph of this section.[31] Descriptive statistics, such as, the average or median, can be generated to aid in understanding the data.[32][33] Data visualization is also a technique used, in which the analyst is able to examine the data in a graphical format in order to obtain additional insights, regarding the messages within the data.[13]

Modeling and algorithms

[edit]

Mathematical formulas or models (also known as algorithms), may be applied to the data in order to identify relationships among the variables; for example, using correlation or causation.[34][35] In general terms, models may be developed to evaluate a specific variable based on other variable(s) contained within the dataset, with some residual error depending on the implemented model's accuracy (e.g., Data = Model + Error).[36][11]

Inferential statistics includes utilizing techniques that measure the relationships between particular variables.[37] For example, regression analysis may be used to model whether a change in advertising (independent variable X), provides an explanation for the variation in sales (dependent variable Y).[38] In mathematical terms, Y (sales) is a function of X (advertising).[39] It may be described as (Y = aX + b + error), where the model is designed such that (a) and (b) minimize the error when the model predicts Y for a given range of values of X.[40] Analysts may also attempt to build models that are descriptive of the data, in an aim to simplify analysis and communicate results.[11]

Data product

[edit]

A data product is a computer application that takes data inputs and generates outputs, feeding them back into the environment.[41] It may be based on a model or algorithm. For instance, an application that analyzes data about customer purchase history, and uses the results to recommend other purchases the customer might enjoy.[42][13]

Communication

[edit]
Data visualization is used to help understand the results after data is analyzed.[43]

Once data is analyzed, it may be reported in many formats to the users of the analysis to support their requirements.[44] The users may have feedback, which results in additional analysis. As such, much of the analytical cycle is iterative.[13]

When determining how to communicate the results, the analyst may consider implementing a variety of data visualization techniques to help communicate the message more clearly and efficiently to the audience.[45] Data visualization uses information displays (graphics such as, tables and charts) to help communicate key messages contained in the data.[46] Tables are a valuable tool by enabling the ability of a user to query and focus on specific numbers; while charts (e.g., bar charts or line charts), may help explain the quantitative messages contained in the data.[47]

Quantitative messages

[edit]
A time series illustrated with a line chart demonstrating trends in U.S. federal spending and revenue over time.
A scatterplot illustrating the correlation between two variables (inflation and unemployment) measured at points in time.

Stephen Few described eight types of quantitative messages that users may attempt to understand or communicate from a set of data and the associated graphs used to help communicate the message.[48] Customers specifying requirements and analysts performing the data analysis may consider these messages during the course of the process.[49]

  1. Time-series: A single variable is captured over a period of time, such as the unemployment rate over a 10-year period. A line chart may be used to demonstrate the trend.[50]
  2. Ranking: Categorical subdivisions are ranked in ascending or descending order, such as a ranking of sales performance (the measure) by salespersons (the category, with each salesperson a categorical subdivision) during a single period.[51] A bar chart may be used to show the comparison across the salespersons.[52]
  3. Part-to-whole: Categorical subdivisions are measured as a ratio to the whole (i.e., a percentage out of 100%). A pie chart or bar chart can show the comparison of ratios, such as the market share represented by competitors in a market.[53]
  4. Deviation: Categorical subdivisions are compared against a reference, such as a comparison of actual vs. budget expenses for several departments of a business for a given time period. A bar chart can show the comparison of the actual versus the reference amount.[54]
  5. Frequency distribution: Shows the number of observations of a particular variable for a given interval, such as the number of years in which the stock market return is between intervals such as 0–10%, 11–20%, etc. A histogram, a type of bar chart, may be used for this analysis.[55]
  6. Correlation: Comparison between observations represented by two variables (X,Y) to determine if they tend to move in the same or opposite directions. For example, plotting unemployment (X) and inflation (Y) for a sample of months. A scatter plot is typically used for this message.[56]
  7. Nominal comparison: Comparing categorical subdivisions in no particular order, such as the sales volume by product code. A bar chart may be used for this comparison.[57]
  8. Geographic or geospatial: Comparison of a variable across a map or layout, such as the unemployment rate by state or the number of persons on the various floors of a building. A cartogram is a typical graphic used.[58][59]

Analyzing quantitative data

[edit]

Author Jonathan Koomey has recommended a series of best practices for understanding quantitative data.[60] These include:

  • Check raw data for anomalies prior to performing an analysis;
  • Re-perform important calculations, such as verifying columns of data that are formula driven;
  • Confirm main totals are the sum of subtotals;
  • Check relationships between numbers that should be related in a predictable way, such as ratios over time;
  • Normalize numbers to make comparisons easier, such as analyzing amounts per person or relative to GDP or as an index value relative to a base year;
  • Break problems into component parts by analyzing factors that led to the results, such as DuPont analysis of return on equity.[25]

For the variables under examination, analysts typically obtain descriptive statistics for them, such as the mean (average), median, and standard deviation.[61] They may also analyze the distribution of the key variables to see how the individual values cluster around the mean.[62]

An illustration of the MECE principle used for data analysis.

The consultants at McKinsey and Company named a technique for breaking a quantitative problem down into its component parts called the MECE principle.[63] Each layer can be broken down into its components; each of the sub-components must be mutually exclusive of each other and collectively add up to the layer above them.[64] The relationship is referred to as "Mutually Exclusive and Collectively Exhaustive" or MECE. For example, profit by definition can be broken down into total revenue and total cost.[65] In turn, total revenue can be analyzed by its components, such as the revenue of divisions A, B, and C (which are mutually exclusive of each other) and should add to the total revenue (collectively exhaustive).[66]

Analysts may use robust statistical measurements to solve certain analytical problems.[67] Hypothesis testing is used when a particular hypothesis about the true state of affairs is made by the analyst and data is gathered to determine whether that state of affairs is true or false.[68][69] For example, the hypothesis might be that "Unemployment has no effect on inflation", which relates to an economics concept called the Phillips Curve.[70] Hypothesis testing involves considering the likelihood of Type I and type II errors, which relate to whether the data supports accepting or rejecting the hypothesis.[71][72]

Regression analysis may be used when the analyst is trying to determine the extent to which independent variable X affects dependent variable Y (e.g., "To what extent do changes in the unemployment rate (X) affect the inflation rate (Y)?").[73] This is an attempt to model or fit an equation line or curve to the data, such that Y is a function of X.[74][75]

Necessary condition analysis (NCA) may be used when the analyst is trying to determine the extent to which independent variable X allows variable Y (e.g., "To what extent is a certain unemployment rate (X) necessary for a certain inflation rate (Y)?").[73] Whereas (multiple) regression analysis uses additive logic where each X-variable can produce the outcome and the X's can compensate for each other (they are sufficient but not necessary),[76] necessary condition analysis (NCA) uses necessity logic, where one or more X-variables allow the outcome to exist, but may not produce it (they are necessary but not sufficient). Each single necessary condition must be present and compensation is not possible.[77]

Analytical activities of data users

[edit]
Analytic activities of data visualization users

Users may have particular data points of interest within a data set, as opposed to the general messaging outlined above. Such low-level user analytic activities are presented in the following table. The taxonomy can also be organized by three poles of activities: retrieving values, finding data points, and arranging data points.[78][79][80][81]

# Task General
Description
Pro Forma
Abstract
Examples
1 Retrieve Value Given a set of specific cases, find attributes of those cases. What are the values of attributes {X, Y, Z, ...} in the data cases {A, B, C, ...}? - What is the mileage per gallon of the Ford Mondeo?

- How long is the movie Gone with the Wind?

2 Filter Given some concrete conditions on attribute values, find data cases satisfying those conditions. Which data cases satisfy conditions {A, B, C...}? - What Kellogg's cereals have high fiber?

- What comedies have won awards?

- Which funds underperformed the SP-500?

3 Compute Derived Value Given a set of data cases, compute an aggregate numeric representation of those data cases. What is the value of aggregation function F over a given set S of data cases? - What is the average calorie content of Post cereals?

- What is the gross income of all stores combined?

- How many manufacturers of cars are there?

4 Find Extremum Find data cases possessing an extreme value of an attribute over its range within the data set. What are the top/bottom N data cases with respect to attribute A? - What is the car with the highest MPG?

- What director/film has won the most awards?

- What Marvel Studios film has the most recent release date?

5 Sort Given a set of data cases, rank them according to some ordinal metric. What is the sorted order of a set S of data cases according to their value of attribute A? - Order the cars by weight.

- Rank the cereals by calories.

6 Determine Range Given a set of data cases and an attribute of interest, find the span of values within the set. What is the range of values of attribute A in a set S of data cases? - What is the range of film lengths?

- What is the range of car horsepowers?

- What actresses are in the data set?

7 Characterize Distribution Given a set of data cases and a quantitative attribute of interest, characterize the distribution of that attribute's values over the set. What is the distribution of values of attribute A in a set S of data cases? - What is the distribution of carbohydrates in cereals?

- What is the age distribution of shoppers?

8 Find Anomalies Identify any anomalies within a given set of data cases with respect to a given relationship or expectation, e.g. statistical outliers. Which data cases in a set S of data cases have unexpected/exceptional values? - Are there exceptions to the relationship between horsepower and acceleration?

- Are there any outliers in protein?

9 Cluster Given a set of data cases, find clusters of similar attribute values. Which data cases in a set S of data cases are similar in value for attributes {X, Y, Z, ...}? - Are there groups of cereals w/ similar fat/calories/sugar?

- Is there a cluster of typical film lengths?

10 Correlate Given a set of data cases and two attributes, determine useful relationships between the values of those attributes. What is the correlation between attributes X and Y over a given set S of data cases? - Is there a correlation between carbohydrates and fat?

- Is there a correlation between country of origin and MPG?

- Do different genders have a preferred payment method?

- Is there a trend of increasing film length over the years?

11 Contextualization[81] Given a set of data cases, find contextual relevancy of the data to the users. Which data cases in a set S of data cases are relevant to the current users' context? - Are there groups of restaurants that have foods based on my current caloric intake?

Barriers to effective analysis

[edit]

Barriers to effective analysis may exist among the analysts performing the data analysis or among the audience. Distinguishing fact from opinion, cognitive biases, and innumeracy are all challenges to sound data analysis.[82]

Confusing fact and opinion

[edit]

You are entitled to your own opinion, but you are not entitled to your own facts.

Effective analysis requires obtaining relevant facts to answer questions, support a conclusion or formal opinion, or test hypotheses.[83][84] Facts by definition are irrefutable, meaning that any person involved in the analysis should be able to agree upon them.[85] For example, in August 2010, the Congressional Budget Office (CBO) estimated that extending the Bush tax cuts of 2001 and 2003 for the 2011–2020 time period would add approximately $3.3 trillion to the national debt.[86] Everyone should be able to agree that indeed this is what CBO reported; they can all examine the report. This makes it a fact. Whether persons agree or disagree with the CBO is their own opinion.[87]

As another example, the auditor of a public company must arrive at a formal opinion on whether financial statements of publicly traded corporations are "fairly stated, in all material respects".[88] This requires extensive analysis of factual data and evidence to support their opinion. When making the leap from facts to opinions, there is always the possibility that the opinion is erroneous.[89]

Cognitive biases

[edit]

There are a variety of cognitive biases that can adversely affect analysis. For example, confirmation bias is the tendency to search for or interpret information in a way that confirms one's preconceptions.[90] In addition, individuals may discredit information that does not support their views.[91]

Analysts may be trained specifically to be aware of these biases and how to overcome them.[92] In his book Psychology of Intelligence Analysis, retired CIA analyst Richards Heuer wrote that analysts should clearly delineate their assumptions and chains of inference and specify the degree and source of the uncertainty involved in the conclusions.[93] He emphasized procedures to help surface and debate alternative points of view.[94]

Innumeracy

[edit]

Effective analysts are generally adept with a variety of numerical techniques. However, audiences may not have such literacy with numbers or numeracy; they are said to be innumerate.[95] Persons communicating the data may also be attempting to mislead or misinform, deliberately using bad numerical techniques.[96]

For example, whether a number is rising or falling may not be the key factor. More important may be the number relative to another number, such as the size of government revenue or spending relative to the size of the economy (GDP) or the amount of cost relative to revenue in corporate financial statements.[97] This numerical technique is referred to as normalization[25] or common-sizing. There are many such techniques employed by analysts, whether adjusting for inflation (i.e., comparing real vs. nominal data) or considering population increases, demographics, etc.[98] Analysts apply a variety of techniques to address the various quantitative messages described in the section above.[99]

Analysts may also analyze data under different assumptions or scenario. For example, when analysts perform financial statement analysis, they will often recast the financial statements under different assumptions to help arrive at an estimate of future cash flow, which they then discount to present value based on some interest rate, to determine the valuation of the company or its stock.[100][101] Similarly, the CBO analyzes the effects of various policy options on the government's revenue, outlays and deficits, creating alternative future scenarios for key measures.[102]

Other topics

[edit]

Smart buildings

[edit]

A data analytics approach can be used in order to predict energy consumption in buildings.[103] The different steps of the data analysis process are carried out in order to realise smart buildings, where the building management and control operations including heating, ventilation, air conditioning, lighting and security are realised automatically by miming the needs of the building users and optimising resources like energy and time.[104]

Analytics and business intelligence

[edit]

Analytics is the "extensive use of data, statistical and quantitative analysis, explanatory and predictive models, and fact-based management to drive decisions and actions." It is a subset of business intelligence, which is a set of technologies and processes that uses data to understand and analyze business performance to drive decision-making .[105]

Education

[edit]

In education, most educators have access to a data system for the purpose of analyzing student data.[106] These data systems present data to educators in an over-the-counter data format (embedding labels, supplemental documentation, and a help system and making key package/display and content decisions) to improve the accuracy of educators' data analyses.[107]

Practitioner notes

[edit]

This section contains rather technical explanations that may assist practitioners but are beyond the typical scope of a Wikipedia article.[108]

Initial data analysis

[edit]

The most important distinction between the initial data analysis phase and the main analysis phase, is that during initial data analysis one refrains from any analysis that is aimed at answering the original research question.[109] The initial data analysis phase is guided by the following four questions:[110]

Quality of data

[edit]

The quality of the data should be checked as early as possible. Data quality can be assessed in several ways, using different types of analysis: frequency counts, descriptive statistics (mean, standard deviation, median), normality (skewness, kurtosis, frequency histograms), normal imputation is needed.[111]

  • Analysis of extreme observations: outlying observations in the data are analyzed to see if they seem to disturb the distribution.[112]
  • Comparison and correction of differences in coding schemes: variables are compared with coding schemes of variables external to the data set, and possibly corrected if coding schemes are not comparable.[113]
  • Test for common-method variance.

The choice of analyses to assess the data quality during the initial data analysis phase depends on the analyses that will be conducted in the main analysis phase.[114]

Quality of measurements

[edit]

The quality of the measurement instruments should only be checked during the initial data analysis phase when this is not the focus or research question of the study.[115][116] One should check whether structure of measurement instruments corresponds to structure reported in the literature.

There are two ways to assess measurement quality:

  • Confirmatory factor analysis
  • Analysis of homogeneity (internal consistency), which gives an indication of the reliability of a measurement instrument.[117] During this analysis, one inspects the variances of the items and the scales, the Cronbach's α of the scales, and the change in the Cronbach's alpha when an item would be deleted from a scale[118]

Initial transformations

[edit]

After assessing the quality of the data and of the measurements, one might decide to impute missing data, or to perform initial transformations of one or more variables, although this can also be done during the main analysis phase.[119]
Possible transformations of variables are:[120]

  • Square root transformation (if the distribution differs moderately from normal)
  • Log-transformation (if the distribution differs substantially from normal)
  • Inverse transformation (if the distribution differs severely from normal)
  • Make categorical (ordinal / dichotomous) (if the distribution differs severely from normal, and no transformations help)

Did the implementation of the study fulfill the intentions of the research design?

[edit]

One should check the success of the randomization procedure, for instance by checking whether background and substantive variables are equally distributed within and across groups.[121]
If the study did not need or use a randomization procedure, one should check the success of the non-random sampling, for instance by checking whether all subgroups of the population of interest are represented in sample.[122]
Other possible data distortions that should be checked are:

  • dropout (this should be identified during the initial data analysis phase)
  • Item non-response (whether this is random or not should be assessed during the initial data analysis phase)
  • Treatment quality (using manipulation checks).[123]

Characteristics of data sample

[edit]

In any report or article, the structure of the sample must be accurately described.[124][125] It is especially important to exactly determine the structure of the sample (and specifically the size of the subgroups) when subgroup analyses will be performed during the main analysis phase.[126]
The characteristics of the data sample can be assessed by looking at:

  • Basic statistics of important variables
  • Scatter plots
  • Correlations and associations
  • Cross-tabulations[127]

Final stage of the initial data analysis

[edit]

During the final stage, the findings of the initial data analysis are documented, and necessary, preferable, and possible corrective actions are taken.[128]
Also, the original plan for the main data analyses can and should be specified in more detail or rewritten.[129] In order to do this, several decisions about the main data analyses can and should be made:

  • In the case of non-normals: should one transform variables; make variables categorical (ordinal/dichotomous); adapt the analysis method?
  • In the case of missing data: should one neglect or impute the missing data; which imputation technique should be used?
  • In the case of outliers: should one use robust analysis techniques?
  • In case items do not fit the scale: should one adapt the measurement instrument by omitting items, or rather ensure comparability with other (uses of the) measurement instrument(s)?
  • In the case of (too) small subgroups: should one drop the hypothesis about inter-group differences, or use small sample techniques, like exact tests or bootstrapping?
  • In case the randomization procedure seems to be defective: can and should one calculate propensity scores and include them as covariates in the main analyses?[130]

Analysis

[edit]

Several analyses can be used during the initial data analysis phase:[131]

  • Univariate statistics (single variable)
  • Bivariate associations (correlations)
  • Graphical techniques (scatter plots)

It is important to take the measurement levels of the variables into account for the analyses, as special statistical techniques are available for each level:[132]

  • Nominal and ordinal variables
    • Frequency counts (numbers and percentages)
    • Associations
      • circumambulations (crosstabulations)
      • hierarchical loglinear analysis (restricted to a maximum of 8 variables)
      • loglinear analysis (to identify relevant/important variables and possible confounders)
    • Exact tests or bootstrapping (in case subgroups are small)
    • Computation of new variables
  • Continuous variables
    • Distribution
      • Statistics (M, SD, variance, skewness, kurtosis)
      • Stem-and-leaf displays
      • Box plots

Nonlinear analysis

[edit]

Nonlinear analysis is often necessary when the data is recorded from a nonlinear system. Nonlinear systems can exhibit complex dynamic effects including bifurcations, chaos, harmonics and subharmonics that cannot be analyzed using simple linear methods. Nonlinear data analysis is closely related to nonlinear system identification.[133]

Main data analysis

[edit]

In the main analysis phase, analyses aimed at answering the research question are performed as well as any other relevant analysis needed to write the first draft of the research report.[134]

Exploratory and confirmatory approaches

[edit]

In the main analysis phase, either an exploratory or confirmatory approach can be adopted. Usually the approach is decided before data is collected.[135] In an exploratory analysis no clear hypothesis is stated before analysing the data, and the data is searched for models that describe the data well.[136] In a confirmatory analysis clear hypotheses about the data are tested.[137]

Exploratory data analysis should be interpreted carefully. When testing multiple models at once there is a high chance on finding at least one of them to be significant, but this can be due to a type 1 error.[138] It is important to always adjust the significance level when testing multiple models with, for example, a Bonferroni correction.[139] Also, one should not follow up an exploratory analysis with a confirmatory analysis in the same dataset.[140] An exploratory analysis is used to find ideas for a theory, but not to test that theory as well.[140] When a model is found exploratory in a dataset, then following up that analysis with a confirmatory analysis in the same dataset could simply mean that the results of the confirmatory analysis are due to the same type 1 error that resulted in the exploratory model in the first place.[140] The confirmatory analysis therefore will not be more informative than the original exploratory analysis.[141]

Stability of results

[edit]

It is important to obtain some indication about how generalizable the results are.[142] While this is often difficult to check, one can look at the stability of the results. Are the results reliable and reproducible? There are two main ways of doing that.[143]

  • Cross-validation. By splitting the data into multiple parts, we can check if an analysis (like a fitted model) based on one part of the data generalizes to another part of the data as well.[144] Cross-validation is generally inappropriate, though, if there are correlations within the data, e.g. with panel data.[145] Hence other methods of validation sometimes need to be used. For more on this topic, see statistical model validation.[146]
  • Sensitivity analysis. A procedure to study the behavior of a system or model when global parameters are (systematically) varied. One way to do that is via bootstrapping.[147]

Free software for data analysis

[edit]

Notable free software for data analysis include:

  • DevInfo – A database system endorsed by the United Nations Development Group for monitoring and analyzing human development.[148]
  • ELKI – Data mining framework in Java with data mining oriented visualization functions.
  • KNIME – The Konstanz Information Miner, a user friendly and comprehensive data analytics framework.
  • Orange – A visual programming tool featuring interactive data visualization and methods for statistical data analysis, data mining, and machine learning.
  • Pandas – Python library for data analysis.
  • PAW – FORTRAN/C data analysis framework developed at CERN.
  • R – A programming language and software environment for statistical computing and graphics.[149]
  • ROOT – C++ data analysis framework developed at CERN.
  • SciPy – Python library for scientific computing.
  • Julia – A programming language well-suited for numerical analysis and computational science.

Reproducible analysis

[edit]

The typical data analysis workflow involves collecting data, running analyses through various scripts, creating visualizations, and writing reports. However, this workflow presents challenges, including a separation between analysis scripts and data, as well as a gap between analysis and documentation. Often, the correct order of running scripts is only described informally or resides in the data scientist's memory. The potential for losing this information creates issues for reproducibility. To address these challenges, it is essential to have analysis scripts written for automated, reproducible workflows. Additionally, dynamic documentation is crucial, providing reports that are understandable by both machines and humans, ensuring accurate representation of the analysis workflow even as scripts evolve.[150]

International data analysis contests

[edit]

Different companies or organizations hold data analysis contests to encourage researchers to utilize their data or to solve a particular question using data analysis.[151][152] A few examples of well-known international data analysis contests are as follows:[153]

See also

[edit]

References

[edit]

Citations

[edit]
  1. ^ a b "Transforming Unstructured Data into Useful Information", Big Data, Mining, and Analytics, Auerbach Publications, pp. 227–246, 2014-03-12, doi:10.1201/b16666-14, ISBN 978-0-429-09529-0, retrieved 2021-05-29
  2. ^ "The Multiple Facets of Correlation Functions", Data Analysis Techniques for Physical Scientists, Cambridge University Press, pp. 526–576, 2017, doi:10.1017/9781108241922.013, ISBN 978-1-108-41678-8, retrieved 2021-05-29
  3. ^ Xia, B. S., & Gong, P. (2015). Review of business intelligence through data analysis. Benchmarking, 21(2), 300-311. doi:10.1108/BIJ-08-2012-0050
  4. ^ Exploring Data Analysis
  5. ^ "Data Coding and Exploratory Analysis (EDA) Rules for Data Coding Exploratory Data Analysis (EDA) Statistical Assumptions", SPSS for Intermediate Statistics, Routledge, pp. 42–67, 2004-08-16, doi:10.4324/9781410611420-6, ISBN 978-1-4106-1142-0, retrieved 2021-05-29
  6. ^ Spie (2014-10-01). "New European ICT call focuses on PICs, lasers, data transfer". SPIE Professional. doi:10.1117/2.4201410.10. ISSN 1994-4403.
  7. ^ Samandar, Petersson; Svantesson, Sofia (2017). Skapandet av förtroende inom eWOM : En studie av profilbildens effekt ur ett könsperspektiv. Högskolan i Gävle, Företagsekonomi. OCLC 1233454128.
  8. ^ Goodnight, James (2011-01-13). "The forecast for predictive analytics: hot and getting hotter". Statistical Analysis and Data Mining: The ASA Data Science Journal. 4 (1): 9–10. doi:10.1002/sam.10106. ISSN 1932-1864. S2CID 38571193.
  9. ^ Sherman, Rick (4 November 2014). Business intelligence guidebook: from data integration to analytics. Amsterdam. ISBN 978-0-12-411528-6. OCLC 894555128.{{cite book}}: CS1 maint: location missing publisher (link)
  10. ^ Field, John (2009), "Dividing listening into its components", Listening in the Language Classroom, Cambridge: Cambridge University Press, pp. 96–109, doi:10.1017/cbo9780511575945.008, ISBN 978-0-511-57594-5, retrieved 2021-05-29
  11. ^ a b c Judd, Charles; McCleland, Gary (1989). Data Analysis. Harcourt Brace Jovanovich. ISBN 0-15-516765-0.
  12. ^ Tukey, John W. (March 1962). "John Tukey-The Future of Data Analysis-July 1961". The Annals of Mathematical Statistics. 33 (1): 1–67. doi:10.1214/aoms/1177704711. Archived from the original on 2020-01-26. Retrieved 2015-01-01.
  13. ^ a b c d e f g Schutt, Rachel; O'Neil, Cathy (2013). Doing Data Science. O'Reilly Media. ISBN 978-1-449-35865-5.
  14. ^ "USE OF THE DATA", Handbook of Petroleum Product Analysis, Hoboken, NJ: John Wiley & Sons, Inc, pp. 296–303, 2015-02-06, doi:10.1002/9781118986370.ch18, ISBN 978-1-118-98637-0, retrieved 2021-05-29
  15. ^ Ainsworth, Penne (20 May 2019). Introduction to accounting : an integrated approach. John Wiley & Sons. ISBN 978-1-119-60014-5. OCLC 1097366032.
  16. ^ Margo, Robert A. (2000). Wages and labor markets in the United States, 1820-1860. University of Chicago Press. ISBN 0-226-50507-3. OCLC 41285104.
  17. ^ Olusola, Johnson Adedeji; Shote, Adebola Adekunle; Ouigmane, Abdellah; Isaifan, Rima J. (7 May 2021). "Table 1: Data type and sources of data collected for this research". PeerJ. 9: e11387. doi:10.7717/peerj.11387/table-1.
  18. ^ MacPherson, Derek (2019-10-16), "Information Technology Analysts' Perspectives", Data Strategy in Colleges and Universities, Routledge, pp. 168–183, doi:10.4324/9780429437564-12, ISBN 978-0-429-43756-4, S2CID 211738958, retrieved 2021-05-29
  19. ^ Nelson, Stephen L. (2014). Excel data analysis for dummies. Wiley. ISBN 978-1-118-89810-9. OCLC 877772392.
  20. ^ "Figure 3—source data 1. Raw and processed values obtained through qPCR". 30 August 2017. doi:10.7554/elife.28468.029. {{cite journal}}: Cite journal requires |journal= (help)
  21. ^ a b Bohannon, John (2016-02-24). "Many surveys, about one in five, may contain fraudulent data". Science. doi:10.1126/science.aaf4104. ISSN 0036-8075.
  22. ^ Jeannie Scruggs, Garber; Gross, Monty; Slonim, Anthony D. (2010). Avoiding common nursing errors. Wolters Kluwer Health/Lippincott Williams & Wilkins. ISBN 978-1-60547-087-0. OCLC 338288678.
  23. ^ "Data Cleaning". Microsoft Research. Archived from the original on 29 October 2013. Retrieved 26 October 2013.
  24. ^ Hancock, R.G.V.; Carter, Tristan (February 2010). "How reliable are our published archaeometric analyses? Effects of analytical techniques through time on the elemental analysis of obsidians". Journal of Archaeological Science. 37 (2): 243–250. Bibcode:2010JArSc..37..243H. doi:10.1016/j.jas.2009.10.004. ISSN 0305-4403.
  25. ^ a b c "Perceptual Edge-Jonathan Koomey-Best practices for understanding quantitative data-February 14, 2006" (PDF). Archived (PDF) from the original on October 5, 2014. Retrieved November 12, 2014.
  26. ^ Peleg, Roni; Avdalimov, Angelika; Freud, Tamar (2011-03-23). "Providing cell phone numbers and email addresses to Patients: the physician's perspective". BMC Research Notes. 4 (1): 76. doi:10.1186/1756-0500-4-76. ISSN 1756-0500. PMC 3076270. PMID 21426591.
  27. ^ Goodman, Lenn Evan (1998). Judaism, human rights, and human values. Oxford University Press. ISBN 0-585-24568-1. OCLC 45733915.
  28. ^ Hanzo, Lajos. "Blind joint maximum likelihood channel estimation and data detection for single-input multiple-output systems". doi:10.1049/iet-tv.44.786. Retrieved 2021-05-29.
  29. ^ Hellerstein, Joseph (27 February 2008). "Quantitative Data Cleaning for Large Databases" (PDF). EECS Computer Science Division: 3. Archived (PDF) from the original on 13 October 2013. Retrieved 26 October 2013.
  30. ^ Davis, Steve; Pettengill, James B.; Luo, Yan; Payne, Justin; Shpuntoff, Al; Rand, Hugh; Strain, Errol (26 August 2015). "CFSAN SNP Pipeline: An automated method for constructing SNP matrices from next-generation sequence data". PeerJ Computer Science. 1: e20. doi:10.7717/peerj-cs.20/supp-1.
  31. ^ "FTC requests additional data". Pump Industry Analyst. 1999 (48): 12. December 1999. doi:10.1016/s1359-6128(99)90509-8. ISSN 1359-6128.
  32. ^ "Exploring your Data with Data Visualization & Descriptive Statistics: Common Descriptive Statistics for Quantitative Data". 2017. doi:10.4135/9781529732795. {{cite journal}}: Cite journal requires |journal= (help)
  33. ^ Murray, Daniel G. (2013). Tableau your data! : fast and easy visual analysis with Tableau Software. J. Wiley & Sons. ISBN 978-1-118-61204-0. OCLC 873810654.
  34. ^ Ben-Ari, Mordechai (2012), "First-Order Logic: Formulas, Models, Tableaux", Mathematical Logic for Computer Science, London: Springer London, pp. 131–154, doi:10.1007/978-1-4471-4129-7_7, ISBN 978-1-4471-4128-0, retrieved 2021-05-31
  35. ^ Sosa, Ernest (2011). Causation. Oxford Univ. Press. ISBN 978-0-19-875094-9. OCLC 767569031.
  36. ^ Evans, Michelle V.; Dallas, Tad A.; Han, Barbara A.; Murdock, Courtney C.; Drake, John M. (28 February 2017). Brady, Oliver (ed.). "Figure 2. Variable importance by permutation, averaged over 25 models". eLife. 6: e22053. doi:10.7554/elife.22053.004.
  37. ^ Watson, Kevin; Halperin, Israel; Aguilera-Castells, Joan; Iacono, Antonio Dello (12 November 2020). "Table 3: Descriptive (mean ± SD), inferential (95% CI) and qualitative statistics (ES) of all variables between self-selected and predetermined conditions". PeerJ. 8: e10361. doi:10.7717/peerj.10361/table-3.
  38. ^ Cortés-Molino, Álvaro; Aulló-Maestro, Isabel; Fernandez-Luque, Ismael; Flores-Moya, Antonio; Carreira, José A.; Salvo, A. Enrique (22 October 2020). "Table 3: Best regression models between LIDAR data (independent variable) and field-based Forestereo data (dependent variable), used to map spatial distribution of the main forest structure variables". PeerJ. 8: e10158. doi:10.7717/peerj.10158/table-3.
  39. ^ International Sales Terms, Beck/Hart, 2014, doi:10.5040/9781472561671.ch-003, ISBN 978-1-4725-6167-1, retrieved 2021-05-31
  40. ^ Nwabueze, JC (2008-05-21). "Performances of estimators of linear model with auto-correlated error terms when the independent variable is normal". Journal of the Nigerian Association of Mathematical Physics. 9 (1). doi:10.4314/jonamp.v9i1.40071. ISSN 1116-4336.
  41. ^ Conway, Steve (2012-07-04). "A Cautionary Note on Data Inputs and Visual Outputs in Social Network Analysis". British Journal of Management. 25 (1): 102–117. doi:10.1111/j.1467-8551.2012.00835.x. hdl:2381/36068. ISSN 1045-3172. S2CID 154347514.
  42. ^ "Customer Purchases and Other Repeated Events", Data Analysis Using SQL and Excel®, Indianapolis, Indiana: John Wiley & Sons, Inc., pp. 367–420, 2016-01-29, doi:10.1002/9781119183419.ch8, ISBN 978-1-119-18341-9, retrieved 2021-05-31
  43. ^ Grandjean, Martin (2014). "La connaissance est un réseau" [Knowledge is a network] (PDF) (in French). Les Cahiers du Numérique. 10 (3): 37–54. doi:10.3166/lcn.10.3.37-54. Archived (PDF) from the original on 2015-09-27. Retrieved 2015-05-05.
  44. ^ Data requirements for semiconductor die. Exchange data formats and data dictionary, BSI British Standards, doi:10.3403/02271298, retrieved 2021-05-31
  45. ^ Yee, D. (1985-04-01). "How to Communicate Your Message to an Audience Effectively". The Gerontologist. 25 (2): 209. doi:10.1093/geront/25.2.209. ISSN 0016-9013.
  46. ^ Bemowska-Kałabun, Olga; Wąsowicz, Paweł; Napora-Rutkowski, Łukasz; Nowak-Życzyńska, Zuzanna; Wierzbicka, Małgorzata (11 June 2019). "Supplemental Information 1: Raw data for charts and tables". doi:10.7287/peerj.preprints.27793v1/supp-1.
  47. ^ Visualizing Data About UK Museums: Bar Charts, Line Charts and Heat Maps. 2021. doi:10.4135/9781529768749. ISBN 9781529768749. S2CID 240967380.
  48. ^ Tunqui Neira, José Manuel (2019-09-19). "Thank you for your review. Please find in the attached pdf file a detailed response to the points you raised". doi:10.5194/hess-2019-325-ac2. S2CID 241041810.
  49. ^ Brackett, John W. (1989), "Performing Requirements Analysis Project Courses for External Customers", Issues in Software Engineering Education, New York, NY: Springer New York, pp. 276–285, doi:10.1007/978-1-4613-9614-7_20, ISBN 978-1-4613-9616-1, retrieved 2021-06-03
  50. ^ Wyckhuys, Kris A. G.; Wongtiem, Prapit; Rauf, Aunu; Thancharoen, Anchana; Heimpel, George E.; Le, Nhung T. T.; Fanani, Muhammad Zainal; Gurr, Geoff M.; Lundgren, Jonathan G.; Burra, Dharani D.; Palao, Leo K.; Hyman, Glenn; Graziosi, Ignazio; Le, Vi X.; Cock, Matthew J. W.; Tscharntke, Teja; Wratten, Steve D.; Nguyen, Liem V.; You, Minsheng; Lu, Yanhui; Ketelaar, Johannes W.; Goergen, Georg; Neuenschwander, Peter (19 October 2018). "Figure 2: Bi-monthly mealybug population fluctuations in southern Vietnam, over a 2-year time period". PeerJ. 6: e5796. doi:10.7717/peerj.5796/fig-2.
  51. ^ Riehl, Emily (2014), "A sampling of 2-categorical aspects of quasi-category theory", Categorical Homotopy Theory, Cambridge: Cambridge University Press, pp. 318–336, doi:10.1017/cbo9781107261457.019, ISBN 978-1-107-26145-7, retrieved 2021-06-03
  52. ^ Swamidass, P. M. (2000). "X-Bar Chart". Encyclopedia of Production and Manufacturing Management. p. 841. doi:10.1007/1-4020-0612-8_1063. ISBN 978-0-7923-8630-8.
  53. ^ "Chart C5.3. Percentage of 15-19 year-olds not in education, by labour market status (2012)". doi:10.1787/888933119055. Retrieved 2021-06-03. {{cite journal}}: Cite journal requires |journal= (help)
  54. ^ "Chart 7: Households: final consumption expenditure versus actual individual consumption". doi:10.1787/665527077310. Retrieved 2021-06-03. {{cite journal}}: Cite journal requires |journal= (help)
  55. ^ Chao, Luke H.; Jang, Jaebong; Johnson, Adam; Nguyen, Anthony; Gray, Nathanael S.; Yang, Priscilla L.; Harrison, Stephen C. (12 July 2018). Jahn, Reinhard; Schekman, Randy (eds.). "Figure 4. Frequency of hemifusion (measured as DiD fluorescence dequenching) as a function of number of bound Alexa-fluor-555/3-110-22 molecules". eLife. 7: e36461. doi:10.7554/elife.36461.006.
  56. ^ Garnier, Elodie M.; Fouret, Nastasia; Descoins, Médéric (3 February 2020). "Table 2: Graph comparison between Scatter plot, Violin + Scatter plot, Heatmap and ViSiElse graph". PeerJ. 8: e8341. doi:10.7717/peerj.8341/table-2.
  57. ^ "Product comparison chart: Wearables". PsycEXTRA Dataset. 2009. doi:10.1037/e539162010-006. Retrieved 2021-06-03.
  58. ^ "Stephen Few-Perceptual Edge-Selecting the Right Graph for Your Message-2004" (PDF). Archived (PDF) from the original on 2014-10-05. Retrieved 2014-10-29.
  59. ^ "Stephen Few-Perceptual Edge-Graph Selection Matrix" (PDF). Archived (PDF) from the original on 2014-10-05. Retrieved 2014-10-29.
  60. ^ "Recommended Best Practices". 2008-10-01. doi:10.14217/9781848590151-8-en. Retrieved 2021-06-03. {{cite journal}}: Cite journal requires |journal= (help)
  61. ^ Hobold, Edilson; Pires-Lopes, Vitor; Gómez-Campos, Rossana; Arruda, Miguel de; Andruske, Cynthia Lee; Pacheco-Carrillo, Jaime; Cossio-Bolaños, Marco Antonio (30 November 2017). "Table 1: Descriptive statistics (mean ± standard-deviation) for somatic variables and physical fitness ítems for males and females". PeerJ. 5: e4032. doi:10.7717/peerj.4032/table-1.
  62. ^ Ablin, Jacob N.; Zohar, Ada H.; Zaraya-Blum, Reut; Buskila, Dan (13 September 2016). "Table 2: Cluster analysis presenting mean values of psychological variables per cluster group". PeerJ. 4: e2421. doi:10.7717/peerj.2421/table-2.
  63. ^ "Consultants Employed by McKinsey & Company", Organizational Behavior 5, Routledge, pp. 77–82, 2008-07-30, doi:10.4324/9781315701974-15, ISBN 978-1-315-70197-4, retrieved 2021-06-03
  64. ^ Antiphanes (2007), Olson, S. Douglas (ed.), "H6 Antiphanes fr.172.1-4, from Women Who Looked Like Each Other or Men Who Looked Like Each Other", Broken Laughter: Select Fragments of Greek Comedy, Oxford University Press, doi:10.1093/oseo/instance.00232915, ISBN 978-0-19-928785-7, retrieved 2021-06-03
  65. ^ Carey, Malachy (November 1981). "On Mutually Exclusive and Collectively Exhaustive Properties of Demand Functions". Economica. 48 (192): 407–415. doi:10.2307/2553697. ISSN 0013-0427. JSTOR 2553697.
  66. ^ "Total tax revenue". doi:10.1787/352874835867. Retrieved 2021-06-03. {{cite journal}}: Cite journal requires |journal= (help)
  67. ^ "Dual-use car may solve transportation problems". Chemical & Engineering News Archive. 46 (24): 44. 1968-06-03. doi:10.1021/cen-v046n024.p044. ISSN 0009-2347.
  68. ^ Heckman (1978). "Simple Statistical Models for Discrete Panel Data Developed and Applied to Test the Hypothesis of True State Dependence against the Hypothesis of Spurious State Dependence". Annales de l'inséé (30/31): 227–269. doi:10.2307/20075292. ISSN 0019-0209. JSTOR 20075292.
  69. ^ Koontz, Dean (2017). False Memory. Headline Book Publishing. ISBN 978-1-4722-4830-5. OCLC 966253202.
  70. ^ Munday, Stephen C. R. (1996), "Unemployment, Inflation and the Phillips Curve", Current Developments in Economics, London: Macmillan Education UK, pp. 186–218, doi:10.1007/978-1-349-24986-2_11, ISBN 978-0-333-64444-7, retrieved 2021-06-03
  71. ^ Louangrath, Paul I. (2013). "Alpha and Beta Tests for Type I and Type II Inferential Errors Determination in Hypothesis Testing". SSRN Electronic Journal. doi:10.2139/ssrn.2332756. ISSN 1556-5068.
  72. ^ Walko, Ann M. (2006). Rejecting the second generation hypothesis : maintaining Estonian ethnicity in Lakewood, New Jersey. AMS Press. ISBN 0-404-19454-0. OCLC 467107876.
  73. ^ a b Yanamandra, Venkataramana (September 2015). "Exchange rate changes and inflation in India: What is the extent of exchange rate pass-through to imports?". Economic Analysis and Policy. 47: 57–68. doi:10.1016/j.eap.2015.07.004. ISSN 0313-5926.
  74. ^ Mudiyanselage, Nawarathna; Nawarathna, Pubudu Manoj. Characterization of epigenetic changes and their connection to gene expression abnormalities in clear cell renal cell carcinoma. OCLC 1190697848.
  75. ^ Moreno Delgado, David; Møller, Thor C.; Ster, Jeanne; Giraldo, Jesús; Maurel, Damien; Rovira, Xavier; Scholler, Pauline; Zwier, Jurriaan M.; Perroy, Julie; Durroux, Thierry; Trinquet, Eric; Prezeau, Laurent; Rondard, Philippe; Pin, Jean-Philippe (29 June 2017). Chao, Moses V (ed.). "Appendix 1—figure 5. Curve data included in Appendix 1—table 4 (solid points) and the theoretical curve by using the Hill equation parameters of Appendix 1—table 5 (curve line)". eLife. 6: e25233. doi:10.7554/elife.25233.027.
  76. ^ Feinmann, Jane. "How Can Engineers and Journalists Help Each Other?" (Video). The Institute of Engineering & Technology. doi:10.1049/iet-tv.48.859. Retrieved 2021-06-03.
  77. ^ Dul, Jan (2015). "Necessary Condition Analysis (NCA): Logic and Methodology of 'Necessary But Not Sufficient' Causality". SSRN Electronic Journal. doi:10.2139/ssrn.2588480. hdl:1765/77890. ISSN 1556-5068. S2CID 219380122.
  78. ^ Robert Amar, James Eagan, and John Stasko (2005) "Low-Level Components of Analytic Activity in Information Visualization" Archived 2015-02-13 at the Wayback Machine
  79. ^ William Newman (1994) "A Preliminary Analysis of the Products of HCI Research, Using Pro Forma Abstracts" Archived 2016-03-03 at the Wayback Machine
  80. ^ Mary Shaw (2002) "What Makes Good Research in Software Engineering?" Archived 2018-11-05 at the Wayback Machine
  81. ^ a b Yavari, Ali; Jayaraman, Prem Prakash; Georgakopoulos, Dimitrios; Nepal, Surya (2017). ConTaaS: An Approach to Internet-Scale Contextualisation for Developing Efficient Internet of Things Applications. Proceedings of the 50th Hawaii International Conference on System Sciences (HICSS50 2017). University of Hawaiʻi at Mānoa. doi:10.24251/HICSS.2017.715. hdl:10125/41879. ISBN 9780998133102.
  82. ^ "Connectivity tool transfers data among database and statistical products". Computational Statistics & Data Analysis. 8 (2): 224. July 1989. doi:10.1016/0167-9473(89)90021-2. ISSN 0167-9473.
  83. ^ "Information relevant to your job", Obtaining Information for Effective Management, Routledge, pp. 48–54, 2007-07-11, doi:10.4324/9780080544304-16, ISBN 978-0-08-054430-4, retrieved 2021-06-03
  84. ^ Lehmann, E. L. (2010). Testing statistical hypotheses. Springer. ISBN 978-1-4419-3178-8. OCLC 757477004.
  85. ^ Fielding, Henry (2008-08-14), "Consisting partly of facts, and partly of observations upon them", Tom Jones, Oxford University Press, doi:10.1093/owc/9780199536993.003.0193, ISBN 978-0-19-953699-3, retrieved 2021-06-03
  86. ^ "Congressional Budget Office-The Budget and Economic Outlook-August 2010-Table 1.7 on Page 24". 18 August 2010. Archived from the original on 2012-02-27. Retrieved 2011-03-31.
  87. ^ "Students' sense of belonging, by immigrant background". PISA 2015 Results (Volume III). PISA. 2017-04-19. doi:10.1787/9789264273856-table125-en. ISBN 9789264273818. ISSN 1996-3777.
  88. ^ Gordon, Roger (March 1990). "Do Publicly Traded Corporations Act in the Public Interest?". National Bureau of Economic Research Working Papers. Cambridge, MA. doi:10.3386/w3303.
  89. ^ Minardi, Margot (2010-09-24), "Facts and Opinion", Making Slavery History, Oxford University Press, pp. 13–42, doi:10.1093/acprof:oso/9780195379372.003.0003, ISBN 978-0-19-537937-2, retrieved 2021-06-03
  90. ^ Rivard, Jillian R (2014). Confirmation bias in witness interviewing: Can interviewers ignore their preconceptions? (Thesis). Florida International University. doi:10.25148/etd.fi14071109.
  91. ^ Papineau, David (1988), "Does the Sociology of Science Discredit Science?", Relativism and Realism in Science, Dordrecht: Springer Netherlands, pp. 37–57, doi:10.1007/978-94-009-2877-0_2, ISBN 978-94-010-7795-8, retrieved 2021-06-03
  92. ^ Bromme, Rainer; Hesse, Friedrich W.; Spada, Hans, eds. (2005). Barriers and Biases in Computer-Mediated Knowledge Communication. doi:10.1007/b105100. ISBN 978-0-387-24317-7.
  93. ^ Heuer, Richards (2019-06-10). Heuer, Richards J (ed.). Quantitative Approaches to Political Intelligence. doi:10.4324/9780429303647. ISBN 9780429303647. S2CID 145675822.
  94. ^ "Introduction" (PDF). Central Intelligence Agency. Archived (PDF) from the original on 2021-10-25. Retrieved 2021-10-25.
  95. ^ "Figure 6.7. Differences in literacy scores across OECD countries generally mirror those in numeracy". doi:10.1787/888934081549. Retrieved 2021-06-03. {{cite journal}}: Cite journal requires |journal= (help)
  96. ^ "Bloomberg-Barry Ritholz-Bad Math that Passes for Insight-October 28, 2014". Archived from the original on 2014-10-29. Retrieved 2014-10-29.
  97. ^ Gusnaini, Nuriska; Andesto, Rony; Ermawati (2020-12-15). "The Effect of Regional Government Size, Legislative Size, Number of Population, and Intergovernmental Revenue on The Financial Statements Disclosure". European Journal of Business and Management Research. 5 (6). doi:10.24018/ejbmr.2020.5.6.651. ISSN 2507-1076. S2CID 231675715.
  98. ^ Linsey, Julie S.; Becker, Blake (2011), "Effectiveness of Brainwriting Techniques: Comparing Nominal Groups to Real Teams", Design Creativity 2010, London: Springer London, pp. 165–171, doi:10.1007/978-0-85729-224-7_22, ISBN 978-0-85729-223-0, retrieved 2021-06-03
  99. ^ Lyon, J. (April 2006). "Purported Responsible Address in E-Mail Messages". doi:10.17487/rfc4407.
  100. ^ Stock, Eugene (10 June 2017). The History of the Church Missionary Society Its Environment, its Men and its Work. Hansebooks GmbH. ISBN 978-3-337-18120-8. OCLC 1189626777.
  101. ^ Gross, William H. (July 1979). "Coupon Valuation and Interest Rate Cycles". Financial Analysts Journal. 35 (4): 68–71. doi:10.2469/faj.v35.n4.68. ISSN 0015-198X.
  102. ^ "25. General government total outlays". doi:10.1787/888932348795. Retrieved 2021-06-03. {{cite journal}}: Cite journal requires |journal= (help)
  103. ^ González-Vidal, Aurora; Moreno-Cano, Victoria (2016). "Towards energy efficiency smart buildings models based on intelligent data analytics". Procedia Computer Science. 83 (Elsevier): 994–999. doi:10.1016/j.procs.2016.04.213.
  104. ^ "Low-Energy Air Conditioning and Lighting Control", Building Energy Management Systems, Routledge, pp. 406–439, 2013-07-04, doi:10.4324/9780203477342-18, ISBN 978-0-203-47734-2, retrieved 2021-06-03
  105. ^ Davenport, Thomas; Harris, Jeanne (2007). Competing on Analytics. Harvard Business School Press. ISBN 978-1-4221-0332-6.
  106. ^ Aarons, D. (2009). Report finds states on course to build pupil-data systems. Education Week, 29(13), 6.
  107. ^ Rankin, J. (2013, March 28). How data Systems & reports can either fight or propagate the data analysis error epidemic, and how educator leaders can help. Archived 2019-03-26 at the Wayback Machine Presentation conducted from Technology Information Center for Administrative Leadership (TICAL) School Leadership Summit.
  108. ^ Brödermann, Eckart J. (2018), "Article 2.2.1 (Scope of the Section)", Commercial Law, Nomos Verlagsgesellschaft mbH & Co. KG, p. 525, doi:10.5771/9783845276564-525, ISBN 978-3-8452-7656-4, retrieved 2021-06-03
  109. ^ Jaech, J.L. (1960-04-21). "Analysis of dimensional distortion data from initial 24 quality certification tubes". doi:10.2172/10170345. S2CID 110058009.
  110. ^ Adèr 2008a, p. 337.
  111. ^ Kjell, Oscar N. E.; Thompson, Sam (19 December 2013). "Descriptive statistics indicating the mean, standard deviation and frequency of missing values for each condition (N = number of participants), and for the dependent variables (DV)". PeerJ. 1: e231. doi:10.7717/peerj.231/table-1.
  112. ^ Practice for Dealing With Outlying Observations, ASTM International, doi:10.1520/e0178-16a, retrieved 2021-06-03
  113. ^ "Alternative Coding Schemes for Dummy Variables", Regression with Dummy Variables, Newbury Park, CA: SAGE Publications, Inc., pp. 64–75, 1993, doi:10.4135/9781412985628.n5, ISBN 978-0-8039-5128-0, retrieved 2021-06-03
  114. ^ Adèr 2008a, pp. 338–341.
  115. ^ Danilyuk, P. M. (July 1960). "Computing the displacement of the initial contour of gears when they are checked by means of balls". Measurement Techniques. 3 (7): 585–587. Bibcode:1960MeasT...3..585D. doi:10.1007/bf00977716. ISSN 0543-1972. S2CID 121058145.
  116. ^ Newman, Isadore (1998). Qualitative-quantitative research methodology : exploring the interactive continuum. Southern Illinois University Press. ISBN 0-585-17889-5. OCLC 44962443.
  117. ^ Terwilliger, James S.; Lele, Kaustubh (June 1979). "Some Relationships Among Internal Consistency, Reproducibility, and Homogeneity". Journal of Educational Measurement. 16 (2): 101–108. doi:10.1111/j.1745-3984.1979.tb00091.x. ISSN 0022-0655.
  118. ^ Adèr 2008a, pp. 341–342.
  119. ^ Adèr 2008a, p. 344.
  120. ^ Tabachnick & Fidell, 2007, pp. 87–88.
  121. ^ Tchakarova, Kalina (October 2020). "2020/31 Comparing job descriptions is insufficient for checking whether work is equally valuable (BG)". European Employment Law Cases. 5 (3): 168–170. doi:10.5553/eelc/187791072020005003006. ISSN 1877-9107. S2CID 229008899.
  122. ^ Random sampling and randomization procedures, BSI British Standards, doi:10.3403/30137438, retrieved 2021-06-03
  123. ^ Adèr 2008a, pp. 344–345.
  124. ^ Sandberg, Margareta (June 2006). "Acupuncture Procedures Must be Accurately Described". Acupuncture in Medicine. 24 (2): 92–94. doi:10.1136/aim.24.2.92. ISSN 0964-5284. PMID 16783285. S2CID 30286074.
  125. ^ Jaarsma, C.F. Verkeer in een landelijk gebied: waarnemingen en analyse van het verkeer in zuidwest Friesland en ontwikkeling van een verkeersmodel [Traffic in a rural area: observations and analysis of traffic in southwest Friesland and development of a traffic model] (in Dutch). OCLC 1016575584.
  126. ^ Foth, Christian; Hedrick, Brandon P.; Ezcurra, Martin D. (18 January 2016). "Figure 4: Centroid size regression analyses for the main sample". PeerJ. 4: e1589. doi:10.7717/peerj.1589/fig-4.
  127. ^ Adèr 2008a, p. 345.
  128. ^ "The Final Years (1975-84)", The Road Not Taken, Boydell & Brewer, pp. 853–922, 2018-06-18, doi:10.2307/j.ctv6cfncp.26, ISBN 978-1-57647-332-0, S2CID 242072487, retrieved 2021-06-03
  129. ^ Fitzmaurice, Kathryn (17 March 2015). Destiny, rewritten. HarperCollins. ISBN 978-0-06-162503-9. OCLC 905090570.
  130. ^ Adèr 2008a, pp. 345–346.
  131. ^ Adèr 2008a, pp. 346–347.
  132. ^ Adèr 2008a, pp. 349–353.
  133. ^ Billings, S. A. (2013). Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains. Wiley.
  134. ^ Adèr 2008b, p. 363.
  135. ^ "Exploratory Data Analysis", Python® for R Users, Hoboken, NJ, USA: John Wiley & Sons, Inc., pp. 119–138, 2017-10-13, doi:10.1002/9781119126805.ch4, hdl:11380/971504, ISBN 978-1-119-12680-5, retrieved 2021-06-03
  136. ^ "Engaging in Exploratory Data Analysis, Visualization, and Hypothesis Testing – Exploratory Data Analysis, Geovisualization, and Data", Spatial Analysis, CRC Press, pp. 106–139, 2015-07-28, doi:10.1201/b18808-8, ISBN 978-0-429-06936-9, S2CID 133412598, retrieved 2021-06-03
  137. ^ "Hypotheses About Categories", Starting Statistics: A Short, Clear Guide, London: SAGE Publications Ltd, pp. 138–151, 2010, doi:10.4135/9781446287873.n14, ISBN 978-1-84920-098-1, retrieved 2021-06-03
  138. ^ Sordo, Rachele Del; Sidoni, Angelo (December 2008). "MIB-1 Cell Membrane Reactivity: A Finding That Should be Interpreted Carefully". Applied Immunohistochemistry & Molecular Morphology. 16 (6): 568. doi:10.1097/pai.0b013e31817af2cf. ISSN 1541-2016. PMID 18800001.
  139. ^ Liquet, Benoit; Riou, Jérémie (2013-06-08). "Correction of the significance level when attempting multiple transformations of an explanatory variable in generalized linear models". BMC Medical Research Methodology. 13 (1): 75. doi:10.1186/1471-2288-13-75. ISSN 1471-2288. PMC 3699399. PMID 23758852.
  140. ^ a b c McArdle, John J. (2008). "Some ethical issues in confirmatory versus exploratory analysis". PsycEXTRA Dataset. doi:10.1037/e503312008-001. Retrieved 2021-06-03.
  141. ^ Adèr 2008b, pp. 361–362.
  142. ^ Adèr 2008b, pp. 361–371.
  143. ^ Truswell IV, William H., ed. (2009), "3 The Facelift: A Guide for Safe, Reliable, and Reproducible Results", Surgical Facial Rejuvenation, Stuttgart: Georg Thieme Verlag, doi:10.1055/b-0034-73436, ISBN 978-1-58890-491-1, retrieved 2021-06-03
  144. ^ Benson, Noah C; Winawer, Jonathan (December 2018). "Bayesian analysis of retinotopic maps". eLife. 7. doi:10.7554/elife.40224. PMC 6340702. PMID 30520736. Supplementary file 1. Cross-validation schema. doi:10.7554/elife.40224.014
  145. ^ Hsiao, Cheng (2014), "Cross-Sectionally Dependent Panel Data", Analysis of Panel Data, Cambridge: Cambridge University Press, pp. 327–368, doi:10.1017/cbo9781139839327.012, ISBN 978-1-139-83932-7, retrieved 2021-06-03
  146. ^ Hjorth, J.S. Urban (2017-10-19), "Cross validation", Computer Intensive Statistical Methods, Chapman and Hall/CRC, pp. 24–56, doi:10.1201/9781315140056-3, ISBN 978-1-315-14005-6, retrieved 2021-06-03
  147. ^ Sheikholeslami, Razi; Razavi, Saman; Haghnegahdar, Amin (2019-10-10). "What should we do when a model crashes? Recommendations for global sensitivity analysis of Earth and environmental systems models". Geoscientific Model Development. 12 (10): 4275–4296. Bibcode:2019GMD....12.4275S. doi:10.5194/gmd-12-4275-2019. ISSN 1991-9603. S2CID 204900339.
  148. ^ United Nations Development Programme (2018). "Human development composite indices". Human Development Indices and Indicators 2018. United Nations. pp. 21–41. doi:10.18356/ce6f8e92-en. S2CID 240207510.
  149. ^ Wiley, Matt; Wiley, Joshua F. (2019), "Multivariate Data Visualization", Advanced R Statistical Programming and Data Models, Berkeley, CA: Apress, pp. 33–59, doi:10.1007/978-1-4842-2872-2_2, ISBN 978-1-4842-2871-5, S2CID 86629516, retrieved 2021-06-03
  150. ^ Mailund, Thomas (2022). Beginning Data Science in R 4: Data Analysis, Visualization, and Modelling for the Data Scientist (2nd ed.). ISBN 978-148428155-0.
  151. ^ Orduna-Malea, Enrique; Alonso-Arroyo, Adolfo (2018), "A cybermetric analysis model to measure private companies", Cybermetric Techniques to Evaluate Organizations Using Web-Based Data, Elsevier, pp. 63–76, doi:10.1016/b978-0-08-101877-4.00003-x, ISBN 978-0-08-101877-4, retrieved 2021-06-03
  152. ^ Leen, A.R. The consumer in Austrian economics and the Austrian perspective on consumer policy. Wageningen Universiteit. ISBN 90-5808-102-8. OCLC 1016689036.
  153. ^ "Examples of Survival Data Analysis", Statistical Methods for Survival Data Analysis, Wiley Series in Probability and Statistics, Hoboken, NJ, USA: John Wiley & Sons, Inc., 2003-06-30, pp. 19–63, doi:10.1002/0471458546.ch3, ISBN 978-0-471-45854-8, retrieved 2021-06-03
  154. ^ "The machine learning community takes on the Higgs". Symmetry Magazine. July 15, 2014. Archived from the original on 16 April 2021. Retrieved 14 January 2015.
  155. ^ Nehme, Jean (September 29, 2016). "LTPP International Data Analysis Contest". Federal Highway Administration. Archived from the original on October 21, 2017. Retrieved October 22, 2017.
  156. ^ "Data.Gov:Long-Term Pavement Performance (LTPP)". May 26, 2016. Archived from the original on November 1, 2017. Retrieved November 10, 2017.

Bibliography

  • Adèr, Herman J. (2008a). "Chapter 14: Phases and initial steps in data analysis". In Adèr, Herman J.; Mellenbergh, Gideon J.; Hand, David J (eds.). Advising on research methods : a consultant's companion. Huizen, Netherlands: Johannes van Kessel Pub. pp. 333–356. ISBN 9789079418015. OCLC 905799857.
  • Adèr, Herman J. (2008b). "Chapter 15: The main analysis phase". In Adèr, Herman J.; Mellenbergh, Gideon J.; Hand, David J (eds.). Advising on research methods : a consultant's companion. Huizen, Netherlands: Johannes van Kessel Pub. pp. 357–386. ISBN 9789079418015. OCLC 905799857.
  • Tabachnick, B.G. & Fidell, L.S. (2007). Chapter 4: Cleaning up your act. Screening data prior to analysis. In B.G. Tabachnick & L.S. Fidell (Eds.), Using Multivariate Statistics, Fifth Edition (pp. 60–116). Boston: Pearson Education, Inc. / Allyn and Bacon.

Further reading

  • Adèr, H.J. & Mellenbergh, G.J. (with contributions by D.J. Hand) (2008). Advising on Research Methods: A Consultant's Companion. Huizen, the Netherlands: Johannes van Kessel Publishing. ISBN 978-90-79418-01-5
  • Chambers, John M.; Cleveland, William S.; Kleiner, Beat; Tukey, Paul A. (1983). Graphical Methods for Data Analysis, Wadsworth/Duxbury Press. ISBN 0-534-98052-X
  • Fandango, Armando (2017). Python Data Analysis, 2nd Edition. Packt Publishers. ISBN 978-1787127487
  • Juran, Joseph M.; Godfrey, A. Blanton (1999). Juran's Quality Handbook, 5th Edition. New York: McGraw Hill. ISBN 0-07-034003-X
  • Lewis-Beck, Michael S. (1995). Data Analysis: an Introduction, Sage Publications Inc, ISBN 0-8039-5772-6
  • NIST/SEMATECH (2008). Handbook of Statistical Methods.
  • Pyzdek, T. (2003). Quality Engineering Handbook, ISBN 0-8247-4614-7
  • Veryard, Richard (1984). Pragmatic Data Analysis. Oxford: Blackwell Scientific Publications. ISBN 0-632-01311-7
  • Tabachnick, B.G.; Fidell, L.S. (2007). Using Multivariate Statistics, 5th Edition. Boston: Pearson Education, Inc. / Allyn and Bacon, ISBN 978-0-205-45938-4