{{Short description|Centralized storage of knowledge}}
[[File:Data warehouse overview.JPG|thumb|upright=1.5|Data warehouse overview]]
[[File:Data Warehouse & Data-Marts overview.svg|400px|thumb|right|alt=Data Warehouse and Data-Marts overview|Data Warehouse and [[Data mart]] overview, with Data Marts shown in the top right.]]

In [[computing]], a '''data warehouse''' ('''DW''' or '''DWH'''), also known as an '''enterprise data warehouse''' ('''EDW'''), is a system used for [[Business intelligence|reporting]] and [[data analysis]] and is a core component of [[business intelligence]].<ref>{{cite conference|last1=Dedić|first1=Nedim|last2=Stanier|first2=Clare|year=2016|editor1-last=Hammoudi|editor1-first=Slimane|editor2-last=Maciaszek|editor2-first=Leszek|editor3-last=Missikoff|editor3-first=Michele M. Missikoff|editor4-last=Camp|editor4-first=Olivier|editor5-last=Cordeiro|editor5-first=José|title=An Evaluation of the Challenges of Multilingualism in Data Warehouse Development|url=http://eprints.staffs.ac.uk/2770/|journal=Proceedings of the 18th International Conference on Enterprise Information Systems (ICEIS 2016)|publisher=SciTePress|volume=1|pages=196–206|conference=International Conference on Enterprise Information Systems, 25–28 April 2016, Rome, Italy|conference-url=https://eprints.staffs.ac.uk/2770/1/ICEIS_2016_Volume_1.pdf |archive-url=https://web.archive.org/web/20180522180940/https://eprints.staffs.ac.uk/2770/1/ICEIS_2016_Volume_1.pdf |archive-date=2018-05-22 |url-status=live|doi=10.5220/0005858401960206|isbn=978-989-758-187-8|doi-access=free}}</ref> Data warehouses are central [[Repository (version control)|repositories]] of data integrated from disparate sources. They store current and historical data organized so as to make it easy to create reports, query and get insights from the data.<ref>{{Cite web |title=What is a Data Warehouse? {{!}} Key Concepts {{!}} Amazon Web Services |url=https://aws.amazon.com/data-warehouse/ |access-date=2023-02-13 |website=Amazon Web Services, Inc. |language=en-US}}</ref> Unlike [[databases]], they are intended to be used by analysts and managers to help make organizational decisions.<ref name="rainer2012">{{cite book |last1=Rainer |first1=R. Kelly |url=https://archive.org/details/introductiontoin00rain_274 |title=Introduction to Information Systems: Enabling and Transforming Business, 4th Edition |last2=Cegielski |first2=Casey G. |date=2012-05-01 |publisher=Wiley |isbn=978-1118129401 |edition=Kindle |pages=[https://archive.org/details/introductiontoin00rain_274/page/n138 127], 128, 130, 131, 133 |url-access=limited}}</ref>
[[File:Data warehouse architecture.jpg|thumb|upright=1.5|The basic architecture of a data warehouse]]

The data stored in the warehouse is [[upload]]ed from [[operational system]]s (such as marketing or sales). The data may pass through an [[operational data store]] and may require [[data cleansing]] and additional operations to ensure [[data quality]] before it is used in the data warehouse for reporting.

The two main approaches for building a data warehouse system are [[extract, transform, load]] (ETL) and [[extract, load, transform]] (ELT).

==Generic==
The environment for data warehouses and marts includes the following:
* Source systems of data (often, the company's operational databases, such as relational databases<ref name="rainer2012" />);
* Data integration technology and processes to extract data from source systems, transform them, and load them into a data mart or warehouse;<ref name="rainer2012" />
* Architectures to store data in the warehouse or marts;
* Tools and applications for varied users;
* Metadata, data quality, and governance processes. Metadata includes data sources (database, table, and column names), refresh schedules and data usage measures.<ref name="rainer2012" />

== Related systems ==


=== Operational databases ===
Operational databases are optimized for the preservation of [[data integrity]] and speed of recording of business transactions through use of [[database normalization]] and an [[entity–relationship model]]. Operational system designers generally normalize databases (following [[Database normalization#Normal forms|Codd's normal forms]]) to ensure data integrity. Fully normalized database designs often result in information from a business transaction being stored in dozens to hundreds of tables. [[Relational database]]s are efficient at managing the relationships between these tables. The databases have very fast insert/update performance because only a small amount of data in those tables is affected by each transaction. To improve performance, older data are periodically purged.


Data warehouses are optimized for analytic access patterns, which usually involve selecting specific fields rather than all fields as is common in operational databases. Because of these differences in access, operational databases (loosely, OLTP) benefit from the use of a row-oriented database management system (DBMS), whereas analytics databases (loosely, OLAP) benefit from the use of a [[column-oriented DBMS]]. Operational systems maintain a snapshot of the business, while warehouses maintain historic data through ETL processes that periodically migrate data from the operational systems to the warehouse.
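
For illustration, the following minimal Python sketch contrasts the two layouts; the table contents and field names are hypothetical:
<syntaxhighlight lang="python">
# Minimal sketch: the same hypothetical table stored row-wise and column-wise.
rows = [("Acme", 120.5), ("Zenith", 80.0)]      # row-oriented: whole records kept together
columns = {"customer": ["Acme", "Zenith"],      # column-oriented: each field kept together
           "amount": [120.5, 80.0]}

# An analytic query over one field scans a single contiguous column...
print(sum(columns["amount"]))                     # 200.5
# ...whereas the row layout must touch every record to extract that field.
print(sum(amount for _customer, amount in rows))  # 200.5
</syntaxhighlight>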

[[Online analytical processing]] (OLAP) is characterized by a low rate of transactions and complex queries that involve aggregations. Response time is an effective performance measure of OLAP systems. OLAP applications are widely used for [[Data Mining|data mining]]. OLAP databases store aggregated, historical data in multi-dimensional schemas (usually [[star schema]]s). OLAP systems typically have a data latency of a few hours, while data mart latency is closer to one day. The OLAP approach is used to analyze multidimensional data from multiple sources and perspectives. The three basic operations in OLAP are roll-up (consolidation), drill-down, and slicing & dicing.
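
A minimal Python sketch of the three operations over an in-memory data set (the records and field names are hypothetical):
<syntaxhighlight lang="python">
from collections import defaultdict

sales = [  # (region, city, quarter, units) -- hypothetical facts
    ("West", "Denver", "Q1", 120),
    ("West", "Phoenix", "Q1", 90),
    ("West", "Denver", "Q2", 150),
    ("East", "Boston", "Q1", 200),
]

def roll_up(rows):
    """Roll-up (consolidation): aggregate city-level facts to the region level."""
    totals = defaultdict(int)
    for region, _city, quarter, units in rows:
        totals[(region, quarter)] += units
    return dict(totals)

def drill_down(rows, region):
    """Drill-down: descend from a region to its constituent city-level facts."""
    return [r for r in rows if r[0] == region]

def slice_quarter(rows, quarter):
    """Slicing: fix one dimension (the quarter) and keep the others."""
    return [r for r in rows if r[2] == quarter]

print(roll_up(sales))              # {('West', 'Q1'): 210, ('West', 'Q2'): 150, ('East', 'Q1'): 200}
print(drill_down(sales, "West"))   # the three West rows
print(slice_quarter(sales, "Q1"))  # the three Q1 rows
</syntaxhighlight>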

[[Online transaction processing]] (OLTP) is characterized by a large number of short online transactions (INSERT, UPDATE, DELETE). OLTP systems emphasize fast query processing and maintaining [[data integrity]] in multi-access environments. For OLTP systems, performance is measured by the number of transactions per second. OLTP databases contain detailed and current data. The schema used to store transactional databases is the entity model (usually [[Third normal form|3NF]]).{{citation needed|date=November 2024}} Normalization is the norm for data modeling techniques in this system.
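
A minimal sketch of OLTP-style short transactions, using Python's built-in sqlite3 module and a hypothetical schema:
<syntaxhighlight lang="python">
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")
conn.execute("INSERT INTO account (id, balance) VALUES (1, 100), (2, 50)")
conn.commit()

try:
    with conn:  # one short transaction: commits on success, rolls back on error
        conn.execute("UPDATE account SET balance = balance - 30 WHERE id = 1")
        conn.execute("UPDATE account SET balance = balance + 30 WHERE id = 2")
except sqlite3.Error:
    pass  # after a rollback, the database is back in a consistent state

print(conn.execute("SELECT id, balance FROM account").fetchall())  # [(1, 70), (2, 80)]
</syntaxhighlight>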

[[Predictive analytics]] is about [[pattern recognition|finding]] and quantifying hidden patterns in the data, using complex mathematical models to [[prediction|predict]] future outcomes. By contrast, OLAP focuses on historical data analysis and is reactive. Predictive systems are also used for [[customer relationship management]] (CRM).

=== Data marts ===
A [[data mart]] is a simple form of a data warehouse that is focused on a single subject (or functional area); hence, it draws data from a limited number of sources such as sales, finance or marketing. Data marts are often built and controlled by a single department within an organization. The sources could be internal operational systems, a central data warehouse, or external data.<ref>{{cite web |url=http://docs.oracle.com/html/E10312_01/dm_concepts.htm |title=Data Mart Concepts |publisher=Oracle |year=2007}}</ref> Denormalization is the norm for data modeling techniques in this system. Given that data marts generally cover only a subset of the data contained in a data warehouse, they are often easier and faster to implement.

{| class="wikitable"
{| class="wikitable"
Line 55: Line 42:
|-
|-
! style="text-align: left" | Scope of the data
! style="text-align: left" | Scope of the data
| enterprise-wide
| enterprise
| department-wide
| department
|-
|-
! style="text-align: left" | Number of subject areas
! style="text-align: left" | Number of subject areas
Line 66: Line 53:
| easy
| easy
|-
|-
! style="text-align: left" | How much time takes to build
! style="text-align: left" | Memory required
| more
| less
|-
! style="text-align: left" | Amount of memory
| larger
| larger
| limited
| limited
Line 77: Line 60:
Types of data marts include [[Data_mart#Dependent_data_mart|dependent]], independent, and hybrid data marts.{{clarify |date=March 2017 |reason= }}


==Variants==
===ETL===
The typical [[extract, transform, load]] (ETL)-based data warehouse uses [[Staging (data)|staging]], [[data integration]], and access layers to house its key functions. The staging layer or staging database stores raw data extracted from each of the disparate source data systems. The integration layer integrates disparate data sets by transforming the data from the staging layer, often storing this transformed data in an [[operational data store]] (ODS) database. The integrated data are then moved to yet another database, often called the data warehouse database, where the data is arranged into hierarchical groups, often called dimensions, and into [[#Facts|facts]] and aggregate facts. The combination of facts and dimensions is sometimes called a [[star schema]]. The access layer helps users retrieve data.<ref name=IJCA96Patil>{{cite journal |url=http://www.ijcaonline.org/proceedings/icwet/number9/2131-db195 |author1=Patil, Preeti S. |author2=Srikantha Rao |author3=Suryakant B. Patil |title=Optimization of Data Warehousing System: Simplification in Reporting and Analysis |journal=IJCA Proceedings on International Conference and Workshop on Emerging Trends in Technology (ICWET) |year=2011 |volume=9 |issue=6 |pages=33–37 |publisher=Foundation of Computer Science}}</ref>


The main source of the data is [[data cleansing|cleansed]], transformed, catalogued, and made available for use by managers and other business professionals for [[data mining]], [[OLAP|online analytical processing]], [[market research]] and [[decision support]].<ref>Marakas & O'Brien 2009</ref> However, the means to retrieve and analyze data, to extract, transform, and load data, and to manage the [[data dictionary]] are also considered essential components of a data warehousing system. Many references to data warehousing use this broader context. Thus, an expanded definition of data warehousing includes [[business intelligence tools]], tools to extract, transform, and load data into the repository, and tools to manage and retrieve [[metadata]].
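
A minimal Python sketch of this staging/integration/warehouse flow; the records, field names and cleansing rule are hypothetical:
<syntaxhighlight lang="python">
# Staging layer: raw data exactly as extracted from a source system.
raw_orders = [
    {"id": 1, "cust": " Acme ", "amount": "120.50"},
    {"id": 2, "cust": "Zenith", "amount": "80.00"},
]

def transform(record):
    """Integration layer: cleanse values and conform types before loading."""
    return {"id": record["id"],
            "customer": record["cust"].strip(),
            "amount": float(record["amount"])}

ods = [transform(r) for r in raw_orders]  # operational data store

# Warehouse layer: split each row into a fact and a dimension reference.
dim_customer = {}   # customer dimension, keyed by a surrogate key
fact_sales = []     # fact table: measures plus dimension keys
for row in ods:
    key = dim_customer.setdefault(row["customer"], len(dim_customer))
    fact_sales.append({"customer_key": key, "amount": row["amount"]})

print(dim_customer)  # {'Acme': 0, 'Zenith': 1}
print(fact_sales)    # [{'customer_key': 0, 'amount': 120.5}, {'customer_key': 1, 'amount': 80.0}]
</syntaxhighlight>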

===ELT===
[[File:ELT Diagram.png|thumb|244x244px|[[Extract, load, transform|ELT]]-based data warehouse architecture]]

[[Extract, load, transform|ELT]]-based data warehousing gets rid of a separate [[Extract, transform, load|ETL]] tool for data transformation. Instead, it maintains a staging area inside the data warehouse itself. In this approach, data is extracted from heterogeneous source systems and loaded directly into the data warehouse, before any transformation occurs. All necessary transformations are then handled inside the data warehouse itself. Finally, the manipulated data is loaded into target tables in the same data warehouse.
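
A minimal sketch of the ELT pattern, with Python's built-in sqlite3 module standing in for the warehouse engine and a hypothetical table layout:
<syntaxhighlight lang="python">
import sqlite3

warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE staging_orders (id INTEGER, cust TEXT, amount TEXT)")
warehouse.execute("CREATE TABLE fact_orders (id INTEGER, customer TEXT, amount REAL)")

# Extract + Load: raw rows land in the warehouse before any transformation.
warehouse.executemany("INSERT INTO staging_orders VALUES (?, ?, ?)",
                      [(1, " Acme ", "120.50"), (2, "Zenith", "80.00")])

# Transform: performed inside the warehouse itself, loading the target table.
warehouse.execute("""
    INSERT INTO fact_orders
    SELECT id, TRIM(cust), CAST(amount AS REAL) FROM staging_orders
""")

print(warehouse.execute("SELECT * FROM fact_orders").fetchall())
# [(1, 'Acme', 120.5), (2, 'Zenith', 80.0)]
</syntaxhighlight>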

==Benefits==
A data warehouse maintains a copy of information from the source transaction systems. This architectural complexity provides the opportunity to:
* Integrate data from multiple sources into a single database and data model. Congregating data in a single database means a single query engine can be used to present data, as in an [[Operational Data Store|operational data store]].
* Mitigate the problem of isolation-level lock contention in [[transaction processing]] systems caused by long-running analysis queries in transaction processing databases.
* Maintain [[Provenance#Data provenance|data history]], even if the source transaction systems do not.
* Integrate data from multiple source systems, enabling a central view across the enterprise. This benefit is always valuable, but particularly so when the organization grows via merging.
* Improve [[data quality]], by providing consistent codes and descriptions, flagging or even fixing bad data.
* Present the organization's information consistently.
* Provide a single [[common data model]] for all data of interest regardless of data source.
* Restructure the data so that it makes sense to the business users.
* Restructure the data so that it delivers excellent query performance, even for complex analytic queries, without impacting the [[operational system]]s.
* Add value to operational business applications, notably [[customer relationship management]] (CRM) systems.
* Make decision-support queries easier to write.
* Organize and disambiguate repetitive data.


==History==
The concept of data warehousing dates back to the late 1980s<ref>{{cite web |url=http://www.computerworld.com/databasetopics/data/story/0,10801,70102,00.html |title=The Story So Far |date=2002-04-15 |access-date=2008-09-21 |url-status=dead |archive-url=https://web.archive.org/web/20080708182105/http://www.computerworld.com/databasetopics/data/story/0%2C10801%2C70102%2C00.html |archive-date=2008-07-08 }}</ref> when IBM researchers Barry Devlin and Paul Murphy developed the "business data warehouse". In essence, the data warehousing concept was intended to provide an architectural model for the flow of data from operational systems to [[decision support system|decision support environments]]. The concept attempted to address the various problems associated with this flow, mainly the high costs associated with it. In the absence of a data warehousing architecture, an enormous amount of redundancy was required to support multiple decision support environments. In larger corporations, it was typical for multiple decision support environments to operate independently. Though each environment served different users, they often required much of the same stored data. The process of gathering, cleaning and integrating data from various sources, usually from long-term existing operational systems (usually referred to as [[legacy system]]s), was typically in part replicated for each environment. Moreover, the operational systems were frequently reexamined as new decision support requirements emerged. Often new requirements necessitated gathering, cleaning and integrating new data from "[[data mart]]s" that was tailored for ready access by users.

Additionally, with the publication of ''The IRM Imperative'' (Wiley & Sons, 1991) by James M. Kerr, the idea of managing and putting a dollar value on an organization's data resources and then reporting that value as an asset on a balance sheet became popular. In the book, Kerr described a way to populate subject-area databases from data derived from transaction-driven systems to create a storage area where summary data could be further leveraged to inform executive decision-making. This concept served to promote further thinking of how a data warehouse could be developed and managed in a practical way within any enterprise.


Key developments in the early years of data warehousing:


* 1960s – [[General Mills]] and [[Dartmouth College]], in a joint research project, develop the terms ''dimensions'' and ''facts''.<ref name="kimball16">Kimball 2013, pg. 15</ref>
* 1970s – [[ACNielsen]] and IRI provide dimensional data marts for retail sales.<ref name="kimball16" />
* 1970s – [[Bill Inmon]] begins to define and discuss the term Data Warehouse.<ref>{{Cite web|title=The audit of the Data Warehouse Framework|url=http://ceur-ws.org/Vol-19/paper14.pdf |archive-url=https://web.archive.org/web/20120512064024/http://ceur-ws.org/Vol-19/paper14.pdf |archive-date=2012-05-12 |url-status=live}}</ref><ref>{{Cite web |last=Kempe |first=Shannon |date=2012-08-23 |title=A Short History of Data Warehousing |url=https://www.dataversity.net/a-short-history-of-data-warehousing/ |access-date=2024-05-10 |website=DATAVERSITY |language=en-US}}</ref><ref>{{Cite web |title=Data Warehouse – What It Is & Why It Matters |url=https://www.sas.com/en_gb/insights/data-management/data-warehouse.html |access-date=2024-05-10 |website=www.sas.com |language=en-GB}}</ref>
* 1975 – [[Sperry Univac]] introduces [[MAPPER]] (MAintain, Prepare, and Produce Executive Reports), a database management and reporting system that includes the world's first [[Fourth-generation programming language|4GL]]. It is the first platform designed for building Information Centers (a forerunner of contemporary data warehouse technology).
* 1983 – [[Teradata]] introduces the [[DBC 1012|DBC/1012]] database computer specifically designed for decision support.<ref>{{Cite news |title= Will Teradata revive a market? |author= Paul Gillin |pages= 43, 48 |work= Computer World |date= February 20, 1984 |url= https://books.google.com/books?id=5pw6ePUC8YYC&pg=PA48 |access-date= 2017-03-13 }}</ref>
* 1984 – [[Metaphor Computer Systems]], founded by [[David Liddle]] and Don Massaro, releases a hardware/software package and GUI for business users to create a database management and analytic system.
* 1985 – [[Sperry Corporation]] publishes an article by Martyn Jones and Philip Newman on information centers, in which they introduce the term ''MAPPER data warehouse''.
* 1988 – Barry Devlin and Paul Murphy publish the article "An architecture for a business and information system", in which they introduce the term "business data warehouse".<ref>{{cite journal|title=An architecture for a business and information system|journal=IBM Systems Journal|doi=10.1147/sj.271.0060|volume=27|pages=60–80|year=1988|last1=Devlin|first1=B. A.|last2=Murphy|first2=P. T.}}</ref>
* 1990 – Red Brick Systems, founded by [[Ralph Kimball]], introduces Red Brick Warehouse, a database management system specifically for data warehousing.
* 1991 – James M. Kerr authors ''The IRM Imperative'', which suggests data resources could be reported as an asset on a balance sheet, furthering commercial interest in the establishment of data warehouses.
* 1991 – Prism Solutions, founded by [[Bill Inmon]], introduces Prism Warehouse Manager, software for developing a data warehouse.
* 1992 – [[Bill Inmon]] publishes the book ''Building the Data Warehouse''.<ref>{{cite book|last=Inmon|first=Bill|title=Building the Data Warehouse|year=1992|publisher=Wiley|isbn=0-471-56960-7|url=https://archive.org/details/buildingdataware00inmo_1}}</ref>
* 1995 – The Data Warehousing Institute, a for-profit organization that promotes data warehousing, is founded.
* 1996 – [[Ralph Kimball]] publishes the book ''The Data Warehouse Toolkit''.<ref name=":0">{{cite book|title=The Data Warehouse Toolkit|last=Kimball|first=Ralph|publisher=Wiley|year=2011|isbn=978-0-470-14977-5|page=237}}</ref>
* 1998 – Focal modeling is implemented as an ensemble (hybrid) data warehouse modeling approach, with Patrik Lager as one of the main drivers.<ref>[https://topofminds.se/wp/wp-content/uploads/Focal-Introduction-to-Focal-implementation.pdf Introduction to the focal framework]</ref><ref>[https://www.youtube.com/watch?v=C2y92n0sPok Data Modeling Meetup Munich: An Introduction to Focal with Patrik Lager - YouTube]</ref>
* 2000 – [[Dan Linstedt]] releases in the public domain the [[Data vault modeling]], conceived in 1990 as an alternative to Inmon and Kimball to provide long-term historical storage of data coming in from multiple operational systems, with emphasis on tracing, auditing and resilience to change of the source data model.
* 2008 – [[Bill Inmon]], along with Derek Strauss and Genia Neushloss, publishes ''DW 2.0: The Architecture for the Next Generation of Data Warehousing'', explaining his top-down approach to data warehousing and coining the term data warehousing 2.0.
* 2008 – [[Anchor modeling]] is formalized in a paper presented at the International Conference on Conceptual Modeling, where it wins the best paper award.<ref>{{Cite journal |last1=Regardt |first1=Olle |last2=Rönnbäck |first2=Lars |last3=Bergholtz |first3=Maria |last4=Johannesson |first4=Paul |last5=Wohed |first5=Petia |title=Anchor Modeling |journal=Proceedings of the 28th International Conference on Conceptual Modeling |series=ER '09 |year=2009 |isbn=978-3-642-04839-5 |location=Gramado, Brazil |pages=234–250 |publisher=Springer-Verlag}}</ref>
* 2012 – [[Bill Inmon]] develops and makes public technology known as "textual disambiguation". Textual disambiguation applies context to raw text and reformats the raw text and context into a standard data base format. Once raw text is passed through textual disambiguation, it can easily and efficiently be accessed and analyzed by standard business intelligence technology. Textual disambiguation is accomplished through the execution of textual ETL. Textual disambiguation is useful wherever raw text is found, such as in documents, Hadoop, email, and so forth.
* 2013 – Data vault 2.0 is released,<ref>[[#dvos2|A short intro to #datavault 2.0]]</ref><ref>[[#dvspec2|Data Vault 2.0 Being Announced]]</ref> with some minor changes to the modeling method, as well as integration with best practices from other methodologies, architectures and implementations, including agile and CMMI principles.


==Data organization==


===Facts===
A fact is a value or measurement in the system being managed.


Raw facts are those reported by the reporting entity. For example, in a mobile telephone system, if a [[base transceiver station]] (BTS) receives 1,000 requests for traffic channel allocation, allocates 820 of them, and rejects the rest, it could report three facts to a management system:
* {{code|tch_req_total {{=}} 1000}}
* {{code|tch_req_success {{=}} 820}}
* {{code|tch_req_fail {{=}} 180}}


Raw facts are aggregated to higher levels in various [[Dimension (data warehouse)|dimensions]] to extract information more relevant to the service or business. These are called aggregated facts or summaries.


For example, if there are three BTSs in a city, then the facts above can be aggregated to the city level in the network dimension:


* {{code|tch_req_success_city {{=}} tch_req_success_bts1 + tch_req_success_bts2 + tch_req_success_bts3}}
* {{code|avg_tch_req_success_city {{=}} (tch_req_success_bts1 + tch_req_success_bts2 + tch_req_success_bts3) / 3}}
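
The same aggregation, as a minimal Python sketch; the per-BTS values are hypothetical raw facts consistent with the example above:
<syntaxhighlight lang="python">
# Hypothetical raw facts for the three BTSs in the city (820 successes in total).
tch_req_success = {"bts1": 270, "bts2": 280, "bts3": 270}

tch_req_success_city = sum(tch_req_success.values())                    # roll-up: 820
avg_tch_req_success_city = tch_req_success_city / len(tch_req_success)  # 273.33...
print(tch_req_success_city, avg_tch_req_success_city)
</syntaxhighlight>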


===Dimensional versus normalized approach for storage of data===
The two most important approaches to store data in a warehouse are dimensional and normalized. The dimensional approach uses a [[star schema]], as proposed by [[Ralph Kimball]]. The normalized approach, also called the [[third normal form]] (3NF), is an entity-relational normalized model proposed by Bill Inmon.<ref>{{Cite journal|last1=Golfarelli|first1=Matteo|last2=Maio|first2=Dario|last3=Rizzi|first3=Stefano|date=1998-06-01|title=The dimensional fact model: a conceptual model for data warehouses|url=https://www.worldscientific.com/doi/abs/10.1142/S0218843098000118|journal=International Journal of Cooperative Information Systems|volume=07|issue=2n03|pages=215–247|doi=10.1142/S0218843098000118|issn=0218-8430}}</ref>


====Dimensional approach====
In a [[Star schema|dimensional approach]], [[transaction data]] is partitioned into "facts", which are usually numeric transaction data, and "[[dimension (data warehouse)|dimensions]]", which are the reference information that gives context to the facts. For example, a sales transaction can be broken up into facts such as the number of products ordered and the total price paid for the products, and into dimensions such as order date, customer name, product number, order ship-to and bill-to locations, and salesperson responsible for receiving the order.
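
A minimal Python sketch of this partitioning into a fact table and dimension tables; the order data and field names are hypothetical:
<syntaxhighlight lang="python">
# Dimension tables: reference data that gives context to the facts.
dim_customer = {101: "Acme Corp"}
dim_product = {77: "Widget"}
dim_date = {20240115: "2024-01-15"}

# Fact table: numeric measures plus keys into the dimension tables.
fact_sales = [
    {"date_key": 20240115, "customer_key": 101, "product_key": 77,
     "quantity": 3, "total_price": 29.85},
]

# A query gives context to a fact by looking up its dimensions.
for f in fact_sales:
    print(dim_date[f["date_key"]], dim_customer[f["customer_key"]],
          dim_product[f["product_key"]], f["quantity"], f["total_price"])
</syntaxhighlight>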

This dimensional approach makes data easier to understand and speeds up data retrieval.<ref name=":0" /> Dimensional structures are easy for business users to understand because the structure is divided into measurements/facts and context/dimensions. Facts are related to the organization's business processes and operational system, and dimensions are the context about them (Kimball, Ralph 2008). Another advantage is that the dimensional model does not require a relational database every time, making this type of modeling technique very useful for end-user queries in the data warehouse.

The model of facts and dimensions can also be understood as a [[data cube]],<ref>{{cite web| url = http://www2.cs.uregina.ca/~dbd/cs831/notes/dcubes/dcubes.html| title = Introduction to Data Cubes}}</ref> in which dimensions are the categorical coordinates in a multi-dimensional cube and the fact is a value corresponding to the coordinates.

The main disadvantages of the dimensional approach are:
# Maintaining the integrity of facts and dimensions is complicated when loading the data warehouse with data from different operational systems.
# It is difficult to modify the warehouse structure if the organization changes the way it does business.


====Normalized approach====
In the normalized approach, the data in the warehouse are stored following, to a degree, [[database normalization]] rules. Normalized relational database tables are grouped into ''subject areas'' (for example, customers, products and finance). When used in large enterprises, the result is dozens of tables linked by a web of joins (Kimball, Ralph 2008).

The main advantage of this approach is that it is straightforward to add information into the database. Disadvantages include that, because of the large number of tables, it can be difficult for users to join data from different sources into meaningful information and to access the information without a precise understanding of the data sources and of the [[data structure]] of the data warehouse.
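
A minimal Python sketch of a normalized layout (all table and field names are hypothetical), showing how even a simple question requires a chain of joins:
<syntaxhighlight lang="python">
# One business transaction spread over several normalized tables.
customers = {1: {"name": "Acme Corp"}}
orders = {10: {"customer_id": 1, "date": "2024-01-15"}}
order_lines = [{"order_id": 10, "product_id": 77, "qty": 3}]
products = {77: {"name": "Widget", "price": 9.95}}

# "What did Acme Corp order, and for how much?" needs three joins.
for line in order_lines:
    order = orders[line["order_id"]]
    customer = customers[order["customer_id"]]
    product = products[line["product_id"]]
    if customer["name"] == "Acme Corp":
        print(order["date"], product["name"], round(line["qty"] * product["price"], 2))
</syntaxhighlight>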


Both normalized and dimensional models can be represented in entity–relationship diagrams because both contain joined relational tables. The difference between them is the degree of normalization. These approaches are not mutually exclusive, and there are other approaches. Dimensional approaches can involve normalizing data to a degree (Kimball, Ralph 2008).


In ''Information-Driven Business'',<ref>{{cite book|last=Hillard|first=Robert|title=Information-Driven Business|year=2010|publisher=Wiley|isbn=978-0-470-62577-4}}</ref> [[Robert Hillard (writer)|Robert Hillard]] compares the two approaches based on the information needs of the business problem. He concludes that normalized models hold far more information than their dimensional equivalents (even when the same fields are used in both models) but at the cost of usability. The technique measures information quantity in terms of [[Entropy (information theory)|information entropy]] and usability in terms of the Small Worlds data transformation measure.<ref>{{cite web|url=http://mike2.openmethodology.org/wiki/Small_Worlds_Data_Transformation_Measure |title=Information Theory & Business Intelligence Strategy - Small Worlds Data Transformation Measure - MIKE2.0, the open source methodology for Information Development |publisher=Mike2.openmethodology.org |access-date=2013-06-14}}</ref>


==Design methods==

===Bottom-up design===
In the ''bottom-up'' approach, [[data mart]]s are first created to provide reporting and analytical capabilities for specific [[business process]]es. These data marts can then be integrated to create a comprehensive data warehouse. The data warehouse bus architecture is primarily an implementation of "the bus", a collection of [[Dimension (data warehouse)#Types|conformed dimension]]s and [[Facts (data warehouse)#Types|conformed fact]]s, which are dimensions that are shared (in a specific way) between facts in two or more data marts.<ref>{{Cite web|url=http://decisionworks.com/2003/09/the-bottom-up-misnomer/|title=The Bottom-Up Misnomer - DecisionWorks Consulting|website=DecisionWorks Consulting|date=17 September 2003|language=en-US|access-date=2016-03-06}}</ref>


===Top-down design===
In the ''top-down'' approach, the data warehouse is designed using a normalized enterprise [[data model]]. "Atomic" data, that is, data at the greatest level of detail, are stored in the data warehouse. Dimensional data marts containing data needed for specific business processes or specific departments are created from the data warehouse.


===Hybrid design===
Data warehouses often resemble the [[hub and spokes architecture]]. [[Legacy system]]s feeding the warehouse often include [[customer relationship management]] and [[enterprise resource planning]], generating large amounts of data. To consolidate these various data models, and facilitate the [[extract transform load]] process, data warehouses often make use of an [[operational data store]], the information from which is parsed into the actual data warehouse. To reduce data redundancy, larger systems often store the data in a normalized way. Data marts for specific reports can then be built on top of the data warehouse.


A hybrid (also called ensemble) data warehouse database is kept on [[third normal form]] to eliminate [[data redundancy]]. A normal relational database, however, is not efficient for business intelligence reports where dimensional modelling is prevalent. Small data marts can shop for data from the consolidated warehouse and use the filtered, specific data for the fact tables and dimensions required. The data warehouse provides a single source of information from which the data marts can read, providing a wide range of business information. The hybrid architecture allows a data warehouse to be replaced with a [[master data management]] repository where operational (not static) information could reside.


The [[data vault modeling]] components follow hub and spokes architecture. This modeling style is a hybrid design, consisting of the best practices from both third normal form and [[star schema]]. The data vault model is not a true third normal form, and breaks some of its rules, but it is a top-down architecture with a bottom up design. The data vault model is geared to be strictly a data warehouse. It is not geared to be end-user accessible, which, when built, still requires the use of a data mart or star schema-based release area for business purposes.


==Characteristics==
The basic features that define the data in a data warehouse include subject orientation, data integration, time variance, nonvolatility, and data granularity.


===Subject-oriented===
Unlike the operational systems, the data in the data warehouse revolves around the subjects of the enterprise. Subject orientation is not [[database normalization]]; rather, gathering the required data objects around the subjects of the enterprise is what makes the warehouse subject-oriented, which can be particularly useful for decision-making.


===Integrated===
The data found within the data warehouse is integrated. Since it comes from several operational systems, all inconsistencies must be removed. Consistencies include naming conventions, measurement of variables, encoding structures, physical attributes of data, and so forth.


===Time-variant===
While operational systems reflect current values as they support day-to-day operations, data warehouse data represents a long time horizon (up to 10 years), which means it stores mostly historical data. It is mainly meant for data mining and forecasting. For example, a user searching for the buying pattern of a specific customer needs to look at data on both current and past purchases.<ref name=":1">{{Cite book|title=Data warehousing fundamentals for IT professionals|last=Ponniah|first=Paulraj|date=2010|publisher=John Wiley & Sons|isbn=9780470462072|edition=2nd|location=Hoboken, N.J.|oclc=662453070}}</ref>


===Nonvolatile===
The data in the data warehouse is read-only, which means it cannot be updated, created, or deleted (unless there is a regulatory or statutory obligation to do so).<ref>{{Cite book|title=Building the data warehouse|last=Inmon|first=William H.|date=2005|publisher=Wiley|isbn=9780764599446|edition=4th|location=Indianapolis, IN|oclc=61762085}}</ref>


==Options==
===Aggregation===
In the data warehouse process, data can be aggregated in data marts at different levels of abstraction. The user may start by looking at the total sale units of a product in an entire region, then look at the states in that region, and finally examine the individual stores in a certain state. Therefore, typically, the analysis starts at a higher level and drills down to lower levels of detail.<ref name=":1" />
===Virtualization===
With [[data virtualization]], the data used remains in its original locations and real-time access is established to allow analytics across multiple sources, creating a virtual data warehouse. This can help resolve technical difficulties such as compatibility problems when combining data from various platforms, lower the risk of error caused by faulty data, and guarantee that the newest data is used. Furthermore, avoiding the creation of a new database containing personal information can make it easier to comply with privacy regulations. The main drawback of data virtualization is that, because there is no local copy of the data, the connection to all necessary data sources must be operational at all times.<ref name="Paiho">{{cite journal | doi=10.1049/smc2.12044 | title=Opportunities of collected city data for smart cities | year=2022 | last1=Paiho | first1=Satu | last2=Tuominen | first2=Pekka | last3=Rökman | first3=Jyri | last4=Ylikerälä | first4=Markus | last5=Pajula | first5=Juha | last6=Siikavirta | first6=Hanne | journal=IET Smart Cities | volume=4 | issue=4 | pages=275–291 | s2cid=253467923 | doi-access=free }}</ref>


==Data warehouse architecture==
Organizations can use many different methods to construct and organize a data warehouse. The hardware utilized, the software created, and the data resources specifically required for the correct functionality of a data warehouse are the main components of the data warehouse architecture. All data warehouses pass through multiple phases in which the requirements of the organization are modified and fine-tuned.<ref>{{cite book|last1=Gupta|first1=Satinder Bal|last2=Mittal|first2=Aditya|title=Introduction to Database Management System|year=2009|publisher=Laxmi Publications|url=https://books.google.com/books?id=fyQTae6c9l4C|isbn=9788131807248}}</ref>



==Evolution in organization use==
These terms refer to the level of sophistication of a data warehouse:


; Offline operational data warehouse: Data warehouses in this stage of evolution are updated on a regular time cycle (usually daily, weekly or monthly) from the operational systems and the data is stored in an integrated reporting-oriented database.
; Offline data warehouse: Data warehouses at this stage are updated from data in the operational systems on a regular basis and the data warehouse data are stored in a data structure designed to facilitate reporting.
; On-time data warehouse: Online integrated data warehousing represents the real-time data warehouse stage, in which the data in the warehouse is updated for every transaction performed on the source data.
; Integrated data warehouse: These data warehouses assemble data from different areas of business, so users can look up the information they need across other systems.<ref>{{cite web |url=http://www.tech-faq.com/data-warehouse.html |title=Data Warehouse |date=6 April 2019 }}</ref>

==See also==
{{Wiktionary|data warehouse}}
* [[Business intelligence software|List of business intelligence software]]
* {{annotated link|Data lake}}
* {{annotated link|Data mesh}}


==References==
{{Reflist}}


==Further reading==
{{Portal|Data warehouses}}
* [[Thomas H. Davenport|Davenport, Thomas H.]] and Harris, Jeanne G. ''Competing on Analytics: The New Science of Winning'' (2007) Harvard Business School Press. {{ISBN|978-1-4221-0332-6}}
* Ganczarski, Joe. ''Data Warehouse Implementations: Critical Implementation Factors Study'' (2009) [[VDM Verlag]] {{ISBN|3-639-18589-7}} {{ISBN|978-3-639-18589-8}}


{{DEFAULTSORT:Data Warehouse}}
[[Category:Business intelligence]]
[[Category:Data engineering]]
[[Category:Data management]]
[[Category:Data warehousing| ]]
[[Category:Information technology management]]

Latest revision as of 02:43, 28 November 2024

Data Warehouse and Data mart overview, with Data Marts shown in the top right.

In computing, a data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis and is a core component of business intelligence.[1] Data warehouses are central repositories of data integrated from disparate sources. They store current and historical data organized so as to make it easy to create reports, query and get insights from the data.[2] Unlike databases, they are intended to be used by analysts and managers to help make organizational decisions.[3]

The basic architecture of a data warehouse

The data stored in the warehouse is uploaded from operational systems (such as marketing or sales). The data may pass through an operational data store and may require data cleansing for additional operations to ensure data quality before it is used in the data warehouse for reporting.

The two main approaches for building a data warehouse system are extract, transform, load (ETL) and extract, load, transform (ELT).

Components


The environment for data warehouses and marts includes the following:

  • Source systems of data (often, the company's operational databases, such as relational databases[3]);
  • Data integration technology and processes to extract data from source systems, transform them, and load them into a data mart or warehouse;[3]
  • Architectures to store data in the warehouse or marts;
  • Tools and applications for varied users;
  • Metadata, data quality, and governance processes. Metadata includes data sources (database, table, and column names), refresh schedules, and data usage measures[3] (see the sketch after this list).
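
The following is a minimal sketch of how such metadata might be recorded. The field names (refresh_schedule, queries_last_30_days, and so on) are illustrative assumptions rather than any standard; real deployments use dedicated metadata catalogs.

    # Illustrative sketch of a warehouse metadata registry (field names are
    # hypothetical assumptions, not a standard): each entry records a source,
    # its refresh schedule, and a simple data-usage measure.
    metadata_registry = [
        {
            "source": {"database": "crm", "table": "customers", "column": "customer_id"},
            "refresh_schedule": "daily 02:00",
            "queries_last_30_days": 412,
        },
        {
            "source": {"database": "erp", "table": "orders", "column": "order_total"},
            "refresh_schedule": "hourly",
            "queries_last_30_days": 1287,
        },
    ]

    def stale_sources(registry, min_queries=10):
        """Return sources that are rarely queried, a basic governance check."""
        return [m["source"] for m in registry if m["queries_last_30_days"] < min_queries]

    print(stale_sources(metadata_registry))
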
Related systems (data mart, OLAP, OLTP, predictive analytics)

Operational databases


Operational databases are optimized for the preservation of data integrity and speed of recording of business transactions through use of database normalization and an entity–relationship model. Operational system designers generally follow Codd's 12 rules of database normalization to ensure data integrity. Fully normalized database designs (that is, those satisfying all Codd rules) often result in information from a business transaction being stored in dozens to hundreds of tables. Relational databases are efficient at managing the relationships between these tables. The databases have very fast insert/update performance because only a small amount of data in those tables is affected by each transaction. To improve performance, older data are periodically purged.

Data warehouses are optimized for analytic access patterns, which usually involve selecting specific fields rather than all fields as is common in operational databases. Because of these differences in access, operational databases (loosely, OLTP) benefit from the use of a row-oriented database management system (DBMS), whereas analytics databases (loosely, OLAP) benefit from the use of a column-oriented DBMS. Operational systems maintain a snapshot of the business, while warehouses maintain historic data through ETL processes that periodically migrate data from the operational systems to the warehouse.

Online analytical processing (OLAP) is characterized by a low rate of transactions and complex queries that involve aggregations. Response time is an effective performance measure of OLAP systems. OLAP applications are widely used for data mining. OLAP databases store aggregated, historical data in multi-dimensional schemas (usually star schemas). OLAP systems typically have a data latency of a few hours, while data mart latency is closer to one day. The OLAP approach is used to analyze multidimensional data from multiple sources and perspectives. The three basic operations in OLAP are roll-up (consolidation), drill-down, and slicing & dicing.
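
As an illustration of these operations, the following sketch performs roll-up, drill-down, slicing, and dicing on a small sales cube using the pandas library; the data and column names are invented for the example.

    import pandas as pd

    # Hypothetical sales cube with three dimensions (region, product, quarter)
    # and one measure (units). All values are made up for illustration.
    sales = pd.DataFrame({
        "region":  ["East", "East", "West", "West", "East", "West"],
        "product": ["A", "B", "A", "B", "A", "A"],
        "quarter": ["Q1", "Q1", "Q1", "Q1", "Q2", "Q2"],
        "units":   [100, 80, 90, 70, 110, 95],
    })

    # Roll-up (consolidation): aggregate away the product dimension.
    rollup = sales.groupby(["region", "quarter"])["units"].sum()

    # Drill-down: return to the finer product level.
    drilldown = sales.groupby(["region", "quarter", "product"])["units"].sum()

    # Slice: fix one dimension at a single value (quarter == "Q1").
    slice_q1 = sales[sales["quarter"] == "Q1"]

    # Dice: restrict several dimensions at once.
    dice = sales[(sales["quarter"] == "Q1") & (sales["region"] == "East")]

    print(rollup, drilldown, slice_q1, dice, sep="\n\n")
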

Online transaction processing (OLTP) is characterized by a large number of short online transactions (INSERT, UPDATE, DELETE). OLTP systems emphasize fast query processing and maintaining data integrity in multi-access environments. For OLTP systems, performance is measured in transactions per second. OLTP databases contain detailed and current data. The schema used to store transactional databases is the entity model (usually 3NF).[citation needed] Normalization is the norm for data modeling techniques in this system.
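
The following toy sketch shows the OLTP pattern of short transactions that preserve integrity, using Python's built-in sqlite3 module as a stand-in for a transactional database; the accounts schema is invented for illustration.

    import sqlite3

    # A toy OLTP interaction: short INSERT/UPDATE statements executed inside
    # a transaction so integrity is preserved under concurrent access.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL NOT NULL)")
    conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 100.0), (2, 50.0)")

    try:
        with conn:  # commits on success, rolls back on exception
            conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
            conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 2")
    except sqlite3.Error:
        pass  # the transfer either happens completely or not at all

    print(conn.execute("SELECT * FROM accounts").fetchall())
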

Predictive analytics is about finding and quantifying hidden patterns in the data using complex mathematical models that can be used to predict future outcomes. By contrast, OLAP focuses on historical data analysis and is reactive. Predictive systems are also used for customer relationship management (CRM).
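
As a deliberately simplified illustration of the predictive idea, the sketch below fits a linear trend to invented historical values and extrapolates one period ahead; real predictive analytics uses far richer models.

    import numpy as np

    # Invented monthly sales history: a trend plus noise.
    months = np.arange(1, 13)
    sales = 100 + 5 * months + np.random.default_rng(0).normal(0, 3, 12)

    # Fit a linear trend to the history and extrapolate to month 13.
    slope, intercept = np.polyfit(months, sales, deg=1)
    forecast_month_13 = slope * 13 + intercept

    print(f"forecast for month 13: {forecast_month_13:.1f}")
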

Data marts


A data mart is a simple data warehouse focused on a single subject or functional area. Hence it draws data from a limited number of sources such as sales, finance or marketing. Data marts are often built and controlled by a single department in an organization. The sources could be internal operational systems, a central data warehouse, or external data.[4] As with warehouses, stored data is usually not normalized.

Difference between data warehouse and data mart

  Attribute                   Data warehouse   Data mart
  Scope of the data           enterprise       department
  Number of subject areas     multiple         single
  How difficult to build      difficult        easy
  Memory required             larger           limited

Types of data marts include dependent, independent, and hybrid data marts. A dependent data mart is populated from an existing central data warehouse, an independent data mart is built directly from operational or external sources, and a hybrid data mart draws on both.

Variants


ETL


The typical extract, transform, load (ETL)-based data warehouse uses staging, data integration, and access layers to house its key functions. The staging layer or staging database stores raw data extracted from each of the disparate source data systems. The integration layer integrates disparate data sets by transforming the data from the staging layer, often storing this transformed data in an operational data store (ODS) database. The integrated data are then moved to yet another database, often called the data warehouse database, where the data is arranged into hierarchical groups, often called dimensions, and into facts and aggregate facts. The combination of facts and dimensions is sometimes called a star schema. The access layer helps users retrieve data.[5]
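
A minimal sketch of these layers follows, using in-memory SQLite databases as stand-ins for the source system and the warehouse. All table and column names are assumptions, and production systems use dedicated ETL tools rather than hand-written scripts.

    import sqlite3

    # Stand-in for an operational source system.
    source = sqlite3.connect(":memory:")
    source.execute("CREATE TABLE orders (id INTEGER, amount TEXT, country TEXT)")
    source.execute("INSERT INTO orders VALUES (1, '19.99', 'us'), (2, '5.00', 'DE')")

    # Stand-in for the warehouse, with staging and target tables.
    warehouse = sqlite3.connect(":memory:")
    warehouse.execute("CREATE TABLE staging_orders (id INTEGER, amount TEXT, country TEXT)")
    warehouse.execute("CREATE TABLE fact_orders (id INTEGER, amount REAL, country TEXT)")

    # Extract: copy raw rows from the source into the staging layer.
    rows = source.execute("SELECT id, amount, country FROM orders").fetchall()
    warehouse.executemany("INSERT INTO staging_orders VALUES (?, ?, ?)", rows)

    # Transform and Load: cleanse types and encodings, then write the target table.
    staged = warehouse.execute("SELECT * FROM staging_orders").fetchall()
    for oid, amount, country in staged:
        warehouse.execute(
            "INSERT INTO fact_orders VALUES (?, ?, ?)",
            (oid, float(amount), country.upper()),  # normalize amount and country
        )

    print(warehouse.execute("SELECT * FROM fact_orders").fetchall())
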

The main source of the data is cleansed, transformed, catalogued, and made available for use by managers and other business professionals for data mining, online analytical processing, market research and decision support.[6] However, the means to retrieve and analyze data, to extract, transform, and load data, and to manage the data dictionary are also considered essential components of a data warehousing system. Many references to data warehousing use this broader context. Thus, an expanded definition of data warehousing includes business intelligence tools, tools to extract, transform, and load data into the repository, and tools to manage and retrieve metadata.

ELT

ELT-based data warehouse architecture

ELT-based data warehousing does away with a separate ETL tool for data transformation. Instead, it maintains a staging area inside the data warehouse itself. In this approach, data is extracted from heterogeneous source systems and loaded directly into the data warehouse before any transformation occurs. All necessary transformations are then handled inside the data warehouse itself, and the transformed data is loaded into target tables in the same data warehouse.
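
The sketch below contrasts with the ETL example above: raw rows are loaded into a staging table inside the warehouse first, and the transformation is then expressed in SQL and run by the warehouse engine itself. Table names are again invented.

    import sqlite3

    # Stand-in warehouse containing both the staging area and the target table.
    warehouse = sqlite3.connect(":memory:")
    warehouse.execute("CREATE TABLE staging_orders (id INTEGER, amount TEXT, country TEXT)")
    warehouse.execute("CREATE TABLE fact_orders (id INTEGER, amount REAL, country TEXT)")

    # Extract + Load: raw, untransformed rows go straight into the warehouse.
    raw_rows = [(1, "19.99", "us"), (2, "5.00", "DE")]
    warehouse.executemany("INSERT INTO staging_orders VALUES (?, ?, ?)", raw_rows)

    # Transform: performed inside the warehouse, in SQL, after loading.
    warehouse.execute("""
        INSERT INTO fact_orders (id, amount, country)
        SELECT id, CAST(amount AS REAL), UPPER(country) FROM staging_orders
    """)

    print(warehouse.execute("SELECT * FROM fact_orders").fetchall())
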

Benefits


A data warehouse maintains a copy of information from the source transaction systems. This architectural complexity provides the opportunity to:

  • Integrate data from multiple sources into a single database and data model. Congregating data into a single database allows a single query engine to be used to present data, for example in an operational data store.
  • Mitigate the problem of isolation-level lock contention in transaction processing systems caused by long-running analysis queries in transaction processing databases.
  • Maintain data history, even if the source transaction systems do not.
  • Integrate data from multiple source systems, enabling a central view across the enterprise. This benefit is always valuable, but particularly so when the organization grows via merging.
  • Improve data quality, by providing consistent codes and descriptions, flagging or even fixing bad data.
  • Present the organization's information consistently.
  • Provide a single common data model for all data of interest regardless of data source.
  • Restructure the data so that it makes sense to the business users.
  • Restructure the data so that it delivers excellent query performance, even for complex analytic queries, without impacting the operational systems.
  • Add value to operational business applications, notably customer relationship management (CRM) systems.
  • Make decision-support queries easier to write.
  • Organize and disambiguate repetitive data.

History


The concept of data warehousing dates back to the late 1980s[7] when IBM researchers Barry Devlin and Paul Murphy developed the "business data warehouse". In essence, the data warehousing concept was intended to provide an architectural model for the flow of data from operational systems to decision support environments. The concept attempted to address the various problems associated with this flow, chiefly its high cost. In the absence of a data warehousing architecture, an enormous amount of redundancy was required to support multiple decision support environments. In larger corporations, it was typical for multiple decision support environments to operate independently. Though each environment served different users, they often required much of the same stored data. The process of gathering, cleaning and integrating data from various sources, usually long-term existing operational systems (often referred to as legacy systems), was typically in part replicated for each environment. Moreover, the operational systems were frequently reexamined as new decision support requirements emerged. Often new requirements necessitated gathering, cleaning and integrating new data from "data marts" that were tailored for ready access by users.

Additionally, with the publication of The IRM Imperative (Wiley & Sons, 1991) by James M. Kerr, the idea of managing and putting a dollar value on an organization's data resources and then reporting that value as an asset on a balance sheet became popular. In the book, Kerr described a way to populate subject-area databases from data derived from transaction-driven systems to create a storage area where summary data could be further leveraged to inform executive decision-making. This concept served to promote further thinking of how a data warehouse could be developed and managed in a practical way within any enterprise.

Key developments in the early years of data warehousing:

  • 1960s – General Mills and Dartmouth College, in a joint research project, develop the terms dimensions and facts.[8]
  • 1970s – ACNielsen and IRI provide dimensional data marts for retail sales.[8]
  • 1970s – Bill Inmon begins to define and discuss the term Data Warehouse.[9][10][11]
  • 1975 – Sperry Univac introduces MAPPER (MAintain, Prepare, and Produce Executive Reports), a database management and reporting system that includes the world's first 4GL. It is the first platform designed for building Information Centers (a forerunner of contemporary data warehouse technology).
  • 1983 – Teradata introduces the DBC/1012 database computer specifically designed for decision support.[12]
  • 1984 – Metaphor Computer Systems, founded by David Liddle and Don Massaro, releases a hardware/software package and GUI for business users to create a database management and analytic system.
  • 1988 – Barry Devlin and Paul Murphy publish the article "An architecture for a business and information system" where they introduce the term "business data warehouse".[13]
  • 1990 – Red Brick Systems, founded by Ralph Kimball, introduces Red Brick Warehouse, a database management system specifically for data warehousing.
  • 1991 – James M. Kerr authors The IRM Imperative, which suggests data resources could be reported as an asset on a balance sheet, furthering commercial interest in the establishment of data warehouses.
  • 1991 – Prism Solutions, founded by Bill Inmon, introduces Prism Warehouse Manager, software for developing a data warehouse.
  • 1992 – Bill Inmon publishes the book Building the Data Warehouse.[14]
  • 1995 – The Data Warehousing Institute, a for-profit organization that promotes data warehousing, is founded.
  • 1996 – Ralph Kimball publishes the book The Data Warehouse Toolkit.[15]
  • 1998 – Focal modeling is implemented as an ensemble (hybrid) data warehouse modeling approach, with Patrik Lager as one of the main drivers.[16][17]
  • 2000 – Dan Linstedt releases into the public domain the data vault modeling approach, conceived in 1990 as an alternative to Inmon and Kimball, to provide long-term historical storage of data coming in from multiple operational systems, with emphasis on tracing, auditing and resilience to change of the source data model.
  • 2008 – Bill Inmon, along with Derek Strauss and Genia Neushloss, publishes DW 2.0: The Architecture for the Next Generation of Data Warehousing, explaining his top-down approach to data warehousing and coining the term "data warehousing 2.0".
  • 2008 – Anchor modeling is formalized in a paper presented at the International Conference on Conceptual Modeling, where it wins the best paper award.[18]
  • 2012 – Bill Inmon develops and makes public a technology known as "textual disambiguation", which applies context to raw text and reformats the raw text and context into a standard database format. Once raw text is passed through textual disambiguation, it can easily and efficiently be accessed and analyzed by standard business intelligence technology. Textual disambiguation is accomplished through the execution of textual ETL. It is useful wherever raw text is found, such as in documents, Hadoop, email, and so forth.
  • 2013 – Data vault 2.0 is released,[19][20] with some minor changes to the modeling method, as well as integration with best practices from other methodologies, architectures and implementations, including agile and CMMI principles.

Data organization


Facts


A fact is a value or measurement in the system being managed.

Raw facts are ones reported by the reporting entity. For example, in a mobile telephone system, if a base transceiver station (BTS) receives 1,000 requests for traffic channel allocation, allocates for 820, and rejects the rest, it could report three facts to a management system:

  • tch_req_total = 1000
  • tch_req_success = 820
  • tch_req_fail = 180

Raw facts are aggregated to higher levels in various dimensions to extract information more relevant to the service or business. These are called aggregated facts or summaries.

For example, if there are three BTSs in a city, then the facts above can be aggregated to the city level in the network dimension, producing summaries such as the following (a runnable sketch appears after the list):

  • tch_req_success_city = tch_req_success_bts1 + tch_req_success_bts2 + tch_req_success_bts3
  • avg_tch_req_success_city = (tch_req_success_bts1 + tch_req_success_bts2 + tch_req_success_bts3) / 3
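
A runnable version of this aggregation; the figures for the second and third BTS are invented so that the sum and average have three inputs.

    # Aggregating the raw BTS facts from the example above to the city level.
    # bts1 uses the figures from the text; bts2 and bts3 are invented.
    raw_facts = {
        "bts1": {"tch_req_total": 1000, "tch_req_success": 820, "tch_req_fail": 180},
        "bts2": {"tch_req_total": 900,  "tch_req_success": 700, "tch_req_fail": 200},
        "bts3": {"tch_req_total": 1100, "tch_req_success": 950, "tch_req_fail": 150},
    }

    tch_req_success_city = sum(f["tch_req_success"] for f in raw_facts.values())
    avg_tch_req_success_city = tch_req_success_city / len(raw_facts)

    print(tch_req_success_city, avg_tch_req_success_city)
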

Dimensional versus normalized approach for storage of data


The two most important approaches used to store data in a warehouse are dimensional and normalized. The dimensional approach uses a star schema, as proposed by Ralph Kimball. The normalized approach, also called the third normal form (3NF), is an entity-relational normalized model proposed by Bill Inmon.[21]

Dimensional approach


In a dimensional approach, transaction data is partitioned into "facts", which are usually numeric transaction data, and "dimensions", which are the reference information that gives context to the facts. For example, a sales transaction can be broken up into facts such as the number of products ordered and the total price paid for the products, and into dimensions such as order date, customer name, product number, order ship-to and bill-to locations, and salesperson responsible for receiving the order.

This dimensional approach makes data easier to understand and speeds up data retrieval.[15] Dimensional structures are easy for business users to understand because the structure is divided into measurements/facts and context/dimensions. Facts are related to the organization's business processes and operational system, and the dimensions are the context about them (Kimball, Ralph 2008). Another advantage is that the dimensional model does not necessarily require a relational database. Thus, this type of modeling technique is very useful for end-user queries in the data warehouse.

The model of facts and dimensions can also be understood as a data cube,[22] where the dimensions are the categorical coordinates in a multi-dimensional cube, and the fact is the value corresponding to those coordinates.
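
The following sketch shows a toy star schema treated as a data cube: a fact table joined to two dimension tables, then pivoted so that the dimensions become coordinates. All names and values are invented.

    import pandas as pd

    # A toy star schema: one fact table keyed to two dimension tables.
    dim_product = pd.DataFrame({"product_id": [1, 2], "product_name": ["Widget", "Gadget"]})
    dim_date = pd.DataFrame({"date_id": [10, 11], "month": ["2024-01", "2024-02"]})
    fact_sales = pd.DataFrame({
        "product_id": [1, 1, 2, 2],
        "date_id":    [10, 11, 10, 11],
        "units":      [5, 7, 3, 4],          # numeric facts
        "revenue":    [50.0, 70.0, 90.0, 120.0],
    })

    # Joining facts to dimensions yields the cube view: dimensions are the
    # coordinates, and the facts are the values at those coordinates.
    cube = (fact_sales
            .merge(dim_product, on="product_id")
            .merge(dim_date, on="date_id"))

    print(cube.pivot_table(index="product_name", columns="month", values="revenue"))
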

The main disadvantages of the dimensional approach are:

  1. It is complicated to maintain the integrity of facts and dimensions when loading the data warehouse with data from different operational systems.
  2. It is difficult to modify the warehouse structure if the organization changes the way it does business.

Normalized approach


In the normalized approach, the data in the warehouse are stored following, to a degree, database normalization rules. Normalized relational database tables are grouped into subject areas (for example, customers, products and finance). When used in large enterprises, the result is dozens of tables linked together by a web of joins (Kimball, Ralph 2008).

The main advantage of this approach is that it is straightforward to add information into the database. Disadvantages include that, because of the large number of tables, it can be difficult for users to join data from different sources into meaningful information and to access the information without a precise understanding of the data sources and the data structure of the data warehouse.
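
A small sketch of what normalized storage looks like in practice: each entity gets its own table, and even a simple business question requires several joins. The schema is a simplified invention.

    import sqlite3

    # A 3NF-style schema: each entity in its own table, linked by foreign keys,
    # so a single business transaction spans many tables.
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE product  (product_id  INTEGER PRIMARY KEY, name TEXT, price REAL);
        CREATE TABLE sale     (sale_id     INTEGER PRIMARY KEY,
                               customer_id INTEGER REFERENCES customer,
                               sale_date   TEXT);
        CREATE TABLE sale_line (sale_id    INTEGER REFERENCES sale,
                                product_id INTEGER REFERENCES product,
                                quantity   INTEGER);
    """)

    # Answering even a simple business question requires joining several tables.
    query = """
        SELECT c.name, SUM(l.quantity * p.price) AS total_spent
        FROM sale s
        JOIN customer c ON c.customer_id = s.customer_id
        JOIN sale_line l ON l.sale_id = s.sale_id
        JOIN product p ON p.product_id = l.product_id
        GROUP BY c.name
    """
    print(db.execute(query).fetchall())  # empty until data is inserted
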

Both normalized and dimensional models can be represented in entity–relationship diagrams because both contain joined relational tables. The difference between them is the degree of normalization. These approaches are not mutually exclusive, and there are other approaches. Dimensional approaches can involve normalizing data to a degree (Kimball, Ralph 2008).

In Information-Driven Business,[23] Robert Hillard compares the two approaches based on the information needs of the business problem. He concludes that normalized models hold far more information than their dimensional equivalents (even when the same fields are used in both models) but at the cost of usability. The technique measures information quantity in terms of information entropy and usability in terms of the Small Worlds data transformation measure.[24]

Design methods


Bottom-up design


In the bottom-up approach, data marts are first created to provide reporting and analytical capabilities for specific business processes. These data marts can then be integrated to create a comprehensive data warehouse. The data warehouse bus architecture is primarily an implementation of "the bus", a collection of conformed dimensions and conformed facts, which are dimensions that are shared (in a specific way) between facts in two or more data marts.[25]

Top-down design


The top-down approach is designed using a normalized enterprise data model. "Atomic" data, that is, data at the greatest level of detail, are stored in the data warehouse. Dimensional data marts containing data needed for specific business processes or specific departments are created from the data warehouse.[26]

Hybrid design


Data warehouses often resemble the hub-and-spoke architecture. Legacy systems feeding the warehouse often include customer relationship management and enterprise resource planning systems, which generate large amounts of data. To consolidate these various data models and facilitate the extract, transform, load process, data warehouses often make use of an operational data store, the information from which is parsed into the actual data warehouse. To reduce data redundancy, larger systems often store the data in a normalized way. Data marts for specific reports can then be built on top of the data warehouse.

A hybrid (also called ensemble) data warehouse database is kept in third normal form to eliminate data redundancy. A normal relational database, however, is not efficient for business intelligence reports where dimensional modelling is prevalent. Small data marts can draw data from the consolidated warehouse and use the filtered, specific data for the fact tables and dimensions required. The data warehouse provides a single source of information from which the data marts can read, providing a wide range of business information. The hybrid architecture allows a data warehouse to be replaced with a master data management repository where operational (not static) information could reside.

The data vault modeling components follow a hub-and-spoke architecture. This modeling style is a hybrid design, consisting of best practices from both third normal form and the star schema. The data vault model is not a true third normal form and breaks some of its rules, but it is a top-down architecture with a bottom-up design. The data vault model is geared to be strictly a data warehouse, not to be accessed by end users; once built, it still requires a data mart or star schema-based release area for business purposes.
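
The sketch below gives a minimal flavor of the data vault structures (hubs for business keys, links for relationships, satellites for descriptive attributes with load metadata); it is a simplified illustration, not a complete Data Vault 2.0 model.

    import sqlite3

    # Minimal data vault sketch: hubs hold business keys, links relate hubs,
    # satellites hold descriptive attributes plus load metadata.
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE hub_customer (customer_key INTEGER PRIMARY KEY,
                                   customer_bk  TEXT UNIQUE,  -- business key
                                   load_ts TEXT, record_source TEXT);
        CREATE TABLE hub_order    (order_key INTEGER PRIMARY KEY,
                                   order_bk  TEXT UNIQUE,
                                   load_ts TEXT, record_source TEXT);
        CREATE TABLE link_customer_order (customer_key INTEGER REFERENCES hub_customer,
                                          order_key    INTEGER REFERENCES hub_order,
                                          load_ts TEXT, record_source TEXT);
        CREATE TABLE sat_customer (customer_key INTEGER REFERENCES hub_customer,
                                   name TEXT, city TEXT,
                                   load_ts TEXT, record_source TEXT);
    """)

    print([r[0] for r in db.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")])
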

Characteristics


The basic features that define the data in a data warehouse include subject orientation, data integration, time variance, nonvolatility, and granularity.

Subject-oriented


Unlike operational systems, which are organized around applications and processes, the data in the data warehouse revolves around the subjects of the enterprise. Subject orientation is not database normalization; rather, gathering all of the data required for a given subject in one place is what makes the warehouse subject-oriented and useful for decision-making.

Integrated


The data found within the data warehouse is integrated. Since it comes from several operational systems, all inconsistencies must be removed. These inconsistencies involve naming conventions, measurement of variables, encoding structures, physical attributes of data, and so forth.

Time-variant


While operational systems reflect current values as they support day-to-day operations, data warehouse data represents a long time horizon (up to ten years), which means it stores mostly historical data. It is mainly meant for data mining and forecasting. For example, a user searching for the buying pattern of a specific customer needs to look at data on both current and past purchases.[27]
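
One common way to realize time variance is to keep each change as a new row with a validity interval rather than overwriting values, as in the following sketch; the schema and dates are illustrative assumptions.

    import sqlite3

    # Time-variant storage: rather than overwriting a customer's city, each
    # change becomes a new row with a validity interval, so past states stay
    # queryable.
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE customer_history (
                      customer_id INTEGER,
                      city        TEXT,
                      valid_from  TEXT,
                      valid_to    TEXT)""")  # valid_to is NULL for the current row

    db.executemany("INSERT INTO customer_history VALUES (?, ?, ?, ?)", [
        (42, "Boston", "2015-01-01", "2019-06-30"),
        (42, "Denver", "2019-07-01", None),
    ])

    # "Where did customer 42 live in 2017?" is answerable from history alone.
    row = db.execute("""SELECT city FROM customer_history
                        WHERE customer_id = 42
                          AND valid_from <= '2017-01-01'
                          AND (valid_to IS NULL OR valid_to >= '2017-01-01')""").fetchone()
    print(row[0])  # Boston
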

Nonvolatile


The data in the data warehouse is read-only, which means it cannot be updated, created, or deleted (unless there is a regulatory or statutory obligation to do so).[28]

Options


Aggregation


In the data warehouse process, data can be aggregated in data marts at different levels of abstraction. The user may start by looking at the total sale units of a product in an entire region, then look at the states in that region, and finally examine the individual stores in a certain state. Therefore, typically, the analysis starts at a higher level and drills down to lower levels of detail.[27]
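
The region-to-state-to-store drill-down described above can be sketched with successive groupings; the sales figures are invented.

    import pandas as pd

    # Drill-down: start with regional totals and move to state and store level.
    sales = pd.DataFrame({
        "region": ["West", "West", "West", "West"],
        "state":  ["CA", "CA", "WA", "WA"],
        "store":  ["CA-1", "CA-2", "WA-1", "WA-2"],
        "units":  [120, 80, 60, 40],
    })

    print(sales.groupby("region")["units"].sum())                       # highest level
    print(sales.groupby(["region", "state"])["units"].sum())            # drill to states
    print(sales.groupby(["region", "state", "store"])["units"].sum())   # drill to stores
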

Virtualization


With data virtualization, the data used remains in its original locations and real-time access is established to allow analytics across multiple sources, creating a virtual data warehouse. This can help resolve technical difficulties such as compatibility problems when combining data from various platforms, lower the risk of error caused by faulty data, and guarantee that the newest data is used. Furthermore, avoiding the creation of a new database containing personal information can make it easier to comply with privacy regulations. The main drawback of data virtualization is that, because there is no local copy of the data, the connection to all necessary data sources must be operational at all times.[29]
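
A rough sketch of the idea: the data stays in its source systems and is only combined at query time, so the query fails if a source is unavailable. Both sources below are in-memory stand-ins with invented names.

    import pandas as pd
    import sqlite3

    # Two "live" source systems; no warehouse copy of their data exists.
    crm = sqlite3.connect(":memory:")
    crm.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    crm.execute("INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex')")

    billing = sqlite3.connect(":memory:")
    billing.execute("CREATE TABLE invoices (customer_id INTEGER, total REAL)")
    billing.execute("INSERT INTO invoices VALUES (1, 250.0), (1, 100.0), (2, 75.0)")

    def virtual_query():
        """Combine both live sources at query time; fails if either is down."""
        customers = pd.read_sql_query("SELECT * FROM customers", crm)
        invoices = pd.read_sql_query("SELECT * FROM invoices", billing)
        return (customers.merge(invoices, left_on="id", right_on="customer_id")
                         .groupby("name")["total"].sum())

    print(virtual_query())
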

Architecture


Organizations can use many different methods to construct and organize a data warehouse. The hardware utilized, the software created, and the data resources specifically required for the correct functionality of a data warehouse are the main components of the data warehouse architecture. All data warehouses pass through multiple phases in which the requirements of the organization are modified and fine-tuned.[30]

Evolution in organization use


These terms refer to the level of sophistication of a data warehouse:

Offline operational data warehouse
Data warehouses in this stage of evolution are updated on a regular time cycle (usually daily, weekly or monthly) from the operational systems and the data is stored in an integrated reporting-oriented database.
Offline data warehouse
Data warehouses at this stage are updated from data in the operational systems on a regular basis and the data warehouse data are stored in a data structure designed to facilitate reporting.
On-time data warehouse
Online integrated data warehousing represents the real-time data warehouse stage, in which the data in the warehouse is updated for every transaction performed on the source data.
Integrated data warehouse
These data warehouses assemble data from different areas of business, so users can look up the information they need across other systems.[31]

See also

  • List of business intelligence software
  • Data lake
  • Data mesh

References

  1. ^ Dedić, Nedim; Stanier, Clare (2016). Hammoudi, Slimane; Maciaszek, Leszek; Missikoff, Michele M. Missikoff; Camp, Olivier; Cordeiro, José (eds.). An Evaluation of the Challenges of Multilingualism in Data Warehouse Development. International Conference on Enterprise Information Systems, 25–28 April 2016, Rome, Italy (PDF). Proceedings of the 18th International Conference on Enterprise Information Systems (ICEIS 2016). Vol. 1. SciTePress. pp. 196–206. doi:10.5220/0005858401960206. ISBN 978-989-758-187-8. Archived (PDF) from the original on 2018-05-22.
  2. ^ "What is a Data Warehouse? | Key Concepts | Amazon Web Services". Amazon Web Services, Inc. Retrieved 2023-02-13.
  3. ^ a b c d Rainer, R. Kelly; Cegielski, Casey G. (2012-05-01). Introduction to Information Systems: Enabling and Transforming Business, 4th Edition (Kindle ed.). Wiley. pp. 127, 128, 130, 131, 133. ISBN 978-1118129401.
  4. ^ "Data Mart Concepts". Oracle. 2007.
  5. ^ Patil, Preeti S.; Srikantha Rao; Suryakant B. Patil (2011). "Optimization of Data Warehousing System: Simplification in Reporting and Analysis". IJCA Proceedings on International Conference and Workshop on Emerging Trends in Technology (ICWET). 9 (6). Foundation of Computer Science: 33–37.
  6. ^ Marakas & O'Brien 2009
  7. ^ "The Story So Far". 2002-04-15. Archived from the original on 2008-07-08. Retrieved 2008-09-21.
  8. ^ a b Kimball 2013, pg. 15
  9. ^ "The audit of the Data Warehouse Framework" (PDF). Archived (PDF) from the original on 2012-05-12.
  10. ^ Kempe, Shannon (2012-08-23). "A Short History of Data Warehousing". DATAVERSITY. Retrieved 2024-05-10.
  11. ^ "Data Warehouse – What It Is & Why It Matters". www.sas.com. Retrieved 2024-05-10.
  12. ^ Paul Gillin (February 20, 1984). "Will Teradata revive a market?". Computer World. pp. 43, 48. Retrieved 2017-03-13.
  13. ^ Devlin, B. A.; Murphy, P. T. (1988). "An architecture for a business and information system". IBM Systems Journal. 27: 60–80. doi:10.1147/sj.271.0060.
  14. ^ Inmon, Bill (1992). Building the Data Warehouse. Wiley. ISBN 0-471-56960-7.
  15. ^ a b Kimball, Ralph (2011). The Data Warehouse Toolkit. Wiley. p. 237. ISBN 978-0-470-14977-5.
  16. ^ Introduction to the focal framework
  17. ^ Data Modeling Meetup Munich: An Introduction to Focal with Patrik Lager - YouTube
  18. ^ Regardt, Olle; Rönnbäck, Lars; Bergholtz, Maria; Johannesson, Paul; Wohed, Petia (2009). "Anchor Modeling". Proceedings of the 28th International Conference on Conceptual Modeling. ER '09. Gramado, Brazil: Springer-Verlag: 234–250. ISBN 978-3-642-04839-5.
  19. ^ A short intro to #datavault 2.0
  20. ^ Data Vault 2.0 Being Announced
  21. ^ Golfarelli, Matteo; Maio, Dario; Rizzi, Stefano (1998-06-01). "The dimensional fact model: a conceptual model for data warehouses". International Journal of Cooperative Information Systems. 07 (2n03): 215–247. doi:10.1142/S0218843098000118. ISSN 0218-8430.
  22. ^ "Introduction to Data Cubes".
  23. ^ Hillard, Robert (2010). Information-Driven Business. Wiley. ISBN 978-0-470-62577-4.
  24. ^ "Information Theory & Business Intelligence Strategy - Small Worlds Data Transformation Measure - MIKE2.0, the open source methodology for Information Development". Mike2.openmethodology.org. Retrieved 2013-06-14.
  25. ^ "The Bottom-Up Misnomer - DecisionWorks Consulting". DecisionWorks Consulting. 17 September 2003. Retrieved 2016-03-06.
  26. ^ Gartner, Of Data Warehouses, Operational Data Stores, Data Marts and Data Outhouses, Dec 2005
  27. ^ a b Ponniah, Paulraj (2010). Data warehousing fundamentals for IT professionals (2nd ed.). Hoboken, N.J.: John Wiley & Sons. ISBN 9780470462072. OCLC 662453070.
  28. ^ Inmon, William H. (2005). Building the data warehouse (4th ed.). Indianapolis, IN: Wiley Pub. ISBN 9780764599446. OCLC 61762085.
  29. ^ Paiho, Satu; Tuominen, Pekka; Rökman, Jyri; Ylikerälä, Markus; Pajula, Juha; Siikavirta, Hanne (2022). "Opportunities of collected city data for smart cities". IET Smart Cities. 4 (4): 275–291. doi:10.1049/smc2.12044. S2CID 253467923.
  30. ^ Gupta, Satinder Bal; Mittal, Aditya (2009). Introduction to Database Management System. Laxmi Publications. ISBN 9788131807248.
  31. ^ "Data Warehouse". 6 April 2019.

Further reading
