File system fragmentation
In computing, file system fragmentation, sometimes called file system aging, is the inability of a file system to lay out related data sequentially (contiguously), an inherent phenomenon in storage-backed file systems that allow in-place modification of their contents. It is a special case of data fragmentation. File system fragmentation increases disk head movement or seeks, which are known to hinder throughput. The correction to existing fragmentation is to reorganize files and free space back into contiguous areas, a process called defragmentation.
Cause
When a file system is first initialized on a partition (the partition is formatted for the file system), the entire space allotted is empty.[1] This means that the allocator algorithm is completely free to position newly created files anywhere on the disk. For some time after creation, files on the file system can be laid out near-optimally. When the operating system and applications are installed or other archives are unpacked, laying out separate files sequentially also means that related files are likely to be positioned close to each other.
However, as existing files are deleted or truncated, new regions of free space are created. When existing files are appended to, it is often impossible to resume the write exactly where the file used to end, as another file may already be allocated there; thus, a new fragment has to be allocated. As time goes on, and the same factors remain present, free space as well as frequently appended files tend to fragment more. Shorter regions of free space also mean that the allocator is no longer able to allocate new files contiguously, and has to break them into fragments. This is especially true as the file system fills up, since long contiguous regions of free space become increasingly rare.
Note that the following is a simplification of an otherwise complicated subject. The method described below has been the general practice for allocating files on disk and other random-access storage for over 30 years. Some operating systems do not simply allocate files one after the other, and some use various methods to try to prevent fragmentation, but in general, for the reasons explained below, fragmentation will sooner or later occur on any system where files are routinely deleted or expanded. Consider the following scenario:
A new disk has five files saved on it, named A, B, C, D and E, each using 10 blocks of space (here the block size is unimportant). As the free space is contiguous, the files are located one after the other (Example (1).)
If file B is deleted, a second region of 10 blocks of free space is created, and the disk becomes fragmented. The file system could defragment the disk immediately after a deletion, which would incur a severe performance penalty at unpredictable times, but in general the empty space is simply left there, marked in a table as available for later use, then used again as needed[2] (Example (2).)
Now if a new file F requires 7 blocks of space, it can be placed into the first 7 blocks of the space formerly holding the file B, and the 3 blocks following it will remain available (Example (3).) If another new file G is added, and needs only three blocks, it could then occupy the space after F and before C (Example (4).)
If F subsequently needs to be expanded, since the space immediately following it is occupied, there are three options: (1) add a new block somewhere else and indicate that F has a second extent; (2) move files in the way of the expansion elsewhere, so that F can remain contiguous; or (3) move file F so it can be one contiguous file of the new, larger size. The second option is probably impractical for performance reasons, as is the third when the file is very large. Indeed, the third option is impossible when there is no single contiguous free space large enough to hold the new file. Thus the usual practice is simply to create an extent somewhere else and chain the new extent onto the old one (Example (5).)
Material added to the end of file F would be part of the same extent. But if there is so much material that no room is available after the last extent, then another extent would have to be created, and so on. Eventually the file system has free segments in many places and some files may be spread over many extents. Access time for those files (or for all files) may become excessively long.
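The scenario can be made concrete with a toy model. The sketch below simulates a hypothetical first-fit block allocator in Python; the file names and sizes follow the example above, but the allocator, block map, and function names are illustrative and do not correspond to any real file system:

```python
# Toy first-fit allocator reproducing Examples (1)-(5) above.
def allocate(disk, name, nblocks):
    """Place nblocks for `name` into the first free slots found;
    each contiguous run of slots becomes one extent of the file."""
    placed = 0
    for i, slot in enumerate(disk):
        if slot is None:
            disk[i] = name
            placed += 1
            if placed == nblocks:
                return
    raise RuntimeError("disk full")

def count_extents(disk, name):
    """Count the contiguous runs (extents) belonging to `name`."""
    runs, prev = 0, None
    for slot in disk:
        if slot == name and prev != name:
            runs += 1
        prev = slot
    return runs

disk = [None] * 60
for f in "ABCDE":                               # Example (1): five 10-block files
    allocate(disk, f, 10)
disk = [None if s == "B" else s for s in disk]  # Example (2): delete B
allocate(disk, "F", 7)                          # Example (3): F takes 7 of B's blocks
allocate(disk, "G", 3)                          # Example (4): G takes the last 3
allocate(disk, "F", 5)                          # Example (5): F grows; new extent needed
print(count_extents(disk, "F"))                 # -> 2: F is now fragmented
```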
To summarize, factors that typically cause or facilitate fragmentation include:
- low free space.
- frequent deletion, truncation or extension of files.
- overuse of sparse files.
Performance implications
File system fragmentation is projected to become more problematic with newer hardware due to the increasing disparity between the sequential access speed and the rotational delay (and, to a lesser extent, seek time) of consumer-grade hard disks,[3] on which file systems are usually placed. Thus, fragmentation is an important problem in recent file system research and design. The containment of fragmentation depends not only on the on-disk format of the file system, but also heavily on its implementation.[4]
In simple file system benchmarks, the fragmentation factor is often omitted, as realistic aging and fragmentation are difficult to model. For simplicity of comparison, file system benchmarks are instead often run on empty file systems, and, unsurprisingly, the results may differ heavily from real-life access patterns.[5]
Types of fragmentation
File system fragmentation may occur on several levels:
- Fragmentation within individual files and their metadata.
- Free space fragmentation, making it increasingly difficult to lay out new files contiguously.
- The decrease of locality of reference between separate, but related files.
File fragmentation
Individual file fragmentation occurs when a single file has been broken into multiple pieces (called extents on extent-based file systems). While disk file systems attempt to keep individual files contiguous, this is often not possible without significant performance penalties. File system check and defragmentation tools typically account only for file fragmentation in their "fragmentation percentage" statistic.
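As an illustration, one common reading of such a statistic is the share of files stored in more than one extent. The sketch below uses that definition purely for illustration; real tools differ in the exact formula:

```python
def fragmentation_percentage(extent_counts):
    """extent_counts: number of extents per file, e.g. [1, 1, 3, 2].
    Returns the share of files split into two or more pieces."""
    if not extent_counts:
        return 0.0
    fragmented = sum(1 for n in extent_counts if n > 1)
    return 100.0 * fragmented / len(extent_counts)

print(fragmentation_percentage([1, 1, 3, 2]))  # -> 50.0
```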
Free space fragmentation
Free (unallocated) space fragmentation occurs when the unused space of the file system is broken into several scattered regions to which new files or metadata can be written. Unwanted free space fragmentation is generally caused by deletion or truncation of files, but file systems may also intentionally insert fragments ("bubbles") of free space in order to facilitate extending nearby files (see preemptive techniques below). Such free space is not really "free", however: there is a cost associated with it, such as the longer seek time to reach the appropriate location on the disk to read or write data.
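Free space fragmentation can be quantified in several ways; one simple, illustrative metric (not a standard one) is the ratio of the largest contiguous free region to the total free space, since the largest region bounds the size of file that can still be written without fragmenting:

```python
def largest_free_ratio(free_runs):
    """free_runs: lengths of the contiguous free regions, in blocks.
    1.0 means all free space is one region; values near 0 mean the
    free space is heavily fragmented."""
    total = sum(free_runs)
    return max(free_runs) / total if total else 1.0

print(largest_free_ratio([10]))       # -> 1.0 (one contiguous region)
print(largest_free_ratio([3, 2, 5]))  # -> 0.5 (three scattered regions)
```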
File scattering
File scattering, also called related-file fragmentation, or application-level (file) fragmentation, refers to the lack of locality of reference (within the storing medium) between related files (see file sequence for more detail). Unlike the previous two types of fragmentation, file scattering is a much more vague concept, as it heavily depends on the access pattern of specific applications. This also makes objectively measuring or estimating it very difficult. However, it is arguably the most critical type of fragmentation, as studies have found that the most frequently accessed files tend to be small,[6] so the seeks between them, rather than transfer time, dominate access cost.
To avoid related-file fragmentation and improve locality of reference (in this case called file contiguity), assumptions about the operation of applications have to be made. A frequent assumption is that it is worthwhile to keep smaller files within a single directory together, and to lay them out in the natural file system order. While this is often reasonable, it does not always hold. For example, an application might read several different files, perhaps in different directories, in exactly the same order they were written. Thus, a file system that simply orders all writes successively might work faster for that application.
Techniques for mitigating fragmentation
Several techniques have been developed to fight fragmentation. They can usually be classified into two categories: preemptive and retroactive. Due to the difficulty of predicting access patterns, these techniques are most often heuristic in nature and may degrade performance under unexpected workloads.
Preemptive techniques
Preemptive techniques attempt to keep fragmentation to a minimum at the time data is written to the disk. Perhaps the simplest is appending data to an existing fragment in place where possible, instead of allocating new blocks for a new fragment.
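A sketch of that idea, using a toy block list rather than real file system code: the allocator first fills the free blocks immediately after the file's last extent, and only then falls back to opening a new fragment elsewhere:

```python
def append_blocks(disk, name, nblocks):
    """Grow `name` in place where possible: fill the free blocks
    directly after its last extent before starting a new fragment."""
    last = max((i for i, s in enumerate(disk) if s == name), default=-1)
    i = last + 1
    while nblocks and i < len(disk) and disk[i] is None:
        disk[i] = name               # contiguous growth: same extent
        nblocks -= 1
        i += 1
    for j, slot in enumerate(disk):  # any overflow becomes a new fragment
        if nblocks == 0:
            break
        if slot is None:
            disk[j] = name
            nblocks -= 1

disk = ["F", "F", None, None, "G", None]
append_blocks(disk, "F", 3)
print(disk)  # -> ['F', 'F', 'F', 'F', 'G', 'F']: two blocks in place, one new fragment
```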
Many of today's file systems attempt to preallocate longer chunks, or chunks from different free space fragments, called extents, to files that are actively appended to. This largely avoids file fragmentation when several files are being appended to concurrently, thus preventing them from becoming excessively intertwined.[4]
A relatively recent technique is delayed allocation in XFS, HFS+[7] and ZFS; the same technique is also called allocate-on-flush in reiser4 and ext4. This means that when the file system is being written to, file system blocks are reserved, but the locations of specific files are not laid down yet. Later, when the file system is forced to flush changes as a result of memory pressure or a transaction commit, the allocator will have much better knowledge of the files' characteristics. Most file systems with this approach try to flush files in a single directory contiguously. Assuming that multiple reads from a single directory are common, locality of reference is improved.[8] Reiser4 also orders the layout of files according to the directory hash table, so that when files are being accessed in the natural file system order (as dictated by readdir), they are always read sequentially.[9]
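A schematic of the idea follows; this is a deliberately simplified sketch, with an invented class and a trivial bump allocator, and the real allocators in XFS, ext4 and the others are far more involved. Writes accumulate in memory, and block ranges are chosen only at flush time, when each file's final size is known, so each file can be laid out as a single extent:

```python
class DelayedAllocator:
    """Sketch of delayed allocation / allocate-on-flush."""
    BLOCK_SIZE = 4096

    def __init__(self):
        self.pending = {}          # filename -> buffered bytes
        self.layout = {}           # filename -> (first_block, nblocks)
        self.next_free_block = 0   # trivial bump allocator for the sketch

    def write(self, name, data):
        # No blocks are reserved on disk yet; data only accumulates.
        self.pending[name] = self.pending.get(name, b"") + data

    def flush(self):
        # Every pending file is now known in full, so each one can be
        # allocated as a single contiguous extent.
        for name, data in self.pending.items():
            nblocks = -(-len(data) // self.BLOCK_SIZE)  # ceiling division
            self.layout[name] = (self.next_free_block, nblocks)
            self.next_free_block += nblocks
        self.pending.clear()

fs = DelayedAllocator()
fs.write("a.txt", b"x" * 5000)  # interleaved writes that would fragment
fs.write("b.txt", b"z" * 3000)  # a.txt if blocks were allocated eagerly
fs.write("a.txt", b"y" * 5000)
fs.flush()
print(fs.layout)  # each file occupies one contiguous extent
```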
BitTorrent and other peer-to-peer filesharing applications attempt to limit fragmentation through features that allocate the full space needed for a file when initiating downloads.[10]
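On POSIX systems this kind of preallocation is available to any application; for example, Python exposes it as os.posix_fallocate. The sketch below assumes a platform and file system that support the call, and the file name and size are made up:

```python
import os

def preallocate(path, size):
    """Reserve the full size of a file up front, giving the file
    system a chance to find one contiguous region for it."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.posix_fallocate(fd, 0, size)  # reserve `size` bytes at offset 0
    finally:
        os.close(fd)

preallocate("download.part", 700 * 1024 * 1024)  # e.g. a 700 MiB download
```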
Retroactive techniques
Retroactive techniques attempt to reduce fragmentation, or its negative effects, after it has occurred. Many file systems provide defragmentation tools, which attempt to reorder the fragments of files, and sometimes also to decrease their scattering (i.e. improve their contiguity, or locality of reference) by keeping smaller files in the same directory, directory tree, or even file sequence close to each other on the disk.
The HFS Plus file system transparently defragments files that are less than 20 MiB in size and are broken into 8 or more fragments, when the file is being opened.[11]
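Expressed as a predicate, the published rule is roughly the sketch below; this is illustrative only, and the actual HFS Plus implementation applies additional checks beyond size and extent count (for example, that the volume is journaled and the file is not currently in use):

```python
TWENTY_MIB = 20 * 1024 * 1024

def hfsplus_defrag_on_open(file_size, extent_count):
    """Sketch of the HFS Plus on-open defragmentation condition
    described above; the real checks are more extensive."""
    return file_size < TWENTY_MIB and extent_count >= 8

print(hfsplus_defrag_on_open(5 * 1024 * 1024, 9))  # -> True
print(hfsplus_defrag_on_open(5 * 1024 * 1024, 3))  # -> False
```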
Stateless techniques
SFS (the Smart File System), for the now defunct Commodore Amiga, defragments itself while the file system is in use. The defragmentation process is almost completely stateless (apart from the location it is working on), which means it can be stopped and restarted instantly. During defragmentation, data integrity is ensured for both metadata and normal data.
Notes and references
- ^ The partition is not completely empty: some internal file system structures are always created. However, these are typically contiguous, and their existence is negligible. Some file systems, such as NTFS and ext2+, might also preallocate empty contiguous regions for special purposes.
- ^ The practice of leaving the space occupied by deleted files largely undisturbed is why undelete programs were able to work; they simply recovered the file whose name had been deleted from the directory, but whose contents were still on disk.
- ^ Dr. Mark H. Kryder (2006-04-03). "Future Storage Technologies: A Look Beyond the Horizon" (PDF). Storage Networking World conference. Seagate Technology. Retrieved 2006-12-14.
- ^ a b L. W. McVoy, S. R. Kleiman (Winter 1991). "Extent-like Performance from a UNIX File System" (PostScript). Proceedings of USENIX Winter '91. Dallas, Texas: Sun Microsystems, Inc. pp. 33–43. Retrieved 2006-12-14.
- ^ Keith Arnold Smith (January 2001). "Workload-Specific File System Benchmarks" (PDF). Harvard University. Retrieved 2006-12-14.
- ^ John R. Douceur, William J. Bolosky (June 1999). "A Large-Scale Study of File-System Contents". ACM SIGMETRICS Performance Evaluation Review. 27 (1). Microsoft Research: 59–70. doi:10.1145/301453.301480.
- ^ Amit Singh (May 2004). "Fragmentation in HFS Plus Volumes". Mac OS X Internals.
- ^ Adam Sweeney, Doug Doucette, Wei Hu, Curtis Anderson, Mike Nishimoto, Geoff Peck (January 1996). "Scalability in the XFS File System" (PDF). Proceedings of the USENIX 1996 Annual Technical Conference. San Diego, California: Silicon Graphics. Retrieved 2006-12-14.
- ^ Hans Reiser (2006-02-06). "The Reiser4 Filesystem" (Google Video). A lecture given by the author, Hans Reiser. Retrieved 2006-12-14.
- ^ Jeff Layton (2009-03-29). "From ext3 to ext4: An Interview with Theodore Ts'o". Linux Magazine.
- ^ Amit Singh (2006-06-19). "The HFS Plus File System". Mac OS X Internals: A Systems Approach. Addison Wesley.