Anisotropic filtering
In 3D computer graphics, anisotropic filtering (abbreviated AF) is a method of enhancing the image quality of textures on surfaces that are far away and steeply angled with respect to the camera, so that the projection of the texture (not the polygon or other primitive on which it is rendered) appears as a trapezoid rather than a square. Like bilinear and trilinear filtering, it reduces aliasing effects, but it introduces less blur at extreme viewing angles and thus preserves more detail. Anisotropic filtering is relatively expensive (usually in computation, though the usual space-time tradeoff applies) and only became a standard feature of consumer-level graphics cards in the late 1990s.
From this point on, it is assumed the reader is familiar with mip mapping.
Exploring an approximation to anisotropic filtering, RIP mapping (rectangular mip mapping), shows how anisotropic filtering gains so much texture mapping quality. If we need to texture a horizontal plane which is at an oblique angle to the camera, traditional isotropic minification gives us insufficient horizontal resolution and extraneous vertical resolution: because a single isotropic mip level must serve both directions, the texture appears 'striped', i.e. blurred (over-filtered), parallel to the plane's axis of rotation, and shimmers (undersampled, hence aliased) perpendicular to it.
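To make the compromise concrete, here is a minimal C sketch (function and parameter names are hypothetical, not from any particular API) of how a single isotropic mip level can be chosen from the screen-space gradients of the texture coordinates:

```c
#include <math.h>

/* Hypothetical sketch: choosing one isotropic mip level per pixel.
 * du_dx, dv_dx: change in texel coordinates per screen pixel along x.
 * du_dy, dv_dy: the same along screen y. */
float isotropic_lod(float du_dx, float dv_dx, float du_dy, float dv_dy)
{
    /* Length of the pixel's footprint in texel space along each axis. */
    float len_x = sqrtf(du_dx * du_dx + dv_dx * dv_dx);
    float len_y = sqrtf(du_dy * du_dy + dv_dy * dv_dy);

    /* One level must serve both axes. Taking the longer axis avoids
     * shimmer but blurs the other direction; taking the shorter axis
     * keeps detail but aliases along the longer one. */
    float rho = fmaxf(len_x, len_y);
    return log2f(fmaxf(rho, 1.0f));   /* level 0 when not minifying */
}
```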
In RIP mapping, a rip level smaller than the surface being mapped is always used, and linear magnification fits it to the surface; this avoids pixel shimmer. Unlike a mip level, a rip level can be anisotropic: a 256x256 texture may be downsized to a 128x64 bitmap, which doubles its horizontal resolution relative to its vertical resolution. Thus, when the texture is applied, it appears consistent with the camera-to-plane angle.
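A rip-map lookup can then choose a level for each axis independently. The following sketch assumes the per-axis compression ratios are already known; all names are hypothetical:

```c
#include <math.h>

/* Hypothetical sketch: rip-map (anisotropic mip-map) level selection.
 * A rip map of a 256x256 texture stores every width/height combination
 * (256x128, 128x64, ...), so the two axes are downsampled independently. */
typedef struct { int level_u; int level_v; } RipLevel;

RipLevel rip_level(float texels_per_pixel_u, float texels_per_pixel_v)
{
    RipLevel r;
    /* 2x compression in u and 4x in v selects levels (1, 2): the
     * 128x64 version of a 256x256 base texture. */
    r.level_u = (int)fmaxf(0.0f, log2f(texels_per_pixel_u));
    r.level_v = (int)fmaxf(0.0f, log2f(texels_per_pixel_v));
    return r;
}
```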
This pre-generation of rip maps has its own problems: each rip level is consistent with the anisotropy of only a single viewing angle, whereas an actual anisotropic projection may compress the texture by a different amount, and along a different axis, at every point, placing all four corners of the texture at arbitrary angles with respect to the corners adjacent to them.
True anisotropic filtering computes the anisotropically filtered result on the fly, on a per-pixel basis, rather than pre-generating anisotropic maps. When the texture is sampled, several texels (samples) around the center point are taken, but on a sample grid skewed according to the view perspective; the texture is 'pre-perspective-corrected', so to speak. A more distant part of the texture contributes fewer samples, a closer part more. Each sample must itself also be trilinear (or bilinear) filtered, which adds further texel reads to the process: sixteen trilinear anisotropic samples require 128 reads from the stored texture, since trilinear filtering takes four texels from each of the two adjacent mip levels (eight in total) and 16-tap anisotropic sampling takes sixteen such trilinear-filtered samples.
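The following C sketch illustrates this process in software. It is an illustrative approximation, not a description of any real hardware: the probe count, probe placement, and the sample_trilinear helper are all assumptions.

```c
#include <math.h>

typedef struct { float r, g, b, a; } Color;

/* Assumed to exist elsewhere: one trilinear-filtered fetch, which itself
 * reads four texels from each of two adjacent mip levels (8 texel reads). */
extern Color sample_trilinear(const void *texture, float u, float v, float lod);

/* Hypothetical sketch of per-pixel anisotropic filtering: place several
 * trilinear probes along the major axis of the pixel's footprint in texel
 * space, i.e. a sample grid skewed to match the view perspective. */
Color sample_anisotropic(const void *texture,
                         float u, float v,          /* footprint center    */
                         float du_dx, float dv_dx,  /* texel-space axes of */
                         float du_dy, float dv_dy,  /* the pixel footprint */
                         int max_taps)              /* e.g. 16 for "16x AF" */
{
    float len_x = sqrtf(du_dx * du_dx + dv_dx * dv_dx);
    float len_y = sqrtf(du_dy * du_dy + dv_dy * dv_dy);
    float major = fmaxf(len_x, len_y);
    float minor = fminf(len_x, len_y);
    if (minor < 1e-6f) minor = 1e-6f;

    /* The degree of anisotropy decides how many probes we spend. */
    int taps = (int)ceilf(major / minor);
    if (taps > max_taps) taps = max_taps;
    if (taps < 1) taps = 1;

    /* The mip level is chosen from the *minor* axis, so each probe stays
     * sharp; the probes along the major axis supply the missing coverage. */
    float lod = log2f(fmaxf(minor, 1.0f));

    /* Step along whichever screen direction is the footprint's major axis. */
    float step_u = (len_x > len_y) ? du_dx : du_dy;
    float step_v = (len_x > len_y) ? dv_dx : dv_dy;

    Color acc = {0, 0, 0, 0};
    for (int i = 0; i < taps; i++) {
        float t = (taps > 1) ? ((float)i / (taps - 1) - 0.5f) : 0.0f;
        Color c = sample_trilinear(texture, u + t * step_u, v + t * step_v, lod);
        acc.r += c.r; acc.g += c.g; acc.b += c.b; acc.a += c.a;
    }
    acc.r /= taps; acc.g /= taps; acc.b /= taps; acc.a /= taps;
    return acc;
}
```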
This makes anisotropic filtering extremely bandwidth-intensive. Each texel sample is four bytes (32 bits), so each anisotropically filtered pixel in the 16-tap example above requires 512 bytes from texture memory. A display can easily contain over a million pixels, so the demand on texture memory bandwidth can reach tens to hundreds of gigabytes per second very quickly.
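An illustrative back-of-the-envelope calculation (the display size and frame rate are assumptions) shows where those figures come from:

```c
#include <stdio.h>

/* Worked example of the bandwidth estimate above (figures are
 * illustrative assumptions, not measurements). */
int main(void)
{
    long long pixels            = 1280LL * 1024; /* ~1.3 million pixels   */
    long long texel_fetches     = 128;           /* 16 trilinear taps x 8 */
    long long bytes_per_texel   = 4;             /* 32-bit texels         */
    long long frames_per_second = 60;

    long long bytes_per_frame = pixels * texel_fetches * bytes_per_texel;
    double gb_per_second = (double)(bytes_per_frame * frames_per_second) / 1e9;

    printf("%lld bytes per frame, %.1f GB/s\n", bytes_per_frame, gb_per_second);
    /* ~671 MB per frame and ~40 GB/s before texture caching, which is why
     * real hardware leans heavily on caches to reduce off-chip traffic. */
    return 0;
}
```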