Aerial photographic and satellite image interpretation
[Image: Photo interpretation at the U.S. National Photographic Interpretation Center during the Cuban Missile Crisis.]
Aerial photographic and satellite image interpretation, or just image interpretation when in context, is the act of examining photographic images, particularly airborne and spaceborne, for the purpose of identifying objects and judging their significance.[1] This is commonly used in military aerial reconnaissance, using photographs taken from reconnaissance aircraft and reconnaissance satellites.
The principles of image interpretation have been developed empirically for more than 150 years. The most basic are the elements of image interpretation: location, size, shape, shadow, tone/color, texture, pattern, height/depth and site/situation/association. They are routinely used when interpreting aerial photos and analyzing photo-like images. An experienced image interpreter uses many of these elements intuitively. However, a beginner may not only have to consciously evaluate an unknown object according to these elements, but also analyze each element's significance in relation to the image's other objects and phenomena.
Elements of interpretation
Location
- There are two primary methods of obtaining a precise location in the form of coordinates: (1) surveying in the field using traditional surveying techniques or global positioning system (GPS) instruments, or (2) collecting remotely sensed data of the object, rectifying the image and then extracting the desired coordinate information. Most scientists who choose the first option now use relatively inexpensive GPS instruments in the field to obtain the location of an object. If the second option is chosen, most aircraft used to collect remotely sensed data carry a GPS receiver, which provides the positional information needed to rectify the imagery.
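For the second option, the coordinate-extraction step amounts to applying the rectified image's georeferencing to a pixel position. The following is a minimal sketch in Python; the six-parameter affine geotransform and the pixel coordinates are hypothetical values chosen only for illustration.

```python
# Minimal sketch: extracting map coordinates from a rectified image.
# The geotransform values below are hypothetical; in practice they come
# from the rectification step (e.g. the image's world file or metadata).

def pixel_to_map(col, row, geotransform):
    """Convert a pixel (col, row) to map coordinates with an affine geotransform.

    geotransform = (x_origin, pixel_width, row_rotation,
                    y_origin, col_rotation, pixel_height)
    follows the common six-parameter convention; pixel_height is usually
    negative because image rows increase downward.
    """
    x0, dx, rx, y0, ry, dy = geotransform
    x = x0 + col * dx + row * rx
    y = y0 + col * ry + row * dy
    return x, y

# Hypothetical geotransform: 0.5 m pixels, no rotation.
gt = (448_000.0, 0.5, 0.0, 5_412_000.0, 0.0, -0.5)
print(pixel_to_map(1024, 2048, gt))  # -> (448512.0, 5410976.0)
```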
Size
- The size of an object is one of its most distinguishing characteristics and one of the more important elements of interpretation. Most commonly, length, width and perimeter are measured. To do this successfully, it is necessary to know the scale of the photo. Measuring the size of an unknown object allows the interpreter to rule out possible alternatives. It is often helpful to measure the size of a few well-known objects to provide a comparison with the unknown object. For example, the field dimensions of major sports such as soccer, football and baseball are standard throughout the world. If such features are visible in the image, the size of the unknown object can be determined by comparing the two.
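A minimal sketch of the size calculation described above, assuming a representative-fraction photo scale is known; the scale and the measured lengths are hypothetical illustrative values.

```python
# Minimal sketch: estimating ground size from a measurement made on the photo.

def ground_length_m(image_length_mm, scale_denominator):
    """Ground length in metres for a length measured on the photo in millimetres,
    given a representative-fraction scale of 1:scale_denominator."""
    return image_length_mm / 1000.0 * scale_denominator

# On a hypothetical 1:10,000 photo, a soccer pitch measured at 10.5 mm long:
print(ground_length_m(10.5, 10_000))  # -> 105.0 m, consistent with a full-size pitch

# An unknown rectangular object measured at 2.4 mm x 1.1 mm on the same photo:
print(ground_length_m(2.4, 10_000), ground_length_m(1.1, 10_000))  # -> 24.0 m, 11.0 m
```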
Shape
- There is an infinite variety of uniquely shaped natural and man-made objects in the world. Examples of shape include the triangular form of modern jet aircraft and the outline of a common single-family dwelling. Humans have modified the landscape in ways that give distinctive shapes to many objects, but nature also shapes the landscape in its own ways. In general, straight, rectilinear features in the environment are of human origin; nature produces more subtle shapes.
Shadow
- Virtually all remotely sensed data are collected within two hours of solar noon to avoid extended shadows in the image or photo, because shadows can obscure objects that could otherwise be identified. On the other hand, the shadow cast by an object can act as a key to its identification: the length of the shadow can be used to estimate the height of the object, which aids recognition. Take, for example, the Washington Monument in Washington, D.C. Viewed from directly above, the shape of the monument can be difficult to discern, but with a shadow cast this becomes much easier. It is good practice to orient the photos so that shadows fall towards the interpreter. If the shadows are oriented away from the observer, a pseudoscopic illusion can be produced, in which low points appear high and high points appear low.
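When the ground is flat and the sun's elevation angle at the time of exposure is known, the height estimate from a shadow reduces to simple trigonometry. A minimal sketch with hypothetical numbers (the result happens to land near the Washington Monument's height of roughly 169 m):

```python
import math

# Minimal sketch: estimating object height from its shadow.
# The shadow length and sun elevation below are hypothetical.

def height_from_shadow(shadow_length_m, solar_elevation_deg):
    """Height of a vertical object from its shadow length on flat ground:
    height = shadow_length * tan(solar elevation angle)."""
    return shadow_length_m * math.tan(math.radians(solar_elevation_deg))

# A shadow measuring 98 m on the ground with the sun 60 degrees above the horizon:
print(round(height_from_shadow(98, 60)))  # -> about 170 m
```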
Tone and color
- Real-world materials such as vegetation, water and bare soil reflect different proportions of energy in the blue, green, red and infrared portions of the electromagnetic spectrum. An interpreter can document the amount of energy each material reflects at specific wavelengths to create a spectral signature. These signatures help explain why certain objects appear as they do on black-and-white or color imagery. On black-and-white imagery, the shades of gray are referred to as tone; the darker an object appears, the less light it reflects. Color imagery is often preferred because humans can distinguish thousands of different colors, as opposed to a much smaller range of gray shades, and color therefore aids the process of photo interpretation.
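A minimal sketch of how documented spectral signatures could be used to label an unknown pixel; the reflectance values below are illustrative placeholders, not measured signatures.

```python
# Minimal sketch: comparing a pixel's band values to reference spectral signatures.

REFERENCE_SIGNATURES = {          # reflectance in blue, green, red, near-infrared
    "water":      (0.06, 0.05, 0.03, 0.01),
    "vegetation": (0.04, 0.08, 0.05, 0.45),
    "bare soil":  (0.12, 0.16, 0.20, 0.28),
}

def closest_material(pixel):
    """Return the reference material whose signature is nearest
    (Euclidean distance) to the observed pixel reflectances."""
    def distance(sig):
        return sum((p - s) ** 2 for p, s in zip(pixel, sig)) ** 0.5
    return min(REFERENCE_SIGNATURES, key=lambda name: distance(REFERENCE_SIGNATURES[name]))

print(closest_material((0.05, 0.09, 0.06, 0.40)))  # -> "vegetation"
```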
Texture
- Texture is defined as the “characteristic placement and arrangement of repetitions of tone or color in an image.” Adjectives often used to describe texture are smooth (uniform, homogeneous), intermediate and rough (coarse, heterogeneous). It is important to remember that texture is a product of scale: on a large-scale depiction, objects may appear to have an intermediate texture, but as the scale becomes smaller the same surface can appear more uniform, or smooth. Examples of texture include the “smoothness” of a paved road or the “coarseness” of a pine forest.
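One simple way to quantify texture is the local standard deviation of tone within a moving window, where the window size plays the role of scale. A minimal sketch using a made-up toy image; the pixel values are purely illustrative.

```python
import statistics

# Minimal sketch: local standard deviation as a crude texture measure.

def local_std(image, row, col, half_window):
    """Standard deviation of pixel values in a square window centred on (row, col)."""
    values = [
        image[r][c]
        for r in range(max(0, row - half_window), min(len(image), row + half_window + 1))
        for c in range(max(0, col - half_window), min(len(image[0]), col + half_window + 1))
    ]
    return statistics.pstdev(values)

# A smooth surface (e.g. pavement) versus a coarse one (e.g. forest canopy):
smooth = [[100, 101, 100, 99], [100, 100, 101, 100], [99, 100, 100, 101], [100, 99, 100, 100]]
coarse = [[60, 140, 70, 150], [150, 65, 145, 60], [70, 150, 60, 140], [145, 60, 150, 70]]
print(local_std(smooth, 1, 1, 1))  # small value -> "smooth" texture
print(local_std(coarse, 1, 1, 1))  # large value -> "rough" texture
```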
Pattern
- Pattern is the spatial arrangement of objects in the landscape. The objects may be arranged randomly or systematically. They can be natural, as with the drainage pattern of a river, or man-made, as with the squares formed by the United States Public Land Survey System. Typical adjectives used to describe pattern include random, systematic, circular, oval, linear, rectangular and curvilinear.
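Whether an arrangement of discrete objects is random, clustered or systematic can also be checked quantitatively, for example with a nearest-neighbour index such as the Clark–Evans ratio. A minimal sketch with hypothetical coordinates:

```python
import math

# Minimal sketch: a nearest-neighbour index for point patterns.

def nearest_neighbour_index(points, area):
    """Clark-Evans ratio: observed mean nearest-neighbour distance divided by the
    value expected for a random pattern of the same density.
    R ~ 1 random, R < 1 clustered, R > 1 regular/systematic."""
    def nearest(p):
        return min(math.dist(p, q) for q in points if q != p)
    observed = sum(nearest(p) for p in points) / len(points)
    expected = 0.5 / math.sqrt(len(points) / area)
    return observed / expected

# Nine objects laid out on a regular 3 x 3 grid inside a 100 m x 100 m plot:
grid = [(x, y) for x in (25, 50, 75) for y in (25, 50, 75)]
print(round(nearest_neighbour_index(grid, 100 * 100), 2))  # -> 1.5, i.e. systematic
```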
Height and depth
- Height and depth, also known as “elevation” and “bathymetry”, is one of the most diagnostic elements of image interpretation. Any object, such as a building or an electric pole, that rises above the local landscape will exhibit some radial relief displacement, and objects exhibiting this relief cast shadows that can also provide information about their height or elevation. A good example is the buildings of any major city.
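On a vertical photo, the radial relief displacement mentioned above can itself be used to estimate height through the standard relation h = d × H / r. A minimal sketch with hypothetical measurements:

```python
# Minimal sketch: estimating object height from radial relief displacement
# on a vertical aerial photo. The input values below are hypothetical.

def height_from_relief(displacement_mm, radial_distance_mm, flying_height_m):
    """Relief-displacement relation h = d * H / r, where d is the displacement of
    the object's top relative to its base, r is the radial distance from the
    photo's principal point to the displaced top, and H is the flying height
    above the object's base."""
    return displacement_mm * flying_height_m / radial_distance_mm

# A building whose top is displaced 2.0 mm at a radial distance of 80 mm,
# photographed from 3,000 m above the ground:
print(height_from_relief(2.0, 80.0, 3000.0))  # -> 75.0 m
```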
Site/situation/association
- Site has unique physical characteristics, which might include elevation, slope and type of surface cover (e.g., grass, forest, water, bare soil). Site can also have socioeconomic characteristics, such as the value of land or its proximity to water. Situation refers to how the objects in the photo or image are organized and “situated” with respect to each other; most power plants, for example, have materials and buildings arranged in a fairly predictable manner. Association refers to the fact that when a certain activity is found within a photo or image, related or “associated” features or activities are usually encountered as well. Site, situation and association are rarely used independently of each other when analyzing an image. An example would be a large shopping mall: there are usually several large buildings and massive parking lots, and the mall is typically located near a major road or intersection.
See also
- Aerial photograph interpretation (geology)
- Aerial photography
- Orthophoto
- Photogrammetry
- Photomapping
- Remote sensing
- United States Geological Survey
References
- ^ American Society of Photogrammetry; Colwell, R. N. (1960). Manual of Photographic Interpretation. American Society of Photogrammetry. Retrieved 2022-01-23.
Further reading
- Jensen, John R. (2000). Remote Sensing of the Environment. Prentice Hall. ISBN 978-0-13-489733-2.
- Olson, C. E. (1960). "Elements of photographic interpretation common to several sensors". Photogrammetric Engineering. 26 (4): 651–656.
- Philipson, Warren R. (1997). Manual of Photographic Interpretation (2nd ed.). American Society for Photogrammetry and Remote Sensing. ISBN 978-1-57083-039-6.