MARS2014 Lecture Notes - Lecture 6: Ikonos, Horizontal Plane, Satellite Imagery
Unit 6: Remote Sensing
o What is spatial data?
▪ Data with a known location
o What are geographic information systems (GIS)?
▪ Automated systems for capturing, storing, displaying, manipulating, and
analysing data with a known position
o What is the difference between Raster and Vector data?
▪ Raster
• Spatial variables represented in grid form
• A simpler data structure
▪ Vector
• Uses more abstract descriptors, faster to calculate, less used
• Feature boundaries are converted to polygons approximating
original regions
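As a minimal illustrative sketch (all names and values invented here), the same region can be stored both ways — a raster holds one value per grid cell, while a vector stores the feature boundary as polygon vertices:

```python
# Raster: a spatial variable on a regular grid -- one value per cell.
# 1 = seagrass present, 0 = absent, over a 4 x 4 cell grid.
raster = [
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]

# Vector: the same feature's boundary approximated as a polygon,
# stored as (x, y) vertex coordinates instead of grid cells.
seagrass_polygon = [(2, 0), (4, 0), (4, 3), (1, 3), (1, 1), (2, 1)]

# The raster is the simpler structure: area is just a cell count.
cell_area = 1.0  # assumed cell size in arbitrary units
raster_area = sum(sum(row) for row in raster) * cell_area
print(raster_area)  # 7.0
```

The polygon approximates the original region with fewer numbers, which is why vector data is faster to compute with but more abstract.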
o What is the trade-off between area and accuracy?
▪ In vector maps
▪ More spatial resolution means a smaller area captured
▪ In the field: 0.5% area covered with 95% accuracy
▪ Via satellite: 100% area covered with 70% accuracy
o What are the three aspects you need to know when using spatial data?
▪ Projection
• We use equations to convert from a 3D earth to a 2D map, as
though projecting onto another surface
▪ Datum
• The original point for the coordinates
▪ Coordinate system
• Location reference system, using x, y, latitude, longitude, easting,
northing
o What are the two output coordinate systems?
▪ Geographical coordinates
• Latitude & longitude
• Define a place on earth based on angles that relate places
on earth
• Don't know time/distance relationships
▪ Eastings & Northings
• X, y positions
• Make a grid on the earth's surface
• Used in more localised projections for specific parts of earth
• More precise – can give degrees of longitude
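Because geographical coordinates are angles, turning them into a ground distance takes spherical trigonometry rather than simple subtraction — one way to sketch this is the standard haversine formula (the function name and test points below are my own):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two lat/lon points given in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

# One degree of latitude spans ~111 km everywhere...
print(round(haversine_km(0, 0, 1, 0), 1))
# ...but one degree of longitude shrinks towards the poles (~56 km at 60N):
print(round(haversine_km(60, 0, 60, 1), 1))
```

This is exactly why eastings and northings, which are already in metres on a local grid, are more convenient for distance work within a projected region.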
o What are satellite images comprised of?
▪ Each pixel represents the amount of light reflected from the earth's surface
▪ Uses layers, where each layer is sensitive to a different wavelength of light
▪ It is the reflection of electromagnetic radiation (sunlight) that the
satellite image is detecting
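A layered image like this can be sketched as a stack of grids, one per band; the band names and reflectance values below are illustrative only:

```python
# One 2-D grid of reflectance values per wavelength band.
image = {
    "blue":  [[0.12, 0.10], [0.30, 0.28]],
    "green": [[0.15, 0.14], [0.35, 0.33]],
    "nir":   [[0.02, 0.02], [0.45, 0.44]],
}

# Reading one pixel down through all layers gives its spectral signature:
signature = {band: layer[1][0] for band, layer in image.items()}
print(signature)  # {'blue': 0.3, 'green': 0.35, 'nir': 0.45}
```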
o How do we turn this into maps?
▪ Use light reflection gradient to mark areas with the same value as the same
feature
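A minimal sketch of this gradient-to-feature step, assuming invented thresholds and labels, is a simple per-pixel classification:

```python
# Pixels whose reflectance falls in the same range get the same feature label.
reflectance = [
    [0.55, 0.52, 0.18],
    [0.50, 0.20, 0.17],
    [0.19, 0.18, 0.05],
]

def label(value):
    # Assumed example thresholds: bright sand, darker seagrass, deep water.
    if value > 0.4:
        return "sand"
    if value > 0.1:
        return "seagrass"
    return "water"

feature_map = [[label(v) for v in row] for row in reflectance]
print(feature_map)
```

Real classifiers are more sophisticated, but the principle is the same: like values are mapped to like features.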
o What types of light interactions are recorded by remote sensing?
▪ Reflection/scattering
▪ Absorption
▪ Transmission
▪ Fluorescence
o How does the composition of the water column affect what is recorded by the
satellite?
▪ Movements such as waves, currents, tides affect water composition
▪ Less light gets through in turbid conditions than in clear
o What are the two different kinds of remote sensing?
▪ Active remote sensing (ARS)
▪ Passive remote sensing (PRS)
o What are the limiting factors for the two types of remote sensing?
▪ Active remote sensing
• Can penetrate clouds, can't penetrate water column
• Can only see what's on the surface
• Can detect oil spills or mangrove distribution etc. in areas where
there are clouds
▪ Passive remote sensing
• Can't penetrate clouds but can penetrate water
o What are the degrees of water clarity?
▪ Clear: you can almost always see the bottom even in deep water
▪ Clear to turbid: you can see the bottom through most of the area
▪ Turbid: you can almost never see the bottom, even in knee deep water
o What are the two sensor characteristics?
▪ Sun-synchronous polar orbit
• Always capture in daylight
• Cover earth's surface in a 600-800 km orbit
• Capture the same spot at the same time every day
▪ Geostationary orbit
• Follow the same spot on earth in a 35000km orbit
• Capture one spot all the time
• Used for weather reports etc.
o How do sensors differ?
▪ Spatial characteristics
• In pixel size (resolution)
• Extent (land covered in one image, area)
▪ Revisit time
▪ Spectral properties (light)
▪ Availability
▪ Dollars (expense)
o What are the 3 types of sensors?
▪ Quickbird
• Lots of detail (includes individual components of reef etc.)
• Limitation: can only capture small areas in one go, and will only take
pictures when prompted
▪ Landsat
• Been capturing images every day since the 80s, and revisits the
same place on earth every 16 days
• The only satellite where we can detect change over a long time
• Limitation: not as detailed (pixel size 30 m)
▪ Aquamodis
• Covers a large area (185km)
• Limitation: even less detail than Landsat, pixel size 250m
o What are the spectral characteristics of sensors?
▪ How sensitive the sensor is to light
▪ If the reflectance = 1, all light is reflected
o How does water depth affect spectral characteristics?
▪ Through absorption and scattering
▪ Different colour bands penetrate water differently
▪ Blue bands penetrate deep, whereas NIR band is absorbed
▪ A satellite using blue bands sees more since blue is not absorbed as much;
using red it sees less, and using NIR it sees only objects that stick out
above the water, since most NIR is absorbed
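This band-dependent loss of light with depth follows the Beer-Lambert law — an exponential decay at a band-specific rate. The attenuation coefficients below are illustrative placeholders, not measured values:

```python
import math

# Attenuation coefficient per metre of water for each band (illustrative).
attenuation_per_m = {"blue": 0.05, "red": 0.5, "nir": 3.0}

def fraction_remaining(band, depth_m):
    """Fraction of light in a band surviving a given water depth."""
    return math.exp(-attenuation_per_m[band] * depth_m)

for band in attenuation_per_m:
    print(band, round(fraction_remaining(band, 5.0), 4))
```

At 5 m, most blue light survives, little red does, and NIR is effectively gone — matching the note above.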
o What are the sensor characteristics that must be considered?
▪ Hyper & Multi spectral
▪ Temporal acquisition time
o How do different objects appear on the visible spectrum?
▪ Sand is more reflective than seagrass, which is more reflective than algae
and Cyanobacteria
▪ The deeper you go, the more these spectral signatures look the same, which
is why at deeper water satellites can't differentiate between features
o What is the difference between Hyper and Multi Spectral?
▪ Hyper spectral
• Shows wavelength, reflectance
• > 10 narrow bands, gives almost a pure signature
• Cohesive picture
▪ Multi spectral
• Shows wavelength and reflectance; gives an indication of where features
are on the graph at the wavelengths used, but doesn't give a cohesive
picture (fragmented)
• 10 broad bands
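One way to picture the difference, using invented wavelengths and reflectances: a multispectral sensor effectively averages many narrow hyperspectral samples into a few broad bands, losing the fine shape of the signature:

```python
# Hyperspectral signature: many narrow samples, wavelength (nm) -> reflectance.
hyper = {450: 0.10, 460: 0.11, 470: 0.12, 550: 0.30, 560: 0.31,
         570: 0.29, 650: 0.05, 660: 0.06, 670: 0.05}

# Multispectral sensor: a few broad bands, each covering a wavelength range.
broad_bands = {"blue": (440, 500), "green": (540, 580), "red": (640, 680)}

multi = {}
for name, (lo, hi) in broad_bands.items():
    values = [r for wl, r in hyper.items() if lo <= wl <= hi]
    multi[name] = round(sum(values) / len(values), 3)

print(multi)  # {'blue': 0.11, 'green': 0.3, 'red': 0.053}
```

Nine narrow samples collapse to three broad values — the "fragmented" picture the notes describe.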
o How does the type of water affect how it looks on sensor?
▪ The type of sediments or organic matter suspended in the water affects the
colour showing on the sensor
▪ Water shows blue = low sediments and low phytoplankton
▪ Water shows green = phytoplankton-rich
▪ Water shows brown = Sediment-rich (such as after a cyclone or flood)
o What is the Temporal Acquisition Time?
▪ The time of day, year or month when image was taken
Document Summary
A coordinate system is a location reference system using x, y, latitude, longitude, eastings and northings; the two output coordinate systems are geographical coordinates and eastings & northings. Sensors differ in pixel size (resolution), extent (land covered in one image), revisit time, spectral properties, availability and expense. The three sensor types are Quickbird (lots of detail, including individual components of a reef, but small coverage), Landsat (less detailed, pixel size 30 m) and Aquamodis (covers a large area, 185 km, pixel size 250 m). In object-based classification, like pixels are labelled with the same signature by looking at the shape and features of each pixel group; in pixel-based, continuous-biophysical models, each pixel has a different colour and value, which gives information about the different wavelengths. If you know the pixel characteristics in relation to water features such as depth, you can find the relationship between the two and create a bathymetry map; similarly, using how light at different wavelengths is absorbed, you can build an empirical model for variables such as chlorophyll.
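The empirical bathymetry idea in the summary can be sketched as follows: since light decays roughly exponentially with depth, log(reflectance) is approximately linear in depth, so a least-squares line fitted over pixels with known field depths gives a simple depth predictor. All data here are toy values, not real measurements:

```python
import math

# Field-measured depths (m) and matching blue-band reflectances (illustrative).
depths = [1.0, 2.0, 4.0, 6.0, 8.0]
refl   = [0.30, 0.25, 0.17, 0.11, 0.075]

# Fit depth = slope * log(reflectance) + intercept by ordinary least squares.
x = [math.log(r) for r in refl]
n = len(x)
mx, my = sum(x) / n, sum(depths) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, depths))
         / sum((xi - mx) ** 2 for xi in x))
intercept = my - slope * mx

def predict_depth(reflectance):
    """Predict water depth from a pixel's reflectance via the fitted line."""
    return slope * math.log(reflectance) + intercept

print(round(predict_depth(0.2), 1))  # roughly 3 m for this toy data
```

The same log-linear fitting idea underlies empirical models for other water-column variables such as chlorophyll.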