CIS 2050 Lecture Notes - Lecture 3: Computational Photography, Neuroimaging, Computer Vision

Learning Outcomes:
1. Differentiate between digital photography and computational photography
2. Appraise the role of computer algorithms in advancing technology
3. Describe some of the algorithmic techniques used in enhancing the analysis of digital images
4. Explain the basic ideas of neural mapping of the brain
5. Describe some applications of computer vision in the sciences, such as satellite imaging and femto-photography
Impacts of computational photography:
Computational photography takes advantage of innovative algorithm design to solve problems involving digital pictures
It can analyze pictorial data in large quantities and at high incoming speeds
It can extract information from multiple images, videos and other forms of media and integrate them to provide a better analysis
Femto-photography applies computational photography to solve difficult problems such as "seeing" objects behind a blocking screen, by analyzing the time differences between photons bouncing off the hidden object
Computational photography may analyze and enhance patterns in satellite imagery to characterize a landscape and to predict future events
Other applications analyze images from brain scans of a patient or test subject, using signals other than a light source
*Even though there is a parallel between computational photography and human perception, there are differences between what computational photography processes and what the human senses can perceive
A photographer can make basic changes to a picture from within the camera, but may also use photo-editing software on a computer to significantly alter the look, feel and composition
The ability to record richer information about a scene and to use powerful image-enhancement techniques is redefining the field
Researchers and engineers are designing different types of cameras, developing increasingly sophisticated algorithms, and using new types of sensors and systems
For instance, they will make it possible to detect a tiny object or an imperceptible motion in the field of view
They might change the angle of perspective after a photo is snapped, or provide a 360-degree panoramic view
They might also augment reality and refocus various objects in a scene after a photo has been shot
The cameras, along with more advanced software, will radically change the way people view and use images
The use of computational photography, imaging and optics promises to significantly change the way people approach photography and capture and edit images
Computational cameras could capture multiple images at a time to compensate for glare, oversaturation and other exposure problems (and could eliminate the need for a flash, which sometimes ruins the tonal scale of an image)
By combining multiple shots, it is possible to create a single sharp, low-noise image with a beautiful tone scale
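The multi-shot idea above can be sketched in a few lines: averaging several noisy captures of the same scene cuts the noise roughly by the square root of the number of shots. A minimal, illustrative simulation (the pixel value, noise level and shot count are made up; real exposure fusion also blends tone scales):

```python
import random

random.seed(0)

TRUE_LEVEL = 0.5   # hypothetical "true" brightness of one pixel
NOISE_STD = 0.1    # hypothetical per-shot sensor noise
N_SHOTS = 16       # shots combined per fused pixel

def capture():
    """Simulate one noisy exposure of the pixel."""
    return TRUE_LEVEL + random.gauss(0.0, NOISE_STD)

def rms_error(values):
    """Root-mean-square deviation from the true level."""
    return (sum((v - TRUE_LEVEL) ** 2 for v in values) / len(values)) ** 0.5

trials = 2000
single = [capture() for _ in range(trials)]
fused = [sum(capture() for _ in range(N_SHOTS)) / N_SHOTS for _ in range(trials)]

print(f"single-shot error: {rms_error(single):.3f}")        # ~0.10
print(f"{N_SHOTS}-shot fused error: {rms_error(fused):.3f}")  # ~0.025, i.e. ~1/sqrt(16)
```

Averaging only handles random noise; handling glare and oversaturation additionally requires varying exposure across the shots.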
The transition from film to pixels created an opportunity to manipulate and share photos in ways that were previously unimaginable
A sensor that captures different levels of light on different pixels could create entirely new types of photographs, including images with markedly different brightness and colour ranges
The technology of computational photography could also lead to changes in camera design: new types of camera bodies, lenses and optics
The technology could impact an array of industries: medicine, manufacturing, transportation, security
Advanced computational abilities would redefine the way we think about the world around us and provide insights that extend beyond basic images or videos, e.g. a camera that could see through crowds, objects and people
Focusing could take place after the picture is taken
Could create new opportunities in biology and microscopy
Ex. a technician could capture images of cell cultures without focusing a microscope
The camera could automatically count the number of cells in an image and provide information more accurately and faster than any human
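The cell-counting idea above is, at its core, connected-component counting on a binarized image. A toy sketch (the grid and the 4-connectivity choice are illustrative; a real system would first segment the micrograph):

```python
# Toy binary "micrograph": 1 = cell pixel, 0 = background
grid = [
    [0, 1, 1, 0, 0, 0, 0, 1],
    [0, 1, 1, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1, 1, 0, 0],
    [1, 0, 0, 0, 0, 0, 0, 0],
]

def count_blobs(grid):
    """Count 4-connected components of 1s (each blob ~ one cell)."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                blobs += 1                      # found a new blob
                stack = [(r, c)]                # flood-fill it
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return blobs

print(count_blobs(grid))  # prints 4 (four separate blobs)
```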
Google Glass is the highest-profile example of computational photography
There is the possibility of capturing images beyond the visible spectrum of light, incorporating environmental sensors, or finding ways to apply algorithms to detect small but important changes in the environment
Ex. motion magnification: algorithms that can sense the flow of blood through the skin or detect a person's heartbeat from subtle head motions
It amplifies pulse signals and colour variations that cannot be detected by the human eye
It could also be used to detect weaknesses in bridges and buildings
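The amplification step at the heart of motion/colour magnification can be sketched very roughly: isolate the tiny temporal variation of a pixel and scale it up. This toy version uses mean removal in place of a proper temporal bandpass filter, and every number (frame rate, pulse frequency, gain) is illustrative:

```python
import math

FS = 30.0        # frames per second (typical video)
HEART_HZ = 1.2   # ~72 bpm pulse (hypothetical)
ALPHA = 50.0     # amplification factor

# One pixel's intensity over ~2 s: a large steady level plus a tiny
# pulse ripple that is invisible to the eye
signal = [100.0 + 0.2 * math.sin(2 * math.pi * HEART_HZ * t / FS)
          for t in range(60)]

mean = sum(signal) / len(signal)
# Real Eulerian magnification band-limits around the pulse frequency
# before amplifying; here we simply scale the deviation from the mean
magnified = [mean + ALPHA * (s - mean) for s in signal]

ripple_before = max(signal) - min(signal)
ripple_after = max(magnified) - min(magnified)
print(f"ripple before: {ripple_before:.2f}, after: {ripple_after:.2f}")
```

The steady level is untouched while the invisible ripple becomes an obvious swing, which is the effect the talk describes for skin colour and head motion.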
New types of cameras and software could generate robust 3D images that reveal things not visible through optics alone
Noise removal is a growing challenge: the fewer photons that land on a pixel during the exposure time, the more noise there is relative to the signal
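The noise point above follows from the Poisson statistics of photon arrival: if a pixel collects N photons, the shot noise is about √N, so the signal-to-noise ratio is N/√N = √N. A quick illustration:

```python
# Photon shot noise: for N collected photons, noise ~ sqrt(N),
# so SNR = N / sqrt(N) = sqrt(N) -- dim pixels are relatively noisier
for photons in (100, 10_000, 1_000_000):
    snr = photons ** 0.5
    print(f"{photons:>9} photons -> SNR ~ {snr:.0f}")
# prints SNR ~ 10, 100 and 1000 respectively
```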
A larger megapixel count means an image with more pixels of a smaller size, each gathering fewer photons
Computational photography puts data to use in new and better ways
This technology will not replace today's cameras and photographs, but will enhance them and continue advancing the field
Read: Computational photography comes into focus
Computer vision involves the development of a theoretical and algorithmic basis to achieve automatic visual understanding
It enables computers to gain high-level understanding from digital images or videos
It seeks the transformation of visual images (the input of the retina) into descriptions of the world that can interface with other thought processes and elicit appropriate action
Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and for extracting high-dimensional data from the real world in order to produce numerical or symbolic information
Computer vision seeks to apply its theories and models to the construction of computer vision systems
Computer vision is concerned with the theory behind artificial systems that extract information from images
Computer vision began in the late 1960s at universities that were pioneering artificial intelligence; it was meant to mimic the human visual system
*see webpage for list of applications
System methods:
Image acquisition
Pre-processing
Feature extraction
Detection/segmentation
High-level processing
Decision making
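These phases can be sketched as a tiny end-to-end pipeline. Everything here is a toy stand-in (the hardcoded frame, the 0.5 threshold and the 3-pixel decision rule are all illustrative):

```python
def acquire():
    """Image acquisition: here, a hardcoded 4x6 grayscale frame (0-255)."""
    return [
        [ 10,  12,  11, 200, 210,  12],
        [  9, 180, 190, 205,  11,  10],
        [ 11, 185, 195,  12,  10,   9],
        [ 10,  11,  10,   9,  12,  11],
    ]

def preprocess(img):
    """Pre-processing: normalize intensities to [0, 1]."""
    return [[p / 255.0 for p in row] for row in img]

def extract_features(img):
    """Feature extraction: mean brightness per row."""
    return [sum(row) / len(row) for row in img]

def segment(img, thresh=0.5):
    """Detection/segmentation: binary mask of bright pixels."""
    return [[1 if p > thresh else 0 for p in row] for row in img]

def decide(mask, min_pixels=3):
    """High-level processing + decision making: is an object present?"""
    return sum(map(sum, mask)) >= min_pixels

frame = acquire()
norm = preprocess(frame)
features = extract_features(norm)   # one phase's output, unused downstream here
mask = segment(norm)
print("object present:", decide(mask))  # prints: object present: True
```

In a real system each phase is far richer (calibration, denoising, learned features, classifiers), but the data flow between phases is the same.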
Read: Computer Vision
The human mind and brain are a collection of highly specialized components
Brain imaging technology (especially MRI) allows us to determine the different parts of the brain (anatomy)
Brain activity is imaged via blood flow with functional MRI to show neural activity
Ex. functional MRI with pictures of faces and objects was used to determine which part of the brain handles face recognition
When that part of the brain was electrically stimulated while a patient looked at a doctor, the patient saw a different face (like a morph)
The size and exact location of these regions differ between individuals
We have both specific and general processing regions in our brain
Watch: A neural portrait of the human mind
Traditional satellites are big, expensive, and slow
This company makes small satellites (termed 'Doves') with modern technology, sensors and high-resolution cameras
Using modern production techniques, it can produce many of them
Acting in a single orbit, the satellites scan every point of the planet as the Earth rotates each day
They will be able to track urban growth, water supply, crop growth, deforestation, etc.
The company ensures universal access to this data to empower society in all fields
The team is passionate about using satellites to help humanity
Watch: Tiny satellites show us the Earth as it changes in near-real-time
Femto camera - can follow the speed of light
Allows one to observe the movement of light in slow motion
A laser pulse is fired; it hits the wall and scatters in all directions
The scattered photons then hit the mannequin, which causes them to scatter again
This could create a new kind of computational photography - the next dimension of imaging
Applications:
Cars that can look around the bend
Looking into hazardous houses
Cardioscopes
Watch: Imaging at a trillion frames per second
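The "seeing around corners" trick above works by timing photons, and the basic arithmetic converts a round-trip delay into a distance. A minimal sketch (the example delays are illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def round_trip_distance(delay_s):
    """Distance to a surface from a photon's round-trip delay: d = c*t/2."""
    return C * delay_s / 2

# A camera with picosecond timing resolution can distinguish surfaces
# only fractions of a millimetre apart in path length
print(round_trip_distance(1e-12))    # ~1.5e-4 m (0.15 mm per picosecond)
print(round_trip_distance(6.67e-9))  # ~1.0 m (a surface about a metre away)
```

Recovering a hidden object's shape requires solving for geometry from many such timed, multiply-scattered returns, but each measurement reduces to this delay-to-distance conversion.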
Questions:
What are some applications that computational photography can perform beyond digital photography?
Identify some societal issues raised by advanced techniques in computational photography.
How would you describe the typical phases of a computer vision system in processing digital images?
Computational Photography
Tuesday, January 23, 2018, 3:18 PM