
Medical Biophysics 3503G Final: Final Notes (Lacefield, Scholl, Ward)

81 Pages

Medical Biophysics
Course Code: Medical Biophysics 3503G
Professor: James Lacefield


Ward Lecture 1: Intro to MatLab

These notes accompany the lecture notes.

General Things to Know
- A semicolon suppresses output, letting you press enter and input more commands
- Variables do not have to be x, y; they can be named anything (e.g. billy = 3)
- You can perform trig functions (sin(x), cos(x))
- Vectors are made with square brackets
- In a vector, you can refer to individual elements by index:
  o x = [8 5 7]; x(1) gives ans = 8
- Euclidean distance between two points:
  o sqrt(x^2 + y^2 + … + z^2)
- Matrices are made with semicolons within the square brackets
- If y = [2 3], it will be a row vector, but you can transpose it by adding an apostrophe (y') to make it a column vector
- A script is a sequence of commands that you want MatLab to run in order
  o You can put as many commands as you want in the script
  o On the command line, you just type the script name and it will run

Ward Lecture 2: Plotting, Histograms, Displaying, Cropping Images

Plotting and Histograms
- With a vector y = [2 1 0 -1 -2 -1 0 1 2 1], we can plot it using plot(y), which opens a figure plotting these numbers on the y axis
  o The variable doesn't have to be y; it can be named anything and it will still plot on the y axis
- Using the 'o' parameter makes the graph show only circles instead of a connected line: plot(y,'o')
- hist(y) is a function that displays a histogram of the values in the vector
  o However it is a clunky graph, so you need to set the bins: hist(y,[-2 -1 0 1 2])
  o This adds a vector as a 2nd parameter, setting the centres of the histogram bins so the plot looks better

Images and Matrices
- An image is a matrix of numbers: e.g.
I = [1 2 3; 4 5 6; 7 8 9]
- We can pass it to the "image" function, telling MatLab to interpret the matrix as an image (a 3x3 image in this case), where each # corresponds to a brightness level
  o image(I)
  o If we want it displayed in greyscale, we use the colormap command; "gray(9)" is used here since we have 9 different values in the image:
    >> image(I); colormap(gray(9))

Displaying
- The "imread" function reads images from the disk and puts them into any variable you want
  o E.g. >> I = imread('BrainMRI.jpg')
  o Be sure those are apostrophes and not quotation marks
- "imshow" tells MatLab to display the image (in a separate window)
- imhist(I) puts up a histogram of the intensities (different brightness levels) of an image
  o E.g. >> I = imread('BrainMRI.jpg'); imhist(I)
- size(I) tells us how many rows and columns are in the matrix I
  o "542 464" means 542 rows (y axis) and 464 columns (x axis)
- imresize(I,0.5) resizes the image by whatever factor you put in as the 2nd parameter (0.5 here)
  o You'd want to resize to make image transfer faster – important in medicine, where you need a quick readout
- imrotate(I,90) rotates the image counterclockwise 90º

Cropping
>> X = [7 2 9 3 1 8];
>> X(3:5)
ans = 9 3 1
- We are extracting part of the vector. For a matrix, J = I(1:2, 1:4) says get everything from row 1 to row 2, and everything from column 1 to column 4 in I, and put it in J.
- To crop an image, use the tool that selects a pixel. Choose the top left corner of the rectangle you want to crop (e.g. X: 139, Y: 136) and then the bottom right corner (e.g. X: 340, Y: 392). This means rows 136 to 392 and columns 139 to 340 form the cropped region.
- Then imshow(J) displays the cropped image.

Ward Lecture 3: Segmentations

Thresholding-Based Segmentation
- To get a threshold for e.g.
X = [7 2 9 3 1 8], we type X < 5 and press enter, and we see the answer 0 1 0 1 1 0, because any element below 5 becomes 1 (and everything else 0)
- For an image, look at the histogram of the image to find an appropriate threshold, then do the same thing: e.g. if the threshold is 45, we do K = J < 45
- To isolate only what we want (e.g. only the ventricles), we use bwselect: whatever you click on is kept and everything else is removed

Measuring Segmentation Area
- The sum(X) function adds up all the values in a vector
  o For a matrix, however, it adds up the values in each column – but we want to add up all the 1s, so we need to get all the numbers into one column
  o To find out how many 1s are in an image, we can flatten the matrix using a colon – X(:) – which joins all the columns into one vector (one column)
  o E.g.
    >> X = [0 1 1 0 1; 0 0 1 1 0]
    >> sum(X(:))
    ans = 5
  o So now we can figure out how many pixels are in the brain ventricles in the image above
- We should convert 9414 pixels into physical units, which is more useful
  o First we need to know how big a pixel is, which is given by the MRI scanner: 0.4 mm wide, 0.4 mm tall, 1 mm thick, therefore 0.4*0.4*1 = 0.16 mm³ of volume for each pixel
  o >> 0.16*sum(L(:)) gives 1506.2 mm³
  o So the brain ventricles occupy 1506.2 mm³ of volume

Measuring Intensity Info in a Segmentation
- With Y = X < 5 and Z = X(Y), we can get the mean with >> mean(Z), which is 2
- We can also get min(Z) and max(Z)
- Same with matrices: Y = I < 5, then Z = I(Y), then find mean, min, max
- Same with images:
  o J contains the cropped image
  o L contains the mask with 1s indicating the ventricles
  o Z contains only the intensity values inside the ventricular segmentation
  o mean(Z) tells us the mean intensity inside the ventricles

Preprocessing Noisy Images for Segmentation
- If there is noise, the same threshold method will not segment very well – there would be a substantial difference in volume
- In order to
de-noise it, by smoothing it, we do J = imfilter(J,fspecial('disk'))
  o "imfilter" moves a filtering element across the image, performing an operation on the image at each point
  o "fspecial('disk')" creates a disk-shaped filtering element: the disk is centred on every pixel in the image, and the intensity level of each pixel is replaced by the average intensity of the pixels within the disk
  o The default radius of the disk is 5 pixels, but you can change it to 10 by doing:
    J = imfilter(J,fspecial('disk',10))
- Now if we threshold this filtered image, there's a much better result, but still a bit smaller than the volume of the non-noisy segmentation image
  o We can increase the volume by setting the threshold to a higher value, which gives a better result

Ward Lecture 4: Postprocessing
- If the noise is salt and pepper noise, there will still be bright spots within the ventricles even when we try smoothing it out with fspecial('disk')
  o We then see holes in the subsequent segmentation
  o Thus the volume is much lower due to the holes
- This happened because we used an averaging filter, which is sensitive to outliers, resulting in white dots and thus holes
- Instead we should use a median filter
  o E.g.
medfilt2: uses a similar filtering element to fspecial('disk') but replaces each pixel's intensity with the median intensity of the pixels within range
  o The result is much better, with no holes (except a few tiny ones)
  o The volume of this also comes in very close

Adding Noise
- The 1st noisy image used Gaussian noise: I = imnoise(I,'gaussian')
  o At each pixel, a random amount of intensity is added or subtracted, with a greater tendency to make small changes than large changes; this means there won't be many outliers, so smoothing via an averaging filter will help denoise the image
- The 2nd noisy image used salt and pepper noise: I = imnoise(I,'salt & pepper')
  o This changes each pixel to either pure black or pure white with some probability (default is 0.05, i.e. a 5% chance of flipping and a 95% chance of staying the same)
  o This generates outliers, since pure white and pure black are very far from the mid-grey region the image occupies
  o An averaging filter will not be useful for denoising this, but a median filter is robust
    - An averaging filter creates a new value by averaging e.g. 0, 0, and 255 = 85, which makes for a fuzzy edge
    - A median filter is an edge-preserving filter
    - A median filter still leaves SOME noise, because multiple outliers may just happen to be adjacent to each other

Morphological Operations
- Change the shapes of structures in binary images
- Erosion: the structuring element travels around the inside of a structure, always touching the perimeter, and the centre point of the SE traces out a smaller structure
  o Overall, erosion shrinks the structure
  o E.g. >> K = imerode(I,strel('square',3))
- Dilation: the structuring element's centre travels along the perimeter, so the outer edge of the SE expands the structure
  o E.g.
>> J = imdilate(I,strel('square',3))
    This command dilates the image using a square SE with a width of 3 pixels
- To fill in holes, we can use imdilate; to return the image to its original size, we use imerode
- Dilation and then erosion is given its own name: morphological closing = imclose
  o This is given in one command
- You can fill in holes in an image: e.g. M = imclose(I,strel('square',3))
  o If there are still holes, just make the structuring element larger, e.g. 4, 5, or 6
  o But don't start off with too big a number, because that will cause unintended bridging

Differences in Segmentations
- abs(J – I) finds, in white, the pixels that are in J but not in I
- E.g. for 2 segmentations S1 and S2, we find a difference image D = abs(S2 – S1)
- This way we can quantify the size of the difference, and also explore ways of tuning segmentation algorithms to compensate
- We can also calculate the volume of the difference: 0.16*sum(D(:))

Ward Lecture 5: Measuring Segmentation Accuracy

Dice Similarity Coefficient
- If two segmentations overlap each other, this is how we measure the overlap:
  o |X| is the area of one segmentation and |Y| the area of the other segmentation
    - We get these by summing up all the 1s in the matrices: sum(X(:)) and sum(Y(:))
  o |X ∩ Y| is the area of their intersection/overlap region
  o If DSC = 1, perfect overlap; if DSC = 0, no overlap at all
- To calculate the intersection region, we use & ("logical and") – it only returns 1 where there is a 1 in X and a 1 in Y at the same location
- Int = X & Y
- Therefore, DSC = 2*sum(Int(:)) / (sum(X(:)) + sum(Y(:)))
- We can do this with segmentations, and we can also compare our own algorithm with the manual segmentations:
  o >> Int = ManualS1 & L;
  o >> DSC = 2*sum(Int(:)) / (sum(ManualS1(:)) + sum(L(:)))
- DSC tells us that the overlaps are pretty much the same, but the volume is substantially different

Mean Absolute Distance
- Take the 2 closest neighbouring points on two segmentations,
and measure the distance boundary to boundary.
But algorithms can make mistakes:
- Left: the same error evenly distributed around the algorithm's boundary, so DSC would be low
  o You can fix this with morphological erosion
- Right: DSC doesn't detect the outliers (volume wouldn't either, since the outliers don't contribute much to the whole volume)
  o MAD is sensitive to outliers though, dragging it up to 3.1 mm
  o So a high DSC together with a high MAD means there are errors
- To make a boundary-based measure, we need to isolate the boundaries using bwperim, which extracts ONLY the boundary pixels from binary images
- Now we put the two boundaries together, and we measure the distances between EACH point on one boundary and its corresponding closest point on the other boundary
  o One will be the reference boundary (usually the manual segmentation), and the other the non-reference boundary
- Using the bwdist command, we can find the distances to all the points on the non-reference boundary
  o E.g. with boundary images X (reference) and Y (non-reference), we compute Z = bwdist(Y):
  o bwdist calculates the Euclidean distance to the nearest 1 in an image (here, Y)
  o Bottom left value: the closest 1 is that pixel itself, so the distance is 0
  o Middle left value: the closest 1 is a distance of 1 away, so the distance is 1
  o Top left value: the closest 1 is a diagonal step to the 1 in the middle; z² = x² + y², so z = sqrt(1² + 1²) = sqrt(2) = 1.4142
  o THEN, we only care about the top row, because that's where the reference boundary is
  o If we do AD = Z(X) (using X to index into Z), we get a list of the absolute distances between the points on the reference boundary and the points on the non-reference boundary
  o And then we can take the mean absolute distance (MAD) between the two boundaries
- We can use the same procedure to get the MAD between the two manual segmentations
  o Here we choose BManualS1 as the non-reference, so we do bwdist on that, and then use BManualS2 as the reference
  o Then we convert the pixels into physical units: >> 0.4*MAD = 0.55 mm
- The way we
interpret these measures depends on the application we are using them for
  o If you are going to target structures that have sensitive structures beside them, you don't want a MAD that is relatively large like 1.31 mm
  o If you want to remove a % of a tumour, you'd want it to have a high DSC
  o If we are correlating ventricular volume with the onset of neurodegenerative disease, then we'd look at volume, and DSC/MAD do not matter in this case
- If the differences between an algorithm and human observers are comparable to the differences between human observers, the algorithm is potentially useful

Lacefield Lecture 1: Screening Mammography
- Screening: a diagnostic test performed on a large # of healthy subjects to detect pre-clinical disease in a small fraction of subjects

Guidelines
- Established standard: annual exams for women 40-70 years old
- Proposed standard: biennial exams for women aged 50-74; biennial exams by individual decision for women aged 40-49
  o Hypothesis: most harm is because of overdiagnosis, not false positives
    - Overdiagnosis: detection of pre-clinical tumours that would not have produced symptoms if left untreated; leads to overtreatment

Granting Agency Debate
True or false? Scientific granting agencies should not fund projects to develop new or improved imaging technologies for breast cancer screening because further improvements to the technical performance of breast imaging systems will not benefit the patients who undergo screening.

Imaging Science
- Multidisciplinary field concerned with the generation, collection, duplication, analysis, modification, and visualization of images
- Imaging chain: conceptual model describing all of the factors that must be considered when developing a system for creating images
  o The imaging system acts as an intermediary between the object and the viewer
- What is an image?
  o 1. Spatial position (2D or 3D)
  o 2. Measurement (e.g. the # recorded at each pixel is a measurement of how the tissue interacts with the wave)
  o 3.
Visual interpretation
  o So an image is a measurement made as a function of position, presented in a format that facilitates visual interpretation
  o However, an image cannot be a perfect representation of an object because:
    - It is impractical to make all possible measurements
    - Info is lost or distorted by the system

Projection Radiography
- Radiography produces images of cumulative x-ray attenuation along straight-line paths from source to detector
  o Attenuation depends on the electron density and effective atomic # of the tissue
    - The more electron-dense (packed together) the tissue, the more x-ray absorption will occur
- How a mammogram is formed (projection radiography):
  o The x-ray source generates a cone-shaped beam of x-rays that diverges away from the source
  o Oval = breast; red bar = tumour
  o A cancerous tumour absorbs more x-ray than the surrounding healthy tissue
  o An x-ray photon that passes around the tumour will reach the other side, whereas the ones that hit the tumour will be absorbed by the tissue mass
  o So in the photo-negative (the IMAGE), the parts that show up light correspond to tissue that absorbed a lot of x-rays

Visual Interpretation
- Grey levels: integer representation of measured data corresponding to the displayed shade of each pixel (we can only see about 256 shades of grey)

Images Provide:
- Structural information: identify objects, measure dimensions, determine material composition
- Functional information: quantify motion, temperature, pressure, metabolism

Generic Imaging System Block Diagram
- Transducer: device that converts one form of energy to another (i.e. a sensor)
- Processor: extracts useful info from signals measured by the transducer
- Display: presents info in a convenient format for visual interpretation

Lacefield Lecture 2: Physical Performance Measures
- Hierarchy of perspectives used when assessing effectiveness of imaging technology:
  o 1. Engineers care about this
  o 2.
Radiologists care about this

Spatial Resolution
- How close together 2 point-like objects can be and still appear distinct in an image
- If distinct, they are said to be resolved
- If the 2 objects are blurred together into one thing, they are unresolved
- What happens if it's in between and we are not sure if it's 2 objects or 1?

Point Spread Function
- PSF: the peak produced in an image by a single point-like object

Rayleigh Resolution Criterion
- 2 identical, point-like objects are barely resolved when the maximum of the PSF from one object overlaps with the first zero of the PSF from the 2nd object
  o This is the minimum distance to be considered resolved

Resolution Measurements
- Measured in distance units
- A fine-resolution (aka high-resolution) image has a small value for resolution
- A coarse-resolution (aka low-resolution) image has a large value for resolution

Image Contrast
- Difference in GLs (brightness) of 2 features/regions in an image
- 4 possible combinations; e.g. a breast cancer tumour x-ray is a positive contrast image
- Image contrast of a feature depends on:
  o Object contrast: the actual difference between feature and background in the physical property measured by the imaging system
  o The imaging system's sensitivity to differences in that physical property (the system must be able to detect the differences accurately)

Effect of Resolution on a Large Feature
- In the image from the finer-resolution system, there's a sharp boundary between feature and background
- The sharp boundary shows up as a very rapid transition (blue line)
- With low resolution there's a gradual change from background GL to inside-object GL: it starts before the actual boundary, and doesn't reach the actual GL until we get to pixels that are well within the boundaries of the physical object

Effect of Resolution on a Small Feature
- Even when we're trying to get the pixel at the dead centre of the object, we're averaging together some measurements from inside the object and some made outside the object, and the result doesn't reach the
grey level you would expect, killing the contrast of the image: the size of the object is approaching our spatial resolution and we're blurring it into the background.
- Therefore resolution will affect contrast, especially when you're measuring small features/objects.
- Sufficient contrast is necessary to ensure features stand out from the background and from each other, but it's a tradeoff between SR, contrast, and SNR.

SNR
- Noise: random variations in GL
- Low SNR means it's harder to see a feature against the background
- More noise → lower SNR

Lacefield Lecture 3: Diagnostic Accuracy of Imaging Tests – CD Curves

Contrast-Detail Procedure
1. Build test objects ("phantoms") containing spherical features of different sizes and differences in the measured object property.
2. Acquire test images of all phantoms, as well as some phantoms without spherical features.
3. A blinded observer determines whether a feature is detected in each image.
4. Each feature corresponds to a point in the CD plane.
   - i.e. each point has a particular object contrast and diameter
5. Flag each point to indicate whether that feature was detected or missed by the observer.
6. Draw a smooth curve that separates detected and missed features.

Realistic CD Curve
- If the vertical asymptote shifts to the left, that means better spatial resolution
- If the diagonal asymptote shifts downwards, that means better SNR

Resolution vs.
SNR Trade-Off in Digital Mammography

Interpretation of CD Curves
- Tumours are expected to be large and low contrast
  o Higher SNR (shifting the diagonal asymptote downwards) improves detection
  o SR changes don't help
- Microcalcifications are small and high contrast
  o Shift the vertical asymptote to the left to improve SR; this will make SNR worse, but SNR is irrelevant in this case

Limitations of Contrast-Detail Curves
- Too simplistic:
  o CD analysis works with test images of a perfectly circular, uniform feature against a perfectly uniform background, BUT in a real medical case, what you will have is a tumour that's not necessarily perfectly circular, and not necessarily right in the centre of the image, and a background that isn't homogeneous (it will have different anatomical structures to distract the eye)
  o i.e. the effect of other anatomical structures on the conspicuity of a feature is not represented
  o Conspicuity: the ability of a tumour to stand out

Lacefield Lectures 4 + 5: Diagnostic Accuracy – ROC Curves

Receiver Operating Characteristic (ROC) Procedure
1. Select one condition to detect or classify.
2. Image a large population including many individuals with and without disease.
3. Choose an ordinal (ranking) parameter or criteria as the basis for diagnosis, via:
   - Objective: size, contrast, etc.
   - Subjective: an N-point scale indicating the radiologist's certainty about the diagnosis (typically N = 5).
4. Perform a "gold standard" diagnostic test to obtain the "actual" diagnosis for each patient.
   - Determine whether the radiologists' diagnoses were right
5. Choose a threshold for making negative (condition absent) and positive (condition present) diagnoses.
   - If an image seems ambiguous, the radiologist will say negative (be conservative)
     o Usually when the threshold is high
   - The radiologist can also be aggressive and say positive, saying negative only when sure
     o Usually when the threshold is low
6. Compute TPF and FPF.
7. Increase or decrease the threshold → the diagnosis changes for patients near the threshold → compute new TPF and FPF.
8.
Plot the (FPF, TPF) pairs obtained at each threshold.
- Often (1 – TNF) is used to describe FPF
- 2-alternative forced choice experiment (you have to choose one or the other)
- The chance line is when TPF = FPF; achieved by guessing
- A perfect detector is when TPF = 1.0 and FPF = 0; this is when it is always right
- The only way to increase TPF is to be more aggressive, but that also increases FPF; both always increase together

Area Under the ROC Curve (AROC)

Applications of ROC Analysis
1. Compare two methods or implementations of the same imaging modality.
   - E.g. compare two different manufacturers' x-ray systems
2. Compare two different imaging modalities.
   - E.g. compare x-ray mammography to MRI
3. Evaluate the diagnostic performance of a single imaging method.
4. Compare the diagnostic performance of human observers.

Comparing Methods and Modalities
- AROC is a summary of how well the test performs compared to guessing and compared to a perfect detector
- Often we are interested not in overall performance, but in how the test performs at the ends of the curve, depending on clinical context and what types of mistakes to minimize
- If you want to avoid false positives, then you're most interested in the decision thresholds that give points on the ROC curve at low values of false positive fraction (points to the left); if you concentrate on those points, system A's area is greater than system B's, so you should use system A
- If you want to avoid false negatives, then you're interested in decision thresholds that give TPFs close to 1; in that case, curve B would be the best (you are interested in the portions of the curve that go through the top rectangle, which is method B)

Statistical View of ROC Analysis
- PDF: take the histograms and smooth them into continuous probability density functions
- FPF = area under the PDF of disease-absent cases from x = threshold to x = infinity
- TPF = area under the PDF of disease-present cases from x =
threshold to x = infinity
- To improve AROC, and to make it easier to distinguish the disease-absent group from the disease-present group, we can decrease the SDs or increase the distance between the means
- If the distributions of both groups are identical, then the ROC curve is the chance line (AROC = 0.5)
- As the means move further apart, AROC increases

Muscle Size Example
- SNR high; resolution changes:
  o With coarser resolution, the boundaries are blurred
  o If 2 different radiologists were to sketch this, one may draw an outline where the feature fades into the background (he will end up with a contour which is too big), and the other may guess where to put the contour and draw one which is too small
  o The mean would be the same
  o The SD would be very different
  o So you need to ask multiple radiologists to draw this to get a more accurate number
  o Thus, AROC decreases
- SR high; SNR changes:
  o One radiologist might draw an outline on the outer edges, and another might draw an outline on the inner edges
  o The mean would be similar
  o The SD would be different

From Contrast-Detail Curve to ROC

Lacefield Lecture 6: Breast Cancer Screening

Digital Breast Tomosynthesis
- USPSTF says that current evidence is insufficient to assess the benefits and harms of digital breast tomosynthesis as a primary screening method for breast cancer
- Projection radiography: 2 objects might lie in the same photon path, so you will only see one
- Tomosynthesis allows you to approximate 3D images; repeat the projection 20-40x at different angles (±25°)
- A tumour is more easily visualized in a slice of a tomosynthesis image
- Benefits:
  o Slight reduction in FPF compared to conventional mammography
  o Weak evidence of increased TPF
  o Insufficient evidence to conclude the increased TPF contributes to more overdiagnosis
- Harms:
  o 2x the radiation exposure
  o Lower-energy x-rays are used, but a lot more projections

2nd Imaging Test
- If breasts are dense, there may be false negatives; so we could use a 2nd imaging test → MRI, tomosynthesis,
ultrasonography

Mammographic Breast Density
- Dense with glandular tissue
- With more glandular tissue, there will be more x-ray attenuation
- There's a reduction in object contrast in the extremely dense subjects
  o Density increases the average x-ray attenuation everywhere in the background (the tumour is also bright, so object contrast is lower, and thus image contrast is lower as well, so the tumour won't stand out as well)
- Tumour detection in predominantly fat breasts has high sensitivity (TPF) and high specificity (1 – FPF), but tumour detection in extremely dense breasts has low sensitivity and lower specificity
  o So density affects accuracy a lot!
- The proportion of women with dense breasts decreases with age
- Women in category 3 or 4 have a higher chance of developing cancer, but those who do develop cancer do not have an increased risk of death
- So the patients who are most difficult to read are also the most likely to have cancer!
- This is why we should do the secondary screening

Breast Ultrasonography
- Negative contrast, so the tumour appears dark as opposed to bright (compared with x-rays)
- The width of the slice is limited by the width of the probe, and the depth of the slice is limited by how far the ultrasound wave can travel
- Uses a mechanical wave instead of an e/m wave
- We are measuring object contrast from the variations in the density and compressibility of the tissue at the spatial scale the ultrasound is sensitive to
  o In glandular tissue, the cells are more spread out
  o But in tumours, cells are jammed so close together that the ultrasound can't see the boundaries between cells
- Ultrasound is decent for TPF but produces more false positives

Breast Magnetic Resonance Imaging
- Contrast-enhanced MRI (with gadolinium dye) shows a LOT more contrast, making the tumour visible compared to mammography
- MRI measures hydrogen proton density, which is basically an image that is sensitive to differences in the water content of tissue
- MRI data show sensitivity with a wide
range, along with specificity
- So it's inconsistent

Benefits and Harms of Secondary Screening
- Ultrasound and MRI both increase TPF & FPF
- More true positives, such as in dense breasts, but also more false positives in non-dense breasts
- The majority of additional positive results may be false positives
  o So patients go through unnecessary biopsies
- Insufficient evidence to assess whether the additional true positives increase overdiagnosis
- MRI is expensive

Comparison of Sensitivities and Specificities: X-Ray Mammography vs. US vs. MRI

Lacefield Lectures 7 + 8: Cost-Benefit Analysis

Cost-Benefit Analysis
- We want to maximize E(U)
- U = utility: some dimensionless but quantitative assessment of the costs and benefits of the outcome of a decision
  o We think about the effect on the patient's mental health, medical prognosis, patient comfort, and healthcare economics
- p = prevalence: the % of patients in the population coming in for screening that we expect to have cancer

Specifying Utilities
- U(TP) may be based on a quality-adjusted life year: if we think we can provide the patient with a full recovery, that's a high-quality life year, but if incapacitated, then not that beneficial

Implication of U(TN) = 0
- The patient's outcome is identical to an unscreened patient: no further symptoms or treatments

Implication of U(FP) = -1
- Benefits and costs for the other results are normalized wrt the harm from a false-positive result
- For screening mammography, false-positive harms are the costs of additional imaging and biopsy, in addition to patient anxiety

Implications of high U(TP)
- No matter the exact #, 150-500 is still huge compared to U(FP)
- If we go toward the high end of the range, we are taking the pink-ribbon approach of "early detection saves lives"
- U(TP) must be a weighted average of:
  o The benefit to patients whose lives are saved
  o The benefit to patients whose lives are prolonged
  o The benefit to patients who do not respond to treatment
  o The harm to overtreated patients

Implication of a large negative U(FN)
- Cancer progresses rapidly
in FN patients, such that treatment is delayed until the next screening or the onset of symptoms, and this leads to a worse prognosis (outcome) of the cancer
  o i.e. they would've had a good outcome had they been detected
  o Tied to U(TP) because of the missed opportunity

Example
- The expected utility function tells us that, for the prevalence and utilities we assume, this particular imaging test is best used with the decision threshold that gives the max(E(U)) point, since it has the highest TPF while minimizing the FPF

Screening Mammography Policy
Should we continue having annual screening for women aged 40-70, or do biennial exams for ages 50-74?
Assuming U(TN) = 0 and U(FP) = -1:
E(U) = p*TPF*U(TP) + (1 – p)*FPF*(-1) + p*(1 – TPF)*U(FN)
So we need to assign values to 5 parameters:
1. p
2. TPF
3. FPF
4. U(TP)
5. U(FN)
- All 5 parameters may be age-dependent

Approximate Breast Cancer Prevalence
- At age 40-49, prevalence is about 0.002 → 200 / 100 000 women per year
- This assumes the "early detection saves lives" approach, with a high U(TP) and U(FN) = -1/2 U(TP)
- With this really optimistic view, with overdiagnosis not being a significant problem, this case is beneficial to society

Overdiagnosis
- However, overdiagnosis would reduce U(TP)
- If we're arguing for the newer guidelines, then we are going to say that overdiagnosis is prevalent, which will pull down U(TP)
- Evidence of overdiagnosis:
  o The red curve is metastasized cancers and the green curve is newly discovered cancers
  o In 1982, women started to universally follow the checkup routine
  o The inference from this graph is that the people who had cancer detected weren't going to develop metastatic cancer anyway
  o If there were a benefit, the red curve should go down, BUT red doesn't change
- If we assume overdiagnosis is more common than was previously thought, this implies different values:
  o U(TP) = 150 (reduced)
  o U(FN) = -75 (less negative)
- Here we see max[E(U)] is negative, so there is a net cost

What if we increase prevalence?
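The sensitivity of E(U) to prevalence can be sketched numerically. A minimal Python sketch of the expected-utility formula above, assuming an illustrative operating point of TPF = 0.85 and FPF = 0.3 (these two values are assumptions, not lecture data) together with the utilities from the notes, U(TP) = 150, U(FN) = -75, U(FP) = -1, U(TN) = 0:

```python
def expected_utility(p, tpf, fpf, u_tp=150.0, u_fn=-75.0, u_fp=-1.0, u_tn=0.0):
    """E(U) = p*TPF*U(TP) + (1-p)*FPF*U(FP) + p*(1-TPF)*U(FN) + (1-p)*(1-FPF)*U(TN)."""
    return (p * tpf * u_tp
            + (1 - p) * fpf * u_fp
            + p * (1 - tpf) * u_fn
            + (1 - p) * (1 - fpf) * u_tn)

# Illustrative operating point (TPF/FPF assumed, not from the lecture)
for p in (0.002, 0.004):
    print(p, expected_utility(p, tpf=0.85, fpf=0.3))
```

With these assumed values, E(U) is negative at p = 0.002 and turns positive when p doubles to 0.004, mirroring the argument in the notes that raising prevalence can flip max[E(U)] from a net cost to a net benefit.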
- Prevalence doubled to 0.004
- This increases max[E(U)] to a positive number
- If we increase prevalence even more, to 0.0067, it raises max[E(U)] to match case 1's

How Could p Be Increased?
- Wait until patients are older to screen, because prevalence is higher in older women
- Go from annual to biennial screening: wait for more cancers to accumulate
- Select patients based on family history or genetics
- This also saves the healthcare system money
- The USPSTF's recommendation is to have the patient accept the higher risk of overdiagnosis and false positives if they are under the age of 50

Criticisms of USPSTF Recommendations
1. Age thresholds not justified.
- Some argue there is no meaningful difference between 40 and 50 years old
2. The data used underestimate the mortality benefit and overestimate the frequency of overdiagnosis.
3. Computer models were favoured over clinical data.
4. Reduced death rates correlate with increased screening rates.
- Rebuttal: better drugs
5. Some think the statistical analysis was done improperly

What About Missed Cancers?
- FN patients are:
o patients whose cancer progresses minimally before the next screening
o patients whose cancer progresses significantly, but the delay in treatment has little effect on prognosis
o patients whose cancer progresses significantly, and the delayed treatment makes the prognosis worse

Granting Agency Debate
True or false? Scientific granting agencies should not fund projects to develop new or improved imaging technologies for breast cancer screening because further improvements to the technical performance of breast imaging systems will not benefit the patients who undergo screening.
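Before weighing the two sides, the claim that a better AROC can substitute for a higher prevalence can be sketched numerically. This is an illustration only (Python rather than the course's MATLAB; the binormal ROC model and the specific d' values 1.0 and 1.5 are assumptions, with the pessimistic utilities U(TP) = 150, U(FN) = -75 from the overdiagnosis case):

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def best_operating_point(p, d_prime, u_tp=150.0, u_fn=-75.0, u_fp=-1.0):
    """Sweep the threshold along an assumed binormal ROC (separation d')
    and return (max E(U), FPF, TPF) at the optimal operating point."""
    best = (float("-inf"), 0.0, 0.0)
    for i in range(-400, 401):
        t = i / 100.0
        fpf, tpf = phi(-t), phi(d_prime - t)
        eu = p * tpf * u_tp + (1.0 - p) * fpf * u_fp + p * (1.0 - tpf) * u_fn
        if eu > best[0]:
            best = (eu, fpf, tpf)
    return best

# Scenario A: double the prevalence at the current system performance
eu_a, fpf_a, tpf_a = best_operating_point(p=0.004, d_prime=1.0)
# Scenario B: keep the low prevalence but improve the imaging system (higher AROC)
eu_b, fpf_b, tpf_b = best_operating_point(p=0.002, d_prime=1.5)
print(f"A (p doubled):   E(U) = {eu_a:+.3f}, FPF = {fpf_a:.3f}, TPF = {tpf_a:.3f}")
print(f"B (better AROC): E(U) = {eu_b:+.3f}, FPF = {fpf_b:.3f}, TPF = {tpf_b:.3f}")
```

With these assumed numbers, scenario B reaches a comparable max[E(U)] at a much lower FPF and a similar TPF, which is the quantitative point made in the debate notes below.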
- If we argue this is false (i.e., improvements do benefit patients): we would raise SR, SNR, and object contrast (by changing from conventional mammography to MRI, US, or tomosynthesis), which would improve AROC
o We get a positive max[E(U)] with an increased AROC, even with low prevalence and a pessimistic U(TP)
o A significant improvement in AROC could yield a similar max[E(U)] to that achieved by doubling p at the current AROC, with the added benefit of significantly reducing the FPF corresponding to max[E(U)] while maintaining a similar TPF
- If we argue this is true: overdiagnosis would increase as a result of all these technical improvements, reducing the utility of screening mammography

Scholl Lecture 1: Digital Imaging

Light
- An e/m wave, a collection of many photons
o An e/m wave is a self-propagating wave, in a vacuum or in matter, comprised of electric and magnetic fields oscillating in phase, perpendicular to each other and perpendicular to the direction of propagation
- Light interacts with the air (that it is predominantly traversing) via its electric field
- Visible light is the portion of the e/m spectrum detectable by eye (400-700 nm)
- Photon: the basic quantum or building block of light
o E = hc / lambda
- where E is energy, h is Planck's constant (6.63 x 10^-34 J*s), c is the speed of propagation (2.998 x 10^8 m/s), and lambda is the wavelength of the photon
- e.g., lambda = 500 nm gives E ≈ 4 x 10^-19 J

Detection of Light
- Light is produced by a luminous source (sun, lightbulb, LED, laser)
o To be detected, it must be transferred from the source to a detector
- During propagation, light can be reflected, refracted, scattered, absorbed, or re-emitted
- All of these alter the spectrum, intensity, and polarization of the original light as it left the source

Optical Detectors
- Convert light energy to an electric signal
o E.g. eye: rods/cones convert light to electrical impulses sent via the optic nerve
o E.g. camera: a CCD detector converts light to electric charge
o E.g.
photodiode/photomultiplier: converts light to a short pulse of electric current

Digital Imaging
- Photons from an analog object are imaged (measured as a function of 2D position), then stored as digital information (quantized as pixels, with values from 0-255)
o Same process for black-and-white and colour

Electronic Image Sensors
- An imaging sensor:
o 1. Converts optical photons to electrons
o 2. Stores the e- in an array of pixels (pixel = 2D picture element; the 3D analogue is a voxel)
o 3. Reads out the pattern
o 4. Digitizes this info (counts the # of e- stored in each pixel)
o 5. Stores it as a digital image
- 2 types of imaging sensors:
- (1) passive pixel sensor (PPS): CCD (charge-coupled device) sensor
- (2) active pixel sensor (APS): CMOS (complementary metal-oxide-semiconductor)

CCD (PPS)
- Receives optical photons, generates and stores e- in an array of pixels
- The charge pattern (representing the photon intensity pattern) is obtained from each pixel, read out one at a time, counted, and converted to digital info
- A CCD image detector is formed by combining a photoactive silicon layer with a transistor shift register
o The photoactive region creates e- under exposure to light
o The shift register transfers the charge pattern to the output recorders

Doping Semiconductors
- Convert light to electricity by adding impurities to elements around Group IV that are in between conductor and insulator
- E.g. add P or Al to Si, so that small areas in the Si will have an increase in e- (or a decrease in e- for Al)
o For P-doped Si, the outermost Si valence e- will bind to the P donor e-, gaining energy
o Valence