Exercise 4.23 - Scalar QDA
Answers
import math

hm = [67, 79, 71]   # male heights (inches)
hf = [68, 67, 60]   # female heights (inches)

def mu(h):
    # MLE of the class mean
    return (h[0] + h[1] + h[2]) / 3

def sigma2(h):
    # MLE of the class variance (divide by N, not N - 1)
    m = mu(h)
    return ((h[0] - m)**2 + (h[1] - m)**2 + (h[2] - m)**2) / 3

# Question (a): class-conditional parameters
mu_m = mu(hm)
mu_f = mu(hf)
sigma2_m = sigma2(hm)
sigma2_f = sigma2(hf)
print(mu_m)
print(sigma2_m)
print(mu_f)
print(sigma2_f)
# The class priors are \pi_m = \pi_f = 0.5 (three examples per class).

# Question (b): posterior probability that a person of height 72 is male
temp_m = (2 * math.pi * sigma2_m)**(-0.5) * math.exp(-(72 - mu_m)**2 / (2 * sigma2_m))
temp_f = (2 * math.pi * sigma2_f)**(-0.5) * math.exp(-(72 - mu_f)**2 / (2 * sigma2_f))
print(temp_m / (temp_m + temp_f))
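For reference, the quantities computed above are the per-class MLEs and the Bayes-rule posterior; since \pi_m = \pi_f = 0.5, the priors cancel, which is why the code omits them:

\hat{\mu}_c = \frac{1}{N_c} \sum_{i : y_i = c} x_i, \qquad
\hat{\sigma}_c^2 = \frac{1}{N_c} \sum_{i : y_i = c} (x_i - \hat{\mu}_c)^2

p(y = m \mid x = 72) = \frac{\pi_m \, \mathcal{N}(72 \mid \hat{\mu}_m, \hat{\sigma}_m^2)}{\pi_m \, \mathcal{N}(72 \mid \hat{\mu}_m, \hat{\sigma}_m^2) + \pi_f \, \mathcal{N}(72 \mid \hat{\mu}_f, \hat{\sigma}_f^2)}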
For question (c), where the data has several features, we can use a naive Bayes model, which amounts to fitting a diagonal covariance matrix for each class.
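A minimal sketch of that idea, not part of the original answer: fit a mean and variance per class and per dimension, then score a test vector with a product of 1-D Gaussians. The 2-D arrays Xm, Xf and the test point x below are made-up placeholders used only to show the mechanics.

import math

def fit_diag(X):
    # MLE of per-dimension mean and variance for one class (diagonal covariance)
    n, d = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / n for j in range(d)]
    var = [sum((row[j] - mu[j])**2 for row in X) / n for j in range(d)]
    return mu, var

def log_gauss(x, mu, var):
    # Sum of log N(x_j | mu_j, var_j): a diagonal-covariance Gaussian log-density
    return sum(-0.5 * math.log(2 * math.pi * v) - (xj - m)**2 / (2 * v)
               for xj, m, v in zip(x, mu, var))

# Hypothetical 2-D data (height, weight), for illustration only.
Xm = [[67, 170], [79, 200], [71, 180]]
Xf = [[68, 130], [67, 125], [60, 110]]
mu_m, var_m = fit_diag(Xm)
mu_f, var_f = fit_diag(Xf)

x = [72, 160]                       # hypothetical test point
lm = math.log(0.5) + log_gauss(x, mu_m, var_m)
lf = math.log(0.5) + log_gauss(x, mu_f, var_f)
print(1 / (1 + math.exp(lf - lm)))  # posterior p(y = m | x)

Working in log space and normalising via 1 / (1 + exp(lf - lm)) avoids underflow when the product of many per-dimension densities becomes very small.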