+ #strong[Evaluate the function] $f (x) = 2 x^2 - 0.1 x$ at $x = 5.21$ using 4-digit arithmetic with chopping. What is the result?
]

#block[
#setenum(numbering: "(A)", start: 1)
+ 53.75
+ 53.76
+ 53.77
+ 53.74
]
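
A minimal Python sketch of the computation, assuming "chopping" means truncating every intermediate result to 4 significant digits (no rounding):

```python
import math

def chop(v, digits=4):
    # Truncate v to `digits` significant digits without rounding.
    if v == 0:
        return 0.0
    e = math.floor(math.log10(abs(v)))    # exponent of the leading digit
    scale = 10.0 ** (digits - 1 - e)
    return math.trunc(v * scale) / scale

x = chop(5.21)
x2 = chop(x * x)        # 5.21^2 = 27.1441 -> 27.14
t1 = chop(2 * x2)       # 2 * 27.14 = 54.28
t2 = chop(0.1 * x)      # 0.1 * 5.21 = 0.521
print(chop(t1 - t2))    # 54.28 - 0.521 = 53.759 -> 53.75
```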

#block[
#setenum(numbering: "1)", start: 2)
+ #strong[Given a symmetric, positive real matrix $A$ and initial eigenvalue guesses $lambda_1^\* , lambda_2^\*$ such that $|lambda_1^\* - lambda_1| > |lambda_2^\* - lambda_2|$,] which iterative method will converge with the best rate?
]

#block[
#setenum(numbering: "(A)", start: 1)
+ $x_n = (A - lambda_1^\* I) x_(n - 1)$
+ $x_n = (A - lambda_2^\* I) x_(n - 1)$
+ $(A - lambda_1^\* I) x_n = x_(n - 1)$
+ $(A - lambda_2^\* I) x_n = x_(n - 1)$
]
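
Options (C) and (D) are shifted inverse power iterations: each step solves a linear system, and the iteration converges to the eigenpair whose eigenvalue is closest to the shift, the faster the closer the shift is. A minimal Python sketch (the matrix and shift below are illustrative, not taken from the problem):

```python
import numpy as np

def inverse_power(A, shift, iters=50):
    # Inverse power iteration: solve (A - shift*I) x_n = x_{n-1} each step.
    M = A - shift * np.eye(A.shape[0])
    x = np.random.default_rng(0).standard_normal(A.shape[0])
    for _ in range(iters):
        x = np.linalg.solve(M, x)     # one linear solve per iteration
        x /= np.linalg.norm(x)
    return x @ A @ x                  # Rayleigh quotient: eigenvalue estimate

A = np.array([[4.0, 1.0], [1.0, 3.0]])
print(inverse_power(A, shift=2.0))    # converges to the eigenvalue nearest 2
```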

#block[
#setenum(numbering: "1)", start: 3)
+ #strong[Which of the following iterative methods is unstable with respect to numerical error growth at $x_0$?]
]

#block[
#setenum(numbering: "(A)", start: 1)
+ $x_(n + 1) = 3 x_n + 2$
+ $x_(n + 1) = 1 / 6 x_n + 100$
+ $x_(n + 1) = 7 / 8 x_n + 20$
+ $x_(n + 1) = 0.1 x_n + 10$
]
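
For a recurrence $x_(n + 1) = a x_n + b$, a perturbation $epsilon$ in $x_0$ propagates as $a^n epsilon$, so numerical error grows exactly when $|a| > 1$. A quick Python check:

```python
# Perturb x0 by 1e-10 and track the gap between the two trajectories.
for a, b in [(3, 2), (1/6, 100), (7/8, 20), (0.1, 10)]:
    x, y = 1.0, 1.0 + 1e-10
    for _ in range(50):
        x, y = a * x + b, a * y + b
    print(f"a = {a:g}: error after 50 steps = {abs(x - y):.3e}")
# Only a = 3 amplifies the perturbation (by a factor of 3**50).
```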

#block[
#setenum(numbering: "1)", start: 4)
+ #strong[Given the points $x_0 = 1 , x_1 = 2 , x_2 = 3$, which of the following is not a Lagrange basis function?]
]

#block[
#setenum(numbering: "(A)", start: 1)
+ $- (x - 1) (x - 3)$
+ $frac((x - 1) (x - 2), 2)$
+ $frac((x - 2) (x - 3), 2)$
+ $frac((x - 1) (x - 3), 2)$
]
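
A SymPy sketch that generates all three Lagrange basis functions $l_i (x) = product_(j != i) frac(x - x_j, x_i - x_j)$ for the nodes $1 , 2 , 3$, to compare against the options:

```python
import sympy as sp

x = sp.symbols("x")
nodes = [1, 2, 3]
for i, xi in enumerate(nodes):
    li = sp.Integer(1)
    for j, xj in enumerate(nodes):
        if j != i:
            li *= (x - xj) / (xi - xj)    # vanishes at x_j, equals 1 at x_i
    print(f"l_{i}(x) =", sp.factor(li))
```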

== II. Fill in the Blanks (30 pts)
<ii.-fill-in-the-blanks-30-pts>

#block[
#setenum(numbering: "1)", start: 1)
+ #strong[For the equation] $5 x^2 + x - 6 = 0$, determine whether the following fixed-point iterations starting from $x_0 = 0.9$ are convergent. Fill in 'True' if convergent, 'False' if not. (2 pts each)
]

- $x = sqrt(frac(6 - x, 5))$
- $x = 6 - 5 x^2$
- $x = sqrt(frac(- 3 x^2 - x + 6, 2))$
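
The standard test is $|g' (x^\*)| < 1$ at the fixed point; a minimal Python sketch that simply runs each scheme from $x_0 = 0.9$ (the nearby root is $x = 1$):

```python
import math

schemes = {
    "sqrt((6 - x)/5)":         lambda x: math.sqrt((6 - x) / 5),
    "6 - 5x^2":                lambda x: 6 - 5 * x**2,
    "sqrt((-3x^2 - x + 6)/2)": lambda x: math.sqrt((-3 * x**2 - x + 6) / 2),
}

for name, g in schemes.items():
    x = 0.9
    try:
        for _ in range(25):
            x = g(x)
        print(f"x = {name}: ends near {x:.6f}")
    except (ValueError, OverflowError):    # sqrt of a negative, or blow-up
        print(f"x = {name}: diverges")
```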

#block[
#setenum(numbering: "1)", start: 2)
+ #strong[Given points] $x_0 = 1 , x_1 = 2$, and the derivative at $x_0$, determine the three basis polynomials for Hermite interpolation. (2 pts each)
]
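
With three interpolation conditions (values at $x_0 = 1$ and $x_1 = 2$, plus the derivative at $x_0$), each basis polynomial is the quadratic equal to 1 in exactly one condition and 0 in the other two. A SymPy sketch of that construction:

```python
import sympy as sp

x, a, b, c = sp.symbols("x a b c")
p = a * x**2 + b * x + c

# The three interpolation conditions: p(1), p(2), p'(1).
conds = [p.subs(x, 1), p.subs(x, 2), sp.diff(p, x).subs(x, 1)]

for k in range(3):
    target = [1 if i == k else 0 for i in range(3)]
    sol = sp.solve([lhs - t for lhs, t in zip(conds, target)], (a, b, c))
    print(f"basis {k}:", sp.expand(p.subs(sol)))
```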

#block[
#setenum(numbering: "1)", start: 3)
+ #strong[Given the matrix] $mat(delim: "[", 100, 14; 14, 4)$, find its eigenvalues and condition number under the spectral norm. (2 pts each)
]
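
A NumPy check of the hand computation (for a symmetric positive-definite matrix the spectral condition number is $lambda_(max) \/ lambda_(min)$):

```python
import numpy as np

A = np.array([[100.0, 14.0], [14.0, 4.0]])
print(np.linalg.eigvalsh(A))    # symmetric matrix -> real eigenvalues: [2. 102.]
print(np.linalg.cond(A, 2))     # spectral condition number: 102 / 2 = 51
```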

#block[
#setenum(numbering: "1)", start: 4)
+ #strong[To minimize the local truncation error of the formula] for solving the IVP $y' = f (t , y)$, find the values of $a_0$, $a_1$, and $beta$. (2 pts each)
]

#block[
#setenum(numbering: "1)", start: 5)
+ #strong[Find the monic polynomials] $phi_k (x)$ (for $k = 0 , 1 , 2$) that are orthogonal on $[0 , 4]$ with respect to the weight function $rho (x) = 1$. (2 pts each)
]
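
A symbolic Gram-Schmidt sketch with the inner product $angle.l f , g angle.r = integral_0^4 f (x) g (x) dif x$, starting from the monomials so the results stay monic:

```python
import sympy as sp

x = sp.symbols("x")

def inner(f, g):
    return sp.integrate(f * g, (x, 0, 4))

basis = []
for k in range(3):
    p = x**k
    for q in basis:
        p -= inner(p, q) / inner(q, q) * q   # subtract projections
    basis.append(sp.expand(p))

print(basis)    # [1, x - 2, x**2 - 4*x + 8/3]
```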

== III. Iterative Method Convergence (12 pts)
<iii.-iterative-method-convergence-12-pts>

Given $A = mat(delim: "[", 8, 2; 0, 4)$, $arrow(b) = mat(delim: "[", 2; 1)$, and the iterative method

$ arrow(x)^((k)) = arrow(x)^((k - 1)) + omega (A arrow(x)^((k - 1)) - arrow(b)), $

answer the following:

#block[
#setenum(numbering: "1)", start: 1)
+ #strong[For which values of] $omega$ #strong[will the method converge?] (8 pts)
+ #strong[For which values of] $omega$ #strong[will the method converge the fastest?] (4 pts)
]
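
The scheme can be rewritten as $arrow(x)^((k)) = (I + omega A) arrow(x)^((k - 1)) - omega arrow(b)$, so it converges iff the spectral radius of $G = I + omega A$ is below 1; since $A$ is triangular, its eigenvalues are just 8 and 4. A NumPy scan as a numerical cross-check:

```python
import numpy as np

A = np.array([[8.0, 2.0], [0.0, 4.0]])
omegas = np.linspace(-0.5, 0.1, 601)
rho = np.array([max(abs(np.linalg.eigvals(np.eye(2) + w * A)))
                for w in omegas])

ok = omegas[rho < 1]
print(f"converges for omega in about ({ok.min():.3f}, {ok.max():.3f})")
print(f"fastest near omega = {omegas[rho.argmin()]:.3f}")  # analytic optimum: -1/6
```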

== IV. Vector Norm Proof (10 pts)
<iv.-vector-norm-proof-10-pts>

Prove that $||X||_1 = sum_(i = 1)^n |X_i|$ is a valid vector norm, where $X_i$ is the $i$-th component of vector $X$.
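
The proof must establish positivity (with $||X||_1 = 0$ iff $X = 0$), absolute homogeneity, and the triangle inequality. A random spot-check of the axioms in Python (a sanity test, not a substitute for the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
norm1 = lambda v: np.sum(np.abs(v))

for _ in range(1000):
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    a = rng.standard_normal()
    assert norm1(x) >= 0                                  # positivity
    assert np.isclose(norm1(a * x), abs(a) * norm1(x))    # homogeneity
    assert norm1(x + y) <= norm1(x) + norm1(y) + 1e-12    # triangle inequality
print("all axiom checks passed")
```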

== V. Richardson Extrapolation (10 pts)
<v.-richardson-extrapolation-10-pts>

Given the formula for the second derivative approximation

$ f'' (x_0) approx frac(f (x_0 - h) - 2 f (x_0) + f (x_0 + h), h^2), $

derive a better formula to approximate $f'' (x_0)$ with error $O (h^4)$ using Richardson extrapolation.
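
A numerical sketch of the extrapolation: writing $D (h)$ for the quotient above, $D_4 (h) = frac(4 D (h \/ 2) - D (h), 3)$ cancels the $h^2$ error term; $f (x) = sin x$ is just an illustrative test function:

```python
import math

def d2(f, x0, h):
    # O(h^2) central difference for the second derivative.
    return (f(x0 - h) - 2 * f(x0) + f(x0 + h)) / h**2

def d2_richardson(f, x0, h):
    # Combine two step sizes to cancel the h^2 term: O(h^4).
    return (4 * d2(f, x0, h / 2) - d2(f, x0, h)) / 3

x0, h = 1.0, 0.1
exact = -math.sin(x0)
print(abs(d2(math.sin, x0, h) - exact))             # ~7e-4 (O(h^2))
print(abs(d2_richardson(math.sin, x0, h) - exact))  # ~6e-8 (O(h^4))
```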

== VI. Least Squares Fit (12 pts)
<vi.-least-squares-fit-12-pts>

Find the values of $a$ and $b$ such that $y = a x + b x^3$ fits the following data in the least-squares sense, weighted by the given weights:

#figure(
  align(center)[#table(
    columns: 4,
    align: (auto, auto, auto, auto),
    stroke: none,
    [$X$], table.vline(), [1], [2], [3],
    [$Y$], [-4], [24], [6],
    [Weights], [1], [1/4], [1/9],
  )],
  kind: table,
)
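
A NumPy sketch of the weighted normal equations $(Phi^T W Phi) c = Phi^T W y$, handy for checking the hand computation:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([-4.0, 24.0, 6.0])
w = np.array([1.0, 1 / 4, 1 / 9])

Phi = np.column_stack([x, x**3])    # one column per basis function: x, x^3
W = np.diag(w)
a, b = np.linalg.solve(Phi.T @ W @ Phi, Phi.T @ W @ y)
print(a, b)                         # a = 8/7, b = 23/49
```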

== VII. Region of Absolute Stability (10 pts)
<vii.-region-of-absolute-stability-10-pts>

For the following methods for solving initial-value problems for ODEs, determine the region of absolute stability using the test equation $y' = lambda y$ with $"Re" (lambda) < 0$. Which method is more stable (or are the two the same)?
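
The general recipe: apply the method to $y' = lambda y$, collapse one step to $y_(n + 1) = g (h lambda) y_n$, and require $|g (h lambda)| <= 1$. As an illustration of the technique only (explicit Euler is not necessarily one of the methods above), Euler gives $g (z) = 1 + z$, so its region is the disk $|1 + z| <= 1$; a Python sketch that traces such a region:

```python
import numpy as np

# Sample the complex z = h*lambda plane and test |g(z)| <= 1.
re, im = np.meshgrid(np.linspace(-3, 1, 400), np.linspace(-2, 2, 400))
z = re + 1j * im

stable = np.abs(1 + z) <= 1    # explicit Euler: y_{n+1} = (1 + h*lambda) y_n
print("stable fraction of the sampled window:", stable.mean())
```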
0 commit comments