
Exercise 3.5.16* (Condition to complete a row $a_1,a_2,\ldots,a_n$ to obtain a matrix $A \in \mathrm{SL}_n(\mathbb{Z})$)

Let $a_1, a_2, \ldots, a_n$ be given integers. Show that there is an $n \times n$ matrix with integral elements and determinant $1$ whose first row is $a_1, a_2, \ldots, a_n$ if and only if $\gcd(a_1, a_2, \ldots, a_n) = 1$.

Answers

Proof. Suppose that there is an $n \times n$ matrix $A = (a_{ij})_{1 \le i, j \le n}$ with integral elements and determinant $1$ whose first row is $a_1, a_2, \ldots, a_n$, so that $a_{1j} = a_j$ for $1 \le j \le n$. The expansion of the determinant along the first row is

$$\det(A) = a_1 c_1 + a_2 c_2 + \cdots + a_n c_n,$$

where $c_j = (-1)^{1+j} m_{1j}$ is the cofactor of $a_j = a_{1j}$ in the matrix $A$ (and $m_{1j}$ the corresponding minor).

Then the relation $a_1 c_1 + a_2 c_2 + \cdots + a_n c_n = 1$ shows that $\gcd(a_1, a_2, \ldots, a_n)$ divides $1$, so $\gcd(a_1, a_2, \ldots, a_n) = 1$.
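As a quick numerical sanity check of this direction, here is a small Python sketch (the helper names `det` and `first_row_cofactors` and the sample matrix are mine, not from the text): for a matrix of determinant $1$, the first-row cofactor expansion writes $1$ as an integer combination of the first-row entries, so their gcd must be $1$.

```python
from math import gcd
from functools import reduce

def det(M):
    """Determinant by cofactor expansion along the first row (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def first_row_cofactors(M):
    """Cofactors c_j = (-1)^(1+j) * m_1j of the entries of the first row."""
    return [(-1) ** j * det([row[:j] + row[j + 1:] for row in M[1:]])
            for j in range(len(M))]

# An arbitrary integer matrix with determinant 1.
A = [[2, 3, 5],
     [1, 2, 3],
     [0, 1, 2]]
c = first_row_cofactors(A)
assert sum(a * cj for a, cj in zip(A[0], c)) == det(A) == 1
assert reduce(gcd, A[0]) == 1   # the gcd of the first row divides det(A) = 1
```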

We prove the converse by induction on $n$. For $n = 2$, this is the result of Problem 3. Since $\gcd(a_1, a_2) = 1$, there exist integers $u, v$ such that $a_1 v - a_2 u = 1$, thus $\begin{vmatrix} a_1 & a_2 \\ u & v \end{vmatrix} = 1$, so

$$A = \begin{pmatrix} a_1 & a_2 \\ u & v \end{pmatrix} \in \mathrm{SL}_2(\mathbb{Z}).$$
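For concreteness, a minimal Python sketch of this base case, assuming a standard extended Euclidean helper (the names `extended_gcd` and `sl2_with_first_row` are made up for this illustration): it finds $u, v$ with $a_1 v - a_2 u = 1$ and returns the corresponding $2 \times 2$ matrix.

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def sl2_with_first_row(a1, a2):
    """Build [[a1, a2], [u, v]] with a1*v - a2*u = 1; requires gcd(a1, a2) = 1."""
    g, x, y = extended_gcd(a1, a2)   # a1*x + a2*y = 1
    assert g == 1
    u, v = -y, x                     # then a1*v - a2*u = a1*x + a2*y = 1
    return [[a1, a2], [u, v]]

M = sl2_with_first_row(7, 5)         # one valid completion: 7*(-2) - 5*(-3) = 1
assert M[0][0] * M[1][1] - M[0][1] * M[1][0] == 1
```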

To introduce the method, we first treat the case $n = 3$ before the general case. This part is only meant to explain the idea, so you can skip it and go directly to the general argument.

Suppose that $\gcd(a, b, c) = 1$. Put $d = \gcd(a, b)$. Then $\gcd(d, c) = \gcd(a, b, c) = 1$.

If $d = 0$, then $a = b = 0$, and $c = \varepsilon = \pm 1$. Then we can take

$$A = \begin{pmatrix} 0 & 0 & \varepsilon \\ \varepsilon & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \in \mathrm{SL}_3(\mathbb{Z}).$$

Suppose now that $d \ne 0$. Put $a' = a/d$, $b' = b/d$. Then $\gcd(a', b') = 1$. The case $n = 2$ shows that there are integers $u, v$ such that $\begin{vmatrix} a' & b' \\ u & v \end{vmatrix} = 1$, which gives $\begin{vmatrix} a & b \\ u & v \end{vmatrix} = d$.

Then we look for integers $\lambda, \mu, \nu$ such that

$$A = \begin{pmatrix} a & b & c \\ u & v & 0 \\ \lambda & \mu & \nu \end{pmatrix} \in \mathrm{SL}_3(\mathbb{Z}).$$

Here

$$\det(A) = -\lambda v c + \mu u c + \nu (av - ub),$$

thus

$$\det(A) = (-\lambda v + \mu u) c + \nu d. \tag{1}$$

This shows that $A \in \mathrm{SL}_3(\mathbb{Z})$ if and only if $(-\lambda v + \mu u) c + \nu d = 1$. To prove the existence of such integers $\lambda, \mu, \nu$, we first choose integers $k, \nu$ such that

$$kc + \nu d = 1. \tag{2}$$

Since $\gcd(d, c) = \gcd(a, b, c) = 1$, such integers $k, \nu$ exist. Moreover $\gcd(u, v) = 1$ (because $a'v - b'u = 1$), therefore there are integers $\lambda, \mu$ such that

$$-\lambda v + \mu u = k. \tag{3}$$

(Note that it is not necessary to solve this equation with Bézout's algorithm: since $(a/d)v - (b/d)u = 1$, we have $k = (ka/d)v - (kb/d)u$, so $\lambda = -ka/d$, $\mu = -kb/d$ is a solution.)

Then, by equations (1), (2), and (3),

$$\det(A) = (-\lambda v + \mu u) c + \nu d = kc + \nu d = 1,$$

so that $A = \begin{pmatrix} a & b & c \\ u & v & 0 \\ \lambda & \mu & \nu \end{pmatrix} \in \mathrm{SL}_3(\mathbb{Z})$.

For instance, take $(a, b, c) = (6, 15, 10)$. Then $d = \gcd(6, 15) = 3$, and $3 \cdot 6 - 1 \cdot 15 = 3 = d$, so we can take $u = 1$, $v = 3$. Equation (2) is $10k + 3\nu = 1$, so we can take $k = 1$, $\nu = -3$. Then $\lambda = -ka/d = -2$, $\mu = -kb/d = -5$, so that

$$A = \begin{pmatrix} 6 & 15 & 10 \\ 1 & 3 & 0 \\ -2 & -5 & -3 \end{pmatrix} \in \mathrm{SL}_3(\mathbb{Z}).$$
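A few lines of Python can double-check this worked example; it only re-verifies the numbers written above (the gcd conditions, equations (2) and (3), and the determinant).

```python
from math import gcd

# Worked example: (a, b, c) = (6, 15, 10), d = 3, u = 1, v = 3, k = 1, nu = -3.
a, b, c = 6, 15, 10
d, u, v = 3, 1, 3
k, nu = 1, -3
lam, mu = -k * (a // d), -k * (b // d)      # lambda = -2, mu = -5

assert gcd(a, gcd(b, c)) == 1 and gcd(a, b) == d
assert a * v - b * u == d                   # the 2x2 step
assert k * c + nu * d == 1                  # equation (2)
assert -lam * v + mu * u == k               # equation (3)

A = [[a, b, c], [u, v, 0], [lam, mu, nu]]
det_A = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
         - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
         + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
assert det_A == 1                           # so A is in SL_3(Z)
```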

Now we treat the general case. Suppose that for some integer $n \ge 2$, and for any integers $a_1, a_2, \ldots, a_n$ such that $\gcd(a_1, a_2, \ldots, a_n) = 1$, we can find a matrix $A = (a_{ij})_{1 \le i, j \le n} \in \mathrm{SL}_n(\mathbb{Z})$ such that $a_{11} = a_1, a_{12} = a_2, \ldots, a_{1n} = a_n$.

Now take $n + 1$ integers $a_1, a_2, \ldots, a_n, a_{n+1}$ such that $\gcd(a_1, a_2, \ldots, a_{n+1}) = 1$. Put $d = \gcd(a_1, a_2, \ldots, a_n)$.

If $d = 0$, then $a_1 = a_2 = \cdots = a_n = 0$, and $a_{n+1} = \varepsilon = \pm 1$. We can take

$$A = \begin{pmatrix} 0 & 0 & \cdots & 0 & \varepsilon \\ (-1)^n \varepsilon & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix},$$

which satisfies $\det(A) = 1$.
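Before turning to the case $d \ne 0$, here is a small Python check of this $d = 0$ matrix (the helper names `det` and `zero_row_case` are mine): it confirms $\det(A) = 1$ for several values of $n$ and both signs of $\varepsilon$.

```python
def det(M):
    """Determinant by cofactor expansion along the first row (small matrices only)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def zero_row_case(n, eps):
    """(n+1)x(n+1) matrix with first row (0, ..., 0, eps), as in the d = 0 case above."""
    m = n + 1
    rows = [[0] * (m - 1) + [eps],                        # first row (0, ..., 0, eps)
            [(-1) ** n * eps] + [0] * (m - 1)]            # second row ((-1)^n eps, 0, ..., 0)
    rows += [[1 if j == i - 1 else 0 for j in range(m)]   # shifted identity rows
             for i in range(2, m)]
    return rows

assert all(det(zero_row_case(n, eps)) == 1
           for n in range(2, 7) for eps in (1, -1))
```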

If $d \ne 0$, put $a_1' = a_1/d$, $a_2' = a_2/d$, $\ldots$, $a_n' = a_n/d$. Then $\gcd(a_1', a_2', \ldots, a_n') = 1$. If we apply the induction hypothesis to $a_1', a_2', \ldots, a_n'$, we obtain a matrix

$$A' = \begin{pmatrix} a_1' & a_2' & \cdots & a_n' \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} \in \mathrm{SL}_n(\mathbb{Z}).$$

Therefore

$$B = \begin{pmatrix} a_1 & a_2 & \cdots & a_n \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}$$

satisfies $\det(B) = d$ (its first row is $d$ times that of $A'$). We prove the existence of integers $\lambda_1, \lambda_2, \ldots, \lambda_n, \lambda_{n+1}$ such that

$$A = \begin{pmatrix} a_1 & a_2 & \cdots & a_n & a_{n+1} \\ a_{21} & a_{22} & \cdots & a_{2n} & 0 \\ \vdots & \vdots & & \vdots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} & 0 \\ \lambda_1 & \lambda_2 & \cdots & \lambda_n & \lambda_{n+1} \end{pmatrix} \in \mathrm{SL}_{n+1}(\mathbb{Z}).$$

By expanding $\det(A)$ along the last column, we obtain

$$\det(A) = \lambda_{n+1} \det(B) + (-1)^{n+2} a_{n+1} \det(Y), \tag{4}$$

where

$$Y = \begin{pmatrix} a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \\ \lambda_1 & \lambda_2 & \cdots & \lambda_n \end{pmatrix}.$$

Here $\det(B) = d = a_1 c_1 + a_2 c_2 + \cdots + a_n c_n$, where $c_j = (-1)^{1+j} m_{1j}$ is the cofactor of $a_j = a_{1j}$ in $B$, and

$$\det(Y) = \begin{vmatrix} a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \\ \lambda_1 & \lambda_2 & \cdots & \lambda_n \end{vmatrix} = (-1)^{n-1} \begin{vmatrix} \lambda_1 & \lambda_2 & \cdots & \lambda_n \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix} = (-1)^{n-1} (\lambda_1 c_1 + \lambda_2 c_2 + \cdots + \lambda_n c_n).$$

Therefore, since $(-1)^{n+2} (-1)^{n-1} = -1$, equation (4) gives

$$\det(A) = -a_{n+1} (\lambda_1 c_1 + \lambda_2 c_2 + \cdots + \lambda_n c_n) + \lambda_{n+1} d. \tag{5}$$
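Equation (5) can be tested numerically on a throwaway example (the data and the helper names `det`, `first_row_cofactors` below are made up, not from the text): build $A$ from a sample $B$, an integer playing the role of $a_{n+1}$, and arbitrary $\lambda_j$, then compare both sides.

```python
def det(M):
    """Determinant by cofactor expansion along the first row (small matrices only)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def first_row_cofactors(M):
    """Cofactors c_j of the first-row entries of M."""
    return [(-1) ** j * det([row[:j] + row[j + 1:] for row in M[1:]])
            for j in range(len(M))]

# Made-up data: B is 3x3 with integer entries, a_next plays the role of a_{n+1},
# and lams = (lambda_1, ..., lambda_n, lambda_{n+1}) is arbitrary.
B = [[6, 15, 10], [1, 3, 0], [-2, -5, -3]]
a_next, lams = 7, [4, -1, 2, 5]

A = ([B[0] + [a_next]]
     + [row + [0] for row in B[1:]]
     + [lams])
c = first_row_cofactors(B)
rhs = -a_next * sum(l * cj for l, cj in zip(lams[:-1], c)) + lams[-1] * det(B)
assert det(A) == rhs                       # equation (5)
```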

Since $\gcd(d, a_{n+1}) = \gcd(a_1, a_2, \ldots, a_n, a_{n+1}) = 1$, there exist integers $k$ and $\lambda_{n+1}$ such that

$$k a_{n+1} + \lambda_{n+1} d = 1. \tag{6}$$

Moreover, we can find $\lambda_1, \lambda_2, \ldots, \lambda_n$ such that

$$-\lambda_1 c_1 - \lambda_2 c_2 - \cdots - \lambda_n c_n = k. \tag{7}$$

Indeed, $1 = (a_1/d) c_1 + (a_2/d) c_2 + \cdots + (a_n/d) c_n$, so $\gcd(c_1, c_2, \ldots, c_n) = 1$. More explicitly, since $a_1 c_1 + a_2 c_2 + \cdots + a_n c_n = d$, we obtain $(-k a_1/d) c_1 + (-k a_2/d) c_2 + \cdots + (-k a_n/d) c_n = -k$, so a solution is

$$\lambda_1 = -\frac{k a_1}{d}, \quad \ldots, \quad \lambda_n = -\frac{k a_n}{d}.$$

Then equations (5), (6) and (7) show that

$$\det(A) = 1,$$

for this choice of $\lambda_1, \ldots, \lambda_{n+1}$, and so the induction is complete.

There is an $n \times n$ matrix with integral elements and determinant $1$ whose first row is $a_1, a_2, \ldots, a_n$ if and only if $\gcd(a_1, a_2, \ldots, a_n) = 1$. $\square$
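For completeness, the induction can be turned into a recursive procedure. The following Python sketch (the function name `unimodular_with_first_row` and its helpers are mine, and it is only a sketch of the proof above, not an optimized algorithm) follows the argument step by step: the base case $n = 2$, the $d = 0$ case, and otherwise the explicit choice $k a_{n+1} + \lambda_{n+1} d = 1$, $\lambda_j = -k a_j / d$.

```python
from math import gcd
from functools import reduce

def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def det(M):
    """Determinant by cofactor expansion along the first row (small matrices only)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def unimodular_with_first_row(a):
    """Integer matrix with determinant 1 whose first row is a (requires gcd(a) = 1)."""
    m = len(a)
    assert m >= 2 and reduce(gcd, a) == 1
    if m == 2:
        g, x, y = extended_gcd(a[0], a[1])     # a[0]*x + a[1]*y = 1
        return [[a[0], a[1]], [-y, x]]         # determinant a[0]*x + a[1]*y = 1
    d = reduce(gcd, a[:-1])
    eps = a[-1]
    if d == 0:                                 # a[0] = ... = a[m-2] = 0, a[-1] = +-1
        rows = [[0] * (m - 1) + [eps],
                [(-1) ** (m - 1) * eps] + [0] * (m - 1)]
        rows += [[1 if j == i - 1 else 0 for j in range(m)] for i in range(2, m)]
        return rows
    # Induction: matrix A1 with first row (a_1/d, ..., a_n/d) and determinant 1.
    A1 = unimodular_with_first_row([ai // d for ai in a[:-1]])
    # Choose k, lam_last with k*a_{n+1} + lam_last*d = 1, then lam_j = -k*a_j/d.
    g, k, lam_last = extended_gcd(a[-1], d)
    lams = [-k * (ai // d) for ai in a[:-1]]
    return ([list(a)]
            + [row + [0] for row in A1[1:]]
            + [lams + [lam_last]])

M = unimodular_with_first_row([6, 15, 10, 35])
assert M[0] == [6, 15, 10, 35] and det(M) == 1
```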
