Reply: When I last taught Math 250A (Fall 2001), there were 41 students at the end of the course; 11 were undergraduates. The abstract algebra course is a good place for undergraduates to get their first taste of graduate mathematics. If you are contemplating graduate study in mathematics and want to see what it's like, then Math 250A is a good choice. If you haven't taken Math 114, you may find that the Galois theory in Math 250A is so slick that you won't get a feel for what's really going on. My view is that you can still take Math 250A, but you should work out lots of examples in a book like Stewart's "Galois Theory". Obviously, you should also put your questions to Bjorn Poonen, who'll be teaching the course.
Reply: The main reason for using the text by Friedberg et al. is that it's the most common choice for Math 110. I want students to be able to transfer out of the honors course without having to scramble to catch up with the work that has been done in their new section. In the other direction, a strong student from a non-honors section will have at least a fighting chance of being able to transfer into H110 without going completely crazy, since the textbooks, at least, will be the same. The Friedberg text has a somewhat concrete perspective, but it has complete proofs of all the major results. I like the fact that it stresses elementary row and column operations, which are tremendously useful for practical applications but neglected by most purely theoretical books. My guess is that I will follow the book closely but will go fast when there are sections that are purely computational. For homework assignments, it'll be easy to include problems that are not taken from the book if the book's problems are too elementary for the class.
Reply: The class limit was bumped up to 30, so there's currently no waiting list. I am guessing that some of the 30 students who are enrolled did not realize that they were signing up for an honors class. If so, there will be room for a few further students even if the class limit of 30 is not raised any further.
Reply: It's hard to estimate how long it will take students, on average, to do the weekly homework. If you look at the web sites from my two previous upper-division courses, 110 and H113, you'll see how many questions I tend to assign each week. In H110, I imagine that I'll assign no more problems than in regular 110 but that the problems will be harder and more theoretical. When you plan out your courses, you should consider the more inclusive question: "How much time outside of class will I be spending on H110?" You will need to spend time before and after each lecture to internalize the concepts that are discussed in the lecture. I will follow the book, so you will know before each lecture what will be on the table. If you read the book carefully ahead of time, you'll be in a good position to focus on those aspects of the topic that seemed unclear when you first read over the material. For advice on issues like this, students new to Berkeley's math major might benefit from a chat with more seasoned students and with staff members Catherine Pauling and Alison Thompson. (Dexter Stewart has moved to another department; Alison is taking over Dexter's responsibilities.)
Reply: Keith, I have no clue as to who wrote the comment. About the "nice proof": I found it on the author's preprint page http://www.math.lsa.umich.edu/~hderksen/preprint.html.
Reply: Thanks for the report. Fixed now, I hope.
Reply: For some reason, I'd be happier if students tried to figure this out on their own. I think that it will help you organize the material more if you have to think about where we are, where we've been and where we might be going. On the other hand, if we're about to skip over some material or do things in a non-obvious order, I'll let you know by announcing it in class and/or sending e-mail to all students in H110.
Incidentally, I really think that attending class is of capital importance. The "course" includes all of our face to face interactions. Frequently there will be discussions in class that couldn't have been anticipated before the lecture.
Reply: My impression is that problem 7 really was for vectors in n-space and not just for plane vectors. The parallelogram law on page 2 is surely true even in the degenerate cases that you cite if it's interpreted appropriately.
Reply: As far as notation goes, it seems to me that there are books that try to distinguish vectors from scalars in various ways: some have vectors in bold letters and scalars in roman letters, while some have scalars in Greek letters and vectors from a to z. In this book, the authors don't have any general scheme, but I think that they do enough to make sure that the reader won't confuse vectors and scalars in any given situation. It gets more complicated when there are vector spaces and matrices around. As far as vagueness in this chapter is concerned, all I can say is that you should try to be as clear as possible. The concept of "vector space" is defined formally in the second section of the book, so you shouldn't have long to wait before the vagueness goes away.
Reply: OK, I give up: in problem #7 of § 1.1, it's a good idea to assume that the vectors that form your parallelogram are not actually parallel. (Thus, you assume that they span something planar and not just a line segment.)
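If it helps to see the non-degenerate picture in coordinates, here's a quick Python sketch (just an illustration, not part of the assignment); I'm taking the problem to be the usual statement that the diagonals of a parallelogram bisect each other, and the point is that nothing in the computation is special to the plane.

    import numpy as np

    # Two non-parallel vectors in R^4 spanning a genuine (non-degenerate) parallelogram
    # with vertices 0, u, v, u + v.
    u = np.array([1.0, 2.0, 0.0, -1.0])
    v = np.array([3.0, 0.0, 1.0, 2.0])

    mid_first = (np.zeros(4) + (u + v)) / 2    # midpoint of the diagonal from 0 to u + v
    mid_second = u + (v - u) / 2               # midpoint of the diagonal from u to v
    print(np.allclose(mid_first, mid_second))  # True: the diagonals bisect each other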
As far as #21 of § 1.2 goes, the Z that's being formed is indeed the direct product of V and W. From the point of view of Math 113, you could say that a vector space is an abelian group plus an extra structure, which you could characterize as an action of F on the abelian group. The abelian group that underlies Z is just the direct sum or product (they're the same thing here) of V and W as abelian groups. The problem defines an action of F on this direct sum: it's the componentwise action. You have to check that Z with this action is an F-vector space. You could plausibly begin your treatment by saying that we know from Math 113 that it's an abelian group; this means that you're claiming as already known the first handful of vector space axioms. Then you still have to check that more is true, namely that the action of F that's in the picture makes it so that the rest of the axioms hold.
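In case it helps to see the construction of #21 spelled out concretely, here is a small Python sketch with V = W = R^2; the names add and scale are mine, not the book's, and the two asserts only spot-check a couple of the axioms.

    # Z = V x W with componentwise operations, modeled with pairs of tuples.
    def add(z1, z2):
        (v1, w1), (v2, w2) = z1, z2
        return (tuple(a + b for a, b in zip(v1, v2)),
                tuple(a + b for a, b in zip(w1, w2)))

    def scale(c, z):
        v, w = z
        return (tuple(c * a for a in v), tuple(c * a for a in w))

    z1 = ((1.0, 2.0), (3.0, 4.0))
    z2 = ((5.0, 6.0), (7.0, 8.0))

    # Commutativity of + and distributivity of the F-action over +:
    assert add(z1, z2) == add(z2, z1)
    assert scale(2.0, add(z1, z2)) == add(scale(2.0, z1), scale(2.0, z2))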
Reply: Take a continuous function f that isn't differentiable and let F(x) be a definite integral of f, e.g., the integral of f from 0 to x. Then, by the fundamental theorem of calculus, F is differentiable with derivative f. Then F'' is no better defined than f' is. If f is nowhere differentiable, then F'' exists nowhere. If f'(0) doesn't exist, then F''(0) doesn't exist.
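Here is the standard example of this phenomenon written out in a few lines of Python, just to make the reply concrete: f(x) = |x| is continuous everywhere but not differentiable at 0, and F(x) = x|x|/2 is the integral of f from 0 to x.

    f = lambda x: abs(x)              # continuous, not differentiable at 0
    F = lambda x: x * abs(x) / 2      # F(x) = integral of f from 0 to x

    h = 1e-6
    # F'(0) exists and equals f(0) = 0:
    print((F(h) - F(-h)) / (2 * h))                    # approximately 0
    # F''(0) would be f'(0), but the one-sided difference quotients of f at 0 disagree:
    print((f(h) - f(0)) / h, (f(-h) - f(0)) / (-h))    # 1.0 and -1.0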
Reply: Thanks very much. I'll try to get homework assignments prepared sufficiently far in advance that students will have a full week to work on them. As I've said before, I think that the good strategy is for students to start thinking about problems as soon as they're assigned. Hard problems become a lot easier after you've had time to work on them for a while.
Reply: The question sounds as though it were written by someone wanting less abstract algebra! I'll do my best not to have extensive tangential discussions like the one relating a vector space structure on an abelian group to a ring homomorphism. On the other hand, I believe that it's a good idea to mention direct analogies with Math 113 when appropriate. Take the example of the construction of a quotient vector space, which I started to discuss on Friday. This is the exact same construction that one performs to make a quotient group. Not to point this out would be almost negligent. If I knew that all my students would be taking Math 113 after my course, I wouldn't bother to mention the connection -- I'd leave it for the Math 113 instructor. It happens, though, that students can take 113 first and then go on to 110. Students who do this will benefit from the comment that they're seeing a specific construction for the second time. Students who take 110 first will get some insight into a construction that they'll see later on in 113.
Reply: Linear algebra is a part of abstract algebra. The majority of abstract algebra textbooks contain substantial discussions of linear algebra. Like most universities, UC Berkeley thinks that the material in Math 110 is important enough to deserve a course all its own. Moreover, we've decided that students can study abstract linear algebra before studying abstract algebra. This is all fine, but I think that students, and especially honors students, will benefit from extra perspective. The significance of the relation between vector spaces and abelian groups is that vector spaces are abelian groups -- they're abelian groups with some extra structure (an action of a field on the abelian group). Linear transformations between vector spaces are, in particular, homomorphisms between the vector spaces, thought of as abelian groups. They're homomorphisms with the extra property of "commuting" with field multiplication.
I can sense here that you'd rather hear less than more about abstract algebra. I'll steer away from abstract algebra for a while. Note that the construction of quotient spaces is done out in problem 31 on page 23 of the book.
Reply: The book has it right: a set is a subspace if and only if it's equal to its own span. On the other hand, all textbooks have misprints, and most authors are eager to hear about misprints in their books. We should make a list of misprints that we find that are not noted already in the authors' list of errata. If you flag them by typing into the comment box, I'll make sure that the authors get the correction. I'll wait until December and then send them everything that we've found.
Reply: To say that S is linearly dependent is to say that you can write 0 in a non-trivial way as a linear combination of vectors in S. The non-triviality means two things. The first is that you must use distinct vectors -- otherwise you could write 0 = v + (-1)v with v in the set. The second aspect of the non-triviality is the requirement that some coefficient in the linear combination be non-zero. (If the coefficients are all 0, you're just writing 0=0.) When S is linearly dependent, you can use high-school algebra to transform a non-trivial expression of 0 as a linear combination of distinct elements of S into an equation that writes one of those vectors in terms of the others. Conversely, if v is a linear combination of the u_i as in the statement of the problem, then you could write 0 in terms of v and the u_i in such a way that you exhibit the linear dependence of the set S. Conclusion: the problem seems fine as written.
Some students asked me this question in office hours. It seems to me that one should prove that W has a finite number of generating sets if and only if W has only a finite number of elements. One should then establish necessary and sufficient conditions for W to have finite cardinality. Clearly, W is finite if it is the 0-vector space {0}. Clearly, W is infinite if the field F is infinite and W contains a non-zero vector (because it contains all scalar multiples of the vector). If the field F is finite, W has finitely many elements if and only if it has a generating set with only finitely many elements. I don't know if one can say anything more intelligent than this.
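If you want to see the finite-field case concretely, here is a tiny brute-force count in Python over the field with two elements (the generators are arbitrary choices of mine): the span of k independent vectors over F_p has exactly p^k elements.

    from itertools import product

    p = 2                                 # work over the field F_2
    gens = [(1, 0, 1), (0, 1, 1)]         # two independent generators inside (F_2)^3

    # Collect all linear combinations c1*g1 + c2*g2 with c1, c2 in F_2.
    span = set()
    for coeffs in product(range(p), repeat=len(gens)):
        v = tuple(sum(c * g[i] for c, g in zip(coeffs, gens)) % p for i in range(3))
        span.add(v)

    print(len(span))   # 4 = 2^2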
Reply: Well, I'll be happy to include some more difficult problems in the future. (I think that HW #2 had harder problems than HW #1.)
Meanwhile, be sure that you're doing the best possible job on the problems that have been assigned. The grader, John Voight, has finished the first assignment. You'll get back your papers on Friday. He had this to say:
...it looks like most of the students are adequately prepared. There is, however, a lot of sloppy work.... About half of the class failed to check that addition and scalar multiplication were closed on the set of real-valued real differentiable functions (§ 1.2 #10) and another half of the class thought that § 1.2 #16 was asking to show that M_mn(Q) is a vector space over Q, not that M_mn(R) itself is a vector space over Q.

John told me that he finds it easier to write up model solutions than to make extensive marks on students' papers. You can download his solutions from John Voight's home page, /~jvoight/H110/.
One more thing: Please staple your assignments together before handing them in. John is worried about losing pages.
Reply: I hope that the assigned problems will begin to strike you as more honor-ish. This particular problem may have struck you as silly, but it did actually make a point, at least implicitly: If you have two fields k and K, with k stuck inside K, every K-vector space can be regarded as a k-vector space by remembering only the action of the smaller field k. Passing from K to k is called "restriction of scalars"; it's important in many contexts.
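Here is the simplest instance of restriction of scalars spelled out, with k = R sitting inside K = C; the little Python check below (my own toy example) just confirms that a 2-dimensional C-vector space becomes a 4-dimensional R-vector space when you remember only the action of R.

    import numpy as np

    # View C^2 as an R-vector space with R-basis e1, i*e1, e2, i*e2.
    e1 = np.array([1, 0], dtype=complex)
    e2 = np.array([0, 1], dtype=complex)
    r_basis = [e1, 1j * e1, e2, 1j * e2]

    z = np.array([2 + 3j, -1 + 4j])        # an arbitrary vector in C^2
    coords = [2.0, 3.0, -1.0, 4.0]         # its real coordinates in that basis
    print(np.allclose(sum(c * b for c, b in zip(coords, r_basis)), z))   # True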
I thought that the problem was phrased unambiguously. In mathematics courses, one of the things that you're required to do is to read carefully and to express yourself precisely. This issue comes up quite often in Math 55, the discrete math class, in combinatorial questions, especially when probability is involved. You have to decide when two things are different, when order counts, when replacement is allowed, and so on. In our class, it's rare that there are questions of this subtlety, but you still have to pay attention sometimes.
Reply: Our grader, John Voight, wrote up the solutions to HW #1 last week. His H110 page offers them in three different formats! If I understand correctly, John will usually type up solutions as he grades papers. He says that it's easier for him to do this than to make marks on individual students' papers.
Reply: For the first problem, suppose that the finite field is the field of integers mod p, where p is a prime. Think about the case of a vector space of dimension 0, 1, 2, or 3. Start with small examples and see if a pattern emerges. For the second problem, you can suppose that S and T are finite if you want to. To figure out what's going on, consider the case where the two fields are R and C. Also, there was a misprint, which I just corrected: the products st should be ts instead -- s is a vector and t is a field element; we usually write the scalars to the left of the vectors.
By the way, someone sent me e-mail to ask whether we'd be having a quiz today (Wednesday, September 10). We're not. I wasn't planning to have quizzes in this class. Also, I would never give a quiz without announcing it ahead of time. The person who wrote me the e-mail message had to miss class today and was worried about missing a quiz. Please don't miss class unless there is some overwhelming and compelling reason.
Reply: No. We didn't cover that section and the "corollary" is something of a fake. (It depends on an extra axiom of set theory.) I infer that you're talking about problem 21 in § 1.6. Please note that "infinite dimensional" does not mean "has a basis that happens to have an infinite number of elements". The term is defined on page 47, at the top. It means "has no finite basis." As far as we know from this definition, an infinite-dimensional vector space might have no basis at all.
Reply: There are all kinds of different strategies. A frequent one is to prove that P implies Q, that Q implies R and that R implies P. (It can be helpful to reorder the statements first.) If you end up with a series of implications that enable you to show, by following links in the series, that every statement implies every other one, then you've established the equivalence.
Reply: All problems are written up. Just look at John's 110 page.
Reply: According to John, HW #1 had a mean of 16, standard deviation of 3; HW #2 had a mean of 18, standard deviation of 2.
Reply: The vector space structure on V/W is as you say; for example, the sum of x+W and y+W is (x+y)+W. If you use this without comment on your homework paper, no one will complain. If x+W = y+W, then it's true that x-y is in W. Again, if you use this without comment, no one will complain. I do want to stress, however, that this is not an assumption. Rather, it's an easy consequence of the definitions.
Reply: The homework due on February 26 will be more theoretical than previous homeworks. Also, there are enough external problems, with mathematical notation, that I wrote this assignment in a separate .pdf file. If you would like me to print you out a copy on a sheet of paper, let me know by e-mail and I will do this for you. To some extent, I am making the homework more theoretical in response to students' comments. If you want more routine problems but have been afraid to say so, please express yourself by writing comments into the box.
Reply: I agree that students who don't do homework tend to hurt their final grades. I compute grades by adding all components together. In a situation where the mean scores have been quite close to the maximum possible score of 20, a student who gets a 0 for an assignment is not behaving rationally.
As for applications and motivation, I don't think that it would be a good idea for me to stress applications in a specific area (e.g., economics, electrical engineering, cryptography) because students have very varied backgrounds. Some of you are here (in H110) because you like the beauty of this abstract subject. Lots of you are here because you need the theorems of linear algebra as tools in your majors. (Several of you have come to my office hours to show me what you're doing with linear algebra in your other classes.) Most of you have some idea of the way that linear algebra can be applied because you've seen applications in Math 54. Students who want to see a variety of applications should look at the book Linear Algebra by Peter Lax or other books that stress applications. (One by Kenneth M. Hoffman and Ray Kunze is very well regarded, but I don't have a copy to refer to.)
Reply: Try the web page Getting Started with TeX, LaTeX, and friends.
Reply: The first non-book problem asks you to generalize "the first assertion of the problem." Here, "the problem" is 2.3.13. This got garbled because I inserted problem 10 of section 2.6. In 2.3.13, matrices are square. In the non-book generalization, they are allowed to be non-square.
Reply: I intended to assign 2.6.10 and not 2.5.10. What must have happened is that the reference to 2.6.10 got left in at the beginning; I was probably intending to move it down to the bottom.
Reply: I'd prefer to think of this a bit more intrinsically. An endomorphism of a vector space V is a linear map from V to itself. When V has finite dimension, we can take the trace of an endomorphism of V and get a number. With this AB business, we should think that we are given a linear map T:V->W and a linear map U:W->V and that we are computing the traces of the endomorphisms UT of V and TU of W. We get the same answer both times and the question is why this is so.
The two functions "trace(UT)" and "trace(TU)" are linear functions of T and U, so we can replace T and U by simple linear transformations that are chosen from generating sets of the vector spaces L(V,W) and L(W,V). If you have a vector w in W and a functional f in V^*, you can make a linear transformation T by defining T(v):=f(v).w, the product of the scalar f(v) and the vector w. This T depends on f and w, of course. It's easy to see that these T generate L(V,W). In fact, if you choose a basis v_i of V and a basis w_j of W, and if you choose f=f_i (from the basis dual to the chosen basis of V) and select w=w_j, then you'll end up with the T whose matrix has a 1 in the ji-th place and 0's elsewhere. Hence, in studying this issue, we can take T to be the T determined by a pair (f,w) and take U to be the analogous element of L(W,V) that is defined by a pair (g,v) with g in W^* and v in V. Explicitly, U(w) is g(w).v.
What could trace(UT) be in this situation? There is an extremely natural answer, namely the product of the two numbers f(v) and g(w). If you choose a basis for V and compute the trace, you will actually get this answer. (I did this on a piece of paper but am reluctant to try to type the computation into this comments page.) In some sense, I think that I've now provided the explanation that you were seeking. Namely, the trace represents a bilinear (= linear in each argument) function on L(V,W) x L(W,V), and it turns out to be the most obvious such function. It happens also that this function has a nice symmetry: f(v)g(w) is unchanged if we exchange the two pairs (f,w) and (g,v). This symmetry is equivalent to the identity trace(AB) = trace(BA) that you verified by direct calculation. Does this help?
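If you like to check such identities numerically before believing them, here is a short Python sketch; the dimensions 3 and 5 and the random seed are arbitrary, and v0 is just my name for the vector that I called v above.

    import numpy as np
    rng = np.random.default_rng(0)

    # Generic check of trace(UT) = trace(TU) for T: V -> W and U: W -> V,
    # with dim V = 3 and dim W = 5, so T is 5x3 and U is 3x5 as matrices.
    T = rng.standard_normal((5, 3))
    U = rng.standard_normal((3, 5))
    print(np.isclose(np.trace(U @ T), np.trace(T @ U)))       # True

    # The rank-one case discussed above: T(v') = f(v') w and U(w') = g(w') v0.
    f, w = rng.standard_normal(3), rng.standard_normal(5)     # f in V^*, w in W
    g, v0 = rng.standard_normal(5), rng.standard_normal(3)    # g in W^*, v0 in V
    T1 = np.outer(w, f)                                       # matrix of v' |-> f(v') w
    U1 = np.outer(v0, g)                                      # matrix of w' |-> g(w') v0
    print(np.isclose(np.trace(U1 @ T1), (f @ v0) * (g @ w)))  # trace(UT) = f(v0) g(w)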
Reply: I don't think that it would be all that helpful to review things on Friday. Students should try to organize the material for themselves and distill essential things down to a single sheet of paper (2-sided). Doing this with a study group might be a good idea.
If you look at some of my old exams (110 last fall and H113 last spring), you'll get an idea of the style of my questions. An important thing to consider is that last year's exams were 80-minute exams and our midterms this semester are 50-minute exams. In 50 minutes, I can't ask you very much! Also, the midterm can't require too much reflection because you won't have much time to reflect! I might come up with some questions that are either recycled homework questions or variants of questions that you've had on the homework or things that we've done in class.
If you wonder whether a certain kind of question might appear on the exam, ask yourself what would happen if it were there. Would it be a fair question? Would it be reasonably easy to grade? Would only a few students get the answer or would everyone get it? I'd love to be able to make up an exam with three questions: one that almost everyone would get, one that 2/3 of the class would get, and one that few people would get. For various reasons, I doubt whether I can achieve this goal in our short exam. I think that the main purpose of this midterm will be to provide feedback to people who are unsure whether they should continue in this class or transfer into a regular 110 section. If your score on this exam is significantly below that of the main mass of students, then you should think about changing sections.
Reply: I've asked John whether he'll be able to make this happen, but I don't know his schedule. Check back a few times over the weekend.
Reply: YMMV
What we have here is a classic bimodal distribution. There were 11 scores at 23 or above, no scores between 18 and 22, and 20 scores that were 17 or below. There were 10 students who got 10 or below and 10 students whose scores were between 11 and 17.
Reply: "YMMV" is an expression that you see on the net; I understand it to mean "your experience may turn out to be different from mine." While some students found the midterm to be easy, only half the class got more than half the possible points. The goal of the exam was to enable students to decide whether or not they're losing control of the class material. If you find that you're no longer following what's going on in the class (meaning in lectures, in the reading and in the homework), then you should consider moving into a non-honors Math 110. Whether or not you actually decide to move is 100% up to you.
Reply: Sounds like a bad idea because the proof of Theorem 4.7 involves consideration of elementary matrices. You want to avoid circular reasoning.
Reply: I found this as well after class on Wednesday. I added a link to Axler's page on the main H110 page. You'll find the link in the general vicinity of his photo.
Comment: I think that a big drawback of colored chalk is that it's hard to erase. I typically walk out of 2 Evans with un-erased boards; this is fairly normal for math courses. (Instructors begin their classes by erasing the boards left by previous instructors.) I'd feel terribly guilty if I left colored chalk on the board for Crystal to erase.
Reply: In the article, there's P(K,d,r), which I suppose is what you mean. It's defined on page 2 of the article. The definition is sufficiently strange that it's helpful to consider the two examples that appear in Conrad's treatment. In Conrad's paper, one notes that every endomorphism of a vector space of odd dimension over the field of real numbers has an eigenvector. This is because odd-degree real polynomials have real roots (as follows from the intermediate value theorem of calculus). To say that every endomorphism of an odd-dimensional real vector space has an eigenvector is to assert P(R,2,1). Conrad shows that every pair of commuting endomorphisms of such a vector space has a common eigenvector; this is P(R,2,2). Here, a pair of commuting endomorphisms is basically the same thing as a pair of square matrices of size n that commute with each other. Conrad's proof that P(R,2,2) follows from P(R,2,1) shows more generally that P(K,d,1) implies P(K,d,2) for every field K and every positive integer d. Conrad uses this implication when K is the field of complex numbers and d is a power of 2. Derksen treats also the assertion P(K,d,r) with r greater than 2, but this is not necessary for the proof of the fundamental theorem of algebra.
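Here is the d = 2, r = 1 case for K = R checked numerically in Python (the size 5 and the seed are arbitrary): a real matrix of odd size always has at least one real eigenvalue, because its non-real eigenvalues come in conjugate pairs.

    import numpy as np
    rng = np.random.default_rng(1)

    A = rng.standard_normal((5, 5))              # a random real 5x5 matrix
    eigvals = np.linalg.eigvals(A)
    real_ones = eigvals[np.abs(eigvals.imag) < 1e-9]
    print(len(real_ones) >= 1)                   # True; the count of real eigenvalues is odd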
Reply: Who says that it's not immediate? I think that the problem is easy if you view it in the right way, but it would be a mistake to get lulled by the notation and assert without justification that if W_1 and W_2 are each direct sums of subspaces of V, and if the sum of W_1 and W_2 is direct, then you can substitute in blithely and say that W_1 + W_2 is the direct sum of the various subspaces that make up W_1 and W_2.
Reply: I just spoke with the student who posted this comment. The nub of the problem seems to be the union sign in the condition. The ordered bases gamma_i are intended to be lists of vectors. The union of these lists is the concatenation of them: you make a big string that lists the vectors in gamma_1, the vectors in gamma_2, and so on. If W_1=W_2 and gamma_1=gamma_2, and if W_1 is non-zero, then a list that begins with the vectors in gamma_1 and then repeats those vectors will not constitute an ordered basis of V.
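Concretely (a toy example of mine, not from the book): take V = R^2, W_1 = W_2 = span{(1, 0)}, and gamma_1 = gamma_2 = [(1, 0)]. The concatenated list repeats a vector, so it can't be an ordered basis of V.

    import numpy as np

    concatenation = np.array([[1.0, 0.0],
                              [1.0, 0.0]])           # gamma_1 followed by gamma_2
    print(np.linalg.matrix_rank(concatenation))      # 1, not 2: the list is linearly dependent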
The two posted problems are very good.
Reply: I added some stuff an hour or two ago.
Reply: Are you referring to the question of why the value of f(t) = det(A - tI) at lambda is the determinant of A - lambda I? You have to know somehow that taking the determinant of a matrix with polynomial entries and then plugging in a number amounts to the same thing as plugging in the number and then taking the determinant. This is certainly true, but you need to provide a proof of some sort. It's not stated explicitly as a theorem in the textbook.
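If you want to convince yourself numerically before writing a proof, here is a two-step check in Python/SymPy for one particular matrix and one particular scalar (an example, of course, is not a proof of the general statement).

    import sympy as sp

    t = sp.symbols('t')
    A = sp.Matrix([[2, 1], [0, 3]])
    Id = sp.eye(2)

    f = (A - t * Id).det()          # the characteristic polynomial, as a polynomial in t
    lam = 3                         # any scalar will do
    # Plugging lam into f agrees with plugging lam into the matrix and then taking det:
    print(f.subs(t, lam) == (A - lam * Id).det())    # True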
Reply: Sounds reasonable to me.
Reply: My usual rule is that I don't ask questions about the lecture that was just before the exam but I can (and sometimes do) ask about material that was covered two lectures before the exam. In this situation, § 6.4 will not be on the exam, but the first three sections of Chapter 6 will be on the exam.
Reply: The next homework assignment is due on Friday, November 7. However, papers that are turned in on November 10 will not be considered as late. (Mortgage payments are typically due on the first of the month but are considered as delinquent only if they are turned in after the 15th of the month.)
Reply: There are two #10s in this assignment. The second one, in § 6.4, starts off with the statement that V is finite-dimensional. I infer, therefore, that you're talking about § 6.2, #10. I'm pretty sure that the V here is not assumed to be finite-dimensional. If you can do the problem only when V is finite-dimensional, try to analyze how this assumption is used and then see if you can work around it.
Reply: The second problem bears only a superficial resemblance to Schur's lemma. Schur's lemma is about inner product spaces and orthonormal bases. Its proof makes use of the adjoint of a linear transformation. The exam problem is algebraic: it has nothing to do with inner products and does not require that the field of scalars be the real or complex field. The idea of the proof is to pass from an n-dimensional space to a space of smaller dimension by using quotient spaces and to use one of the problems in the assignment that was due on October 17. It is true that proving Schur's lemma gave me the idea to ask the question. On the other hand, the best approach to solving the problem would have been to note at the outset that the solution couldn't involve Schur's lemma because that lemma wouldn't be covered by the exam.
Reply: The idea is very good, but I'd go further and say that there is no need to introduce an inner product. It's probably simplest just to keep V and V^* separate from each other and not identify them by bringing an inner product into the picture. The crux of the proof that we did in class is that there is a subspace of V of dimension dim(V) - 1 that is stable under T. Once you know this, the desired result follows by invoking the induction hypothesis: the restriction of T to the subspace is upper-triangular in some basis, and you get T upper-triangular on all of V just by completing the basis in any old way that you like. How to find the subspace? Look at the action of the transpose T^t of T on V^*. If you make a basis of V, then T is given by a matrix A. The matrix of T^t in the basis dual to the chosen basis is the transpose of A. Since we know that a matrix and its transpose have the same determinant, the characteristic polynomial of T^t is the same as the characteristic polynomial of T. It follows that T^t has an eigenvector f. Since f is a non-zero linear transformation V->F, the null space W of f is a subspace of V of dimension n-1, where n is the dimension of V. It is clear that W is stable under T. Indeed, if w is in W, then f(T(w)) = (T^t(f))(w). Now T^t(f) is a multiple of f because f is an eigenvector, so the expression (T^t(f))(w) is a multiple of f(w), which is 0. Hence T(w) lies in the kernel (= null space) of f, which is W.
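Here is the key step of that argument checked numerically in Python for one particular matrix (the matrix and the random vector below are arbitrary choices of mine): an eigenvector f of T^t determines a hyperplane W = {w : f(w) = 0}, and T carries W into itself.

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [0.0, 3.0, 1.0],
                  [0.0, 0.0, 5.0]])              # matrix of T in some basis
    vals, vecs = np.linalg.eig(A.T)              # eigenvectors of the transpose
    f = vecs[:, 0].real                          # an eigenvector of T^t, i.e., a functional on V

    w = np.random.default_rng(2).standard_normal(3)
    w = w - (f @ w) / (f @ f) * f                # project w into W, so that f(w) = 0
    print(abs(f @ w) < 1e-12)                    # True: w lies in W
    print(abs(f @ (A @ w)) < 1e-10)              # True: T(w) lies in W as well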
Reply: Thanks for pointing this out. I fixed the typo (while perhaps introducing others!) and also removed problem 6 from the list of problems in §6.5.
Reply: The student who posted this comment came to my office and we had some discussion. If I recall correctly, f'(0) turns out to be some simple expression involving inner products. To say that it's 0 is to say that this simple expression vanishes. Accordingly, you get some identity involving inner products: two are equal, or maybe negatives of each other, or one of them is the sum of two others -- you get the idea.
Reply: Yes, thanks much for pointing this out!
Reply: There's nothing wrong with the concept, but the vector space of such functions might not be worthy of any special interest. It's the vector space Z*, where Z is the tensor product of V with W.
Reply: The latter, I believe.
Reply: The next homework will be due after Thanksgiving. It'll probably be the last homework as well. I'll make it up over the weekend and expect to post it by the end of the day on Sunday.
Reply: The homework problems were taken from old math department preliminary exams. What can make those exams hard is that there is no indication going in as to the nature of each problem. There can be very hard questions mixed in with very easy questions, silly questions with serious questions, and so on. The "computational" problem, for example, might be solvable by "pure thought" -- without any significant computation. Think about the characteristic polynomial and the generalized eigenspaces -- what can they be like? It's likely possible to find the canonical form by listing some a priori possibilities and then ruling out all but one of them. The commutation question is not one you'd expect, but there it was on the prelim, so now there it is on your homework! Sorry if it's silly. If you've done question #9 earlier, you can either do it again or cite the place where you did it before.
Reply: The problem has nothing to do with the field of complex numbers, but you do need to assume that the characteristic polynomial of T splits as a product of linear factors. If you took T to be a non-trivial 2x2 rotation matrix over the field of real numbers, the vector space R^2 would have no invariant subspaces except itself and 0. Thus there would always be a W'. Nonetheless, T has no eigenvectors.
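To see the rotation example concretely, here is a quick SymPy computation (the angle pi/3 is an arbitrary choice, as long as it isn't a multiple of pi): the characteristic polynomial has no real roots, so there is no eigenvector in R^2.

    import sympy as sp

    theta = sp.pi / 3
    R = sp.Matrix([[sp.cos(theta), -sp.sin(theta)],
                   [sp.sin(theta),  sp.cos(theta)]])
    t = sp.symbols('t')
    charpoly = sp.expand((R - t * sp.eye(2)).det())
    print(charpoly)                 # t**2 - t + 1, which has no real roots
    print(sp.solve(charpoly, t))    # a pair of complex-conjugate eigenvalues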
What's good about the problem as stated is that the optimal hypothesis is not presented. Thus the problem echoes the sort of situation that you might have in honest applications of linear algebra: you have to figure out an argument to prove what you need, and only at the end do you have the luxury of trying to sort out how general your argument is.
Reply: How about 10:45 to 12:45 today (Monday)?
Reply: If people come in between 2:30 and 3:30 today (Tuesday), I'll be happy to talk with them.
Reply: Yes, of course.
Reply: I'm going to have to read a lot of solutions starting this evening. I'm not all that eager to read what the reader wrote about 6.6.5(b). My attitude toward the problem, when I discussed it with students in office hours, was that I'd do out the real case on the board and leave the complex case for them to mull over by themselves. The complex case can't be seriously harder than the real case, right? If I have some time today, I'll write down the solution in the real case.
OK, look: if there are w_1 and w_2 such that
Reply: Yes, because of the holiday season.
Reply: Suppose that T is the projection of V onto X along Y. Then V is the direct sum of X and Y; each v is uniquely of the form x+y. The map T takes x+y to x. The null space of T is Y. The space X is the set of vectors that are fixed by the projection, i.e., the ones that are mapped back to themselves by the projection. So, as I said, V is the direct sum of the null space of T and the null space of T-I.

I'm amazed that only two or three of you chose to do problem 8. It was very easy!
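In case the decomposition still feels abstract, here is a throwaway numerical example in Python with V = R^3, X the xy-plane and Y the z-axis; the general case is exactly the calculation in the reply above.

    import numpy as np

    T = np.diag([1.0, 1.0, 0.0])                 # projection of R^3 onto the xy-plane along the z-axis
    print(np.allclose(T @ T, T))                 # True: T^2 = T

    v = np.array([4.0, -2.0, 7.0])
    fixed_part = T @ v                           # lies in the null space of T - I
    null_part = v - T @ v                        # lies in the null space of T
    print(np.allclose(T @ fixed_part, fixed_part), np.allclose(T @ null_part, 0))  # True True
    print(np.allclose(fixed_part + null_part, v))                                  # True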
Reply: I posted the grades around 90 minutes after this comment was posted.
Tue Dec 16 01:44:16 2003 : oops, it ate my brackets. let me try again, this time with inner product denoted as [ , ].
i think that there might be an error in one of the homework solutions: near the end of the solution for problem 6.6.5(b) in HW #13, the reader writes, "If [w1,w2]!=0, then choose c < -||w2||^2/Re[w1,w2]." But why can we divide by Re[w1,w2]? Why can't [w1,w2] be imaginary, in which case [w1,w2]!=0 holds but Re[w1,w2]=0? Or am I missing some easy observation?
Tue Dec 16 14:02:22 2003 : We are dropping one homework grade, right?
Tue Dec 16 21:12:20 2003 : according to page 398 of the text, I don't think the statement in the solution to number 2 of the final is correct.
"If T^2 = T, where T is a linear operator on a vector space V , then we know well that V is the direct sum of the null space of T and the space of vectors that are fixed by T."
Don't we only know that T is a projection?
Tue Dec 16 21:14:45 2003 : also for number 6, I think one of them should be T[v - w]
Thu Dec 18 11:28:51 2003 : will you be posting our grades like you did for 113?