ANSWERS TO QUESTIONS ASKED BY STUDENTS in Math 245, taught from my notes "An Invitation to General Algebra and Universal Constructions", /~gbergman/245, Spring 2014. ---------------------------------------------------------------------- You ask why, in the middle of the long paragraph on p.13, I say that \iota_T(x.y) would be x.y^{-1}, rather than (x.y)^{-1}. I say earlier in the paragraph that "We need to be careful". This example illustrates how a rule that one might naively propose for defining terms and term operations as strings of symbols and operations on those strings could go wrong. The rule in question defines \iota_T to simply append the symbol ^{-1} to whatever symbol one plugged into it; and as this example shows, it would not have the properties one wants. In the next paragraph I talk about using parentheses as part of one's string of symbols, which does give what one wants. (It uses more parentheses than the minimal number needed, but at least it works.) ---------------------------------------------------------------------- Your pro-forma question was why, on p.21, conditions (2.2.4) and (2.2.5) needed to be specified to be sure T/~ was a group; specifically, why they didn't follow from the other relations. The idea of your answer was right -- that instead of assuming that "~" is a relation of a naturally occurring sort, which one could expect to satisfy (2.2.4) and (2.2.5), one should think in terms of coming up with an "artificial" relation, which would have no reason to satisfy those conditions merely because it satisfies the others. One can, in fact, describe a concrete way of getting such "artificial" relations. Start with a relation ~_0 which does come from a map v of X into a group G; and then let ~ be an equivalence relation containing ~_0, gotten by choosing two equivalence classes [p] and [q] of ~_0 and "joining them into one" under the new relation. 
You should find it easy to show that ~ will then satisfy (2.2.1)-(2.2.3) and (2.2.6)-(2.2.8), but, assuming ~_0 had more than just the two equivalence classes [p] and [q], that ~ will not satisfy (2.2.4), so that (2.2.9)-(2.2.11) will not give well-defined operations on T/~. You also asked whether (2.2.4) (with the other conditions) would at least imply (2.2.5). You thought it probably wouldn't; but in fact it will. One can show this using the fact that from (2.2.4) and the other conditions, T/~ acquires a structure with a well-defined multiplication operation satisfying the consequences of (2.2.1)-(2.2.3). It follows from (2.2.3) that for each element [p] of that structure, [p^{-1}] will be an inverse; one then argues that inverses must be unique. So what is the point of including (2.2.5)? To give a procedure which, without such subtle arguments, shows the existence of free groups, and which, incidentally, can be used with other sorts of algebraic structures to which those subtle arguments might not be applicable. ---------------------------------------------------------------------- You ask whether, if we replace "compact Hausdorff" with "locally compact Hausdorff" in the definition of the Stone-Cech compactification (p.85), we get a "local Stone-Cech compactification" construction, which turns Q into R. Unfortunately, no. To see this, consider any irrational number \alpha, and let f be the inclusion-map of Q into R - \{\alpha\}. The space R - \{\alpha\} is locally compact (every point of that space has compact neighborhoods), but it has no point "where \alpha should go", so the map f does not factor through the inclusion of Q in R. Intuitively, this shows that in making Q locally compact, there is no need to insert an element "where \alpha should go". Since \alpha was an arbitrary irrational number, there is in fact no extra point that "has to be inserted". 
Yet if we don't insert any point, we don't get local compactness; so I don't think there is a universal local-compactification. For a little more intuition, let \beta be the cube root of 2. Let h: Q --> Q be the function such that h(x) = x for x not between \beta and \beta^2, but h(x) = 2/x for x in that interval. This self-homeomorphism of Q turns that interval upside-down, while leaving the rest of Q unchanged, showing that the topology of Q is far from determining its order-structure. Hence there can be no natural way to construct from the topological space Q the space R, whose topology does almost determine its order-structure. (It determines it up to reversal.) However, if we regard Q as a metric space, then that metric space certainly does determine the metric space R, namely, as its completion; and the completion construction can indeed be regarded as a universal construction on metric spaces. ---------------------------------------------------------------------- Regarding Definition 4.1.3 on p.96, you ask whether there is a reason why we speak of "isotone maps" of partially ordered sets, rather than "homomorphisms". When we get to category theory (Chapter 6), we'll introduce the general term "morphism", covering the various sorts of maps that come up in different areas of mathematics: homomorphisms of algebras, isotone maps of partially ordered sets, continuous maps of topological spaces, etc. Till then, we are using the traditional terms; and "homomorphism" is traditionally used for functions that respect operations on algebras. If we have entities that mix algebraic and non-algebraic structure, such as topological groups, we may say "continuous homomorphism" or "homomorphism as topological groups"; but one rarely uses "homomorphism" when there are not some operations to be respected. 
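To make the distinction concrete, here is a small Python sketch (the function names and the example posets are my own, not from the notes): it checks whether a map between finite ordered sets is isotone, i.e. order-preserving, which is the condition that plays the role a homomorphism plays for algebras.

```python
def is_isotone(f, leq_P, leq_Q, elements):
    """Return True if x <=_P y always implies f(x) <=_Q f(y)."""
    return all(leq_Q(f(x), f(y))
               for x in elements for y in elements
               if leq_P(x, y))

# Example: {1,...,12} under divisibility, mapped into the integers
# under the usual order.  The identity map is isotone, since m | n
# implies m <= n for positive integers; negation reverses order.
P = range(1, 13)
divides = lambda x, y: y % x == 0
usual   = lambda x, y: x <= y

print(is_isotone(lambda n: n, divides, usual, P))   # True
print(is_isotone(lambda n: -n, divides, usual, P))  # False
```

Note that nothing here asks the map to respect any operations; only the order relations on the two sets are consulted.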
---------------------------------------------------------------------- You ask about the motivation for the concept of Gelfand-Kirillov dimension, developed in Exercises 4.2:2-9 on pp.101-103. It originates in ring theory rather than monoid theory; there the growth function is defined as in the last paragraph of p.102. If one looks at easy examples such as polynomial algebras k[x], k[x,y], etc., one finds that the first grows linearly in i, the second quadratically, etc.; and for more complicated structures, commutative or noncommutative, one encounters similar patterns -- the growth function tends to grow either as i^d for some d, or exponentially in i (e.g., for a free associative algebra). GK(R) is defined so as to "capture" the number d such that R grows like i^d if there is one; it gives infinity if one has exponential growth. If one tries, one finds that one can construct noncommutative rings for which GK(R) is neither an integer nor infinity; but it is still some real number. Exercise 4.2:8 shows that the same growth rates occur for algebras as for monoids, so in a text like this, which assumes little ring-theoretic background, it is convenient to devote most of the development to the monoid case. ---------------------------------------------------------------------- You note that the well-ordered sets (p.115 et seq.) can be characterized as the totally ordered sets all of whose reverse-well-ordered chains are finite, and ask whether there is a nice characterization of the totally ordered sets all of whose reverse-well-ordered chains are countable, noting that this class includes the ordered set of real numbers. I don't know. I wonder whether they are those that can be embedded in a lexicographic product \alpha\times\mathbb{R}, where \alpha is an ordinal, and \mathbb{R} is the ordered set of real numbers? 
(Here one could replace \mathbb{R} by, say, the open or closed real unit interval (0,1) or [0,1], since the former is order-isomorphic to \mathbb{R}, while \mathbb{R} and [0,1] are mutually embeddable.) ---------------------------------------------------------------------- Regarding Exercise 6.7:2 (p.203), you ask why this doesn't show that the category theoretic notion of an isomorphism between objects deviates from the normal mathematical notion, since in concretization T, T(f) is a bijection, but in the others it is not. I think you're assuming that the "normal mathematical notion" of an isomorphism is a homomorphism that is bijective on the underlying set. But that works only for those sorts of objects where such maps have the property that their set-theoretic inverses are also morphisms. For a case where that is not so, see Exercise 4.1:1 (p. 94). You'll see there that the concept of isomorphism agrees with the category-theoretic version, not with that of being a bijection on underlying sets. ---------------------------------------------------------------------- > In proposition 7.9.11 on p.289, you write of categories being > generated by a set of morphisms. Is this more generally a useful > way of thinking about a category, or is it mostly just the condition > which makes the proposition true? ... I think it can be useful when the category is used to "index" something; e.g., when it is a diagram category over which we will take a limit or a colimit. For instance, of the two categories illustrated by centered displays on p. 178, the first one is generally pictured without showing the diagonal arrow, because that is the composite of other arrows, and the second one is shown (there and in general) without showing the composites of the short arrows, for the same reason. 
And the point is not merely that one gets a less cluttered picture, but that in defining a cone to or from such a category, it is enough to make sure the arrows of the cone make commuting triangles with the generating morphisms; it then follows that they make commuting triangles with all morphisms. Also, people sometimes look at how results on monoids can be generalized to results on categories; and how results on rings can be generalized to results on Ab-categories; and since generation properties are of interest in those fields, they should be of interest in the category-theoretic generalizations. However, I have not seen the concept used except by myself -- in this section, and a paper I wrote based on it. ---------------------------------------------------------------------- In connection with the observation on p.284 that direct limits commuting with products is what allows us to make a direct limit of algebras an algebra, you point out that a family of operations on a set can be regarded as a map from a coproduct of products of the set into the set, and you ask whether the fact that direct limits respect coproducts is useful in this connection. Interesting question. I don't see it as directly important -- it seems easiest to just regard each of the family of operations as carrying over to a direct limit. But if we consider a construction that does not respect coproducts, such as that of direct product, we get some useful insight. Consider, for utmost simplicity, algebras consisting merely of a set with two unary operations, \alpha and \beta. If X and Y are two such structures, then the direct product of their underlying sets, |X| x |Y|, can be made an algebra of the same sort in the obvious way, writing \alpha(x,y) = (\alpha(x),\alpha(y)) and \beta(x,y) = (\beta(x),\beta(y)). 
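The componentwise construction just described can be sketched in a few lines of Python (the encoding of an algebra as a triple is my own, purely for illustration): each algebra is a carrier set together with two unary operations, and the product algebra applies each operation coordinatewise.

```python
# An algebra of the sort described: (carrier set, alpha, beta),
# where alpha and beta are unary operations on the carrier.
rot = {"a": "b", "b": "c", "c": "a"}
X = (set(range(6)), lambda x: (x + 1) % 6, lambda x: (2 * x) % 6)
Y = (set("abc"),    lambda y: rot[y],      lambda y: "a")

def product(A, B):
    """Direct product: operations act coordinatewise, as in the text."""
    (SA, aA, bA), (SB, aB, bB) = A, B
    carrier = {(x, y) for x in SA for y in SB}
    alpha = lambda p: (aA(p[0]), aB(p[1]))  # alpha(x,y) = (alpha(x), alpha(y))
    beta  = lambda p: (bA(p[0]), bB(p[1]))  # beta(x,y)  = (beta(x),  beta(y))
    return carrier, alpha, beta

S, alpha, beta = product(X, Y)
print(alpha((2, "a")))  # (3, 'b')
print(beta((2, "a")))   # (4, 'a')
```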
But regarding the combination of \alpha and \beta as maps X \coprod X --> X and Y \coprod Y --> Y, we see that together they induce a map (X \coprod X) x (Y \coprod Y) --> X x Y, i.e., (X x Y) \coprod (X x Y) \coprod (X x Y) \coprod (X x Y) --> X x Y, i.e., four rather than two unary operations on X x Y, which turn out to be (x,y) |-> (\alpha(x),\alpha(y)), (x,y) |-> (\alpha(x),\beta(y)), (x,y) |-> (\beta(x),\alpha(y)), (x,y) |-> (\beta(x),\beta(y)). This kind of phenomenon is of interest -- e.g., if M and N are R-modules, though we usually make M x N an R-module, sometimes we prefer to make it an (R x R)-module instead. ---------------------------------------------------------------------- > Does the term "derived operation" defined on p.330 have anything > to do with derived functors and derived categories? ... I don't think so. Vague everyday words like "normal", "derived", "regular", etc. get borrowed over and over again into mathematics, with generally unrelated meanings. ---------------------------------------------------------------------- You ask about many-sorted algebras, mentioned in the last paragraph of p.351. These are very much like 1-sorted algebras. Instead of an underlying set, one has an S-tuple of underlying sets, where S is the set of "sorts"; and the arity of each operation is a list (with repetitions allowed) of the "sorts" of the arguments, together with a specification of the "sort" of the output. When one defines a free algebra, instead of having a single free generating set, one has an S-tuple of generating sets, so that the algebra is free on a family of generators of specified sorts. One of the most natural examples is given by graded rings. 
Such an object (graded, let us say, by the natural numbers, for simplicity) is usually described in ring theory as a ring R that is given with a direct sum decomposition R = \sum_i R_i, the summand R_i being called the "homogeneous component of degree i", subject to the condition that any product of an element of R_i and an element of R_j is an element of R_{i+j}. This definition is adequate, but it is really most natural to regard the graded ring as a system of abelian groups R_i with multiplication maps R_i x R_j --> R_{i+j} satisfying appropriate identities. In ordinary one-sorted algebras, people can (although I don't like to) exclude the empty algebra, and make ad hoc definitions to get around the difficulties this produces, without losing a nontrivial amount of information about the theory of such algebras. But in a many-sorted algebra, any subset of the set of sorts may be empty, so requiring that every sort be nonempty loses a nontrivial amount of information. (Though this is not illustrated by graded rings, since the additive structure of each R_i leads to a zeroary operation with output "0_i" in each R_i.) There is an article about some of the resulting complications in the theory, "The point of the empty set" by Michael Barr, Cahiers Topologie Géom. Différentielle 13 (1972), 357-368, MR 48 #2216, though I haven't read it. (From the MR review, it requires the theory of "tripleability", which we haven't covered.) ----------------------------------------------------------------------
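The graded-ring example above can be sketched in Python as a many-sorted structure (the encoding is my own, chosen only for illustration): here each sort R_i is the degree-i component of Z[x], so an element of R_i is just the coefficient of x^i, tagged with its degree. Addition lives inside a single sort; multiplication is a family of maps R_i x R_j --> R_{i+j}.

```python
def add(p, q):
    """Addition is only defined within a single sort R_i."""
    (i, a), (j, b) = p, q
    assert i == j, "can only add elements of the same homogeneous degree"
    return (i, a + b)

def mul(p, q):
    """Multiplication map R_i x R_j --> R_{i+j}: x^i * x^j = x^{i+j}."""
    (i, a), (j, b) = p, q
    return (i + j, a * b)

# (3x^2) * (5x^3) = 15x^5: a degree-2 element times a degree-3 element.
print(mul((2, 3), (3, 5)))  # (5, 15)
print(add((2, 3), (2, 4)))  # (2, 7), i.e. 3x^2 + 4x^2 = 7x^2

# Each sort has a zeroary operation 0_i, so in this example no sort
# can be empty, as noted in the parenthetical remark above.
zero = lambda i: (i, 0)
print(zero(4))              # (4, 0)
```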