On the other hand, I recently came to realize that there’s a more technical sense in which one could argue ‘math is a language’. And if my previous post might have made some people turn up their noses at the handwaving, philosophical remarks, this time we are talking actual mathematics, or at least metamathematics.

Traditionally, math is thought of as founded on sets. This means that the entities you talk about in math are assumed to be sets. In the most orthodox set theories, everything really is a set, even things we don’t usually think of as sets, for instance *numbers*. This is called material set theory, and I think of it as axiomatizing the ‘atoms’ of which mathematical matter is made. Since there’s no dialectic going on between the atoms and the forms they make, the former are always the same, immutable, and blind to the bigger picture.

Material theories are not bad *per se*, though I would argue they are far from mathematical practice, i.e. we do not think of *everything* as sets. Some things are sets, sure, but some just aren’t. Numbers do not make sense as sets. Maybe counting numbers do. But real numbers? Whose intuition is grounded in the concept of real numbers as Dedekind cuts? I know of no one.

This is made even more apparent by the facts that (a) most people are basically oblivious to this and (b) nevertheless, we really don’t care about ‘the structure of set’ on most of the objects we use. For example, when you describe a map between, say, rings, you may prove it is a well-defined map of rings, but I’ve never seen anyone check it is a well-defined map of sets as well. It’d be trivial, of course. Yet nobody even mentions it, which lets us build a case against a material foundation as the natural foundation for mathematics.

That said, facts (a) and (b) can also be read in favor of material views. In fact a good foundation ‘stays out of the way’, so to speak, meaning it doesn’t obstruct the study of your object of research with annoying technicalities or bookkeeping. Can the same be said, say, of type theory?

Both these cases, however, have a common point: mathematical practice is usually not concerned with foundations, as long as they are solid enough not to fail us, and as long as they provide the necessary tooling to carry on working on the objects we are interested in. In other words, we could say that most of mathematics is ‘foundations invariant’, i.e. it really isn’t bothered by a switch from, say, ZFC to NBG.

What is preserved, then, in changing foundations? The answer is quite easy once we concede ourselves sufficient meditation. **It’s language**.

The point is that *soundness and power of tooling are properties of the language we use to describe mathematical theories*. Sets have a powerful and (hopefully) sound language, which allows mathematicians to go about undisturbed much of the time. But since mathematicians never endorse sets explicitly, we arrive at the conclusion that if we were to switch to an equally powerful foundation, nobody would notice.

Realizing this was quite liberating. Sets impose quite a strong ontological view on the universe of discourse of mathematics, so it is freeing to see that mathematics is actually independent of them. It is awkward to think mathematics can only be made with sets, that algebra, geometry, analysis and so on are just ‘emergent properties’ of sets. Why would it be so?

Instead, it is now clear that theories are independent and meaningful on their own. Given a sufficiently powerful foundation, a theory can thrive on its own.

All of this becomes more contentful in light of **topos theory**. A topos is a category whose internal language is powerful enough to support many of the theories of everyday mathematical practice. The major drawbacks of a general topos are (1) the lack of nonconstructive principles such as LEM or AC and (2) the lack of infinite objects like the natural numbers. These might strike the reader as too big an obstacle to ever take seriously the option of moving from sets to other toposes, yet this is nonsense: we cannot be hamstrung by having more choice than we have now. If we need infinite objects, we just postulate them (say, a natural numbers object). If we seriously need LEM or AC, we assume them as well.

I’m not arguing for rebasing all of mathematics on an arbitrary topos, or for structural set theories like ETCS. I’m just noticing a simple fact: we mathematicians talk, and the objects we deal with are constituted, first of all, by our discourse.

It might seem weird, but to a pure mathematician ‘apply’ sounds like ‘spoil’. Applications are a kind of low-rank pursuit for a mathematician, something ‘easy’ and very much unexciting. I happily embarked on this line of reasoning very early in my studies, giggling about the superb degree of purity my career was going to have.

That’s also something that the general public, *the muggles*, seem to get. Mathematics is all about abstraction, and abstraction means getting far from reality. The higher we mathematicians soar into abstraction, the more we enjoy it, refreshing ourselves with pristine, unspoilt Platonic ideas.

The starting point of this reflection is the existential question of math, that is: *is this of any utility whatsoever?*

It’s actually a causality issue: **if we stopped pursuing very abstract and very theoretical mathematics, would we really miss any practical application?** When is abstraction too much?

I often struggled to find a single example of application for more math than not. And even if its charm still has a big effect on me, it started to be not enough. Of course you start to deal with these questions when you get in touch with really whimsical stuff like ‘pointless spaces’, whose very name hints at something *really* difficult to apply to anything whatsoever.

So, why all the fuss?

I guess the answer lies more in the form than in the substance. A good example to illustrate my point comes from category theory. It was conceived as a taxonomical tool for algebraic discourse, to formalize *general abstract nonsense*. Yet, it turns out, categorical concepts pop up everywhere. ‘Adjoint functors arise everywhere’, as Mac Lane put it. Would you ever see them if no one had ever defined what a functor is?

In the same fashion, a big chunk of mathematics might be justified just by appealing to its form. Topology is useful because topological concepts are ubiquitous in other mathematical fields. Brouwer’s fixed point theorem makes a lot of sense when stated in terms of morphisms of topological spaces, and when proved using the classical algebraic topology argument. Can you imagine how quirky it would sound if stated without any reference to topology?

This made-up example is actually what happened with the Abel-Ruffini theorem: Ruffini concocted an unbearably long proof of the unsolvability of the quintic (so long, almost no one was brave enough to read it all). Fast forward less than 50 years, and Abel’s proof is neat and short: why? Because it used powerful concepts from the newborn science of abstract algebra, which made much more evident what the whole question was about: the structure of the symmetric groups.

**The moral is, a good part of math is simply there to make other chunks look reasonable** [0]. Category theory is the royal example: as put by Freyd, ‘the purpose of category theory is to show that what is trivial is trivially trivial’.

This insight leads us to a much deeper one: **mathematics is actually a language**. What I mean by *language* is a set of symbols together with rules for assembling them into meaningful messages.

Clearly mathematics *has* a language [1], yet I’m arguing here that mathematics *is* itself a language.

The main observation is that mathematics is highly hierarchical and fractal-like. Higher mathematics is of course ‘made of’ lower mathematics (e.g. you need linear algebra to grasp tensor algebra), but at the same time any significantly developed mathematical theory finds itself mirrored in some other, either completely and rigorously or just partially (e.g. the duality between geometry and algebra). Undoubtedly, finding similarities between different areas of mathematics is considered a highly desirable, elegant and fruitful achievement [2].

**The symbols of mathematics are its own concepts**, which should be intended in a broad and elastic sense: ‘group’ is a concept, and so is the subject of topology as a whole. A better word is *ideas*: groups embody the idea of modeling symmetries algebraically, while topology is the idea of studying a space by defining what is ‘near’ a given point [3]. Theories are ideas, too: e.g., Morse theory is the idea that the critical points of a function on a manifold must tell us something about its topology.

**The rules of the language of mathematics are simply any meaningful ways to put mathematical ideas together**. This too is quite blurry, so let’s give some examples: singular homology is a *composed idea*, made from the idea of probing a (topological) space with maps from simplices and the idea of building an algebraic gadget out of this process. Both ideas can be generalized separately, to get, respectively, homotopy groups (we study maps from spheres) and other homology theories (we apply the same algebraic idea to different constructions, e.g. cubes).

Of course homology theory is also an idea itself. This is the power of mathematics as a language: any composed idea can itself become a ‘simple’ idea upon which we can build more complex ideas, and so on. I believe this explains both why abstraction is so powerful and how mathematicians can work on increasingly advanced topics as easily (or with as much difficulty) as an undergraduate works on linear algebra: both are just surfing the wave of recursive complexity.

Up to this point, I seem to have described not mathematics but a wider generalization of it: *thought*. We need to ensure our language is tied down to a formal, rigorous system, so that a ‘successful’ idea is one which can be morphed into a provable statement, or at least into a statement we can judge logically. This is what distinguishes the idea of considering the zeroes of the Riemann zeta function as eigenvalues of a suitable (self-adjoint) Hamiltonian from actually proving the Riemann Hypothesis.

This view, moreover, explains my previous claim that some math *is just about math*, just as some parts of English are just about grammar (like the word ‘grammar’ itself). It also says something about why mathematics seems so abstract: its composition rules produce the fractal structure of the mathematical edifice, quickly moving into ever more involved concepts and long chains of generalizations. Mathematics has the ability to summon a whole theory just by observing a particular property of an object: algebraic sets satisfy the properties of a lattice of closed sets? Behold as topology rushes in! Suddenly, you’re speaking about compactness and separation in a context which was mostly ‘polynomials and ring algebra’.

To draw a fictional comparison, picture a (spoken) language in which *entire debates* are condensed into a single word, which then proceeds to be used in new debates. Clearly meanings add up, and you start to feel dizzy as a ten-word conversation spirals out into a twenty-volume reference to previous discussions. In some sense we do this in everyday language too, but in a far less meticulous way than in math: nobody (actively) discusses the validity of Euclid’s fifth postulate anymore, while the same can’t be said of communist theories (notice both were ‘clarified’ around the same time!). In a sense, the rigour imposed upon mathematical ideas makes the whole edifice solid and trustworthy. This is a luxury not even the hard sciences have.

This makes mathematics extremely unworldly, because it sits various strata of meaning above ‘real stuff’, yet phenomenally powerful. Mathematicians routinely handle behemoth ideas by hiding them under an even more gargantuan pile of, let’s face it, abstraction. This can be exploited to reflect a similar feature of reality: **things are simple in theory, not so much in practice**.

This is not extraneous to science, intended as the human endeavour of modeling reality with math: by its very definition, science never claims a perfect model of reality, just a working, ‘good enough’ one. As science progresses, so does the accuracy of its models. And we can only do this by building on previous models, using smaller and smaller discrepancies from the old ones to guide the introduction of new ones. Naturally, models tend to get less straightforward with each iteration, since to capture a phenomenon more faithfully you need to consider more complex interactions, higher-order effects, and nitty-gritty details. **Hence to handle complex situations, we need to be able to work with complex theories**.

Wrapping up then: yes, we’ll miss a lot if we stop pursuing pure math. The feeling of dissatisfaction with ever more abstract math is a symptom of something else: sour grapes. Instead of facing the daunting task of modeling complex phenomena, we prefer to turn around and pretend applied math is some trivial and inferior endeavour.

[0] Another moral is that concepts in mathematics can’t be thought of, let alone taught, as independent chunks. They need to be properly motivated (and historical background is great at this), and inserted in their rightful position in the mathematical edifice.

[1] Math has languages on both formal and informal levels. On one hand, every mathematical proposition can be regarded as written down in a formal system of some sort; on the other hand, mathematicians use a common language made up of naming conventions, common notations, canonical subdivisions of disciplines, and a very distinctive prose style.

[2] We could go as far as saying any mathematical progress can be decomposed into a *vertical* component (‘going deeper’ into the subject, or ‘building higher’) and a *horizontal* one, linking the subject to other areas. This picture fits nicely with the informal notion of the ‘mathematical edifice’.

[3] I’m referring here to the definition of a topology using neighbourhoods. Other definitions also embody specific ideas about which aspect of ‘being a (topological) space’ should be fundamental. The very fact we have strikingly different yet equivalent definitions is highly interesting, and makes topology a useful and strong theory. In the fractal analogy, topology exhibits a lot of self-similarity.

Here’s a Bash script distilling the lessons I learned. The starting point is obviously the (quite cumbersome) Agda documentation. I couldn’t manage to run anything installed from `apt`, therefore I went with the first method.

```bash
apt-get update
# libraries necessary to run Cabal later (I found these names by 'trial and error', YMMV)
apt-get install zlib1g-dev libncurses5-dev libgmp3-dev libnuma-dev -y
# install Haskell
apt-get install ghc ghc-prof ghc-doc -y
# install Cabal
apt-get install cabal-install -y
cabal update
cabal install Cabal cabal-install
# install Agda and its dependencies
cabal install alex happy Agda
# install the standard library
# (from https://github.com/agda/agda-stdlib/blob/master/notes/installation-guide.md)
mkdir ~/.agda
cd ~/.agda
wget -O agda-stdlib.tar https://github.com/agda/agda-stdlib/archive/v1.0.1.tar.gz
tar -zxvf agda-stdlib.tar
cd agda-stdlib-1.0.1
cabal install
cd ..
# this allows references to the standard library in every Agda project
echo "~/.agda/agda-stdlib-1.0.1/standard-library.agda-lib" > libraries
echo "standard-library" > defaults
# install Emacs
apt-get install emacs -y
# setup the Agda mode for Emacs
agda-mode setup
agda-mode compile
```

I take this opportunity to say why I’m installing this.

I’m currently taking a course in type theory, and it ended with two guest lectures by prof. Philip Wadler, who taught us the basics of Agda. He followed his (and Kokke’s) new book *Programming Language Foundations in Agda*. It’s an “interactive” book, written in “literate Agda”, which is a way to write notebooks with Agda snippets inside.

The lectures were great and ended with prof. Wadler performing a funny stunt:

Luckily, LaTeX provides a pair of commands, `\footnotemark` and `\footnotetext`, to disjoin the two parts of a footnote declaration. The first produces the superscript, and the second attaches the text of the footnote to the last superscript. In this way you just drop a `\footnotemark` wherever you need to, and later specify the text of the footnote with `\footnotetext`.

So far, so good. The problem arises when you need to do this for multiple footnotes.

In fact, `\footnotetext` associates the text you pass to it with the last `\footnotemark`, hence if you have two footnote marks and then provide two footnote texts, both will bind to the last mark and the first will remain orphaned.

The solution is usually some awkward hotfix to the counters, but today I found a more elegant way. To be fair, we still have to manually intervene on the counters, but in a much safer way, as we’ll see. The trick is the following: label the footnote texts with their own `\label`, and use `\getrefnumber` to bind them to the marks retroactively. To encapsulate the strategy, I wrote myself the following macros:
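A minimal pair of such macros might look like this. This is a sketch rather than a definitive implementation: the command names are placeholders of mine, and `\getrefnumber` comes from the `refcount` package (so you need two compilation passes for the numbers to resolve).

```latex
\usepackage{refcount}

% \deferredfootnotemark{key}: a superscript bound to the footnote
% text labeled <key> (resolved on the second compilation pass)
\newcommand{\deferredfootnotemark}[1]{\footnotemark[\getrefnumber{#1}]}

% \deferredfootnotetext{key}{text}: step the footnote counter exactly
% once, in a predictable way, then label it so the mark can find it
\newcommand{\deferredfootnotetext}[2]{%
  \refstepcounter{footnote}%
  \footnotetext[\value{footnote}]{#2}%
  \label{#1}%
}
```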

which can be used like this:
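Usage then reads, for instance (again with placeholder names of mine for the two macros):

```latex
This claim needs a source\deferredfootnotemark{fn:source} and this
one a caveat\deferredfootnotemark{fn:caveat}.

% later, wherever is convenient (e.g. after a figure or a table):
\deferredfootnotetext{fn:source}{See the bibliography.}
\deferredfootnotetext{fn:caveat}{Only in characteristic zero.}
```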

If you think the lengthy command names are as disruptive as the footnotes they replace, just rename them with your favorite abbreviation (like `\dfm` or whatever).

I said this is safer than other methods since the counters are manipulated just once, in a totally predictable way, and just when needed (when we specify the footnote text). Moreover, as the mark-to-text binding is now explicit and named, it is very hard to mess up.

I’m probably more excited than I should be, but I hope it will help (and satisfy) someone else, too!

Anyway, in Spivak’s book the following exercise was proposed:

Let $C$ be the Cantor set. Show that $\mathbb{R}^2 \setminus C$ is homeomorphic to the surface:

We’ll call it . We recall that the Cantor set is the “limit” of the following iteration: starting from the unit interval, delete the middle third of each connected component. So the first three iterations are:
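The iteration is easy to mimic programmatically. Here’s a quick Python sketch of it (my own illustration, not part of the exercise), using `Fraction` to keep the interval endpoints exact:

```python
from fractions import Fraction

def cantor_iterations(n):
    """Intervals remaining after n middle-third deletions of [0, 1]."""
    intervals = [(Fraction(0), Fraction(1))]
    for _ in range(n):
        next_intervals = []
        for a, b in intervals:
            third = (b - a) / 3
            next_intervals.append((a, a + third))   # keep the left third
            next_intervals.append((b - third, b))   # keep the right third
        intervals = next_intervals
    return intervals

# after n steps: 2**n intervals of total length (2/3)**n
assert len(cantor_iterations(3)) == 8
assert sum(b - a for a, b in cantor_iterations(3)) == Fraction(8, 27)
```

The total length shrinking to zero is why, in the limit, only isolated-looking points survive.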

We’ll call it . Remember that **the Cantor set is comprised only of the points we get at the end**, and indeed it is made of points and not thick lines like in the figure. So is a plane with a peculiar set of holes cut in it (we can assume to be embedded in the line , but feel free to choose your favorite embedding).

Anyway, it’s easy to spot the similarity between these two structures. It is as if represented the process of making . This is somewhat true. First, let’s shrink to an open disk around our chosen embedding of . They’re homeomorphic, so no big deal. Yet this helps us with the visualization.

Let’s start from the whole open disk and remove the points of the first iteration of the Cantor set: once we make some room around the holes, we get

Notice the boundaries are not part of our surface, since we started with an open disk (hence no outer border) and the inner boundaries were victims of the piercing too. We then invoke the power of the third dimension and start to deform our space as to lift the holes from the plane we were stuck in until now, at the same time shrinking the outer border as to get a more symmetrical shape. We get the following surface, called a *pair of pants*:

Voilà! Notice again how this surface describes the first iteration in the construction of the Cantor set: from the unit interval (= one big hole) we remove the middle third and get two smaller intervals (= two holes). Since the construction is somewhat recursive[2], this means that to get to the second iteration, we can just glue the waists of two more pairs of pants to the cuffs of the one we already have.

Can you see where this is going? Exactly to ! I already find this amazing, yet something even greater is lurking just around the corner.

We know the plane and the sphere are really close topologically speaking: **they differ just by a single point**. What I mean is

where denotes the Alexandrov compactification of a space. That the isomorphism above holds is best seen the other way around: if we remove a point from we can then widen the puncture to a big hole, flatten everything out, and we are left with an open disk, which, as we’ve already seen, is just a scaled-down version of . **The Alexandrov compactification of is the opposite procedure**: we add a point to the plane, and we wrap everything up to get a sphere (imagine reversing the previous procedure until you get to the punctured sphere: adding the new point fills the puncture).

But then

So compactifying should give us a sphere with a Cantor set removed. Let’s see this: take and close the top hole (the waist of the first pair of pants we used) with a disk. This corresponds to adding the “missing point” of , therefore what we get is indeed a sphere minus a Cantor set. The second and third equivalences are apparent.

To get a whole sphere now, we need to close the “ends”, which has to be done carefully in order to be sure we’re doing sensible things. The first problem is, they’re out of reach: we need to bring them back from infinity (stacking pants brought us very far!). To do this, imagine shrinking each “level” of , for example by making each level exactly half as big as the previous one. Then the total height of will be finite: so all those ends are finally within reach! Even better, the ends have become just punctures, thus the Cantor set falls perfectly into place to fill them. In the end we get the equivalence

Another way to see this is to “open” the tree by moving the two main branches of on opposite sides[3]. Then closing the ends by rescaling, as we did before, we get a branchy surface which, however, is closed and simply connected (think about a “finite” version of it to convince yourself), hence homeomorphic to a sphere.

But what if we moved the branches while we do this? I mean, there’s nothing that forces us to keep laid out straight; we could curl it so as to tangle the ends on the right with the ends on the left. Actually, if we’re careful enough, we can tangle them in such a way as to form a link! See this:

We happened to have built *Alexander’s horned sphere:*

This pathological (and slightly eerie) space is a famous counterexample to a generalization of the Jordan-Schoenflies theorem:

The complement of any simple, non-degenerate loop on the sphere has exactly two connected components, both homeomorphic to a disk.

Which is the (apparently[4]) pretty obvious statement that cutting a sphere in half will give you two (deformed) disks. But when you translate “simple, non-degenerate loop” into the equivalent “an embedding of ”, the statement begs to be generalized:

The complement of any embedding of into has exactly two connected components, both homeomorphic to .

Yet the horned sphere shows this generalization is doomed to fail: while its complement has indeed two connected components[5], the unbounded one (the exterior) is not homeomorphic to the three-dimensional ball. The reason is that we successfully tangled the two ends of so as to make them inextricable: if you wind a loop around one “horn” (a branch of ), there’s no way to make it slip past the tangle:

Then we cannot hope to shrink it to a point; thus we’ve shown the exterior of the horned sphere is not simply connected, while is.

I actually learned about Alexander’s horned sphere’s role as a counterexample in Topology class two years ago, yet the teacher didn’t say *why* it was a counterexample. So when today I realized the connection to Problem 17, I was deeply pleased. And since I never would have imagined such a simple, conceptual proof, I found it worthy of a post here. I hope you enjoyed it as well!

[0] A *homeomorphism*, or isomorphism of topological spaces, is a bicontinuous transformation, i.e. a continuous deformation whose inverse is also continuous. This means: no cuts/rips, no gluing, no “complete thinning out” (while a rod is contractible to a segment, the contraction can’t be inverted, so it is not a homeomorphism; it is instead a *homotopy equivalence*).

[1] In mathematical lingo, an (n-)*sphere* is the boundary of an (n+1)-*ball*, which is instead intended to be solid. To clarify: an orange is a 3-ball, while its peel is a 2-sphere.

[2] *Monadic* would be a better word.

[3] Because it’s an infinite tree, this is the same thing as gluing to a copy of itself along the waist, which, interestingly, corresponds to making a sphere by gluing two open disks along their boundary.

[4] It is indeed one of those very frustrating situations in which something that looks very simple is actually quite tricky to prove. The difficulty comes from the fact that a continuous curve can nonetheless be very pathological: jagged, curled up, non-rectifiable, whatever. Assuming the loop to be piecewise linear or smooth, the statement becomes indeed quite easy to prove. Yet continuity alone is a very short lever to push on.

[5] This is the Jordan part of the Jordan-Schoenflies theorem, which does generalize to higher dimensions.

One of my New Year’s resolutions is to write more on this blog (at least weekly, says the list), so here I am. My biggest impediment in doing so has been the feeling of incompetence about a lot of the stuff that interests me, hence my good intentions crashed against the Impostor Syndrome wall even before being abandoned in the ‘Drafts’ tab of WordPress (which is by now very much akin to a graveyard).

I think the solution should be writing about things I actually feel competent about, such as undergraduate math. And since I’d like my blog to be about my ‘original’ ideas, I think I’ll write about some different perspectives on some of the topics that come to my attention.

Since I’m giving a lot of private lessons lately, mainly to engineers trying to pass Calculus, I’ll start with some insight on something that, at the time, perplexed me quite a lot: **how come integrals such as**

$latex \displaystyle \int \frac{dx}{x^2 - 1}$

**and**

$latex \displaystyle \int \frac{dx}{x^2 + 1}$

**have strikingly different results?**

Let’s see. The first, whose denominator has real roots, can be solved by using *partial fraction decomposition*: we proceed by first factoring the denominator

$latex \displaystyle x^2 - 1 = (x-1)(x+1)$

and then we postulate that

$latex \displaystyle \frac{1}{x^2-1} = \frac{A}{x-1} + \frac{B}{x+1}$

for some constants $A, B$ to be determined with an easy linear system[0]; we get $A = \tfrac12, B = -\tfrac12$ and the integral is magically simplified to

$latex \displaystyle \int \frac{dx}{x^2-1} = \frac12 \ln\left|\frac{x-1}{x+1}\right| + C.$

Cool!

What about the second one? Well, there’s no hope of factoring the denominator as we did, since it is an irreducible polynomial. Indeed, we know this ‘immediate’ integral:

$latex \displaystyle \int \frac{dx}{x^2+1} = \arctan x + C.$

But **why on Earth does changing a single number bring us to a completely different and seemingly unrelated part of mathematics?**

What if I told you *logarithms and the inverse tangent are both part of the same mathematical conspiracy*? What if I told you there’s a world beyond what you know, where the two are just two sides of the same coin? And what if I told you the key that unlocks this door is the same one that makes every polynomial reducible?

Enter complex numbers. Using the magical powers of the imaginary unit, we can dispense with the distinction between reducible and irreducible polynomials: the polynomial above now factors as $latex x^2 + 1 = (x-i)(x+i)$ and thus we can proceed with (complexified) partial fractions[1]:

$latex \displaystyle \frac{1}{x^2+1} = \frac{1}{2i}\left(\frac{1}{x-i} - \frac{1}{x+i}\right).$

And then

$latex \displaystyle \int \frac{dx}{x^2+1} = \frac{1}{2i}\ln\frac{x-i}{x+i} + C = \frac{i}{2}\ln\frac{x+i}{x-i} + C.$
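If you want to double-check the complexified decomposition, sympy can do it for you (a sketch of mine; `extension=sp.I` asks for a factorization over the Gaussian rationals):

```python
import sympy as sp

x = sp.symbols('x')

real_pf = sp.apart(1 / (x**2 - 1), x)                     # splits over the reals
complex_pf = sp.apart(1 / (x**2 + 1), x, extension=sp.I)  # splits over C
print(real_pf)
print(complex_pf)

# sanity check: both decompositions recombine to the original integrands
assert sp.simplify(real_pf - 1 / (x**2 - 1)) == 0
assert sp.simplify(complex_pf - 1 / (x**2 + 1)) == 0
```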

Therefore we get to the identity

$latex \displaystyle \arctan x = \frac{i}{2} \ln\frac{x+i}{x-i} + C.$

So you might now think that, for real $x$, the right-hand side is just another expression for $\arctan x$, but hold on a second! There’s an integration constant hanging there!

Let’s check, algebraically, what $\arctan z$ should be. We know, from Euler’s formula, that

$latex \displaystyle e^{iz} = \cos z + i \sin z,$

hence

$latex \displaystyle \tan z = \frac{\sin z}{\cos z} = \frac{1}{i}\,\frac{e^{iz} - e^{-iz}}{e^{iz} + e^{-iz}} = \frac{1}{i}\,\frac{e^{2iz} - 1}{e^{2iz} + 1}.$

If we set $s = \tan z$, we can try to express $z$ in terms of $s$ to get an expression for $\arctan s$. I’ll leave this funny exercise to you (it’s not hard, follow your nose) and we get to

$latex \displaystyle \arctan s = -\frac{i}{2}\ln\frac{1+is}{1-is},$

which is off by a minus sign from what we got from the integral! Fear no more, because the Constant of Integration is coming to the rescue:

$latex \displaystyle -\frac{i}{2}\ln\frac{1+is}{1-is} = \frac{i}{2}\ln\left(-\frac{s+i}{s-i}\right) = \frac{i}{2}\ln\frac{s+i}{s-i} + \frac{i}{2}\ln(-1),$

where the last term is just a constant.

And since $C$ can actually be chosen arbitrarily, our result is indeed consistent with the known integral on the real axis.
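For the skeptical, here’s a quick numerical sanity check (my own sketch; it uses the principal branch of the complex logarithm) showing that on the positive half-line the two expressions differ by a real constant, exactly an integration constant:

```python
import cmath
import math

def arctan_via_log(x: float) -> complex:
    # the antiderivative from the complexified partial fractions:
    # i/2 * log((x+i)/(x-i)), principal branch
    return (1j / 2) * cmath.log((x + 1j) / (x - 1j))

for x in [0.5, 1.0, 2.0, 10.0]:
    diff = math.atan(x) - arctan_via_log(x)
    assert abs(diff.imag) < 1e-9                  # the difference is real...
    assert abs(diff.real - math.pi / 2) < 1e-9    # ...and constant (pi/2 here)
```

(The constant flips on the negative half-line, because the principal branch of the log jumps across its cut. That is precisely the kind of bookkeeping the integration constant absorbs.)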

[0] Indeed, imposing $A(x+1) + B(x-1)$ to be equal to $1$ is actually the same as asking the two polynomials to have the same coefficients: then we get the two equations $A + B = 0$ and $A - B = 1$.

By knowing some linear algebra we can also avoid remembering all the complicated rules for more involved cases, namely when not all the roots of the denominator are simple (just increase the degree of the unknown polynomials over the fractions corresponding to multiple roots until your linear system is no longer overdetermined).

[1] It is indeed interesting to note that we don’t actually need 4 real unknowns, as the non-real roots of a polynomial with real coefficients always come in conjugate pairs. This means the coefficients over conjugate roots will themselves be conjugate.

Everything[0] you read typically has a *10x* multiplier on words, and usually boils down to one or two interesting perspectives on the topic, a lot of ill-motivated, arbitrary assumptions/opinions, and zingers at other colleagues or theses. Authors seem more worried about convincing you of their competence and authoritativeness than about actually exposing their ideas in an honest, concise manner. Eventually they sound artificial and shady.

Let’s address this immediately: I’m not saying the humanities are BS, and I’m not saying people writing about them (*humans?*) are impostors. Most importantly, I’m not saying those reads are uninteresting or devoid of meaning and novel ideas. *Quite the opposite!* Lakatos’s work is very interesting, but it’s nicely summarized on its Wikipedia page. Reading the paper, which is not even the complete work, turns out to be just a very inefficient way to learn about his ideas.

Call me a hopeless mathematician, but the inherent honesty of mathematical (I could probably say ‘scientific’ as well) prose is very distinctive, and I’m talking especially about non-technical expositions here. The remark on non-technicality is mandatory because proper (peer-reviewed) mathematical papers obviously feature strict formality. Yet this formality, which I’d rather call *naivety*, is vastly present also in less technical writings. Even in somewhat ‘biased’ expositions, I’ve never found authors boasting about their great understanding of the topic instead of actually bringing the ideas to the reader.

This attitude is very probably a side effect of the mathematical mindset itself. After all, proofs[1] are just a way to explain facts to people, thus we could call them an incredibly honest and naïve form of communication[2].

[0] Usual ‘false generality’ warning here: read as ‘>90% of’.

[1] Or better, what Lakatos would call *pre-formal proofs.*

[2] Actually, the most honest possible: every mathematical fact is inherently true, while philosophy is usually true *up to interpretation*.

I’m far from being an expert in category theory, although I enjoy fiddling with it and looking at the material of my courses through the categorical lens. Needless to say, I find it all over the place (*when you have a hammer, everything looks like a nail*). I’m now studying algebraic topology and the categorical point of view is, unsurprisingly, impressive. My understanding of what categories are [0] has evolved a lot since I began to study the subject, not least because I’ve been ~~reading~~ studying a lot more material about it [1].

The two main roles of categories are arguably those of *diagrammatic metalanguage* and of *mathematical universes* (*topos theory intensifies*). Actually, the first dwarfs the second in importance, but my initial understanding of the theory was the reverse. I think this is due to the widespread habit of giving ‘big’ categories (such as Set) as the first examples, and relegating small categories to a zoo of nice, whimsical examples. It took me some time to understand that diagrammatic language is the true hero of category theory, and that a good introduction to the subject should emphasize this immediately (Mac Lane does, but fails [from a beginner’s perspective] in other ways).

Introducing categories as diagrams also highlights another subtle aspect: they are algebraic structures no less than groups and rings are. Just as a ring can be regarded as a group with more structure, categories can be regarded as even more fundamental in this hierarchy, in fact so fundamental that their expressive power goes through the roof. This perspective is not only a curious fact or a way to make definitions succinct (‘a group is a one-object category where all morphisms are invertible’), but brings to light a swathe of structures whose importance seems undervalued in elementary expositions, such as posets/lattices, monoids and groupoids.

Like any (good) abstraction, it provides a way to read things in different contexts, and it seems foolish to ignore it when these structures arise so naturally and so often.

For example, it is standard to introduce the fundamental group as follows: you first talk about path-connectedness, and show that concatenation of paths fails miserably to be an interesting operation. Then you go on with homotopy (of paths, usually) and magically recover a group from the concatenation operation, but not before having changed the subject completely from ‘paths’ to ‘loops with an arbitrarily chosen base point’! What happened to our nice, intuitive idea of concatenating arbitrary paths? Why are we fixing a point now? [2] We are effectively moving the whole discussion from Top to Top*, without any apparent reason. We interrupted a smooth narrative with a sudden hiccup, and the natural flow of understanding was interrupted with it.

But there’s a better way: just say that concatenation, up to homotopy, forms a groupoid, which is still a very nice object! Moreover, now it makes sense to fix a base point, because that’s how you recover groups from groupoids. It makes some facts about isomorphisms of fundamental groups more evident (ah! that’s why they are not canonically isomorphic!) and you can still prove a lot of the same things, like a version of the Seifert-van Kampen theorem which is no more difficult than the usual one and immediately lets you find the fundamental groups of the spheres, *circle included*. Bonus: you will turn into a cool kid.
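To make the ‘fix a base point to recover a group’ step concrete, here is a toy sketch of my own (a hypothetical finite action groupoid, not the fundamental groupoid of any space): the group Z/2 acts on three states by swapping two of them, the morphisms of the groupoid are the pairs (g, x) : x → g·x, and fixing a base point carves out a group.

```python
# Toy example (mine, not from the post): the action groupoid of Z/2
# acting on {a, b, c}, where the nontrivial element swaps a and b
# and fixes c.

STATES = ['a', 'b', 'c']

def act(g, x):
    """Action of Z/2 = {0, 1}: the element 1 swaps a and b, fixes c."""
    if g == 1 and x in ('a', 'b'):
        return 'b' if x == 'a' else 'a'
    return x

# Morphisms of the action groupoid: pairs (g, x) : x -> act(g, x).
# Every morphism (g, x) has the inverse (g, act(g, x)), so this is a groupoid.
morphisms = [(g, x) for g in (0, 1) for x in STATES]

def vertex_group(x):
    """Fixing a base point x recovers a group: all morphisms x -> x."""
    return {g for (g, y) in morphisms if y == x and act(g, x) == x}

# The vertex group depends on the chosen base point:
# at 'a' (or 'b') it is trivial, while at 'c' it is all of Z/2.
```

This also shows, in miniature, why vertex groups at different base points of the same connected component are isomorphic but not canonically so: the isomorphism is conjugation by a chosen morphism between the two points.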

And what’s a groupoid? A category with an inverse for every morphism, which is a very suggestive abstraction if you know something about groups: they’re descriptions of symmetries, processes of a single-state system. A groupoid is then a way to describe a system with more than one state, but with the same reversibility properties.

Ultimately, categories are a (or the?) natural way to provide abstractions for systems with states and transitions, and that’s how diagrams arise: they’re the simplest way to represent that, at least for us puny humans. Also, ‘processes and states’ is a very familiar category of thought (in a philosophical sense) for humans, akin to ‘movements and positions’, which nicely fits the way most people [3] reason about concepts in mathematics [4].

To conclude I’d like to write down this table:

| | One object | Multiple objects |
|---|---|---|
| Every morphism is invertible | Group | Groupoid |
| Not every morphism is invertible | Monoid | Category |

Which, as a corollary, gives us a great name for categories: *monoidoids*.
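The table is mechanical enough that one can sketch it in code. Here is a small illustration of my own (all names and examples are hypothetical, assuming a finite category given by explicit source/target maps and a composition table) that places a structure in the right cell by checking the two axes: number of objects, and invertibility of every morphism.

```python
def classify(objects, morphisms, src, tgt, comp, ident):
    """Place a finite category in the group/groupoid/monoid/category table.

    morphisms: list of hashable morphism labels
    src, tgt:  dicts morphism -> object
    comp:      dict (g, f) -> g∘f, defined whenever tgt[f] == src[g]
    ident:     dict object -> identity morphism
    """
    def invertible(f):
        a, b = src[f], tgt[f]
        # f is invertible if some g : b -> a composes with f to identities.
        return any(src[g] == b and tgt[g] == a
                   and comp[(g, f)] == ident[a]
                   and comp[(f, g)] == ident[b]
                   for g in morphisms)

    one_object = len(objects) == 1
    all_inv = all(invertible(f) for f in morphisms)
    return {(True, True): "group", (False, True): "groupoid",
            (True, False): "monoid", (False, False): "category"}[(one_object, all_inv)]

# Z/2 as a one-object category: morphisms {0, 1}, composition = addition mod 2.
objs, mors = ['*'], [0, 1]
src = {0: '*', 1: '*'}
tgt = src
comp = {(a, b): (a + b) % 2 for a in mors for b in mors}
print(classify(objs, mors, src, tgt, comp, {'*': 0}))  # -> group
```

Swapping in multiplication mod 2 (identity 1, with 0 not invertible) lands in the ‘monoid’ cell, and the pair groupoid on two objects lands in the ‘groupoid’ cell, filling out the rest of the table.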

[0] It happens frequently in mathematics that despite the conciseness of formal definitions, the true nature of objects really transcends them and requires a good amount of contemplation to really permeate your mind and become intuitive understanding. Moreover, I’m convinced that the simpler and more fundamental objects are, the more contemplation is required to achieve such profound knowledge of them, since simple abstractions are applicable to a greater breadth of contexts.

[1] Some words about the books which helped me. The first is P. Aluffi’s book *Algebra: Chapter 0*, which may seem rather clumsy at the beginning but then takes you a long way in building the right ideas to apply category theory to algebra. I attribute a big part of its effectiveness to the first chapter about sets, developed in a categorical way, and to the sobriety of the second chapter, in which he does not try to introduce the whole arsenal of category theory all at once but sticks to a few thoroughly developed concepts. Remarkable is the choice of delaying the introduction of functors for almost half of the book!

[2] Also, we had managed to avoid speaking about points until now, except in small digressions of the narrative! Now we are forced to drop our pointless (lel) point of view and bring them in again.

[3] Read that as ‘me’.

[4] Educational moral here: *movement* is a powerful metaphor for conveying meaning, hence it should be used a lot. Spatial cognition is one of the most developed intellectual abilities in humans, so it makes sense to exploit it for reasoning, as much as we exploit visual and linguistic cognition.

~~Two~~ Three interesting things in the linked page:

- The generalization of the game: multiple colours, infinite prisoners. I also guess ultrafilters appear, in disguise, when talking about equivalences on infinite sequences.
- The comments thread, with some giants like Trimble arguing ontologically about AC. Weird but cool.
- The comment by Tao (which made me cross out that ‘two’ up there) relating this ‘paradox’ to the Banach-Tarski-type oddities coming from the interplay between measure theory and group theory.

Speaking (or better, echoing) of AC and having mentioned Banach-Tarski, it seems a good opportunity to talk about a poster I made for *C0eM∀τ* [0], a simple but exciting congress that students arrange each year here at the ULL math department.

When they told me about the initiative I was so happy to have the opportunity to produce something creative (*quick scolding glance at Italian education* [1]) that I instinctively said *yes* even before thinking about a proper topic. In hindsight that was risky! I was still going around all day trying to find housing and finishing the paperwork to enroll at the university, so I didn’t have a lot of time for this. Also, I had never made a poster before. But still, I managed, and I think having something pleasant to do helped me get through those days.

Anyway, my original idea was a not-better-specified *something* about category theory. I would have liked to produce something self-contained, within reach of the attendees (mainly undergrads) and the curious, yet not too trivial (‘hey, the Rubik’s cube is a group!’ [2]). I quickly realized *cats* were too broad for this. But suddenly and unexpectedly, Banach-Tarski popped into my mind.

At the beginning of the past semester, I read extensively about it. I decided to look into it after becoming a little more savvy about AC and measure theory over the past two years. I already knew the paradox, of course, but I had never *really* understood it, which for a mathematician means I had never understood the proof. Actually, I hadn’t even tried, because when I first learned about it I think I was still in high school and quite unarmed for the subject [3]. Eventually it was very pleasant and fulfilling to come back to it and magically see it unlock before my mind: the excitement of understanding it was the most intense math-induced emotion I have ever experienced. I got really interested in the topic for a while, even getting to read the first three (or so) chapters of Wagon’s outstanding book about it. Then it got too combinatorial for my taste and I dropped it. But still, six months later, my brain regurgitated it and I immediately thought it was the perfect topic for my poster: self-contained yet far-reaching, approachable yet stimulating, and, not less importantly, I was already familiar with it and related topics [4]. *Bonus*: I had some good expositions available (the aforementioned book & a great Wikipedia entry).

That said, after a good amount of procrastination and fights with TikZ (but shout out to tikz-poster for the incredibly ~~constrained~~ helpful package!), I managed to produce a thing. I used Google Translate and the help of one of the organizers to translate it to Spanish, and that’s the version that was displayed at the congress. I had to explain it to an ‘evaluation committee’ and to the public; I thought I wasn’t going to be able to do it because my Spanish skills were (and still are) garbage, so I got a bilingual student by my side to help me with translations. Eventually, I stepped into the flow and, thanks to an absurd amount of similarity between the Italian and Spanish vocabularies (mathematical and otherwise), I queried my sidekick just a couple of times, only to hear him say ‘yeah, it’s the same word in Spanish’ [6]. I got embarrassed over just one question, when a teacher pointed out that my sentence ‘there’s no additive and isometry-invariant measure on R^3’ was completely wrong (rudeness mine), the Lebesgue measure being additive and isometry-invariant. She’s completely right, by the way, although my use of the term *measure* was ‘classical’, in that I assumed it to be full, i.e. defined on all subsets (in which case Lebesgue measure fails to be a counterexample). I should have used the more ambiguous word *volume*, thereby shielding myself from any critique [7].

In the days following the exposition I perfected the English version of the poster, and this is the one I want to share with you. Yet for intellectual honesty I must also provide the original Spanish version, now hanging in my room.

[0] Unfortunately this link will be broken in less than a year from now, due to the absence of a permalink. *Gnaffe!* There’s my name on the ‘posters’ page, wow.

[1] But I have to be fair here and take some of the blame. I’ve been a student rep for ~2 years and never did anything close to this.

[2] My 2 cents on this topic: those things are great to instill curiosity but rapidly fall back into ‘lame’ if you don’t calibrate your aim precisely.

[3] Maybe I knew what a group was, but a group action? A choice set?

[4] I need to write this down somewhere: the ‘savvyness’ I acquired about AC in the past two years is entirely due to Moore’s fascinating book about it. I was very lucky to run into his book early in my studies because it sharpened my ‘AC senses’ early on, making it much easier to understand where and when and why it’s used. Before understanding it, you’re essentially blind to issues concerning constructibility, and it’s nice to sharpen your senses as soon as possible in order to avoid becoming *blind to your own blindness* [5]. The historical perspective he uses is also very good at doing this work, because mathematicians at the beginning of the past century were essentially as clueless as the reader, so the narrative helps a lot.

[5] A ‘bad boy!’ to undergrad math education which repeats the same things about naïve set theory in every course but fails to give you this intuition (even at a basic level).

[6] That doesn’t mean I’m not grateful to him! He was still a nice safety net to have, and more likely than not his presence made me much more confident and hence helped me be so effective (hyperbole) in the exposition.

[7] My favourite flavour of *proof by intimidation*.

It’s a short playlist, under an hour (58 minutes), which surely contributes to its coherence. In line with the trend of the last few months, the share of Italian songs is very large (12/17), with an important but expected presence of De Andrè. The touches of colour from Calcutta and the Pixies are essential. A great comeback for Bersani, with a piece of underrated expressiveness. Unexpected presence of *For What It’s Worth*, straight from the soundtrack of a weekend in the south with other Erasmus mates.

Overall interesting. Rating: *8.5*