Why should I like math? (Forever work in progress?)
I'm sure many reading this can relate to enjoying puzzles. The math that I do is more akin to solving puzzles than, say, computing the square root of 288369 or solving some mundane word problem. While it's true that most modern math involves some level of calculation, the bulk of the work lies in justifying why we're doing said calculation. The following is an explanation of what (I think) math is really about, and I will give many examples, some accessible and some important. (Generally, the overlap of these is small.) The article is rather long, perhaps several articles on one page.
Inductive vs. deductive and a definition of mathematics
There are two major factions of science: experimental and theoretical (there are probably other factions, but these are where the majority lies). Experimentalists do experiments to inductively verify their hypotheses. Experimentalists follow the scientific method by coming up with falsifiable claims which can be tested. They gather data in an attempt to provide statistically significant evidence that justifies their claims. Then, they interpret their results to learn things about the real world and formulate new ideas. The problem with experimental science is that experiments cannot be performed without some level of error or uncertainty. Theorists like me, however, use logical reasoning to deductively verify their hypotheses. We typically don't need to gather data beyond showing that what we've proven is actually interesting and important. Unlike experimentalists, theorists can be absolutely certain of their results. Even if we were a brain in a vat, what we verify is true. There is no space for bias or sources of error in the products of theoretical fields (unless the author makes a mistake in their logic, but peer-reviewers are sometimes decent at catching this). The problem with theoretical research is that, in many cases, one still cannot be certain that the axioms we've chosen are consistent with the real world (or, in fact, even with themselves). In particular, theorists often make simplifying assumptions that make their theory easier to work with but also make their models less accurate to the real world.
Since inductive reasoning is the main focus in science classes, it may sound like theory is either impossible or so far removed that it isn't useful. This is not true. In some cases, theoretical work needs to be checked by experimentalists to verify it is accurate to how the real world works. This is typically true for theories of chemistry or physics, and the hypotheses about the real world that stem from theory tend to be notoriously hard to test, e.g., string theory and general relativity. In other cases, the work doesn't need to be verified experimentally at all. For example, a computer scientist is, by my definition, typically a theorist and not an experimentalist, since their work is based on logic rather than experiment. Yet, programming is essential to the world today. A huge chunk of theory is applied to cryptography, computer graphics, artificial intelligence, etc., all of which do not strictly need experimental verification, as every step of the way is based in logic.
Now that I have made this distinction between experimental and theoretical, I define mathematics to be the umbrella term for all theoretical fields under this construction. We refer to the logical reasoning needed to verify a hypothesis as a proof. We typically refer to important, proven hypotheses in mathematics as theorems. We refer to less important proven hypotheses, such as those needed to prove theorems, as lemmas or propositions. Finally, we refer to unproven hypotheses as conjectures or just hypotheses. There is no universally agreed-upon definition of math. However, I believe this is the best way to distinguish math from other fields. It is often argued whether math is a science or its own thing. If we define science to be any topic which uses the scientific method, then mathematics is not a science. However, by my definition, if math isn't a science, then theoretical physics isn't either, since it is a subset of mathematics. Yet many theoretical physicists describe themselves as scientists.
Mathematical objects
Mathematics studies a wide variety of things. Anything that can be put on a fully logical basis is study-able by mathematicians (though, some topics may not be as interesting as others). As a blanket term, the actual objects we study are called mathematical objects. Objects can be anything from numbers to functions to geometric shapes to games/puzzles to algebraic structures. Again, anything with a precise, logical description can be interpreted as a mathematical object.
Structure
In some mathematical objects, there is some notion of internal structure. I am interested in so-called algebraic structures, which are the central focus of the field of mathematics known as algebra (it does not look much like the algebra you see in high school). In particular, I study modular tensor categories, topological quantum field theories, and weak Hopf algebras.
The foster cat named Kitty/Booper who loved to go into a different room to scream and then come back.
I will now describe a single example (the dihedral group of order 16) of a single kind of mathematical structure (groups). This is much like explaining the behavior of all animals by using my screaming foster cat. It will certainly tell you something, but, to fully appreciate the amazingness of the animal kingdom, you need a much broader understanding. We will start with the example and then slowly try to generalize.
The prototypical example of an algebraic structure is a group. Groups will be referenced quite a bit in this article. I have designed the article so that one doesn't really need to understand what a group is to read it, so, if this explanation doesn't make sense, feel free to ignore it.
Groups are an abstraction of the notion of symmetry. Imagine a flat octagon living in 3D space. How can we rigidly move the octagon around so that it ends up occupying the same set of points as before? How do these symmetries interact? See the picture for a visual description of this ``group'' of symmetries of the octagon, using a stop sign.
All possible orientations of an octagon using rotations and flips (taken from Wikipedia).
We can consider groups of symmetries of all sorts of objects: regular polygons, circles, Rubik's cubes, crystallographic compounds, and even groups themselves. We can make the following key observations about symmetries:
Combining two symmetries back-to-back gives another symmetry;
``Do nothing'' is always a symmetry;
Symmetries can always be undone;
The order in which symmetries are applied does sometimes matter (think about flipping and rotating the octagon)!
More formally, a group can be interpreted as a collection of ``symmetries'' with the following properties:
If $g$ and $h$ are symmetries in the group, then combining them into $gh$ is another symmetry in the group;
There is a ``do nothing'' symmetry $e$ in the group, i.e., something that satisfies $eg=g$ and $ge=g$ for any symmetry $g$;
For any symmetry $g$, there is an undoing symmetry $g^{-1}$, i.e., something that satisfies $g^{-1} g=e$ and $gg^{-1}=e$;
The symmetries combine associatively, i.e., $(gh)k=g(hk)$ for any symmetries $g,h,$ and $k$ (figure out why this is intuitive).
Now, let's write our properties one more time, in the way a mathematician would. A group is a collection $G$ with the following properties:
If $g$ and $h$ are in $G$, then $gh$ is in $G$;
There is some $e$ in $G$ so that, if $g$ is in $G$, then $eg = g$ and $ge = g$;
If $g$ is in $G$, then there is a $g^{-1}$ in $G$ satisfying $gg^{-1} = e$ and $g^{-1}g = e$;
For any $g,h,k$ in $G$, $(gh)k=g(hk)$.
By formalizing our observations, we arrived at a very abstract notion of a group. You can see this boils down to a collection with some basic operations and properties. Yet, these basic operations and properties somehow perfectly capture our notion of symmetry. This is one goal of mathematicians; we take general observations and transform them into mathematical objects. In general, a combination of a collection with underlying operations and properties is called a mathematical structure. Mathematical structures are almost always abstractions of our (mathematicians') intuition for some idea. We will go more into detail about how mathematical objects come to be soon.
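To make this concrete, here is a minimal Python sketch (the names and representation are my own choices, purely for illustration) that models the sixteen symmetries of the octagon as permutations of its eight vertices and checks the four properties above, including the fact that order sometimes matters:

```python
from itertools import product

N = 8  # vertices of the octagon, labelled 0 through 7

# A symmetry is recorded by where it sends each vertex.
identity = tuple(range(N))                        # the "do nothing" symmetry
rotation = tuple((i + 1) % N for i in range(N))   # rotate by one vertex
reflection = tuple((-i) % N for i in range(N))    # flip across an axis

def compose(g, h):
    """Apply h first, then g."""
    return tuple(g[h[i]] for i in range(N))

# Generate every symmetry reachable by combining the rotation and the reflection.
group = {identity}
frontier = {rotation, reflection}
while frontier:
    new = {compose(g, h) for g in group | frontier for h in group | frontier} - group
    group |= frontier
    frontier = new - group

print(len(group))  # 16: eight rotations and eight flips

# Property 1: combining two symmetries gives another symmetry in the collection.
assert all(compose(g, h) in group for g, h in product(group, repeat=2))
# Properties 2 and 3: "do nothing" is present and every symmetry can be undone.
assert identity in group
assert all(any(compose(g, h) == identity for h in group) for g in group)
# Property 4: associativity.
assert all(compose(compose(g, h), k) == compose(g, compose(h, k))
           for g, h, k in product(group, repeat=3))

# Order sometimes matters: flipping then rotating is not rotating then flipping.
print(compose(rotation, reflection) == compose(reflection, rotation))  # False
```

The same checks would pass for any group you generate this way, which is exactly the point of the abstraction.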
What does math look like?
I described math as anything that uses raw logic and deductive reasoning, but I wasn't clear about what exactly that meant. Mathematicians work with a set of axioms, which precisely describe what we assume, and then try to derive useful/interesting consequences of those assumptions. For example, an axiom could be as simple as ``any logical expression is either true or false'' (the law of excluded middle). Some axioms are/were more controversial, like ``any collection of sets has a choice function'' (the axiom of choice). I won't explain what this means, but mathematicians have largely accepted this axiom because of its immense utility, despite weird consequences such as
Banach-Tarski Paradox. You can break a sphere into finitely many pieces that, after only rotating and translating them, reassemble into two spheres of exactly the same size as the original.
Theorems
The following is a list of famous theorems from many fields of mathematics. Don't read too much into the meaning of the math. Scan them to see if you can identify any similarities.
Rouché's theorem from complex analysis: If $f,g:D\to\bbC$ are holomorphic functions and $|f|>|g|$ on $\partial D$, where $\partial D$ is a closed contour, then $f$ and $f+g$ have the same number of roots (with multiplicity) in $D$.
Van Kampen's theorem from homotopy theory: Suppose $X$ is path-connected. If $X = \bigcup_{i\in I} U_i$ for some collection of path-connected open sets $\{U_i\}_{i\in I}$ with nonempty total intersection such that $U_i\cap U_j$ is path-connected for all $i,j\in I$, then there is a surjective group homomorphism $\phi:\coprod_{i\in I}\pi_1(U_i)\to \pi_1(X)$.
The five lemma from homological algebra: If $A_1\to A_2\to A_3\to A_4\to A_5$ and $B_1\to B_2\to B_3\to B_4\to B_5$ are the exact rows of a commutative diagram in an abelian category, connected by vertical maps $f_i:A_i\to B_i$, and $f_1$ is an epimorphism, $f_5$ is a monomorphism, and $f_2$ and $f_4$ are isomorphisms, then $f_3$ is an isomorphism.
Hilbert's Nullstellensatz from algebraic geometry: Let $k\subseteq K$ be a field extension with $K$ algebraically closed. If $I$ is an ideal of $k[X_1, X_2,\dots, X_n]$, denote the ``algebraic set'' of solutions to the polynomials in $I$ by $V(I)=\{{\bf x}\in K^n \mid f({\bf x})=0\ \forall f\in I\}$. If $p\in k[X_1,\dots,X_n]$ is a polynomial for which $p({\bf x})=0$ for all ${\bf x}\in V(I)$, then some power of $p$ is in $I$.
You may have observed that all of these theorems boil down to one or more ``if $X$, then $Y$'' statements in some form. Great theorems are those where ``if $X$'' is very frequently satisfied and ``then $Y$'' is extremely useful. For example, the Whitney embedding theorem applies to every smooth manifold, the main object of study in differential geometry, an enormous field of mathematics. The theorem's conclusion, guaranteeing an embedding into Euclidean space, allows for a much more concrete understanding of smooth manifolds, since smooth manifolds are defined very abstractly.
Important theorems with many applications within mathematics tend to have designated names, either after their creator (or someone else, for inexplicable reasons), based on the meaning of the theorem, or sometimes both. For example, ``if $f:G\to H$ is a group homomorphism, then $G/\ker(f)\cong f(G)$'' is known as the first isomorphism theorem, since it is an important theorem about ``isomorphisms'' (more on this later).
Proofs
Proofs are perhaps the crown jewel of modern mathematics. As discussed before, a proof is a fully rigorous argument that definitively shows a logical statement is true. Proofs vary in appearance quite substantially. The majority of proofs are written in plain English with some equations or mathematical symbols interspersed where necessary. However, there even exist proofs without words. Here is an example of a proof that hopefully is accessible:
Theorem. If $a$ and $b$ are numbers and $ab=0$, then either $a=0$ or $b=0$ (possibly both).
Proof. Our goal is to prove either $a=0$ or $b=0$. One approach is to simply assume $a\neq 0$ and show that this implies that $b=0$. Take a moment to convince yourself that's enough to prove the whole thing...okay, assume $a\neq 0$. Then, it's okay to divide by $a$, so $\frac{ab}{a}=\frac{0}{a}$. By simple division rules, $\frac{ab}{a} = b$ and $\frac{0}{a}=0$, so
$$b=\frac{ab}{a}=\frac{0}{a}=0,$$
and thus $b=0$.
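Proofs can even be written so that a computer checks every step. As a hedged illustration, here is a sketch of the same theorem in the Lean proof assistant, stated for real numbers and leaning on the Mathlib library lemma `mul_eq_zero` rather than the division argument above:

```lean
import Mathlib

-- The theorem above, stated for real numbers:
-- if a * b = 0, then a = 0 or b = 0.
-- `mul_eq_zero` is Mathlib's lemma `a * b = 0 ↔ a = 0 ∨ b = 0`.
example (a b : ℝ) (h : a * b = 0) : a = 0 ∨ b = 0 :=
  mul_eq_zero.mp h
```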
Next, I show an example of a short proof that isn't accessible to general audiences (it is an exercise from the standard textbook Algebraic Topology by Allen Hatcher). I have included this to show what proofs might look like at a higher level (albeit not research-level and atypically short); it isn't worth studying in detail.
Theorem. If a path-connected, locally path-connected space $X$ has $\pi_{1}(X)$ finite, then every map $X\to S^{1}$ is nullhomotopic.
Proof. Since $\pi_{1}(X)$ is finite, it is torsion. Therefore, the only homomorphism $\pi_{1}(X)\to\pi_{1}(S^{1})=\bbZ$ is the zero map because $\bbZ$ is torsion-free. In particular, if $f:X\to S^{1}$, then $f_{*}(\pi_{1}(X)) = \{0\}\subseteq p_{*}(\pi_{1}(\bbR))$, where $p:\bbR\to S^{1}$ is the standard covering map. Thus, there is a lift $\tilde f:X\to \bbR$. Since $\bbR$ is contractible, $\tilde f$ is nullhomotopic, so $p\tilde f = f$ is nullhomotopic.
In experimental science, there are many ways to design experiments to inductively verify hypotheses, and, in mathematics, there are many approaches to proving a theorem. Unlike in experimental science, the different ways of proving a theorem are all technically equally valid. Yet, mathematicians still tend to have similar opinions on which proofs are ``better.'' Some criteria used to assess the quality of a proof are as follows:
Generalizability of argument
Constructive vs. non-constructive
Aesthetics
Conciseness
The best proofs are those which introduce new techniques for proving something. A friend once told me, ``the goal of a research mathematician is to have one original thought.'' This is not to say that mathematicians are unimaginative. Instead, we tend to rely on established techniques for proofs. While we often have interesting ways to apply them, coming up with completely original techniques is quite challenging. Mathematics is thousands of years in the making, so the low-hanging fruit has almost all been picked.
Examples, counterexamples, and discovering definitions
As I have shown so far, mathematicians care a lot about providing examples. I once said to my REU advisor, ``I don't want to do examples, I just want to do generality.'' He responded, ``the most important parts of math are the examples.'' It took me quite a while to understand his claim. While mathematicians do try to prove extremely general statements, there is no point to generality if the examples aren't interesting/useful. Most mathematics comes from building intuition from examples.
In mathematics, intuition can be a strong tool. However, it is certainly not perfect. Quite often, mathematicians discover what they've been attempting to prove is actually false. That is, there is an example of a mathematical object which satisfies the ``if'' part of a conjecture, but not the ``then'' part. The existence (or potential existence) of counterexamples to conjectures we want/need to be true often leads to the definition of properties, i.e., axioms we need to make certain theorems true. Properties are often chosen to be as broadly applicable as possible but strong enough to make the important math work as desired. Some properties seem self-explanatory, due to obvious counterexamples. However, more obscure properties are often chosen to avoid obscure counterexamples. Properties restrict mathematical objects to useful special cases. For example, we define prime numbers the way we do because of important theorems such as ``every positive integer has a unique prime factorization.''
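As a small illustration of the theorem that motivates that definition, here is a short Python sketch (the function is mine, written purely for illustration) that computes the prime factorization promised by unique factorization:

```python
def prime_factorization(n):
    """Return the prime factors of an integer n > 1, with multiplicity."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# The number from the introduction: 288369 = 3 * 3 * 179 * 179 = 537**2.
print(prime_factorization(288369))  # [3, 3, 179, 179]
```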
The above description is about discovering ``properties'' from counterexamples. In a similar vein, definitions of ``structure'' come from useful examples. We briefly described structure earlier, but I didn't explain in detail why mathematicians care about these particular structures. While doing mathematics, we may stumble upon an interesting object; for now, let's say we are studying the integers: $\dots, -3, -2, -1, 0, 1, 2, 3, \dots$. What can we do with integers? Well, we can always add and subtract them. We can also multiply them, but we can't always divide integers. For example, $1/2$ is not an integer. There are lots of other mathematical objects which have addition, subtraction, and multiplication: for example, real numbers, rational numbers, polynomials, $n\times n$ matrices, and so on. This may tell us to consider structures which have addition, subtraction, and multiplication and see if we can prove any general results about all of them simultaneously. In order to do this without new counterexamples arising, we need to investigate how these operations interact. From this, we can create new properties and so on.
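In programming terms, this is a bit like writing one function that only ever uses addition, subtraction, and multiplication: without changing a single line, it works on every type of object that supplies those three operations. A minimal sketch (the function and the particular inputs are my own choices):

```python
from fractions import Fraction

def ring_expression(x):
    """Uses only addition, subtraction, and multiplication: computes 2*x*x - x."""
    return (x + x) * x - x

print(ring_expression(5))               # integers: 45
print(ring_expression(Fraction(1, 2)))  # rational numbers: 0
print(ring_expression(1 + 2j))          # complex numbers: (-7+6j)
print(ring_expression(2.5))             # real (floating-point) numbers: 10.0
# Polynomial or matrix objects that define +, -, and * would work the same way.
```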
Sometimes, the definitions of structures and properties don't come from examples but rather arise naturally from theory or experimentation. For example, there are many open problems in math of the form ``can such-and-such object actually exist?'' (e.g., non-pivotal fusion categories). Even before we know for sure, we can often build a lot of theory about such-and-such objects. Then, if an example is found, we already know a lot about it. Otherwise, we may eventually reach a contradiction within our theory which would tell us that such-and-such objects can't exist. It's also common that there is a line in a proof of a conjecture that isn't obviously true but would make the argument cleaner if it were. To fix this, we add something that boils down to ``this line of the proof is true'' to our axioms. This may feel like cheating, but so long as the additional assumption is easy to satisfy, not much is lost.
Sameness
If I were to ask you whether $1+1$ was the same as $2$, you'd respond in a heartbeat. Whether or not your response was sarcastic, you'd know the answer. However, I didn't tell you in what sense to interpret same-ness. Perhaps, I meant to ask whether the symbols $1+1$ and $2$ were the same. In that case, these are very much not the same. If I were to ask you whether the list $1,2,3$ was the same as the list $3,6,4$, at first you would think not. However, this being a trick question, you might notice that both lists contain 3 things in them, so maybe that's what I mean by same.
An example of a sudoku. The goal of the game is to place a digit 1-9 in each cell so that each digit occurs in each row, column, and $3\times 3$ box exactly once. Some starting digits are given to ensure that the solution is unique.
Let's look at one final example: sudoku. Consider the figure on the right. (Note: if you are unfamiliar, the rules are given in the figure.) If we were to replace all the 1s with 2s and all the 2s with 1s, what would happen? Pause before reading on and think about same-ness...
Sure, the puzzle and its solution would visually be different. However, the only change the entire way through is that the 1s and 2s are swapped. Is that a different puzzle? Not really. What if we rotated the puzzle by 90 degrees or reflected it vertically? Once again, the only change between the solutions would be that they are rotated or reflected. While they are visually different, they are again essentially the same puzzle. If we were to count all the different sudokus, should we distinguish rotations, reflections, or number swaps? Probably not.
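To see this in code: below is a minimal Python sketch (the grid is a generic completed sudoku built by shifting rows, not the puzzle from the figure) with a validity check that only cares about repeats. Swapping all the 1s and 2s, or rotating the grid, leaves the verdict unchanged:

```python
def is_valid_solution(grid):
    """Check that each row, column, and 3x3 box contains the digits 1..9 exactly once."""
    digits = set(range(1, 10))
    rows = [set(row) for row in grid]
    cols = [set(col) for col in zip(*grid)]
    boxes = [{grid[r + i][c + j] for i in range(3) for j in range(3)}
             for r in range(0, 9, 3) for c in range(0, 9, 3)]
    return all(group == digits for group in rows + cols + boxes)

# A completed sudoku: every row is a shift of 1..9 (easy to verify by hand).
solution = [[(i * 3 + i // 3 + j) % 9 + 1 for j in range(9)] for i in range(9)]

# Relabelling: swap every 1 with a 2 and vice versa.
swap = {1: 2, 2: 1}
relabelled = [[swap.get(x, x) for x in row] for row in solution]

# Rotating the grid by 90 degrees.
rotated = [list(row) for row in zip(*solution[::-1])]

print(is_valid_solution(solution), is_valid_solution(relabelled), is_valid_solution(rotated))
# True True True: the relabelled and rotated grids are "the same" sudoku in every way that matters.
```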
We can abstract the notion of same as follows:
$A$ must always be the same as $A$;
if $A$ is the same as $B$, then $B$ is the same as $A$;
if $A$ is the same as $B$ and $B$ is the same as $C$, then $A$ is the same as $C$.
All of these are obvious in any intuitive notion of ``same''. Mathematicians call such a ``same'' relationship an equivalence relation and say two things are equivalent instead of saying they are the ``same''. It is also standard to use notation that looks somewhat similar to equality: if $A$ is the same as $B$, we might write $A\equiv B$, $A\cong B$, or $A\sim B$. I will use this terminology/notation from now on. There are many notions of equivalence which are essential to mathematics. Knowing when two mathematical objects are equivalent often tells us everything we need to know about their internal workings.
Isomorphisms are important examples of equivalences between two mathematical structures of the same type. In particular, if two mathematical structures are isomorphic, they are essentially the same. Special cases include group isomorphisms between groups and homeomorphisms between topological spaces. Isomorphisms are transformations which perfectly preserve structure. In general, transformations between mathematical objects don't need to be perfect. We have certain axioms for ``good'' transformations for many types of mathematical object. These ``good'' transformations are called morphisms, and they encode what we care about and forget everything else.
Invariants
Some equivalences are, in some sense, stronger than others. For example, equality of whole numbers is stronger than saying two numbers are equivalent if they have the same parity (i.e., both odd or both even). Equality can differentiate more things than parity. That said, weaker notions of equivalence can be very useful, especially if it's easier to determine whether two things are equivalent under the weaker notion. As an example, any two equivalent (or, in the terminology of topology, homeomorphic) topological spaces have equivalent (or, in the terminology of algebra, isomorphic) cohomology groups. However, determining whether two topological spaces are homeomorphic directly is often a lot harder than determining whether they have isomorphic cohomology groups.
If we have two notions of equivalence, say $\cong$ and $\sim$, then we say $\sim$ is an invariant of $\cong$ if $A\cong B$ implies $A\sim B$. In our first example, if $a$ and $b$ are whole numbers and $a=b$, then $a$ and $b$ have the same parity. Therefore, if we know $a$ and $b$ have different parities, they can't be equal. Parity is an invariant of equality. In our second example, if two topological spaces $A$ and $B$ are homeomorphic, then the cohomology groups of $A$ and the cohomology groups of $B$ are all isomorphic. Cohomology groups are an invariant of topological spaces.
I will give one final example of an invariant: a puzzle that you can try for yourself. Let's say we start with the set of numbers $\{1,2,3,4,5,6\}$, and we can change the set with the following operation: if $a$ and $b$ are two (not necessarily different) numbers in our set, we can replace $a$ with $a - 2b$. Is it possible to obtain a set where all the numbers are the same (e.g., all 216s)? Here's an example of doing the process a few times:
$\{1,2, 3, 4, 5, 6\}\to\{-7, 2, 3, 4, 5, 6\}$ if we change $1$ to $1-2(4)=-7$.
$\{-7,2, 3, 4, 5, 6\}\to\{-7, 16, 3, 4, 5, 6\}$ if we change $2$ to $2-2(-7)=16$.
$\{-7, 16, 3, 4, 5, 6\}\to\{7, 16, 3, 4, 5, 6\}$ if we change $-7$ to $-7-2(-7)=7$. Remember $a$ and $b$ don't have to be different.
Hint: look for an invariant. What property of each number in the set is left unchanged when you replace $a$ with $a-2b$?
Harder puzzle: what if our replacement was $a$ with $a+b$ but we required $a$ and $b$ to be different? What if we didn't make that requirement?
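If you'd like to experiment before settling on an answer, here is a tiny Python sketch (the helper name is my own) that applies the replacement rule, reproducing the example moves above:

```python
def replace(numbers, i, j):
    """Replace numbers[i] with numbers[i] - 2 * numbers[j]; i and j may be equal."""
    new = list(numbers)
    new[i] = numbers[i] - 2 * numbers[j]
    return new

state = [1, 2, 3, 4, 5, 6]
state = replace(state, 0, 3)  # 1 -> 1 - 2*4 = -7
state = replace(state, 1, 0)  # 2 -> 2 - 2*(-7) = 16
state = replace(state, 0, 0)  # -7 -> -7 - 2*(-7) = 7
print(state)                  # [7, 16, 3, 4, 5, 6]
```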
The
Going back to our normal notion of equality of numbers for a bit: when we study the number $2$, we don't need to distinguish between $2$ and $1+1$. They're the same. There's really only one number that we call $2$. This is the $2$. We could just as easily say there is only one number $1+1$, the $1+1$. These are just representatives of the same thing, so referring to either with the word ``the'' is inclusive of the other, as well as all other numerical expressions equal to $2$ or $1+1$. When mathematicians say the, we often mean it in the sense of equivalence. For example, if I said ``the collection with 3 things,'' I would mean any collection with three things, where I interpret all such collections as the same.
In a similar vein, we may say an object $x$ is unique ``up to $\sim$'' if any other such object $y$ satisfies $x\sim y$. For example, we might say $4$ is the unique even number ``up to parity.'' More usefully, we might say there is one group of order $p$ for each prime number $p$ ``up to isomorphism.'' Sometimes we don't add ``up to $\sim$'' at all if it isn't important to the context. Moreover, we may also write $a=b$ for our equivalence relation, though we may not have equality in a strict sense (e.g., authors often write $\pi_1(S^1)=\bbZ$, which is not true in a strict sense; the two groups on either side of the expression are just isomorphic and not equal).
The overarching goals of mathematics
Now that we have some important notions of mathematics, it is time to describe what mathematicians do. There is really just one vague overarching goal of mathematics: when we have a type of mathematical object that seems useful, we try to learn and prove everything we can about its properties. If you read any research paper in mathematics, it will somehow be of this form. It is standard in mathematics to prove or generalize as much as possible so that the resulting theorem can be applied more easily. Consider the following proposition: ``There are only finitely many $9\times 9$-sudoku board configurations.'' This is true because the valid sudoku boards are a subset of all possible $9\times 9$-grids of the digits 1-9. By that reasoning, we could generalize this statement to simply ``subsets of finite sets are finite.'' This is a very useful statement (and oddly enough, some people don't accept it). We will now discuss a few big focuses of mathematics.
Existence and Uniqueness
In many fields of mathematics, it is common to seek the broadest possible conditions which guarantee existence and uniqueness of a mathematical object (up to equivalence). Again, this is easiest to describe with puzzles. When you make a puzzle, you generally want exactly one solution. For example, the sudoku given above has only one possible solution. In fact, that sudoku is minimal in guaranteeing a unique solution: if we took away any given digit, there would be multiple solutions to the puzzle. For sudokus, it is quite hard to guarantee existence or uniqueness without actually solving the puzzle. In mathematics, you don't always have to know the answer to know things about the answer. In the theory of differential equations, say, we can sometimes determine that a certain equation has a unique solution without knowing precisely what the solution is (e.g., the Picard-Lindelöf theorem). Some theorems can tell us how to find or approximate the solution, whereas others give us no information other than that a unique solution exists.
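To make ``knowing things about the answer without knowing the answer'' concrete: the Picard-Lindelöf theorem is proved with an iteration that also happens to approximate the solution. Here is a minimal Python sketch of that iteration (the equation $y'=y$, $y(0)=1$ and all names are my choices; polynomials are stored as coefficient lists):

```python
import math

def picard_step(coeffs):
    """One Picard iteration for y' = y, y(0) = 1:
    y_{n+1}(t) = 1 + integral from 0 to t of y_n(s) ds."""
    integrated = [0.0] + [c / (k + 1) for k, c in enumerate(coeffs)]
    integrated[0] += 1.0  # add the initial condition y(0) = 1
    return integrated

y = [1.0]  # y_0(t) = 1, the initial guess
for _ in range(10):
    y = picard_step(y)

# Evaluate the 10th iterate at t = 1; it should be close to the true solution e^1.
value_at_1 = sum(y)
print(value_at_1, math.e)  # roughly 2.7182818 vs 2.718281828...
```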
At first, it may not seem useful to know something exists without knowing what it is. Constructive mathematicians (those who reject the law of excluded middle) indeed deny the efficacy of this approach. However, mathematicians like me find real value in nonconstructive mathematics. In particular, existence results let us clarify the essential aspects of a mathematical structure. If you gave me an arbitrary differential equation, it need not have a solution. However, if the differential equation satisfies certain simple continuity constraints, we can still conclude that there is a unique solution. This tells us that these conditions are important, but where do we go next? There are a few paths forward:
Are these conditions absolutely necessary? What happens if we omit or lessen some condition?
Given these conditions, what can we say about the solution? Is it nice enough?
What additional restrictions do we need to get an even ``better'' solution?
Of course, nobody denies the benefits of having a ``constructive'' existence and uniqueness proof. It gives you a way to find the solution and often gives more information than is possible with a raw existence proof. However, in certain areas of math, constructive proofs can be much harder to come up with. If we only need to know that a certain fact is true, then a constructive proof may not be necessary.
Classification
There is a parallel notion to existence and uniqueness. In some mathematical structures, we seek classification. In other words, we want to pin down exactly what the mathematical structures can be (up to isomorphism). Unlike, say, finding all valid $9\times 9$ sudoku grids, algebraic structures tend to have infinitely many possible options. Even for some of the most basic algebraic structures, classification is very difficult. As such, we might restrict to just classifying the simplest possible cases that can serve as building blocks for the more general case. For example, we have classified all finite simple groups. While we haven't exactly classified all finite groups, we do know that all of them can be constructed from finite simple groups through a finite composition series. This is all to say that we know a lot about finite groups because we know how to build them out of finite simple groups, and we know all the finite simple groups. Note how I have prefaced every reference to ``group'' with the word ``finite.'' There are infinite groups, and they are a lot harder to grasp. This is quite common. If we have a notion of size on our algebraic structure, classifying big or infinite things is typically a lot harder than classifying small, finite ones.
We will now describe a common process used to classify a type of structure from scratch. We will use von Neumann algebras (which are important to quantum mechanics and many other fields) as an example. (Caveat for those who care: when I say Hilbert space, I mean separable Hilbert space.) If you aren't familiar, don't pay attention to the meaning of the math, just observe how the example aligns with the process.
Find tons of examples of the structure to get a good idea of what they look like and how they work. Prove that the examples exist and are not isomorphic (i.e., they are different). Find ways to combine examples into new examples.
The algebra of all bounded operators on a Hilbert space is a von Neumann algebra.
The ring of essentially bounded measurable functions is a von Neumann algebra.
We can combine von Neumann algebras in a direct integral or tensor product to create a new one.
Prove things about the structure as a whole, especially establishing equivalent conditions for the definition or how to break the structures into simple substructures.
There are several equivalent ways to define a von Neumann algebra: $*$-subalgebras of bounded operators containing the identity and closed in the weak or strong operator topology, $*$-subalgebras of bounded operators equal to their double commutant, or $C^*$-algebras with a predual. If an object satisfies any of these three conditions, then it satisfies all of them and is a von Neumann algebra.
Any von Neumann algebra can be broken down into a direct integral of von Neumann algebras with trivial center. As such, von Neumann algebras with trivial center allow us to construct any von Neumann algebra. We can classify this simpler case and this will teach us a ton about what von Neumann algebras can be. We will call von Neumann algebras with trivial center ``factors'' (this is like how you can factor whole numbers into products of primes or polynomials into linear terms).
Create some way to separate/partition the structure (or simple substructures) into cases. For example, we might separate the structures by size or special properties. Also, find invariants of the structure.
Factors can be separated into three types which have a lot of properties in common. Type I factors have a minimal projection. Type II factors have no minimal projections but do have a nonzero finite projection. Type III factors have no nonzero finite projections.
Come up with examples of each case and prove (if possible) that some cases aren't possible.
The algebra of all bounded operators on a Hilbert space is a type I factor.
The free group (on at least 2 generators) von Neumann algebra is a type II factor.
It was hard to determine if type III factors existed at first, but they do exist. The crossed product of the real line with the group of all rational non-constant affine transformations is an example.
Prove things about the special cases, find things that can help you determine whether two examples are not equivalent. Attempt to classify some special cases. If classification is still unfeasible, either find a new way to separate into cases or split the cases into subcases.
Type I factors can be completely classified as the algebras of all bounded operators on a Hilbert space.
Type II factors can be split into subcases type II$_1$ and type II$_\infty$ based on whether the identity operator is finite. There are tons of great results in each case, though especially for type II$_1$ factors. However, there is currently (and probably always will be) no complete classification of either.
Again, type III factors are difficult, but we can separate them into subtypes based on their Connes spectrum and prove some general results using this. It is known that every type III factor can be written as a crossed product of a type II$_\infty$ factor and the real numbers.
We can also study other subcases of von Neumann algebras, like hyperfinite von Neumann algebras. There is a unique hyperfinite type II$_1$ factor and a unique hyperfinite type II$_\infty$ factor up to isomorphism.
Mathematics is beautiful
Hopefully, this broad description gives some context on how someone could find math enjoyable. Most people never get to see the real mathematics. Many stop at high school algebra or calculus, never seeing what most research mathematicians actually find interesting.
The real math starts when you are thinking more about proofs than computations, when you have sudden strokes of inspiration for hard problems hours after you've given up, when you have 30 synonyms of ``therefore'' used in your daily speech (listed in order of memory: ``thus'', ``then'', ``it follows'', ``hence'', ``in particular'', ``so'', ``this implies'', ...).