What does algebra feel like? What do algebraists care about?

This article assumes a good grasp of sets and functions. In it, I will show what algebra feels like by investigating a fairly simple algebraic structure, which I've decided to call, to the dismay of operator algebraists, unital $*$-sets. We will study them the way algebraists study more interesting algebraic structures, and hopefully this will give a good answer to both of the questions in the title.

Finding motivation

Algebraists don't just study abstraction for the sake of abstraction; they are motivated by general phenomena they've seen in a variety of contexts. Let's investigate a few common functions, such as $x\mapsto 2 - x$ on ${\mathbb{R}}$, complex conjugation $a + bi\mapsto a - bi$ on ${\mathbb{C}}$, and the transpose $M\mapsto M^T$ on $M_3({\mathbb{R}})$. These all have a few things in common. Firstly, if you apply them twice, you get back where you started; for example, $x\mapsto 2 - x \mapsto 2 - (2 - x) = x$. Secondly, they all satisfy $1\mapsto 1$ (where $1 = I_3$ for the $3\times 3$-matrices); for example, under complex conjugation, $1 = 1 + 0i\mapsto 1 - 0i = 1$. There are probably more interesting things these functions have in common, but let's focus on these two. We can abstract them as follows:
Definition. A unital $*$-set is a set $A$ with a function $*:A\to A$, written $x\mapsto x^*$ such that
  1. for any $x\in A$, the following equality holds $(x^*)^* = x$,
  2. there is an element $1\in A$ such that $1^* = 1$.
All the functions described before satisfy the properties we require of $*$. For example, taking $A = M_3({\mathbb{R}})$, defining $M^* = M^T$, and setting $1 = I_3$ gives us a unital $*$-set. Note that I am not just talking about $*$ as a function; I also care about the set itself. This is important, as we will see in the following section.
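To make this concrete, here is a small Python sketch that spot-checks the two axioms for the functions above. The helper name `check_unital_star_set` and the sample points are my own choices, not anything canonical.

```python
def check_unital_star_set(star, one, samples):
    """Spot-check the two axioms: (x*)* = x on the samples, and 1* = 1."""
    involutive = all(star(star(x)) == x for x in samples)
    unit_fixed = star(one) == one
    return involutive and unit_fixed

# x -> 2 - x on the real numbers, with unit 1.
print(check_unital_star_set(lambda x: 2 - x, 1, [0.5, -3.0, 7.25]))            # True

# Complex conjugation on C, with unit 1 = 1 + 0i.
print(check_unital_star_set(lambda z: z.conjugate(), 1 + 0j, [2 + 3j, -1j]))   # True

# Transpose on 3x3 matrices (tuples of rows), with unit I_3.
def transpose(m):
    return tuple(zip(*m))

identity_3 = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
print(check_unital_star_set(transpose, identity_3,
                            [((1, 2, 3), (4, 5, 6), (7, 8, 9))]))              # True
```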

Basic examples

Now that we have a formal definition of a unital $*$-set, the next thing an algebraist would do is find very basic examples of the structure that give an intuition for properties we might expect. We start by looking at small finite sets and seeing what possible $*$'s we can put on them. Let's denote our unital $*$-set $A$ and investigate what happens when $A$ has small cardinality.
  1. If $|A| = 0$, then we violate axiom 2. There can't be an element $1\in A$ such that $1^*=1$ because $A$ has no elements!
  2. If $|A| = 1$, then $A = \{1\}$ because it must have $1\in A$. By axiom 2, we must have that $1^* = 1$. This is also forced because this is the only element of $A$. There's nothing else to map to.
  3. If $|A| = 2$, then $A = \{1, x\}$ because again it must have $1\in A$. Again, by axiom 2, we must have that $1^* = 1$. Now, we have two cases: $x^* = 1$ or $x^* = x$. If $x^* = 1$, then $(x^*)^* = 1^* = 1\neq x$, which violates axiom 1. Thus, the only option is $x^* = x$.
  4. If $|A| = 3$, then $A = \{1, x, y\}$. Again, by axiom 2, we must have that $1^* = 1$. As we saw before, we can never have $a^* = 1$ if $a\neq 1$ because then $(a^*)^* = 1\neq a$. Now, we have two cases: $x^* = x$ or $x^* = y$. If $x^* = x$, then $y^* = y$ because otherwise $(y^*)^* \neq y$. If instead $x^* = y$, then $y^* = x$ to make sure $(x^*)^* = x$. We have our first nontrivial structure!
  5. If $|A| = 4$, then $A = \{1, x, y, z\}$. You will find that there are 4 ways to set up the $*$:
    $$\begin{array}{cccc} 1^* & x^* & y^* & z^* \\ \hline 1 & x & y & z \\ 1 & y & x & z \\ 1 & z & y & x \\ 1 & x & z & y \end{array}$$
    Note that three of these are nearly identical. The only difference between them is which elements they swap, but that shouldn't matter because our choice of $x,y,z$ was completely arbitrary. Taking this into account, there are only 2 genuinely different ways to put a $*$ on a 4-element set. We will show in the next section how to make this idea precise.
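If you don't want to enumerate by hand, a brute-force search confirms these counts. The following is a rough Python sketch (the function name is mine), where the element $0$ plays the role of $1$:

```python
from itertools import product

def count_star_structures(n):
    """Count functions * on {0, 1, ..., n-1} (with 0 playing the role of 1)
    satisfying both axioms: (x*)* = x for all x, and 0* = 0."""
    elements = range(n)
    count = 0
    for images in product(elements, repeat=n):   # images[x] is x*
        involutive = all(images[images[x]] == x for x in elements)
        unit_fixed = n > 0 and images[0] == 0
        if involutive and unit_fixed:
            count += 1
    return count

print([count_star_structures(n) for n in range(5)])   # [0, 1, 1, 2, 4]
```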

Homomorphisms

In algebra, it is typical to study structure-preserving maps of structures more than the structures themselves. This idea is closely related to category theory, which forgets the structures completely and only studies these sorts of maps (in even greater generality). Structure-preserving maps in algebra are called homomorphisms. For example, for (real) vector spaces, a homomorphism is a linear map, that is, a function $f:V\to W$ satisfying $f(x+y) = f(x) + f(y)$ and $f(\lambda x) = \lambda f(x)$ for all $x, y\in V$ and $\lambda\in{\mathbb{R}}$. Addition and scalar multiplication completely describe the structure of a vector space, so it is only natural that homomorphisms are required to respect these operations. The precise definition varies by the structure of interest, but, for today, the definition is as follows:
Definition. A unital $*$-set homomorphism is a function $f:A\to B$, where $A$ and $B$ are both unital $*$-sets satisfying
  1. for any $x\in A$, the following equality holds $f(x^*) = f(x)^*$,
  2. $f(1)=1$.
Examples: for any unital $*$-set $A$, the map $\{1\}\to A$ sending $1\mapsto 1$ and the map $A\to\{1\}$ sending every element to $1$ are both unital $*$-set homomorphisms. Another example is complex conjugation viewed as a map ${\mathbb{C}}\to{\mathbb{C}}$ (with $*$ being conjugation on both sides); checking the two axioms for it is a good exercise.
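To experiment with homomorphisms between finite unital $*$-sets, here is a small Python sketch. Representing a finite unital $*$-set as a dictionary $x\mapsto x^*$, and the helper name `is_homomorphism`, are my own conventions.

```python
def is_homomorphism(f, star_A, star_B):
    """f is a dict A -> B; star_A and star_B are dicts giving x -> x*.
    The unit of each *-set is written as the string "1"."""
    preserves_star = all(f[star_A[x]] == star_B[f[x]] for x in star_A)
    preserves_unit = f["1"] == "1"
    return preserves_star and preserves_unit

A = {"1": "1", "x": "x"}                     # the boring 2-element *-set
B = {"1": "1", "y": "z", "z": "y"}           # the cool 3-element *-set

# Sending everything to 1 is a homomorphism...
print(is_homomorphism({"1": "1", "x": "1"}, A, B))   # True
# ...but sending x to y is not: f(x*) = y while f(x)* = z.
print(is_homomorphism({"1": "1", "x": "y"}, A, B))   # False
```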

Isomorphisms

When a homomorphism is also bijective, we call it an isomorphism. Isomorphic objects are considered essentially the same. For example, two (real) vector spaces of the same finite dimension are always isomorphic, and as we know, they can essentially be identified with ${\mathbb{R}}^n$ where $n$ is that dimension. For instance, when doing linear algebra, we treat polynomials of degree at most $n$ the same as we treat ${\mathbb{R}}^{n+1}$ because they are the same in the eyes of linear algebraists.
Example: The following is an isomorphism $f:A\to B$ from $A = \{1, x, y, z\}$ with $x^* = x$, $y^* = z$ and $z^* = y$ to $B = \{1, x, y, z\}$ with $x^* = y$, $y^* = x$ and $z^* = z$: let $f:1\mapsto 1, x\mapsto z, y\mapsto x, z\mapsto y$. This is clearly bijective because the arrows can just be reversed in this definition. Moreover, it is a unital $*$-set homomorphism, so it is an isomorphism. There is no isomorphism if $B = \{1, x, y, z\}$ but $b^* = b$ for all $b\in B$: a bijective homomorphism must send elements fixed by $*$ to elements fixed by $*$, and the two sets have different numbers of fixed elements.
When two structures $X, Y$ are isomorphic, we write $X\cong Y$.
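Since our structures are finite, ``isomorphic'' can be tested by brute force over all bijections. Here is a rough sketch (same dictionary convention as before; the helper name is mine):

```python
from itertools import permutations

def are_isomorphic(star_A, star_B):
    """star_A, star_B: dicts x -> x* describing finite unital *-sets."""
    if len(star_A) != len(star_B):
        return False
    elems_A, elems_B = list(star_A), list(star_B)
    for image in permutations(elems_B):
        f = dict(zip(elems_A, image))
        if f["1"] == "1" and all(f[star_A[x]] == star_B[f[x]] for x in star_A):
            return True
    return False

A = {"1": "1", "x": "x", "y": "z", "z": "y"}   # x fixed, y and z swapped
B = {"1": "1", "x": "y", "y": "x", "z": "z"}   # x and y swapped, z fixed
C = {"1": "1", "x": "x", "y": "y", "z": "z"}   # everything fixed

print(are_isomorphic(A, B))   # True  -- e.g. 1->1, x->z, y->x, z->y
print(are_isomorphic(A, C))   # False -- different numbers of fixed elements
```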

New objects from old objects

Having many notions of ``new objects from old objects'' has proven extremely useful in many contexts. Often, when algebraic structures arise in applications, constructions on the original objects correspond to these sorts of operations on the algebraic side. For example, for the fundamental group in topology, for sufficiently nice topological spaces, $\pi_1(X\times Y)\cong \pi_1(X)\times \pi_1(Y)$ and $\pi_1(X\vee Y)\cong \pi_1(X)*\pi_1(Y)$.

Subobjects

If $A$ is a unital $*$-set and $B\subseteq A$ satisfies $1\in B$ and $b^*\in B$ whenever $b\in B$, then $B$ is itself a unital $*$-set with the structure inherited from $A$. We call such a $B$ a unital $*$-subset of $A$.
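In the dictionary representation from before, checking that a subset is a unital $*$-subset is a one-liner; here is a sketch (the helper name is mine):

```python
def is_star_subset(B, star_A):
    """Check that B contains 1 and is closed under the * of A."""
    return "1" in B and all(star_A[b] in B for b in B)

A = {"1": "1", "x": "x", "y": "z", "z": "y"}
print(is_star_subset({"1", "x"}, A))   # True
print(is_star_subset({"1", "y"}, A))   # False: y* = z is missing
```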

Products

Given two unital $*$-sets $A, B$, we can create a unital $*$-set structure on $A\times B$ (the Cartesian product) by setting $(a, b)^* = (a^*, b^*)$ and $1_{A\times B} = (1_A, 1_B)$. We should verify that this is indeed a unital $*$-set by checking the axioms. Most other common structures have a notion of product (e.g., groups, rings, vector spaces). An example of a product is ${\mathbb{C}}\times M_3({\mathbb{R}})$, where ${\mathbb{C}}$ and $M_3({\mathbb{R}})$ carry the earlier $*$s. In this case $(a+bi, M)^* = (a-bi, M^T)$ and $1_{{\mathbb{C}}\times M_3({\mathbb{R}})} = (1, I_3)$. These products satisfy a ``universal property.'' Note that the functions $\pi_A:A\times B\to A$ and $\pi_B:A\times B\to B$ given by $\pi_A:(a, b)\mapsto a$ and $\pi_B:(a, b)\mapsto b$ are both unital $*$-set homomorphisms. Moreover, if you have unital $*$-set homomorphisms $f:X\to A$ and $g:X\to B$, there's a canonical map $f\times g:X\to A\times B$ given by $x\mapsto (f(x), g(x))$. This is the unique unital $*$-set homomorphism which makes the following diagram ``commute'':
$$\begin{array}{ccccc}
 & & X & & \\
 & {}^{f}\swarrow & \big\downarrow{\scriptstyle\, f\times g} & \searrow^{g} & \\
A & \underset{\pi_A}{\longleftarrow} & A\times B & \underset{\pi_B}{\longrightarrow} & B
\end{array}$$
Note: a diagram is said to commute if following any two paths from one object to another gives the same answer. In this case, this is saying that $\pi_A((f\times g)(x)) = f(x)$ and $\pi_B((f\times g)(x)) = g(x)$.
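Here is a sketch of the product construction in the same dictionary representation (the helper name is mine); the last line checks that the projection $\pi_A$ respects $*$:

```python
def product_star(star_A, star_B):
    """Componentwise *: (a, b)* = (a*, b*); the unit is ('1', '1')."""
    return {(a, b): (star_A[a], star_B[b]) for a in star_A for b in star_B}

A = {"1": "1", "x": "x"}                # the boring 2-element *-set
B = {"1": "1", "y": "z", "z": "y"}      # the cool 3-element *-set
AxB = product_star(A, B)

print(AxB[("x", "y")])                  # ('x', 'z')
print(AxB[("1", "1")])                  # ('1', '1'), so the unit is fixed by *

# The projection (a, b) -> a respects * by construction:
print(all(AxB[p][0] == A[p[0]] for p in AxB))   # True
```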

Coproducts

In category theory, the prefix ``co-'' means reverse the arrows. In particular, the universal property for coproducts should look like
$$\begin{array}{ccccc}
 & & X & & \\
 & {}^{f}\nearrow & \big\uparrow & \nwarrow^{g} & \\
A & \underset{\iota_A}{\longrightarrow} & A\vee B & \underset{\iota_B}{\longleftarrow} & B
\end{array}$$
because this reverses the arrows for the diagram for products. By doing a bit of work, one can show that the coproduct of two unital $*$-sets $A, B$ is their disjoint (meaning that if $x\in A\cap B$, we treat it as two separate elements) union $A\sqcup B$ except that we identify $1_A\sim 1_B$. We will denote this by $A\vee B$. The $*:A\vee B\to A\vee B$ is defined by $a^{*_{A\vee B}} = a^{*_A}$ for $a\in A$ and $b^{*_{A\vee B}} = b^{*_B}$ for $b\in B$. The $\iota_A:A\to A\vee B$ and $\iota_B:B\to A\vee B$ in the diagram are the maps that just send $a\mapsto a$ and $b\mapsto b$.
Our classification will be in terms of coproducts because it will turn out that every unital $*$-set can be written as a coproduct of very simple objects. For example, the unital $*$-set on 4 elements $A = \{1,x,y,z\}$ where $1^* = 1$, $x^* = x$, $y^* = z$, and $z^* = y$ is the same as $\{1,x\}\vee \{1,y,z\}$.
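Here is a sketch of the coproduct in the dictionary representation: tag elements by which set they came from, and glue the two units together. The helper name and the tagging scheme are my own choices.

```python
def coproduct_star(star_A, star_B):
    """Disjoint union of A and B with the two units identified."""
    def tag(x, side):
        return "1" if x == "1" else (side, x)
    star = {}
    for x, xs in star_A.items():
        star[tag(x, "A")] = tag(xs, "A")
    for x, xs in star_B.items():
        star[tag(x, "B")] = tag(xs, "B")
    return star

A = {"1": "1", "x": "x"}
B = {"1": "1", "y": "z", "z": "y"}
print(coproduct_star(A, B))
# {'1': '1', ('A', 'x'): ('A', 'x'), ('B', 'y'): ('B', 'z'), ('B', 'z'): ('B', 'y')}
```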

Quotients

Sometimes structures are too complicated for us to easily study. Quotients allow us to simplify a structure to the essential features we care about. Moreover, there are many theorems describing quotients in terms of homomorphisms or other quotients. To form a quotient, we first need an equivalence relation.
Definition. An equivalence relation on a set $X$ is a binary relation $\equiv$ which assigns true ($a\equiv b$) or false ($a\not\equiv b$) to any pair of elements in $X$ such that it is reflexive ($a\equiv a$), symmetric (if $a\equiv b$, then $b\equiv a$), and transitive (if $a\equiv b$ and $b\equiv c$, then $a\equiv c$). The equivalence class of an element $x\in X$ is the set $[x]=\{y\in X | y\equiv x\}$. The set of equivalence classes of a relation is denoted $X/\equiv$ and is called the quotient.
Having an arbitrary equivalence relation in algebra is generally not good enough in the same sense that an arbitrary function is not good enough. Instead, we need the equivalence relation to respect the structure in question. These are called congruence relations. In our case, we have the following definition:
Definition. A congruence relation on a unital $*$-set $X$ is an equivalence relation $\equiv$ such that, whenever $a\equiv b$, we also have $a^*\equiv b^*$.
Given a congruence relation $\equiv$ on a unital $*$-set $A$, we can form a unital $*$-structure from the equivalence classes of the relation. Namely, for $[a]\in A/\equiv$, the $*$ is given by $[a]^* = [a^*]$ and $1_{A/\equiv} = [1_A]$. Note that this natural structure on the quotient is only possible for congruence relations. Otherwise, it will be ill-defined because it will depend on the choice of representative $a$ in $[a]$.
Example: Consider the equivalence relation on ${\mathbb{Z}}$ so that $n\equiv m$ if they have the same parity (both odd or both even). Then, ${\mathbb{Z}}/\equiv$ has two elements corresponding to odd integers and even integers. If we choose $n^* = 2-n$, then $\equiv$ is a congruence relation. Moreover, since it's a two element set, we only have one option, by our discussion earlier. We must have $[n]^* = [n]$ for all $n$. We can verify this by noting that $2 - n$ has the same parity as $n$.
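Here is a sketch of this example in Python (the names and the sampled range of integers are my own choices): we check on a sample that the class of $n^*$ only depends on the class of $n$, and then read off the induced $*$ on the two classes.

```python
def star(n):
    return 2 - n

def parity(n):          # the equivalence class of n: 0 for even, 1 for odd
    return n % 2

samples = range(-50, 51)

# The relation is a congruence: the class of n* depends only on the class of n.
congruence = all(
    parity(star(n)) == parity(star(m))
    for n in samples for m in samples
    if parity(n) == parity(m)
)
print(congruence)                                        # True

# The quotient has two classes, [even] and [odd], and [n]* = [2 - n] = [n]:
print({parity(n): parity(star(n)) for n in samples})     # {0: 0, 1: 1}
```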

Aside for those who know some algebra already

Those who know of normal subgroups may not have seen the following theorem, which motivates their study:
Theorem. There is a natural bijection between the congruence relations on a group and the normal subgroups of that group.
There are similar theorems for many other algebraic structures you may have seen:
Theorem. There is a natural bijection between the congruence relations on a ring and the ideals of that ring.
Theorem. There is a natural bijection between the congruence relations on a module and submodules of that module.
In general, there may be no such nice subset of an algebraic structure. This is the case with our weird unital $*$-sets. However, somehow, the first isomorphism theorem still applies. Given a homomorphism $f:A\to B$, there's a natural equivalence relation we can put on $A$: $a_1\equiv_f a_2$ if $f(a_1)=f(a_2)$. You can check that this is indeed a congruence relation. Then, every first isomorphism theorem you will ever see can be written as $$(A\,/\equiv_f) \cong f(A).$$ This may not look the same as what you've seen. A congruence relation is not literally the same as a normal subgroup. However, the equivalence class of $e$ under $\equiv_f$ is a normal subgroup, and the map $\equiv\ \mapsto~[e]_\equiv$ is the bijection we referred to! Under this map, $\equiv_f~\mapsto~\ker(f)$, where this is the kernel in the usual sense.
Let's go back to my claim that ``every first isomorphism theorem can be written like the above'' and actually investigate it; we'll see that it is nearly true by definition.
First isomorphism theorem. If $A, B$ are algebraic structures of the same type and $f:A\to B$ is a structure-preserving homomorphism, then $$(A\,/\equiv_f)~\cong~ f(A)$$ as algebraic structures of the same type as $A$ and $B$.
Proof. Consider the map which sends an equivalence class $[a]_{f}$ to $f(a)$. This is well-defined because every element in the equivalence class has the same image. Moreover, it's a bijection; its inverse sends $f(a)$ to the equivalence class of elements in $A$ whose image is $f(a)$, i.e., $[a]$. Finally, it's structure preserving because $f$ is.
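To see the theorem in action for unital $*$-sets, here is a small sketch in the dictionary representation from earlier (the names are mine): we identify each class $[a]_f$ with the common value $f(a)$ and check that the induced $*$ on classes agrees with the $*$ on the image.

```python
A = {"1": "1", "x": "x", "y": "z", "z": "y"}
B = {"1": "1", "w": "w"}
f = {"1": "1", "x": "w", "y": "w", "z": "w"}     # a homomorphism A -> B

# The quotient of A by "same image under f": identify each class with f(a).
classes = {f[a] for a in A}
# The induced * on the quotient, [a]* = [a*], computed via representatives.
quotient_star = {f[a]: f[A[a]] for a in A}
# The image f(A), with the * it inherits from B.
image_star = {f[a]: B[f[a]] for a in A}

print(sorted(classes))                 # ['1', 'w']
print(quotient_star == image_star)     # True: the quotient "is" the image
```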
There are similar statements for the other isomorphism theorems!

Classification

Classification is sort of a vague notion. There is no formal definition. We call a structure classified if we believe we have a complete description of all possible examples. However, the phrase ``complete description'' varies by context. In some sense, we have classified all finite groups: every finite group is built out of finite simple groups via a composition series, and the finite simple groups themselves have been classified. However, how the pieces of a composition series fit together is in general quite hard to pin down. Generally, we want a description that is easy to conceptualize while allowing us to explicitly test theorems. In our case, we will classify unital $*$-sets as coproducts of a bunch of small unital $*$-sets.
Classification is nice for many reasons. It gives us a clear understanding of our structure of interest. It allows us to formulate proofs very explicitly. Finally, when examples of our objects arise in applications, we can identify a very precise description, which is often useful for these applications. Algebraic structures are not the only objects that mathematicians classify. Most fields of mathematics study classification in one way or another.
Classification for general algebraic structures is very difficult. The classification of the finite simple groups was only recently completed, and we are a long way from a complete understanding of finite groups. However, there are some structures with very simple descriptions. For example, vector spaces are completely characterised by their dimension.

Breaking the object into better behaved subobjects

There are really two things that can happen with any given element in a unital $*$-set. Either $x^* = x$ or $x^* = y$ and $y^* = x$ where $y\neq x$. Let's consider the extreme cases: $A$ is a unital $*$-set where $x^* = x$ for all $x\in A$, and $A$ is a unital $*$-set where $x^* \neq x$ for all $x\neq 1$. We'll call the first case a ``boring'' unital $*$-set and the second a ``cool'' unital $*$-set.
Every unital $*$-set $A$ has boring and cool unital $*$-subsets: $$B(A) = \{x\in A | x^* = x\}$$ $$C(A) = \{x\in A | x^* \neq x\}\cup \{1\}.$$ Note that we needed to add 1 into $C(A)$ to make it an actual unital $*$-set. This wasn't necessary for $B(A)$ because $1^* = 1$. Finally and most importantly, $$A = B(A)\vee C(A).$$ Thus, if we can classify boring and cool unital $*$-sets in terms of coproducts, then we will classify arbitrary unital $*$-sets in terms of coproducts.
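In the dictionary representation, computing $B(A)$ and $C(A)$ looks like this (a sketch; the helper names are mine):

```python
def boring_part(star):
    """B(A): the elements fixed by *."""
    return {x: xs for x, xs in star.items() if xs == x}

def cool_part(star):
    """C(A): the elements not fixed by *, together with 1."""
    part = {x: xs for x, xs in star.items() if xs != x}
    part["1"] = "1"
    return part

A = {"1": "1", "x": "x", "y": "z", "z": "y"}
print(boring_part(A))    # {'1': '1', 'x': 'x'}
print(cool_part(A))      # {'y': 'z', 'z': 'y', '1': '1'}
```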

Classification of unital $*$-sets

We've finally arrived at a point which allows us to classify these unital $*$-sets. We need to make a few observations first.
  1. For any element $x\in A$, either $x^* = x$ or $x^* = y\neq x$ and $y^* = x$. That is, our elements essentially either isolate themselves or pair up. These correspond to boring and cool unital $*$-sets respectively.
  2. As an example, suppose $A = \{1,2,3,4,5\}$ is boring. Then, $A = \{1, 2\}\vee\{1,3\}\vee\{1,4\}\vee\{1,5\}$. In particular, boring unital $*$-sets can be written as coproducts of boring 2-element unital $*$-sets. More precisely, a boring unital $*$-set $A$ is the coproduct of all sets $\{1, x\}$ with $x\neq 1$. We can write this as $$\bigvee_{\substack{x\in A\\x\neq 1}} \{1, x\}.$$
  3. As an example, suppose $A = \{1,6,7,8,9\}$ is cool and the elements pair up as $\{6,8\}$, $\{7,9\}$. Then, $A = \{1, 6,8\}\vee\{1,7,9\}$. In particular, cool unital $*$-sets can be written as coproducts of cool 3-element unital $*$-sets. More precisely, a cool unital $*$-set $A$ is the coproduct of all sets $\{1, x, x^*\}$ with $x\neq 1$. We can write this as $$\bigvee_{\substack{\{x, x^*\}\subset A\\x\neq 1}} \{1, x, x^*\}.$$ Note that we are indexing over paired subsets to make sure we don't double-count: the sets $\{x, x^*\}$ and $\{x^*, (x^*)^*\}$ are the same.
These observations combined tell us the following: any unital $*$-set $A$ (with at least 2 elements) can be written as $$A = B(A)\vee C(A) = \bigvee_{\substack{x\in A\\x^*=x\\x\neq 1}} \{1, x\}\vee \bigvee_{\substack{\{x, x^*\}\subset A\\x^*\neq x}} \{1, x, x^*\}.$$ Thus, every unital $*$-set is the coproduct of a bunch of copies of the boring 2-element unital $*$-set and a bunch of copies of the cool 3-element unital $*$-set.
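Concretely, this classification says a finite unital $*$-set is determined up to isomorphism by two numbers: how many boring 2-element factors and how many cool 3-element factors it has. Here is a sketch that computes these invariants (the helper name is mine):

```python
def factor_counts(star):
    """Return (number of boring 2-element factors, number of cool 3-element factors)."""
    fixed = sum(1 for x, xs in star.items() if xs == x and x != "1")
    pairs = sum(1 for x, xs in star.items() if xs != x) // 2
    return fixed, pairs

A = {"1": "1", "x": "x", "y": "z", "z": "y"}
B = {"1": "1", "x": "y", "y": "x", "z": "z"}
C = {"1": "1", "x": "x", "y": "y", "z": "z"}

print(factor_counts(A), factor_counts(B), factor_counts(C))
# (1, 1) (1, 1) (3, 0) -- so A and B are isomorphic, while C is not
```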