\documentclass[11pt]{article}
\usepackage{amsfonts}
\usepackage{amsthm}
\usepackage{amsmath}
\usepackage{multicol}
\usepackage{latexsym}
\usepackage[pdftex]{graphicx}
\usepackage{url}
\usepackage{mathdots}
\usepackage{mathrsfs}
\usepackage{hyperref}
\hypersetup{colorlinks=true}
\setlength{\oddsidemargin}{.25in}
\setlength{\evensidemargin}{.25in}
\setlength{\textwidth}{6in}
\setlength{\topmargin}{-0.4in}
\setlength{\textheight}{8.5in}
\def\zero{\multicolumn{1}{>{\columncolor{white}}c}{0}}
\def\colCell#1#2{\multicolumn{1}{>{\columncolor{#1}}c}{#2}}
\input{preamble.tex}
\newcommand{\wqueen}{\includegraphics[width=.15in]{MC2012_LatinSquares_miniqueen.pdf}}
\begin{document}
\lecture{1: Limits, Sequences and Series}{Week 1}
This is the first week of the Mathematics Subject Test GRE prep course! We start by reviewing the concept of a \textbf{limit}, with an eye for how it applies to sequences, series and functions. There are more examples here than we had time for in class, so don't worry if you don't recognize everything here!
\section{Sequences: Definitions and Tools}
\begin{defn}
A \textbf{sequence} of real numbers is a collection of real numbers $\{a_n\}^\infty_{n=1}$ indexed by the natural numbers.
\end{defn}
\begin{defn}
A sequence $\{a_n\}^\infty_{n=1}$ is called \textbf{bounded} if there is some value $B \in \mathbb{R}$ such that $|a_n| < B$, for every $n \in \mathbb{N}$. Similarly, we say that a sequence is \textbf{bounded above} if there is some value $U$ such that $a_n \leq U, \forall n$, and say that a sequence is \textbf{bounded below} if there is some value $L$ such that $a_n \geq L, \forall n$.
\end{defn}
\begin{defn}
A sequence $\{a_n\}^\infty_{n=1}$ is said to be \textbf{monotonically increasing} if $a_{n} \leq a_{n+1}$, for every $n \in \mathbb{N}$; similarly, a sequence is called \textbf{monotonically decreasing} if $a_{n} \geq a_{n+1}$, for every $n \in \mathbb{N}$.
\end{defn}
\begin{defn}
Take a sequence $\{a_n\}_{n=1}^\infty.$ A \textbf{subsequence} of $\{a_n\}_{n=1}^\infty$ is a sequence that we can create from $\{a_n\}_{n=1}^\infty$ by deleting some of its elements (making sure to still leave infinitely many elements) without changing the order of the remaining elements.
For example, if $\{a_n\}_{n=1}^\infty$ is the sequence
\begin{align*}
0,1,0,1,0,1,0,1,0,1,\ldots,
\end{align*}
the sequences $0,0,0,0,0\ldots$ and $1,1,1,1,1,\ldots$ are both subsequences of $\{a_n\}_{n=1}^\infty,$ as are $0,1,0,0,0,0,\ldots$ and many others.
\end{defn}
\begin{defn}
A sequence $\{a_n\}^\infty_{n=1}$ converges to some value $\lambda$ if the $a_n$'s ``go to $\lambda$'' at infinity. To put it more formally,
$\lim_{n \to \infty}a_n = \lambda$ iff for any distance $\epsilon > 0$, there is some cutoff point $N$ such that for any $n$ greater than this cutoff point, $a_n$ must be within $\epsilon$ of our limit $\lambda$.
In symbols:
\begin{align*}
\lim_{n \to \infty} a_n = \lambda \textrm{ iff }( \forall \epsilon > 0 )(\exists N )(\forall n > N)~ |a_n - \lambda| < \epsilon.
\end{align*}
\end{defn}
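To make the quantifiers concrete, here is a small numeric sketch (purely illustrative, not a proof): for the sequence $a_n = 1/n$ with limit $\lambda = 0$, the cutoff $N = \lceil 1/\epsilon \rceil$ works for every $\epsilon > 0$. The function name \texttt{cutoff} is just a label chosen for this sketch.

```python
# Numeric illustration of the epsilon-N definition for a_n = 1/n, limit 0:
# the cutoff N = ceil(1/epsilon) guarantees |a_n - 0| < epsilon for all n > N.
import math

def cutoff(epsilon):
    """A cutoff N that works for a_n = 1/n and the given epsilon."""
    return math.ceil(1 / epsilon)

for eps in [0.1, 0.01, 0.001]:
    N = cutoff(eps)
    # spot-check the terms just past the cutoff
    assert all(1 / n < eps for n in range(N + 1, N + 1000))
```

Of course, a finite spot-check like this is not a proof; the point is only to see the $\epsilon$-to-$N$ correspondence in action.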
Convergence is one of the most useful properties of sequences! If you know that a sequence converges to some value $\lambda$, you know, in a sense, where the sequence is ``going,'' and furthermore know where almost all of its values are going to be (specifically, close to $\lambda$.)
Because convergence is so useful, we've developed a number of tools for determining where a sequence is converging to:
\subsection{Sequences: Convergence Tools}
\begin{enumerate}
\item \textbf{The definition of convergence}: The simplest way to show that a sequence converges is sometimes just to use the definition of convergence. In other words, you want to show that for any distance $\epsilon$, you can eventually force the $a_n$'s to be within $\epsilon$ of our limit, for $n$ sufficiently large.
How can we do this? One method I'm fond of is the following approach:
\begin{itemize}
\item First, examine the quantity $|a_n - L|$, and try to come up with a very simple upper bound that depends on $n$ and goes to zero. Example bounds we'd love to run into: $1/n, 1/n^2, 1/\log(\log(n)).$
\item Using this simple upper bound, given $\epsilon > 0$, determine a value of $N$ such that whenever $n > N$, our simple bound is less than $\epsilon.$ This is usually pretty easy: because these simple bounds go to 0 as $n$ gets large, there's always some value of $N$ such that for any $n > N$, these simple bounds are as small as we want.
\item Combine the two above results to show that for any $\epsilon$, you can find a cutoff point $N$ such that for any $n > N$, $|a_n-L| < \epsilon$.
\end{itemize}
That said: if you find yourself needing to resort to the $\epsilon$-$N$ definition for the limit of a sequence on the GRE, something has likely gone wrong. Far more useful are results like the following:
\item \textbf{Arithmetic and sequences}: These tools let you combine previously-studied results to get new ones. Specifically, we have the following results:
\begin{itemize}
\item \textit{Additivity of sequences}: if $\lim_{n \to \infty} a_n, \lim_{n \to \infty} b_n$ both exist, then $\lim_{n \to \infty} a_n + b_n = (\lim_{n \to \infty} a_n) + (\lim_{n \to \infty} b_n)$.
\item \textit{Multiplicativity of sequences}: if $\lim_{n \to \infty} a_n, \lim_{n \to \infty} b_n$ both exist, then $\lim_{n \to \infty} a_n b_n = (\lim_{n \to \infty} a_n) \cdot (\lim_{n \to \infty} b_n)$.
\item \textit{Quotients of sequences}: if $\lim_{n \to \infty} a_n, \lim_{n \to \infty} b_n$ both exist, $b_n \neq 0$ for all $n$, and $\lim_{n \to \infty} b_n \neq 0$, then $\lim_{n \to \infty} \frac{a_n}{b_n} = (\lim_{n \to \infty} a_n) / (\lim_{n \to \infty} b_n)$.
\end{itemize}
\item \textbf{Monotone and bounded sequences}: if the sequence $\{a_n\}_{n=1}^\infty$ is bounded above and nondecreasing, then it converges; similarly, if it is bounded below and nonincreasing, it also converges. If a sequence is monotone, this is usually the easiest way to prove that your sequence converges, as both monotone and bounded are ``easy'' properties to work with. One interesting facet of this property is that it can tell you that a sequence converges without necessarily telling you what it converges to! So, it's often of particular use in situations where you just want to show something converges, but don't actually know where it converges \textit{to}.
\item \textbf{Subsequences and convergence}: if a sequence $\{a_n\}_{n=1}^\infty$ converges to some value $L$, all of its subsequences must also converge to $L$.
One particularly useful consequence of this theorem is the following: suppose a sequence $\{a_n\}_{n=1}^\infty$ has two distinct subsequences $\{b_n\}_{n=1}^\infty, \{c_n\}_{n=1}^\infty$ that converge to different limits. Then the original sequence cannot converge! This is one of the few tools that you can use to directly show that something diverges, and as such is pretty useful.
\item \textbf{Squeeze theorem for sequences}: if $\lim_{n \to \infty} a_n, \lim_{n \to \infty} b_n$ both exist and are equal to some value $l$, and the sequence $\{c_n\}_{n=1}^\infty$ is such that $a_n \leq c_n \leq b_n$, for all $n$, then the limit $\lim_{n \to \infty} c_n$ exists and is also equal to $l$. This is particularly useful for sequences with things like $\sin(\textrm{horrible things})$ in them, as it allows you to ``ignore'' bounded bits that aren't changing where the sequence goes.
\item \textbf{Cauchy sequences}: We say that a sequence is \textbf{Cauchy} if and only if for every $\epsilon > 0$ there is a natural number $N$ such that for every $m> n \geq N$, we have
\begin{align*}
|a_m - a_n| < \epsilon.
\end{align*}
You can think of this condition as saying that Cauchy sequences ``settle down'' in the limit -- i.e. that if you look at points far along enough on a Cauchy sequence, they all get fairly close to each other.
The Cauchy theorem, in this situation, is the following: a sequence of real numbers is Cauchy if and only if it converges.
Much like the $\epsilon$-$N$ definition, if you find yourself showing something is Cauchy in order to show it converges, you have probably made a mistake in your choice of methods. That said, definition-centric questions that amount to ``do you remember this concept'' sometimes crop up; Cauchy is certainly one such concept you could be asked to recall.
\end{enumerate}
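The subsequence tool above is worth seeing in a concrete case. Here is a quick numeric sketch (illustrative only) of the divergence argument for $a_n = (-1)^n$: its even-indexed and odd-indexed subsequences head to two different limits, so the sequence itself cannot converge.

```python
# The subsequence divergence test, numerically: a_n = (-1)^n has the
# constant subsequence 1,1,1,... (even n) and -1,-1,-1,... (odd n).
# Two subsequences with different limits => the original sequence diverges.
a = [(-1) ** n for n in range(1, 101)]   # a_1, ..., a_100
evens = a[1::2]                          # a_2, a_4, ... (list is 0-indexed)
odds = a[0::2]                           # a_1, a_3, ...
assert set(evens) == {1}                 # this subsequence converges to 1
assert set(odds) == {-1}                 # this one converges to -1
```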
\section{Sequences: Examples of Convergence Tools}
In this section, we work some examples of these tools.
\begin{claim}
(Definition of convergence example:)
\begin{align*}
\lim_{n \to \infty} \sqrt{n+1} - \sqrt{n} = 0.
\end{align*}
\end{claim}
\begin{proof}
When we discussed the definition as a convergence tool, we talked about a ``blueprint'' for how to go about proving convergence from the definition: (1) start with $|a_n - L|$, (2) try to find a simple upper bound on this quantity depending on $n$, and (3) use this simple bound to find for any $\epsilon$ a value of $N$ such that whenever $n > N$, we have
\begin{align*}
|a_n - L| < (\textrm{simple upper bound}) < \epsilon.
\end{align*}
Let's try this! Specifically, examine the quantity $| \sqrt{n+1} - \sqrt{n} - 0|$:
\begin{align*}
| \sqrt{n+1} - \sqrt{n} - 0| &= \sqrt{n+1} - \sqrt{n} \\
&= \frac{(\sqrt{n+1} - \sqrt{n})(\sqrt{n+1} + \sqrt{n})}{\sqrt{n+1} + \sqrt{n}}\\
&= \frac{n+1 - n}{\sqrt{n+1} + \sqrt{n}}\\
&= \frac{1}{\sqrt{n+1} + \sqrt{n}}\\
&< \frac{1}{\sqrt{n}}.\\
\end{align*}
All we did here was hit our $|a_n - L|$ quantity with a ton of random algebra, and kept trying things until we got something simple. The specifics aren't as important as the idea here: just start with the $|a_n - L|$ bit, and try everything until it's bounded by something simple and small!
In our specific case, we've acquired the upper bound $\frac{1}{\sqrt{n}}$, which looks rather simple: so let's see if we can use it to find a value of $N$.
Take any $\epsilon > 0$. If we want to make our simple bound $\frac{1}{\sqrt{n}} < \epsilon$, this is equivalent to making $\frac{1}{\epsilon} < \sqrt{n}$, i.e.\ $\frac{1}{\epsilon^2} < n$. So, if we pick $N > \frac{1}{\epsilon^2},$ we know that whenever $n > N$, we have $n > \frac{1}{\epsilon^2}$, and therefore that our simple bound is $< \epsilon$. But this is exactly what we wanted!
Specifically: for any $\epsilon > 0$, we've found an $N$ such that for any $n> N$, we have
\begin{align*}
| \sqrt{n+1} - \sqrt{n} - 0| < \frac{1}{\sqrt{n}} < \frac{1}{\sqrt{N}} < \epsilon,
\end{align*}
which is the definition of convergence. So we've proven that $\lim_{n \to \infty} \sqrt{n+1} - \sqrt{n} = 0.$
\end{proof}
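A quick numeric sanity check of the proof above (illustrative, not part of the argument): the bound $|\sqrt{n+1} - \sqrt{n}| < \frac{1}{\sqrt{n}}$ holds term by term, and the cutoff $N > 1/\epsilon^2$ really does force the terms under $\epsilon$.

```python
# Spot-check the bound |sqrt(n+1) - sqrt(n)| < 1/sqrt(n) and the cutoff
# N > 1/eps^2 derived in the proof.
import math

for n in range(1, 10_000):
    assert abs(math.sqrt(n + 1) - math.sqrt(n)) < 1 / math.sqrt(n)

eps = 0.01
N = math.floor(1 / eps**2) + 1   # any N > 1/eps^2 works; 1/eps^2 = 10000 here
assert all(abs(math.sqrt(n + 1) - math.sqrt(n)) < eps
           for n in range(N + 1, N + 500))
```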
\begin{claim}
(Monotone and Bounded example:) The sequence
\begin{align*}
a_1 &= 2,\\
a_{n+1} &= \sqrt{3a_{n} - 1}
\end{align*}
converges.
\end{claim}
\begin{proof}
This is a recursively-defined sequence; that is, the terms of this sequence are not explicitly stated, but rather defined in terms of earlier terms! This is a bit of a headache for us in terms of determining where this sequence \textbf{goes}. So: let's not do that yet! Instead, let's just try to determine if it goes anywhere at all first; that is, let's see if we can determine whether or not it converges!
If we want to show a sequence converges without knowing where it converges to, there are relatively few tools we have (basically Monotone+Bounded, or Cauchy.) Cauchy is \ldots not very pleasant-looking, so let's see if this is monotone and bounded!
From inspection ($a_1 = 2, a_2 = \sqrt{5}, a_3 = \sqrt{3\sqrt{5} - 1} > \sqrt{5}, \ldots$) with a calculator, it seems like our terms are increasing --- that is, $a_{n+1} \geq a_n$, for all $n$! We can prove this formally by induction:
Base case: $a_2 = \sqrt{3\cdot 2 - 1} = \sqrt{5} > \sqrt{4} = 2 =a_1$.
Inductive step: assume that $a_{n+1} \geq a_{n}$; we will use this assumption to prove that $ a_{n+2} \geq a_{n+1}$. To do this, simply look at $a_{n+2}$. By definition + our inductive hypothesis, we have
\begin{align*}
a_{n+2} = \sqrt{3a_{n+1} - 1} \geq \sqrt{3a_n - 1} = a_{n+1},
\end{align*}
and have thus proven our claim!
Boundedness is not too hard either: if you plug a bunch of values here into a calculator, you'll see that everything here looks like it is some value less than 3! This suggests that 3 might be an upper bound, which we can again show by induction:
Base case: $a_1 = 2 < 3$.
Inductive step: assume that $a_n < 3$. We seek to prove that $a_{n+1} < 3$. This is not hard; again, use the definition and the inductive assumption to see that
\begin{align*}
a_{n+1} = \sqrt{3a_n - 1} < \sqrt{3\cdot 3 - 1} = \sqrt{8} < 3.
\end{align*}
So: by the monotone+bounded theorem from before, a limit exists! How can we \textbf{find} it?
Well: suppose that our sequence converges to some value $L$: then we have
\begin{align*}
\lim_{n \to \infty} a_n = L.
\end{align*}
If we use our definition for $a_n$, we can see that this limit is also
\begin{align*}
\lim_{n \to \infty} \sqrt{3a_{n-1} - 1} = L.
\end{align*}
When presented with a limit like this, our first reaction should probably be ``square roots are irritating.'' Our response, therefore, should be to get rid of them! In other words, let's square both sides; i.e. by using arithmetic and limits, we get
\begin{align*}
\lim_{n \to \infty} (\sqrt{3a_{n-1} - 1})^2 = (\lim_{n \to \infty} \sqrt{3a_{n-1} - 1} )(\lim_{n \to \infty} \sqrt{3a_{n-1} - 1} ) = L \cdot L.
\end{align*}
This is nicer! In particular, after squaring, we can manipulate the LHS to get that
\begin{align*}
3\lim_{n \to \infty} a_{n-1} = L^2 + 1.
\end{align*}
Because our limit is taken as $n$ goes to infinity, the $a_{n-1}$'s are just $a_{\text{huge}}$ for appropriately huge values of ``huge''; in other words, the limit on the left just goes to the same place that $\displaystyle\lim_{n \to \infty} a_n$ goes to, i.e.\ $L$.
Therefore, we actually have
\begin{align*}
3L = L^2 + 1;
\end{align*}
in other words $L^2 - 3L + 1 = 0$. This is a quadratic equation; we can solve for its two roots and get
\begin{align*}
\frac{3 \pm \sqrt{9 - 4}}{2} = \frac{3 \pm \sqrt{5}}{2}.
\end{align*}
Which of these is the limit? Well: if we look at $\frac{3 - \sqrt{5}}{2}$, this number is strictly less than $\frac{3}{2}$, which is in turn less than 2. But our sequence starts at 2 and increases; so this cannot be the limit! Therefore, we know that our limit $L$ must be the other value above: namely,
\begin{align*}
\frac{3 + \sqrt{5}}{2} \approx 2.62.
\end{align*}
\end{proof}
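As a numeric companion to this proof (illustrative only), we can iterate the recursion and watch the three facts we proved play out: the terms increase, stay below 3, and settle near the fixed point $\frac{3+\sqrt{5}}{2}$.

```python
# Iterate a_{n+1} = sqrt(3 a_n - 1) from a_1 = 2 and check the proof's
# three claims numerically: monotone increasing, bounded above by 3,
# and converging to (3 + sqrt(5))/2 ~ 2.618.
import math

a = 2.0
terms = [a]
for _ in range(60):
    a = math.sqrt(3 * a - 1)
    terms.append(a)

# monotone increasing (checked on early terms, before roundoff matters)
assert all(x <= y for x, y in zip(terms[:30], terms[1:31]))
assert all(x < 3 for x in terms)                        # bounded above by 3
assert abs(terms[-1] - (3 + math.sqrt(5)) / 2) < 1e-9   # the limit we derived
```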
\begin{claim}
(Squeeze theorem example:)
\begin{align*}
\lim_{n \to \infty} \frac{\sin\left({n^2\cdot \pi^{n^e - 12n}} \cdot n^{n^{\iddots^n}}\right)}{n} = 0.
\end{align*}
\end{claim}
\begin{proof}
The idea of squeeze theorem examples is that they allow you to get rid of awful-looking things whenever they aren't materially changing where the sequence is actually going. Specifically, in our example here, the $\sin(\textrm{terrible things})$ part is awful to work with, but really isn't doing anything to our sequence: the relevant part is the denominator, which is going to infinity (and therefore forcing our sequence to go to 0).
Rigorously: we have that
\begin{align*}
-1 \leq \sin(\textrm{terrible things}) \leq 1,
\end{align*}
no matter what terrible things we've put into the $\sin$ function. Dividing the left and right by $n$, we have that
\begin{align*}
-\frac{1}{n} \leq \frac{\sin(\textrm{terrible things})}{n} \leq \frac{1}{n},
\end{align*}
for every $n$. Then, because $\lim_{n \to \infty} -\frac{1}{n} = \lim_{n \to\infty} \frac{1}{n} = 0$, the squeeze theorem tells us that
\begin{align*}
\lim_{n \to \infty} \frac{\sin\left({n^2\cdot \pi^{n^e - 12n}} \cdot n^{n^{\iddots^n}}\right)}{n} = 0
\end{align*}
as well.
\end{proof}
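The same squeeze can be watched numerically (an illustration, not a proof): whatever terrible argument we feed to $\sin$, the quotient stays trapped between $-\frac{1}{n}$ and $\frac{1}{n}$. The particular argument below is an arbitrary stand-in chosen for this sketch.

```python
# The squeeze in code: sin(anything)/n is trapped between -1/n and 1/n,
# and both fences head to 0.
import math

for n in range(1, 1000):
    val = math.sin(n**5 * 1.234) / n   # arbitrary "terrible" argument
    assert -1 / n <= val <= 1 / n      # the sandwich from the proof
```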
\section{Series: Definitions and Tools}
We define series as follows:
\begin{defn}
A sequence is called \textbf{summable} if the sequence $\{s_n\}^\infty_{n=1}$ of partial sums
\begin{align*}
s_n := a_1 + \ldots + a_n = \sum_{k=1}^n a_k
\end{align*}converges.
If it does, we then call the limit of this sequence the \textbf{series} associated to $\{a_n\}_{n=1}^\infty$, and denote this quantity by writing
\begin{align*}
\sum^\infty_{n=1} a_n.
\end{align*}
We say that a series $\sum^\infty_{n=1} a_n$ \textbf{converges} or \textbf{diverges} if the sequence $\left\{ \sum_{k=1}^n a_k \right\}^\infty_{n=1}$ of partial sums converges or diverges, respectively.
\end{defn}
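A series, then, is literally a limit of partial sums, and that is exactly how you would compute one numerically. A small sketch (with the standard geometric series $\sum 1/2^n$, chosen just for illustration):

```python
# A series is the limit of its partial sums: here s_n = sum of 1/2^k for
# k = 1..n, which approaches 1, so the series sums to 1.
partial_sums = []
s = 0.0
for n in range(1, 60):
    s += 1 / 2**n
    partial_sums.append(s)

assert abs(partial_sums[-1] - 1.0) < 1e-12   # s_n -> 1
```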
Just like sequences, we have a collection of various tools we can use to study whether a given series converges or diverges. Here are several such tools:
\begin{enumerate}
\item \textbf{Comparison test}: If $\{a_n\}^\infty_{n=1}, \{b_n\}^\infty_{n=1}$ are a pair of sequences such that $0 \leq a_n \leq b_n$ for every $n$, then the following statement is true:
\begin{align*}
\left( \sum^\infty_{n=1} b_n \textrm{ converges} \right) \Rightarrow \left( \sum^\infty_{n=1} a_n \textrm{ converges} \right) .
\end{align*}
When to use this test: when you're looking at something fairly complicated that either (1) you can bound above by something simple that converges, like $\sum 1/n^2$, or (2) that you can bound below by something simple that diverges, like $\sum 1/n$.
\item \textbf{Ratio test}: If $\{a_n\}^\infty_{n=1}$ is a sequence of positive numbers such that
\begin{align*}
\lim_{n \to \infty} \frac{a_{n+1}}{a_n} = r,
\end{align*}
then we have the following three possibilities:
\begin{itemize}
\item If $r < 1$, then the series $\sum^\infty_{n=1} a_n$ converges.
\item If $r > 1$, then the series $\sum^\infty_{n=1} a_n$ diverges.
\item If $r = 1$, then we have no idea; it could either converge or diverge.
\end{itemize}
When to use this test: when you have something that is growing kind of like a geometric series: so when you have terms like $2^n$ or $n!$.
\item \textbf{Integral test}: If $f(x)$ is a positive and monotonically decreasing function on $[N, \infty)$, then
\begin{align*}
\sum^{\infty}_{n=N} f(n) \textrm{ converges if and only if } \int^\infty_N f(x)dx \textrm{ converges.}
\end{align*}
When to use this test: whenever you have something that looks a lot easier to integrate than to sum. (In particular, this test instantly proves that $\displaystyle\sum_{n=1}^\infty \frac{1}{n^c}$ converges for $c>1$ and diverges for $c\leq 1$; so it answers every problem the ``$p$-series test'' solves, if that is a test you remember from your calculus/analysis classes!)
\item \textbf{Alternating series test}: If $\{a_n\}^\infty_{n=1}$ is a sequence of numbers such that
\begin{itemize}
\item $\lim_{n \to \infty} a_n = 0$ monotonically, and
\item the $a_n$'s alternate in sign,
\end{itemize}
then the series $\sum^\infty_{n=1} a_n$ converges.
When to use this test: when you have an alternating series.
\item \textbf{Absolute convergence $\Rightarrow$ convergence}: Suppose that $\{a_n\}^\infty_{n=1}$ is a sequence such that
\begin{align*}
\sum_{n=1}^\infty |a_n|
\end{align*}
converges. Then the series $\sum_{n=1}^\infty a_n$ also converges.
When to use this test: whenever you have a series whose terms are sometimes positive and sometimes negative, but not strictly alternating. (Pretty much every other test requires that your terms are positive, so you'll often apply this test and then apply one of the other tests to the series $\sum_{n=1}^\infty |a_n|$.)
\end{enumerate}
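The ratio test, in particular, is easy to watch numerically. Here is a sketch with the sequence $a_n = n/2^n$ (a choice made purely for illustration, not from the text): the ratios settle near $r = \frac{1}{2} < 1$, and sure enough the partial sums settle down (to 2, by a standard computation).

```python
# Numeric companion to the ratio test: for a_n = n / 2^n, the ratio
# a_{n+1}/a_n = (n+1)/(2n) -> 1/2 < 1, so the series converges (to 2).
def a(n):
    return n / 2**n

ratios = [a(n + 1) / a(n) for n in range(1, 200)]
assert abs(ratios[-1] - 0.5) < 1e-2                      # r is near 1/2
assert abs(sum(a(n) for n in range(1, 200)) - 2.0) < 1e-9  # partial sums -> 2
```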
To illustrate how to work with these definitions, we work a collection of examples here:
\section{Series: Example Calculations}
\begin{claim}
(Comparison test example): If $\{ a_n \}^\infty_{n=1}$ is a sequence of positive numbers such that the series $\sum^{\infty}_{n=1} a_n$ converges, then the series
\begin{align*}
\sum^{\infty}_{n=1} \frac{\sqrt{a_n}}{n}
\end{align*}
must also converge.
\end{claim}
\begin{proof}
To see why, simply notice that each individual term $\frac{\sqrt{a_n}}{n}$ in this series is just the \href{http://en.wikipedia.org/wiki/Geometric_mean}{geometric mean} of $a_n$ and $\frac{1}{n^2}$. Because both of the series $\sum_{n=1}^\infty a_n$ and $\sum_{n=1}^\infty \frac{1}{n^2}$ are convergent, we would expect their average (for any reasonable sense of average) to also converge: in particular, we would expect to be able to bound its individual terms above by some combination of the original terms $a_n$ and $\frac{1}{n^2}$!
In fact, we actually \textbf{can} do this, as illustrated below:
\begin{align*}
& 0 \leq \left( \sqrt{a_n} - \frac{1}{n}\right)^2\\
\Rightarrow \qquad & 0 \leq a_n + \frac{1}{n^2} - 2\frac{\sqrt{a_n}}{n}\\
\Rightarrow \qquad & \frac{\sqrt{a_n}}{n} \leq \frac{1}{2}\left(a_n + \frac{1}{n^2} \right) < a_n + \frac{1}{n^2}.\\
\end{align*}
Look at the series $\sum_{n=1}^\infty \left( a_n + \frac{1}{n^2} \right).$ Because both of the series
\begin{align*}
\sum_{n=1}^\infty a_n, \sum_{n=1}^\infty \frac{1}{n^2}
\end{align*}
converge, we can write
\begin{align*}
\sum_{n=1}^\infty \left( a_n + \frac{1}{n^2} \right) = \sum_{n=1}^\infty a_n + \sum_{n=1}^\infty \frac{1}{n^2}
\end{align*}
and therefore notice that this series also converges; therefore, by the comparison test, our original series $\sum^{\infty}_{n=1} \frac{\sqrt{a_n}}{n}$ must also converge.
\end{proof}
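The key inequality $\frac{\sqrt{a_n}}{n} \leq \frac{1}{2}\left(a_n + \frac{1}{n^2}\right)$ can also be spot-checked numerically. The choice $a_n = 1/n^3$ below is an assumption made purely for illustration; any positive sequence would do.

```python
# Spot-check of the AM-GM-style bound sqrt(a_n)/n <= (a_n + 1/n^2)/2
# for the (arbitrarily chosen) positive sequence a_n = 1/n^3.
import math

for n in range(1, 10_000):
    a_n = 1 / n**3
    # tiny tolerance guards the n = 1 equality case against roundoff
    assert math.sqrt(a_n) / n <= (a_n + 1 / n**2) / 2 + 1e-15
```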
\begin{claim}
(Ratio test example): The series
\begin{align*}
\sum^{\infty}_{n=1} \frac{2^n \cdot n!}{n^{n+1}}
\end{align*}
converges.
\end{claim}
\begin{proof}
Motivated by the presence of both an $n!$ and a $2^n$, we try the ratio test:
\begin{align*}
\frac{a_{n}}{a_{n-1}} & = \frac{ \frac{2^n \cdot n!}{n^{n+1} }}{\frac{2^{n-1} \cdot (n-1)!}{(n-1)^{n} }}\\
& = \frac{2^n \cdot n! \cdot (n-1)^{n}}{2^{n-1}\cdot (n-1)! \cdot n^{n+1} }\\
& = \frac{2 \cdot n \cdot (n-1)^{n}}{ n^{n+1} }\\
& = \frac{2 \cdot (n-1)^{n}}{ n^{n} }\\
& = 2 \cdot\left( \frac{n-1}{n}\right)^n \\
& = 2 \cdot \left( 1 - \frac{1}{n} \right)^n\\
\end{align*}
Here, we need one bit of knowledge that you may not have encountered before: the fact that the limit
\begin{align*}
\lim_{n \to\infty} \left( 1 + \frac{x}{n} \right)^n= e^x,
\end{align*}
and in particular that
\begin{align*}
\lim_{n \to\infty} \left( 1 - \frac{1}{n} \right)^n= \frac{1}{e}.
\end{align*} (Historically, I'm pretty certain that this is how $e$ was defined; so feel free to take it as a definition of $e$ itself.)
Applying this tells us that
\begin{align*}
\lim_{n \to \infty} \frac{a_n}{a_{n-1}} = \lim_{n \to \infty} 2 \cdot \left( 1 - \frac{1}{n} \right)^n = \frac{2}{e},
\end{align*}
which is less than 1. So the ratio test tells us that this series converges!
\end{proof}
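A quick numeric check of that final step (illustrative only): the simplified ratio $2\left(1 - \frac{1}{n}\right)^n$ really does settle near $\frac{2}{e} \approx 0.736$.

```python
# Numeric check that a_n / a_{n-1} = 2 (1 - 1/n)^n -> 2/e for the series
# sum 2^n n! / n^(n+1). We use the simplified closed form from the proof
# to avoid computing huge factorials directly.
import math

def ratio(n):
    return 2 * (1 - 1 / n) ** n

assert abs(ratio(10**6) - 2 / math.e) < 1e-5   # near 2/e ~ 0.7358 < 1
```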
\begin{claim}
(Alternating series test): The series
\begin{align*}
\sum^\infty_{n=1} \frac{(-1)^{n+1}}{n}
\end{align*}
converges.
\end{claim}
\begin{proof}
The terms of this series alternate in sign, and their absolute values $\frac{1}{n}$ decrease monotonically to 0. Therefore, we can apply the alternating series test to conclude that this series converges.
\end{proof}
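Numerically, the partial sums of this series oscillate above and below their limit, with the oscillation shrinking like the next omitted term. (That the limit is $\ln 2$ is a standard fact not proven here; the sketch below just uses it as a reference value.)

```python
# Partial sums of sum (-1)^(n+1)/n: the alternating series test promises
# convergence, and the error after N terms is at most the next term 1/(N+1).
import math

s, sums = 0.0, []
for n in range(1, 10_001):
    s += (-1) ** (n + 1) / n
    sums.append(s)

# the limit is ln 2 (standard fact); error bound 1/(N+1) < 1/10000
assert abs(sums[-1] - math.log(2)) < 1 / 10_000
```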
\begin{claim}
(Absolute convergence $\Rightarrow$ convergence): The series
\begin{align*}
\sum^\infty_{n=1} \frac{\cos^n(nx)}{n!}
\end{align*}
converges.
\end{claim}
\begin{proof}
We start by looking at the series composed of the absolute values of these terms:
\begin{align*}
\sum^\infty_{n=1} \frac{|\cos^n(nx)|}{n!}
\end{align*}
Because $|\cos(x)| \leq 1$ for all $x$, we can use the comparison test to notice that this series will converge if the series
\begin{align*}
\sum^\infty_{n=1} \frac{1}{n!}
\end{align*}
converges.
We can study this series with the ratio test:
\begin{align*}
\lim_{n \to \infty} \frac{\frac{1}{n!}}{\frac{1}{(n-1)!}} = \lim_{n \to \infty}\frac{1}{n} = 0,
\end{align*}
which is less than 1. Therefore this series converges, and therefore (by the comparison test + absolute convergence $\Rightarrow$ convergence) our original series
\begin{align*}
\sum^\infty_{n=1} \frac{\cos^n(nx)}{n!}
\end{align*}
converges.
\end{proof}
\begin{claim}
The series
\begin{align*}
\sum_{n=1}^\infty ne^{-n^2}
\end{align*}
converges.
\end{claim}
\begin{proof}
First, notice that because
\begin{align*}
\left( xe^{-x^2} \right)' = e^{-x^2} - 2x^2e^{-x^2} = (1-2x^2)e^{-x^2} < 0, \forall x>1,
\end{align*}
we know that this function is decreasing on all of $[1,\infty)$. As well, it is positive on $[1,\infty)$: so we can apply the integral test to see that this series converges iff the integral
\begin{align*}
\int_1^\infty xe^{-x^2} dx
\end{align*}
converges.
But this is not too hard to show! By using the $u$-substitution $u = x^2$ (so that $du = 2x\,dx$), we have that
\begin{align*}
\int_1^\infty xe^{-x^2} dx = \int_1^\infty \frac{e^{-u}}{2}du = -\frac{e^{-u}}{2} \Bigg|_1^\infty = \frac{1}{2e},
\end{align*}
and that in particular this integral converges. Therefore
\begin{align*}
\sum_{n=1}^\infty ne^{-n^2}
\end{align*}
must converge as well.
\end{proof}
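Numerically, the integral-test sandwich is visible here too (a sketch, not a proof): for a decreasing positive $f$, we have $\sum_{n=2}^\infty f(n) \leq \int_1^\infty f(x)\,dx \leq \sum_{n=1}^\infty f(n)$, and for $f(x) = xe^{-x^2}$ the integral evaluates to $\frac{1}{2e} \approx 0.184$ via $u = x^2$.

```python
# The integral-test sandwich for f(x) = x e^(-x^2), which is positive and
# decreasing on [1, infinity):
#   sum_{n>=2} f(n)  <=  int_1^inf f(x) dx  <=  sum_{n>=1} f(n).
import math

f = lambda n: n * math.exp(-n * n)
total = sum(f(n) for n in range(1, 60))   # tail beyond n = 60 is negligible
integral = 1 / (2 * math.e)               # int_1^inf x e^(-x^2) dx = 1/(2e)

assert total - f(1) <= integral <= total
```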
\section{Functions, Continuity, and Limits: Definitions}
\begin{defn}
If $f: X \to Y$ is a function between two subsets $X, Y$ of $\mathbb{R}$, we say that
\begin{align*}
\lim_{x \to a} f(x) = L
\end{align*}
if and only if \begin{enumerate}
\item (vague:) as $x$ approaches $a$, $f(x)$ approaches $L$.
\item (precise; wordy:) for any distance $\epsilon > 0$, there is some distance $\delta > 0$ such that whenever $x \in X$ is within $\delta$ of $a$ (but $x \neq a$), $f(x)$ is within $\epsilon$ of $L$.
\item (precise; symbols:)
\begin{align*}\forall \epsilon > 0, \exists \delta > 0 \textrm{ s.t. } \forall x \in X, (0 < |x-a| < \delta) \Rightarrow (\left| f(x) - L \right| < \epsilon ).
\end{align*}
\end{enumerate}
\end{defn}
\begin{defn}
A function $f: X \to Y$ is said to be \textbf{continuous} at some point $a \in X$ iff
\begin{align*}
\lim_{x \to a} f(x) = f(a).
\end{align*}
\end{defn}
Somewhat strange definitions, right? At least, the two ``rigorous'' definitions are somewhat strange: how do these epsilons and deltas connect with the rather simple concept of ``as $x$ approaches $a$, $f(x)$ approaches $f(a)$''? To see this a bit better, consider the following image:
\begin{center}
\includegraphics[width=2.5in]{limit_e_d.pdf}
\end{center}
This graph shows pictorially what's going on in our ``rigorous'' definition of limits and continuity: essentially, to rigorously say that ``as $x$ approaches $a$, $f(x)$ approaches $f(a)$'', we are saying that
\begin{itemize}
\item for any distance $\epsilon$ around $f(a)$ that we'd like to keep our function,
\item there is a neighborhood $(a - \delta, a + \delta)$ around $a$ such that
\item if $x$ lies within this neighborhood $(a - \delta, a + \delta)$, then $f(x)$ stays within $\epsilon$ of $f(a)$.
\end{itemize}
Basically, what this definition says is that if you pick values of $x$ sufficiently close to $a$, the resulting $f(x)$'s will be as close as you want to be to $f(a)$ -- i.e. that ``as $x$ approaches $a$, $f(x)$ approaches $f(a)$.''
This, hopefully, illustrates what our definition is trying to capture -- a concrete notion of something like convergence for functions, instead of sequences.
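The $\epsilon$-$\delta$ game can be played numerically, too. A sketch (illustrative only) for $f(x) = x^2$ at $a = 2$, where $L = 4$: the choice $\delta = \epsilon/5$ is an assumption made for this sketch, justified by $|x^2 - 4| = |x-2|\,|x+2| < 5|x - 2|$ whenever $x$ is within 1 of 2.

```python
# Epsilon-delta for f(x) = x^2 at a = 2: delta = eps/5 suffices for small
# eps, since |x^2 - 4| = |x-2| |x+2| < 5 |x-2| when x is near 2.
for eps in [0.5, 0.1, 0.01]:
    delta = eps / 5
    xs = [2 - delta * 0.99, 2 + delta * 0.99]  # sample points in the window
    assert all(abs(x * x - 4) < eps for x in xs)
```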
In practice, on the GRE, you are probably in trouble if you try to use this definition (gorgeous as it is.) Instead, what you likely want to do is try some of the following tools:
\begin{enumerate}
\item \textbf{Squeeze theorem}: If $f,g,h$ are functions defined on some interval $I \setminus \{a\}$\footnote{The set $X \setminus Y$ is simply the set formed by taking all of the elements in $X$ that are not elements in $Y$. The symbol $\setminus$, in this context, is called ``set-minus'', and denotes the idea of ``taking away'' one set from another.} such that
\begin{align*}
&f(x) \leq g(x) \leq h(x), \forall x \in I \setminus \{a\}, \\
&\lim_{x \to a} f(x) = \lim_{x \to a} h(x),\\
\end{align*}
then $\lim_{x \to a} g(x)$ exists, and is equal to the other two limits $\lim_{x \to a} f(x), \lim_{x \to a} h(x)$.
\item \textbf{Limits and arithmetic}: if $f,g$ are a pair of functions such that $\lim_{x \to a} f(x)$, $\lim_{x \to a} g(x)$ both exist, then we have the following equalities:
\begin{align*}
\lim_{x \to a}(\alpha f(x) + \beta g(x)) &= \alpha \left(\lim_{x \to a} f(x) \right) + \beta \left(\lim_{x \to a} g(x) \right)\\
\lim_{x \to a}(f(x)\cdot g(x)) &= \left(\lim_{x \to a} f(x) \right) \cdot \left(\lim_{x \to a} g(x) \right)\\
\lim_{x \to a}\left(\frac{f(x)}{g(x)}\right) &= \left(\lim_{x \to a} f(x) \right) / \left(\lim_{x \to a} g(x) \right), \textrm{ if } \lim_{x \to a} g(x) \neq 0. \\
\end{align*}
\item \textbf{Limits and composition}: if $f: Y \to Z$ is a function, continuous at $a$, such that $\lim_{y \to a} f(y) = L$, and $g: X \to Y$ is a function such that $\lim_{x \to b} g(x) = a$, then
\begin{align*}
\lim_{x \to b} f(g(x)) = L.
\end{align*}
Specifically, if both functions are continuous, their composition is continuous.
\item \textbf{L'H\^opital's rule}: If $f(x)$ and $g(x)$ are a pair of differentiable functions such that either
\begin{itemize}
\item $\lim_{x \to a} f(x) = 0$ and $\lim_{x \to a} g(x) = 0$, or
\item $\lim_{x \to a} f(x) = \pm \infty$ and $\lim_{x \to a} g(x) = \pm \infty,$
\end{itemize}
then $\lim_{x \to a} \frac{f(x)}{g(x)} = \lim_{x \to a} \frac{f'(x)}{g'(x)}, $ whenever the second limit exists.
\end{enumerate}
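L'H\^opital's rule is also easy to watch numerically. A sketch (not from the text, chosen for its familiarity) on the classic $\frac{0}{0}$ case $\frac{\sin x}{x}$: both the original quotient and the quotient of derivatives, $\frac{\cos x}{1}$, head to the same limit 1 as $x \to 0$.

```python
# L'Hopital's rule, numerically, on sin(x)/x as x -> 0 (a 0/0 form):
# f/g and f'/g' = cos(x)/1 both approach 1, at comparable speed.
import math

for x in [0.1, 0.01, 0.001]:
    assert abs(math.sin(x) / x - 1) < x * x   # f/g near 1 (error ~ x^2/6)
    assert abs(math.cos(x) - 1) < x * x       # f'/g' near 1 (error ~ x^2/2)
```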
\section{Limits: Examples}
\begin{exmp}
\begin{align*}
\lim_{x \to 0} x^2\sin(1/x) =0.\\
\end{align*}
\end{exmp}
\begin{proof}
So: for all $x \in \mathbb{R}, x \neq 0$, we have that
\begin{align*}
& -1 \leq \sin(1/x) \leq 1\\
\Rightarrow & -x^2 \leq x^2 \sin(1/x) \leq x^2; \\
\end{align*}
thus, by the squeeze theorem, as the limit as $x \to 0$ of both $-x^2$ and $x^2$ is 0,
\begin{align*}
\lim_{x \to 0} x^2\sin(1/x) =0\\
\end{align*}
as well.
\end{proof}
\begin{exmp}
\begin{align*}
\lim_{x \to a} \sin(1/x^2) = \sin(1/a^2),
\end{align*}
if $a \neq 0$.
\end{exmp}
\begin{proof}
By our work earlier in this lecture, $1/x^2$ is continuous at any value $a \neq 0$, and from class $\sin(x)$ is continuous everywhere: so their composition, $\sin(1/x^2)$, is continuous wherever $x \neq 0$. Thus,
\begin{align*}
\lim_{x \to a} \sin(1/x^2) = \sin(1/a^2),
\end{align*}
as claimed.
\end{proof}
\begin{exmp}
Show that the limit
\begin{align*}
\lim_{x \to 0} \frac{(1-x)^x - 1 + x^2}{x^3}
\end{align*}
converges to $-1/2$.
\end{exmp}
\begin{proof}
We bash this limit repeatedly with L'H\^opital's rule. First, before we can apply L'H\^opital's rule, we must check that its conditions apply. The functions contained in the numerator and denominator are all infinitely differentiable near 0, so this will never be a stumbling block: furthermore, because the numerator and denominator are both continuous/defined at 0, we can evaluate their limits at 0 by just plugging in 0: i.e.
\begin{align*}
&\lim_{x \to 0} (1-x)^x - 1 + x^2 = (1-0)^0 - 1 + 0^2 = 1-1 = 0, \textrm{ and}\\
&\lim_{x \to 0} x^3 = 0^3 = 0.
\end{align*}
So we've satisfied the conditions for L'H\^opital's rule, and can apply it to our limit:
\begin{align*}
\lim_{x \to 0} \frac{(1-x)^x - 1 + x^2}{x^3} &=_{L'H} \lim_{x \to 0} \frac{\frac{d}{dx} \left((1-x)^x - 1 + x^2\right)}{\frac{d}{dx}\left(x^3\right)}.\\
\end{align*}
At this point, we recall how to differentiate functions of the form $f(x)^{g(x)}$, where $f(x) > 0$, by using the identity
\begin{align*}
(f(x))^{g(x)} &= e^{\ln(f(x)) \cdot g(x)}\\
\Rightarrow \frac{d}{dx} (f(x))^{g(x)} &= \frac{d}{dx} e^{\ln(f(x)) \cdot g(x)} \\
&= e^{\ln(f(x)) \cdot g(x)} \cdot \left( \frac{g(x)}{f(x)} \cdot f'(x) + g'(x)\ln(f(x))\right).
\end{align*}
In particular, we can rewrite $(1-x)^x$ as $e^{\ln(1-x) \cdot x}$, which will let us just differentiate using the chain rule:
\begin{align*}
\lim_{x \to 0} \frac{(1-x)^x - 1 + x^2}{x^3} &=_{L'H} \lim_{x \to 0} \frac{\frac{d}{dx} \left((1-x)^x - 1 + x^2\right)}{\frac{d}{dx}\left(x^3\right)}\\
&= \lim_{x \to 0} \frac{\frac{d}{dx} \left(e^{\ln(1-x) \cdot x} - 1 + x^2\right)}{\frac{d}{dx}\left(x^3\right)}\\
&= \lim_{x \to 0} \frac{ e^{\ln(1-x) \cdot x}\cdot \left(\ln(1-x) + \frac{x}{x-1} \right) + 2x}{3x^2}.
\end{align*}
Again, both the numerator and denominator are continuous, and plugging in 0 up top yields $e^{\ln(1) \cdot 0} \cdot \left(\ln(1) + \frac{0}{-1} \right) + 2\cdot 0 = 0$, while on the bottom we also get 0. Therefore, we can apply L'H\^opital's rule again to get that our limit is just
\begin{align*}
& \lim_{x \to 0} \frac{\frac{d}{dx} \left( e^{\ln(1-x) \cdot x}\cdot \left(\ln(1-x) + \frac{x}{x-1} \right) + 2x\right)}{\frac{d}{dx} \left(3x^2\right)}\\
=& \lim_{x \to 0} \frac{ e^{\ln(1-x) \cdot x}\cdot \left(\ln(1-x) + \frac{x}{x-1} \right)^2 + e^{\ln(1-x) \cdot x}\cdot\left( -\frac{1}{1-x} - \frac{1}{(x-1)^2} \right) + 2}{6x}
\end{align*}
Again, the top and bottom are continuous near 0, and at 0 the top is
\begin{align*}
e^{\ln(1-0) \cdot 0}\cdot \left(\ln(1-0) + \frac{0}{0-1} \right)^2 + e^{\ln(1-0) \cdot 0}\cdot\left( -\frac{1}{1-0} - \frac{1}{(0-1)^2} \right) + 2 = 0 - 2 + 2 = 0,
\end{align*}
while the bottom is also 0. So, we can apply L'H\^opital again! This tells us that our limit is in fact
\begin{align*}
& \lim_{x \to 0} \frac{ \frac{d}{dx} \left(e^{\ln(1-x) \cdot x}\cdot \left(\ln(1-x) + \frac{x}{x-1} \right)^2 + e^{\ln(1-x) \cdot x}\cdot\left( -\frac{1}{1-x} - \frac{1}{(x-1)^2} \right) + 2\right)}{\frac{d}{dx} \left(6x\right)}\\
=& \lim_{x \to 0} \frac{\begin{array}{cc} e^{\ln(1-x) \cdot x}\cdot \left(\ln(1-x) + \frac{x}{x-1} \right)^3 & + 3e^{\ln(1-x) \cdot x}\cdot\left(\ln(1-x) + \frac{x}{x-1} \right)\cdot \left( -\frac{1}{1-x} - \frac{1}{(x-1)^2} \right) \\ & +e^{\ln(1-x) \cdot x}\cdot \left(\frac{-1}{(x-1)^2}+ \frac{2}{(x-1)^3} \right) \end{array} }{6}.
\end{align*}
Again, the top and bottom are made out of things that are continuous at 0. Plugging in 0 to the top \textit{this time} gives us $-3$, while the bottom gives us 6: therefore, the limit is just
\begin{align*}
\frac{-3}{6} = -\frac{1}{2}.
\end{align*}
So we're done! (In class, I did a less-awful L'H\^opital bash than this to save time. I do this here to illustrate just how many times you may have to apply L'H\^opital to get an answer, though the average GRE problem will be less messy than this calculation!)
\end{proof}
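As a final sanity check on all that differentiation (illustrative only): evaluating the original quotient at small $x$ does indeed hover near $-\frac{1}{2}$. (We stay away from extremely small $x$, where floating-point cancellation in the numerator would swamp the signal.)

```python
# Numeric check of the L'Hopital computation: ((1-x)^x - 1 + x^2)/x^3
# approaches -1/2 as x -> 0.
for x in [1e-2, 1e-3]:
    val = ((1 - x) ** x - 1 + x * x) / x**3
    assert abs(val - (-0.5)) < 0.1
```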
\end{document}