April 20, 2017

Sequences, Hausdorff Spaces and Nets

Contents

  1. Sequences
  2. Hausdorff Spaces
  3. Nets

I'm now going to talk about sequences and nets, which often provide an alternative way of describing topological phenomena. I'll also talk about Hausdorff spaces, which have all sorts of nice properties. I was originally planning to include filters in this discussion as well, but I think if I did that this post might become long enough to break the internet.

Sequences

If you've taken a calculus class (or maybe even if you haven't) then you probably already have some notion of what sequences are. They're basically just lists of elements that go on forever. For instance,

$$\begin{gather}
\begin{aligned}
(0,1,2,3,4,5,6,7,\dotsc)\\
(1,1,2,3,5,8,13,\dotsc)\\
(\text{cat},\text{cat},\text{cat},\text{cat},\dotsc)
\end{aligned}
\end{gather}$$

are all sequences. The first two have entries in $\mathbb{N}$ and the third takes values in some set of animals.

Notice that there is always one entry for each natural number. That is, there is a zeroth entry, a first entry, a second entry, and so on. The order in which these entries appear does matter, so we put them in parentheses rather than curly braces to distinguish them from sets. Sequences differ from countably infinite sets in two main ways: they are ordered, and the same point can appear more than once. This leads us to the following rigorous definition of a sequence:

Definition. A sequence in a topological space $X$ is a function $x:\mathbb{N}\to X$.

It is perhaps a bit confusing to actually think of sequences as functions. The definition above is simply meant to give the "ordered list of points" idea some rigorous footing. We generally write $x_n$, rather than $x(n)$, to denote the $n$th term in a sequence. This means we can write a sequence as $(x_0,x_1,x_2,\dotsc)$. This is sometimes shortened to either $(x_n)_{n=0}^\infty$ or $(x_n)_{n\in\mathbb{N}}$.

Next, let's talk about convergence. This can be a tricky business, and it is the bane of many Calculus II students' existence. The concept of convergence is not itself terribly complicated — it is the process of figuring out whether a specific sequence converges which can sometimes be unreasonably challenging. To start, let's look at convergence in metric spaces so that we can make use of the familiar notion of distance.

Definition. A sequence $(x_n)_{n\in\mathbb{N}}$ in a metric space $X$ converges to a point $x\in X$ if for every real number $\epsilon>0$ there is some natural number $N$ for which $d(x,x_n)< \epsilon$ whenever $n>N$.

Definition. If a sequence $(x_n)_{n\in\mathbb{N}}$ converges to a point $x$, we say that $x$ is the limit of that sequence and we write $\lim\limits_{n\to\infty}x_n=x$.

That's a bit of a mouthful, so let's spend a little bit of time making sure we know what we're getting ourselves into. Essentially, what I mean when I say that a sequence converges to a point $x$ is that eventually everything in the sequence becomes as close to $x$ as I want. More precisely, given $\epsilon > 0$, I want everything beyond the $N$th entry in the sequence to be within the open ball $B(x,\epsilon)$, where I get to choose $N$. If I can find such an $N$ for every $\epsilon$, then the sequence converges. Generally, $N$ will need to be very large when $\epsilon$ is very close to zero.

Example. Consider the sequence $(x_n)_{n=1}^\infty$ in $\mathbb{R}$ where each $x_n=\frac{1}{n}$. We can visualize this sequence in the following manner:
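
If you'd like to draw this picture yourself, here's a minimal matplotlib sketch (my own throwaway code, with names chosen purely for illustration) that plots the first twenty terms of the sequence along with the curve they sit on:

```python
import numpy as np
import matplotlib.pyplot as plt

# Terms of the sequence x_n = 1/n for n = 1, ..., 20.
n = np.arange(1, 21)
x_n = 1.0 / n

# The curve f(x) = 1/x that the terms lie on.
xs = np.linspace(1, 20, 400)
plt.plot(xs, 1.0 / xs, linewidth=1, label="f(x) = 1/x")

# The sequence itself, plotted as isolated points.
plt.scatter(n, x_n, zorder=3, label="x_n = 1/n")

plt.axhline(0, color="gray", linewidth=0.5)  # the value the terms approach
plt.xlabel("n")
plt.ylabel("x_n")
plt.legend()
plt.show()
```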

Notice that the points in the sequence all lie on the graph of the function $f:\mathbb{R}^+\to\mathbb{R}$ defined by $f(x)=\frac{1}{x}$. This is not surprising, considering we originally defined sequences as functions themselves. That is, this sequence is really the restriction of $f$ to the positive integers, $f\negmedspace\mid_{\mathbb{Z}^+}:\mathbb{Z}^+\to\mathbb{R}$. If you have any experience with this function, you'll believe me when I say that it gets closer and closer to zero as its input grows, without ever reaching it. It makes sense that our sequence does the same, so we might guess that it converges to zero. Let's prove this!

Theorem. The sequence $(x_n)_{n=1}^\infty$ given by $x_n=\frac{1}{n}$ converges to $0$.

Proof. Choose $\epsilon>0$ and let $N$ be any natural number with $N>\frac{1}{\epsilon}$. If $n>N$, then certainly $n>\frac{1}{\epsilon}$. Thus,

$$\begin{aligned}
d(x_n, 0) &= \vert x_n - 0\vert \\
&= \tfrac{1}{n} \\
&< \epsilon.
\end{aligned}$$

You don't really need to remember the proof of this fact, although it's incredibly easy to reproduce, since the candidate for $N$ is more obvious than usual. Just remember that $\lim\limits_{n\to\infty}\frac{1}{n}=0$, which should hopefully make a lot of sense to you anyway. This is an important sequence which we will occasionally use in the future.
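
If you like, you can also sanity-check the choice $N>\frac{1}{\epsilon}$ numerically. The sketch below (a finite spot-check, not a substitute for the proof) picks the smallest such $N$ for a few values of $\epsilon$ and confirms that all of the terms it inspects past $N$ land within $\epsilon$ of $0$:

```python
import math

def candidate_N(eps: float) -> int:
    """Smallest natural number N with N > 1/eps, as in the proof."""
    return math.floor(1 / eps) + 1

# For a few sample tolerances, confirm that every inspected term
# beyond N lies within eps of the claimed limit 0.
for eps in [0.5, 0.1, 0.01, 0.001]:
    N = candidate_N(eps)
    ok = all(abs(1 / n - 0) < eps for n in range(N + 1, N + 10_000))
    print(f"eps={eps}: N={N}, all inspected terms within eps of 0: {ok}")
```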

Also, notice that the sequence we just looked at doesn't actually quite fit the definition I gave for sequences. That is, it doesn't have an entry for every natural number (in particular, there is no $x_0$). We could easily remedy that by rewriting each term as $\frac{1}{n+1}$ and shifting each entry's index down by one. I chose to write it the way I did because it looks a bit nicer. It is fairly common to allow sequences to start at any index we like, since we can always translate them to start at zero using a similar substitution.

Now, in a calculus or analysis class you would study lots of properties and characteristics of sequences in $\mathbb{R}$ and learn a bunch of tricks to help you show that certain types of sequences in $\mathbb{R}$ converge. However, all of that stuff bores me and I want to talk generally about convergent sequences in topological spaces, not just about $\mathbb{R}$ with the standard topology. This will require a slight reworking of the definition of convergence to eliminate the concept of distance that we have in metric spaces.

Definition. A sequence $(x_n)_{n\in\mathbb{N}}$ in a topological space $X$ converges to a point $x\in X$ if for every neighborhood $U$ of $x$, there is a natural number $N$ for which $x_n\in U$ whenever $n>N$.

This definition basically replaces open balls with neighborhoods, and shouldn't require too much explanation other than that. It should be clear that when $X$ is a metric space, this definition is equivalent to the old one: every open ball around $x$ is a neighborhood of $x$, and every neighborhood of $x$ contains an open ball around $x$, since open sets are just unions of open balls.

Definition. If a sequence $(x_n)_{n\in\mathbb{N}}$ in a topological space converges to a point $x$, we say that $x$ is a limit of that sequence and we write $\lim\limits_{n\to\infty}x_n=x$.

Notice that I've said "a limit," rather than "the limit" like I did for metric spaces. That's because a convergent sequence in a topological space might actually converge to multiple points.

😱

The simplest example of this phenomenon that I can think of is as follows:

Example. Let $X$ be any nonempty set equipped with the trivial topology.[1] Then for any point $x\in X$, the only neighborhood of $x$ is $X$ itself. Certainly for any sequence $(x_n)_{n\in\mathbb{N}}$ in $X$, all terms of the sequence are in $X$. It follows that every sequence in $X$ converges to every point of $X$.
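
To see this in a form you can experiment with, here's a small brute-force sketch on a three-point set with the trivial topology. The convergence test only inspects finitely many terms, so it's an illustration rather than a proof, but it makes the point:

```python
# A three-point set with the trivial (indiscrete) topology:
# the only open sets are the empty set and X itself.
X = {"a", "b", "c"}
trivial_topology = [set(), X]

def eventually_in(seq, U, n_max=1000):
    """Finite spot-check of "the sequence is eventually in U": is there a
    cutoff N (well below n_max) past which every inspected term lies in U?"""
    return any(all(seq(n) in U for n in range(N + 1, n_max))
               for N in range(n_max // 2))

def converges_to(seq, limit, topology, n_max=1000):
    """Spot-check of convergence: the sequence must eventually be inside
    every open set of the topology that contains `limit`."""
    return all(eventually_in(seq, U, n_max) for U in topology if limit in U)

# A sequence that bounces around X forever.
seq = lambda n: ["a", "b", "c"][n % 3]

# The only neighborhood of any point is X itself, so the sequence
# "converges" to every single point of the space.
for p in sorted(X):
    print(p, converges_to(seq, p, trivial_topology))
```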

This might strike you as a bit odd, and I'd agree with you. At the very least, this business of every sequence converging to every point is not very desirable behavior for a topological space. After all, we'd like limits of sequences to be unique. Luckily for us, there is a specific type of space for which this behavior is guaranteed!

Hausdorff Spaces

Definition. A topological space $X$ is Hausdorff[2] if for every pair of points $x,y\in X$ with $x\ne y$, there exists a neighborhood $U$ of $x$ and a neighborhood $V$ of $y$ such that $U\cap V=\varnothing$.
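
A definition like this is easy to test by brute force on tiny finite examples. The sketch below (with made-up three-point topologies) checks the condition directly:

```python
from itertools import combinations

def is_hausdorff(X, topology):
    """Brute-force check: every pair of distinct points must have
    disjoint open neighborhoods in the given (finite) topology."""
    return all(any(x in U and y in V and not (U & V)
                   for U in topology for V in topology)
               for x, y in combinations(sorted(X), 2))

X = {"a", "b", "c"}
trivial = [set(), X]  # only the empty set and X are open
discrete = [set(c) for r in range(len(X) + 1)
            for c in combinations(sorted(X), r)]  # every subset is open

print(is_hausdorff(X, trivial))   # False: distinct points share their only neighborhood
print(is_hausdorff(X, discrete))  # True: the singletons do the job
```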

So in a Hausdorff space, distinct points have disjoint neighborhoods. This is clearly not true for spaces with two or more points under the trivial topology, so we're off to a good start. Before I show how this property guarantees uniqueness of limits, I will prove that every metric space is Hausdorff.

Theorem. Let $X$ denote a metric space with metric $d:X\times X\to\mathbb{R}$. Then $X$ is Hausdorff when equipped with the topology induced by the metric $d$.

Proof. Choose $x,y\in X$ with $x\ne y$. By the definition of a metric, $d(x,y)>0$. Let $r=\frac{d(x,y)}{2}$ and define $U=B(x,r)$ and $V=B(y,r)$. It suffices to show that $U$ and $V$ are disjoint, which we will argue by contradiction.

Suppose $U\cap V\ne\varnothing$. Then there exists some point $p\in U\cap V$, so $d(x,p)< r$ and $d(y,p)< r$ by the definitions of these open balls. Thus,

$$\begin{aligned}
d(x,p)+d(y,p) &< 2r \\
&= d(x,y),
\end{aligned}$$

which violates the triangle inequality. We have reached a contradiction, so the proof is complete.
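
For a concrete feel for this construction, here's a purely illustrative sketch in the plane with the usual Euclidean metric: it computes the half-distance radius $r$ and confirms (by random sampling) that none of the sampled points lands in both balls.

```python
import math
import random

def d(p, q):
    """The usual Euclidean distance on the plane."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

x, y = (0.0, 0.0), (3.0, 4.0)
r = d(x, y) / 2                   # the half-distance radius from the proof

# Sample a bunch of random points and confirm none of them lies
# in both B(x, r) and B(y, r).
random.seed(0)
samples = [(random.uniform(-6, 9), random.uniform(-6, 9)) for _ in range(100_000)]
overlap = [p for p in samples if d(x, p) < r and d(y, p) < r]
print(r, len(overlap))            # 2.5 0
```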

This tells us right away that things like $\mathbb{R}$ with the standard topology are Hausdorff. Now if we can just show that convergent sequences in Hausdorff spaces have unique limits, then I will have been justified in saying "the limit" for metric spaces earlier. Let's prove this right now.

Theorem. Let $X$ be a nonempty Hausdorff space and let $(x_n)_{n\in\mathbb{N}}$ be a convergent sequence in $X$. Then $(x_n)_{n\in\mathbb{N}}$ has exactly one limit.

Proof. Since $(x_n)_{n\in\mathbb{N}}$ is convergent, we know that it has at least one limit. Thus, it suffices to show that it also has at most one limit. We proceed by contradiction.

Suppose $(x_n)_{n\in\mathbb{N}}$ converges to both $p_1$ and $p_2$, where $p_1\ne p_2$. Since $X$ is Hausdorff, there exist disjoint neighborhoods $U_1$ of $p_1$ and $U_2$ of $p_2$. From the definition of convergence, there are natural numbers $N_1$ and $N_2$ for which $x_n\in U_1$ whenever $n>N_1$ and $x_n\in U_2$ whenever $n>N_2$. Let $N=\max\{N_1,N_2\}$. Then clearly $x_n\in U_1\cap U_2$ whenever $n>N$. This is a contradiction, since $U_1$ and $U_2$ are disjoint.

So Hausdorff spaces are desirable in that if a sequence converges, it does so as we'd generally expect it to. I won't go into this in too much detail right now, but all of the things we typically think of as "space" are Hausdorff. In fact, the definition of a manifold explicitly requires this property, which we shall see if I ever manage to get that far.

There are a few more properties of Hausdorff spaces which I'd like to prove before moving on, just because they're interesting. The first is the fact that singleton sets in Hausdorff spaces are closed. Its proof is quite straightforward.

Theorem. Let $X$ be a nonempty Hausdorff space. Then for every point $x\in X$, the set $\{x\}$ is closed.

Proof. Since $X$ is Hausdorff, for every $y\in X$ with $y\ne x$ there exist disjoint neighborhoods $U_y$ of $x$ and $V_y$ of $y$. Note that each $V_y$ contains $y$ but not $x$, since $V_y$ is disjoint from $U_y$ and $x\in U_y$. It follows from the union lemma that

$$\bigcup\limits_{y\ne x}V_y = X-\{x\},$$

and this set is open because it is the union of open sets. Thus, $\{x\}$ is closed because its complement is open.

The next property is a little bit more interesting.

Theorem. Let $X$ and $Y$ denote topological spaces and suppose $Y$ is Hausdorff. Then the graph of any continuous function $f:X\to Y$, given by

$$G=\left\{\big(x,f(x)\big)\mid x\in X\right\}$$

is closed in the product space $X\times Y$.

Proof. It suffices to show that $(X\times Y)-G$ is open in $X\times Y$. Choose $(x,y)\in (X\times Y)-G$. Clearly $y\ne f(x)$, so because $Y$ is Hausdorff there exist disjoint neighborhoods $U$ of $y$ and $V$ of $f(x)$. Furthermore, because $f$ is continuous we have that $f^{-1}[V]$ is open in $X$. Notice that $x\in f^{-1}[V]$ by definition.

Next, choose any point $\big(g,f(g)\big)\in G$, and let us consider separately the cases where $g\in f^{-1}[V]$ and $g\notin f^{-1}[V]$. If $g\in f^{-1}[V]$ then by definition $f(g)\in V$. Thus, $f(g)\notin U$ because $U$ and $V$ are disjoint. It follows that $\big(g,f(g)\big)\notin f^{-1}[V]\times U$. If, on the other hand, $g\notin f^{-1}[V]$ then it follows immediately that $\big(g,f(g)\big)\notin f^{-1}[V]\times U$ from the definition of the Cartesian product.

Either way, $\big(g,f(g)\big)\notin f^{-1}[V]\times U$ and so we have that $(f^{-1}[V]\times U)\cap G=\varnothing$. Clearly $f^{-1}[V]\times U$ is open as it is the product of open sets. Thus every point $(x,y)\in (X\times Y)-G$ is contained in an open set, namely $f^{-1}[V]\times U$, which is itself contained in $(X\times Y)-G$. It follows that $(X\times Y)-G$ is open in $X\times Y$, so $G$ is closed.

This is a pretty nice result, although it isn't too useful to us right now. At the very least, it tells us that continuous real-valued functions have closed graphs because $\mathbb{R}$ is Hausdorff. The next two theorems should immediately seem useful to you.

Theorem. Any subspace of a Hausdorff space is Hausdorff.

Proof. Let $A$ be a subspace of a Hausdorff space $X$ and choose distinct points $x,y\in A$. Then there exist disjoint neighborhoods $U$ of $x$ and $V$ of $y$ in $X$. It follows that $A\cap U$ is a neighborhood of $x$ in $A$ and $A\cap V$ is a neighborhood of $y$ in $A$. Furthermore,

$$\begin{aligned}
(A\cap U)\cap (A\cap V) &= A\cap (U\cap V) \\
&= A\cap\varnothing \\
&= \varnothing,
\end{aligned}$$

so $A$ is Hausdorff.

Theorem. The product of two Hausdorff spaces is Hausdorff.

Proof. Let $X$ and $Y$ denote Hausdorff spaces and choose distinct points $(x_1,y_1)$ and $(x_2,y_2)$ in $X\times Y$. Without loss of generality (the other case is so similar) suppose $x_1\ne x_2$. Then because $X$ is Hausdorff, there exist disjoint neighborhoods $U_1$ of $x_1$ and $U_2$ of $x_2$ in $X$. Note that $U_1\times Y$ and $U_2\times Y$ are both open in $X\times Y$, and that $(x_1,y_1)\in U_1\times Y$ while $(x_2,y_2)\in U_2\times Y$. Furthermore,

$$\begin{aligned}
(U_1\times Y)\cap (U_2\times Y) &= (U_1\cap U_2)\times Y \\
&= \varnothing\times Y \\
&= \varnothing,
\end{aligned}$$

so $X\times Y$ is Hausdorff.

It can be shown by induction that the product of any finite number of Hausdorff spaces is Hausdorff. It is also possible to show, in fact, that the product of any collection of Hausdorff spaces is Hausdorff, but I try to avoid talking about infinite Cartesian products unless I have no other choice.

Given that products and subspaces of Hausdorff spaces inherit Hausdorffness from their parents, you might be tempted to guess that quotients of Hausdorff spaces are Hausdorff. This is wrong in general, although I won't provide a counterexample because this post is already very long and I haven't even started discussing nets yet.

Unfortunately, before I get to nets I have a few more things about sequences that I would like to talk about. In particular, it would be a shame for me not to prove the following beautiful theorem for you.

Theorem. Let $X$ and $Y$ denote topological spaces and let $(x_n)_{n\in\mathbb{N}}$ be a sequence which converges to the point $x\in X$. Then for any continuous function $f:X\to Y$, the sequence $\big(f(x_n)\big)_{n\in\mathbb{N}}$ converges to the point $f(x)\in Y$.

Proof. Choose any neighborhood $U\subseteq Y$ of $f(x)$. Since $f$ is continuous, $f^{-1}[U]\subseteq X$ is open and clearly $x\in f^{-1}[U]$, so $f^{-1}[U]$ is a neighborhood of $x$. Since $(x_n)_{n\in\mathbb{N}}$ converges to $x$, there exists $N\in\mathbb{N}$ for which $x_n\in f^{-1}[U]$ whenever $n>N$. It follows that $f(x_n)\in U$ whenever $n>N$. Thus, $\big(f(x_n)\big)_{n\in\mathbb{N}}$ converges to $f(x)$.
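
As a quick numerical illustration in $\mathbb{R}$ (nothing more than a sanity check), applying the continuous function $e^x$ to the terms of $\left(\frac{1}{n}\right)$ produces a sequence marching toward $e^0=1$:

```python
import math

f = math.exp                       # a continuous function on the reals
for n in [1, 10, 100, 1_000, 10_000]:
    x_n = 1 / n                    # the sequence converging to 0
    print(n, x_n, f(x_n))          # f(x_n) marches toward f(0) = 1.0
```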

This theorem is great because it tells us that continuous functions preserve convergent sequences! It would be even better if the converse were true, because that would give us yet another alternative characterization of continuous functions. Unfortunately, this is not the case without additionally assuming that the domain is first-countable (a property that I haven't mentioned yet, but that every metric space has). For general spaces, it is possible for functions which aren't continuous to preserve convergent sequences.

This hints that sequences might not be exactly the right tool to study continuity. The problem is that they are too specific a concept. Let's next look at a generalization of sequences that will solve all of our problems.

Nets

Before I start trying to explain nets to you, let me state the main theorem we eventually want to prove about them.

Theorem. Let $X$ and $Y$ denote topological spaces. A function $f:X\to Y$ is continuous if and only if for every net $(x_a)_{a\in A}$ in $X$ that converges to a point $x$, the net $\big(f(x_a)\big)_{a\in A}$ converges to $f(x)$.

In stating this theorem of things to come, I've already given away a fair amount of information about the nature of nets. Namely, the fact that nets look almost exactly like sequences, except perhaps that their entries are indexed over sets other than $\mathbb{N}$. However, nets aren't indexed over just any kind of set — after all, we would still like the entries of a net to progress in some order. Thus, we will define them over sets with a specific type of relation:

Definition. A preorder on a set $X$ is a reflexive and transitive relation.

That is, a preorder on $X$ is a relation $\le$ such that $x\le x$ for every $x\in X$, and $x\le z$ whenever $x\le y$ and $y\le z$.

Definition. A directed set is a nonempty set $X$ together with a preorder $\le$ which satisfies the additional property that for any $x,y\in X$, there exists $z\in X$ such that $x\le z$ and $y\le z$.

A shorter way of describing this final property of directed sets might be to say that every pair of elements has an upper bound. This ensures that, although some pairs of elements may not be related to each other, they are at least related to some third element. In turn, this guarantees that strange behavior, as in the following example, does not occur.

Example. Just to make sure there's no confusion, this will be an example of a set with a preorder that is not a directed set, because pairs of elements will not necessarily have upper bounds.

We will define preorders $\le_1$ on the set $\mathbb{N}\times\{1\}$ and $\le_2$ on the set $\mathbb{N}\times\{2\}$ that act similarly to the standard "less than or equal to" relation on $\mathbb{N}$. Recall that we previously defined $\le$ on $\mathbb{N}$ so that $n\le m$ if and only if $m=n+k$ for some $k\in\mathbb{N}$.

Notice that every element of $\mathbb{N}\times\{1\}$ is of the form $(n,1)$ for some $n\in\mathbb{N}$. Thus it makes sense to define $\le_1$ using the rule that $(n,1)\le_1 (m,1)$ if and only if $n\le m$. Similarly, we define $\le_2$ using the rule that $(n,2)\le_2 (m,2)$ if and only if $n\le m$.

It is obvious that both $\le_1$ and $\le_2$ are preorders on their respective sets because they both inherit their reflexivity and transitivity from $\le$.

Let's use these to define a preorder on $(\mathbb{N}\times\{1\})\cup(\mathbb{N}\times\{2\})$. We can define $\le_3$ on this union using the rule that $n\le_3 m$ if and only if either $n\le_1 m$ or $n\le_2 m$. Using the rigorous set-theoretic definition of relations, we could alternatively define this by $\le_3=\le_1\cup\le_2$. Again, it's easy to see that $\le_3$ is a preorder because it inherits its reflexivity and transitivity from $\le_1$ and $\le_2$.

Basically, what we have is two disjoint copies of something that behaves exactly like $\mathbb{N}$, glued together side by side but related to each other in absolutely no way. In particular, if we choose $n_1\in\mathbb{N}\times\{1\}$ and $n_2\in\mathbb{N}\times\{2\}$, there is certainly no element of $(\mathbb{N}\times\{1\})\cup (\mathbb{N}\times\{2\})$ which serves as an upper bound for both $n_1$ and $n_2$. Thus, this example does not constitute a directed set.

Example. On the other hand, the set $\mathbb{N}$ of natural numbers equipped with $\le$, the standard "less than or equal to" relation, is a directed set. I proved in my post on quotient sets that this relation is reflexive and transitive, so it is certainly a preorder. The fact that every pair of natural numbers has an upper bound is easy to show: for any $x,y\in\mathbb{N}$, choose $z=\max\{x,y\}$. Then clearly $x\le z$ and $y\le z$. This is a particularly easy example because any two natural numbers are comparable.
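
To make the directedness condition concrete, here's a brute-force check of all three requirements on small finite truncations of the two examples above (finite truncations only, so this is an illustration rather than a proof):

```python
from itertools import product

def is_directed(elements, leq):
    """Brute-force check of the directed-set conditions on a finite set:
    reflexivity, transitivity, and an upper bound for every pair."""
    if not all(leq(a, a) for a in elements):
        return False
    if not all(leq(a, c) for a, b, c in product(elements, repeat=3)
               if leq(a, b) and leq(b, c)):
        return False
    return all(any(leq(a, c) and leq(b, c) for c in elements)
               for a, b in product(elements, repeat=2))

# A finite chunk of (N, <=): directed, since max{x, y} is always an upper bound.
print(is_directed(range(10), lambda a, b: a <= b))          # True

# Two disjoint tagged copies of that chunk, related only within each copy:
# a pair drawn from different copies has no common upper bound.
two_copies = [(n, 1) for n in range(10)] + [(n, 2) for n in range(10)]
leq3 = lambda a, b: a[1] == b[1] and a[0] <= b[0]
print(is_directed(two_copies, leq3))                        # False
```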

Example. Another interesting directed set can be formed as follows. Let $X$ denote any nonempty topological space and pick a point $x\in X$. The set $N_x$ of all neighborhoods of $x$ forms a directed set when equipped with the preorder $\le$ defined by $U\le V$ if and only if $V\subseteq U$.

This relation is reflexive because for any neighborhood $U$ of $x$, it is clear that $U\subseteq U$ and so $U\le U$.

It is only a tad more difficult to see that $\le$ is transitive. Suppose we have neighborhoods $U$, $V$ and $W$ of $x$ for which $U\le V$ and $V\le W$. Then $W\subseteq V\subseteq U$, so certainly $W\subseteq U$. Thus, $U\le W$.

Lastly, we need to show that any pair of neighborhoods of $x$ has an upper bound, which in this case simply means they both contain a common neighborhood of $x$. Again, this is easy to show. Choose any two neighborhoods $U$ and $V$ of $x$. Clearly $x\in U\cap V$, and by the definition of a topology $U\cap V$ is open. Thus it is a neighborhood of $x$. It is obvious that $U\cap V\subseteq U$ and $U\cap V\subseteq V$, so $U\le U\cap V$ and $V\le U\cap V$.

Now that we have some examples of directed sets in our arsenal, it's finally time to define nets. You've likely already guessed how we'll proceed.

Definition. A net in a topological space $X$ is a function $x:A\to X$, where $A$ is any directed set.

Again, we generally write $x_a$ rather than $x(a)$, and we denote a net itself by $(x_a)_{a\in A}$. Since we've already established that $\mathbb{N}$ is a directed set, it should be clear that sequences are a special type of net.

Convergence of nets is extremely similar to convergence of sequences.

Definition. A net $(x_a)_{a\in A}$ in a topological space $X$ converges to a point $x\in X$ if for every neighborhood $U$ of $x$, there exists $b\in A$ for which $x_a\in U$ whenever $a\ge b$.

Definition. If a net $(x_a)_{a\in A}$ in a topological space converges to a point $x$, we say that $x$ is a limit of that net and we write $\lim x_a=x$.

It's fairly easy to come up with a convergent net that is not a sequence, using an example I've already given.

Example. Given a topological space $X$ and a point $x\in X$, let $N_x$ denote the directed set of neighborhoods of $x$ as detailed above. We can construct a net $(x_U)_{U\in N_x}$ by choosing a point $x_U\in U$ for each neighborhood $U$ of $x$. (Notice that this action requires the Axiom of Choice). Intuition tells us that this net should converge to $x$ because the neighborhoods of $x$ get "smaller" the further out we go in our directed set $N_x$. This claim is super easy to verify, so let's just do it.

Choose any neighborhood $U$ of $x$. From our construction of the net $(x_U)_{U\in N_x}$, it is clear that $x_U\in U$. Furthermore, for any neighborhood $V$ of $x$ with $V\ge U$, we have that $V\subseteq U$ and thus $x_V\in V\subseteq U$. It follows that $(x_U)_{U\in N_x}$ converges to $x$.
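
If you'd like to see this construction on something small enough to enumerate, here's a brute-force sketch in a made-up four-point space. It lists $N_x$, builds a net by choosing a point from each neighborhood, and checks the convergence condition directly:

```python
# A small made-up space: X = {1, 2, 3, 4} with a nested chain of open sets.
X = frozenset({1, 2, 3, 4})
topology = [frozenset(), frozenset({1}), frozenset({1, 2}),
            frozenset({1, 2, 3}), X]

x = 1
N_x = [U for U in topology if x in U]   # the directed set of neighborhoods of x
geq = lambda V, U: V <= U               # V >= U in the preorder means V is a subset of U

# Build the net by choosing one point from each neighborhood of x.
# (Here we just grab max(U); any choice of a point x_U in U would do.)
net = {U: max(U) for U in N_x}

def net_converges_to(net, p):
    """Check the definition directly: for every neighborhood U of p there must
    be an index b such that x_V lies in U for every index V with V >= b."""
    return all(any(all(net[V] in U for V in N_x if geq(V, b)) for b in N_x)
               for U in topology if p in U)

print(net_converges_to(net, x))         # True
```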

This post is already so ridiculously long that I'm just going to prove the theorem that I promised you and then be done. Unfortunately, the proof is a little bit on the longer side.

Theorem. Let $X$ and $Y$ denote topological spaces. Then a function $f:X\to Y$ is continuous if and only if for every net $(x_a)_{a\in A}$ in $X$ that converges to a point $x$, the net $\big(f(x_a)\big)_{a\in A}$ converges to $f(x)$.

Proof. The forward direction is practically identical to the proof of the analogous result for sequences. Suppose $f$ is continuous and that the net $(x_a)_{a\in A}$ converges to the point $x\in X$. Choose any neighborhood $U$ of $f(x)$. Since $f$ is continuous, $f^{-1}[U]\subseteq X$ is open and clearly $x\in f^{-1}[U]$, so $f^{-1}[U]$ is a neighborhood of $x$. Thus, there exists $b\in A$ for which $x_a\in f^{-1}[U]$ whenever $a\ge b$. It follows that $f(x_a)\in U$ whenever $a\ge b$, so the net $\big(f(x_a)\big)_{a\in A}$ converges to $f(x)$.

I will prove the reverse direction by contradiction. Suppose that for every net $(p_a)_{a\in A}$ that converges to $p$, the net $\big(f(p_a)\big)_{a\in A}$ converges to $f(p)$, but that $f$ is not continuous. Then there exist a point $x\in X$ and a neighborhood $V$ of $f(x)$ such that no neighborhood of $x$ is contained in $f^{-1}[V]$; otherwise, for every open $V\subseteq Y$ the preimage $f^{-1}[V]$ would contain a neighborhood of each of its points and hence be open, making $f$ continuous. Thus, for each $U\in N_x$ we may choose a point $x_U\in U-f^{-1}[V]$, giving a net $(x_U)_{U\in N_x}$ with $f(x_U)\notin V$ for every $U$. Now choose any neighborhood $W$ of $x$. For any neighborhood $T\ge W$ we have $T\subseteq W$, and so $x_T\in T\subseteq W$. It follows that $(x_U)_{U\in N_x}$ converges to $x$, and thus $\big(f(x_U)\big)_{U\in N_x}$ converges to $f(x)$. But then $f(x_U)$ must eventually lie in the neighborhood $V$ of $f(x)$, which contradicts the fact that no $f(x_U)$ lies in $V$.

So continuity is equivalent to the preservation of convergent nets, which is pretty cool. It's also true that being Hausdorff is equivalent to the existence of unique limits for nets, but I'm going to end this post here because it's really just getting ridiculous at this point.


  1. Recall that in the trivial topology the only open sets are $\varnothing$ and $X$. ↩︎

  2. Or separated, or $\mathbf{T}_2$. ↩︎