An archived instance of discourse for discussion in undergraduate Complex Variables.

What’s your favorite function or formula?

mark

My favorite formula is $e^{i\theta} = \cos(\theta)+i\sin(\theta)$ since it is so important in complex variables! We can prove it using power series:

\begin{align*}
e^{i\theta} &= \sum_{k=0}^{\infty} \frac{(i\theta)^k}{k!} \\
&= 1 + i\theta - \frac{\theta^2}{2!} - i\frac{\theta^3}{3!} + \frac{\theta^4}{4!} + i\frac{\theta^5}{5!} + \cdots \\
&= \left(1 - \frac{\theta^2}{2!}+ \frac{\theta^4}{4!} - \cdots\right) + i\left(\theta - \frac{\theta^3}{3!} +\frac{\theta^5}{5!} - \cdots\right) \\
&= \cos(\theta) + i\sin(\theta)
\end{align*}
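As a quick sanity check (not part of the original post), Python's `cmath` can compare both sides of the identity numerically at a few sample angles:

```python
import cmath
import math

# Compare e^{i*theta} with cos(theta) + i*sin(theta) at several angles;
# err holds the largest discrepancy observed.
angles = [0.0, 0.5, math.pi / 3, math.pi, 2.0]
err = max(abs(cmath.exp(1j * t) - complex(math.cos(t), math.sin(t)))
          for t in angles)
```

The discrepancy is on the order of machine epsilon, as the power-series argument predicts.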




What's your favorite formula or function?


Note: This is a required question - you get 10 points just for doing it. Your response must include some LaTeX formatting - both inline and displaystyle.

dgallimo

My favorite mathematical tool is the two-dimensional counterclockwise rotation matrix, since it greatly simplifies the manipulation of vectors. This matrix rotates the unit vectors $\hat{i}$ and $\hat{j}$ through an angle $\theta$ to positions $\hat{i'}=\cos\theta\hat{i}+\sin\theta\hat{j}$ and $\hat{j'}=-\sin\theta\hat{i}+\cos\theta\hat{j}$, respectively. Given a vector $\textbf{v}=x\hat{i}+y\hat{j}$ and the rotation matrix $T$ with $T\textbf{v}=\textbf{v}'$, this has the elegant consequence that $\textbf{v}'=x'\hat{i}+y'\hat{j}$ in the $\hat{i},\hat{j}$ basis corresponds with $\textbf{v}'=x\hat{i'}+y\hat{j'}$ in the $\hat{i'},\hat{j'}$ basis:
\begin{align}
\begin{bmatrix}
x' \\
y'
\end{bmatrix} &=
\begin{bmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{bmatrix}
\begin{bmatrix}
x \\
y
\end{bmatrix} \\
&=
x
\begin{bmatrix}
\cos\theta \\
\sin\theta
\end{bmatrix} +
y
\begin{bmatrix}
-\sin\theta \\
\cos\theta
\end{bmatrix} \\
&=
x\hat{i'}+y\hat{j'}
\end{align}
In the theory of special relativity, an analogous property holds for the spacetime interval $s^2=x^2+y^2+z^2-c^2t^2$ (the separation between two events in flat spacetime): it is invariant under the relevant transformations. The invariance of physical observables under a change of basis also appears in quantum mechanics and many other physical theories.
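The rotation above, and the invariance it illustrates, can be sketched numerically (an illustrative example, not from the post): rotating a vector changes its components but preserves its length.

```python
import numpy as np

def rotation(theta):
    """Two-dimensional counterclockwise rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

v = np.array([3.0, 4.0])
v_rot = rotation(0.7) @ v             # components change under rotation...

length_before = np.linalg.norm(v)
length_after = np.linalg.norm(v_rot)  # ...but the length is invariant
```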

hjoseph

My favorite function is the simple yet very useful $\textbf{Logistic Function}$ or $\textbf{sigmoid function}$: $$f(x) = \frac{1}{1+e^{-x}}$$ It is particularly useful in data analysis because it is a good approximation of the normal cumulative distribution function, or CDF. Its derivative can also be used to approximate the probability density function, or PDF:$$\begin{array}{rcl} f'(x) & = & \frac{e^{-x}}{{(1+e^{-x})}^2} \\ & = & \frac{1}{1+e^{-x}} \cdot \left(1 - \frac{1}{1+e^{-x}} \right) \\ & = & f(x)(1- f(x) ) \\ \end{array}$$ Using this probability function, we can develop a likelihood function for the weights of the parameters of data points in a set. To perform logistic regression, one builds a vector of weights $\theta$, one per parameter, and applies a function $h_\theta(x)$ to the data. To optimize the weights, this process is iterated: at each step the error is reduced by computing the negative gradient of the error (or loss) function $L(\theta)$ and adjusting the parameters by a factor of $\alpha$ in that direction. This process is called $\textbf{Gradient Descent}$. Assuming iterations are indexed by $n$:$$\theta_{n+1}:=\theta_n -\alpha\nabla_{\theta_n} L(\theta_n)$$ Iteration stops after a fixed number of steps or once the reduction in error falls below a pre-specified threshold.
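The pieces above can be sketched on toy data (a minimal illustration with made-up numbers; the 1-D data set, the step size, and the absence of an intercept are all simplifying assumptions, not part of the post):

```python
import numpy as np

def sigmoid(x):
    """The logistic function f(x) = 1 / (1 + e^{-x})."""
    return 1.0 / (1.0 + np.exp(-x))

# Toy 1-D data set: the label is 1 exactly when the feature is positive.
X = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([0, 0, 0, 1, 1, 1])

theta = 0.0        # a single weight (no intercept, for brevity)
alpha = 0.5        # step size
for _ in range(200):
    p = sigmoid(theta * X)          # h_theta(x) for each data point
    grad = np.mean((p - y) * X)     # gradient of the average log-loss
    theta -= alpha * grad           # step in the negative-gradient direction

predictions = (sigmoid(theta * X) > 0.5).astype(int)
```

After training, the single weight is positive and the thresholded sigmoid separates the two classes.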

hjoseph

As a side note, in the array environment in mathjax, I needed three "\" for the line delimiters.

jgorman

My favorite formula is the Eigenvalue equation:

$$\pmb A x = \lambda x, \quad x \neq 0$$

$$\therefore \det(\pmb A - \lambda \pmb I) = 0$$

Eigenvalues show up in numerous occasions in Engineering and Physics.

The eigenvalues of the stress tensor are the principal stresses in the system, the normal stresses along the directions in which there is no shear. The eigenvectors of the vibration matrix are the normal modes of the system, the linearly independent patterns in which it can vibrate, with the eigenvalues giving the corresponding natural frequencies.

This equation comes from Linear Algebra, and is one of the examples showing that Linear Algebra, in today's economy, is an important and vital field for the processing of large amounts of data, as well as the vector processing required for today's graphics.
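The eigenvalue equation can be checked numerically; here is a small sketch (illustrative matrix, not from the post) where a symmetric 2×2 matrix stands in for a stress tensor:

```python
import numpy as np

# A symmetric 2x2 matrix standing in for, say, a plane stress tensor.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh returns eigenvalues of a symmetric matrix in ascending order.
eigenvalues, eigenvectors = np.linalg.eigh(A)

# Verify A x = lambda x for each eigenpair.
residual = max(
    np.linalg.norm(A @ eigenvectors[:, i] - eigenvalues[i] * eigenvectors[:, i])
    for i in range(2)
)
```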

mark

@jgorman Looks very good! I edited your TeX line a bit. Specifically:

  • I used double dollar signs $$ TeXCommands $$ to set the equation in display mode
  • I used \det instead of just det for the determinant. This is very common - sin, cos etc are entered this way as well.
  • Instead of \bf, I used \pmb which is the preferred tool in math mode. Note that it doesn't extend to the zero.

Why don't you edit your reply and add something inline as well? Something involving $Ax=\lambda x$ might make sense.

lszabo

My favorite function is the closed form for the Fibonacci sequence. The recurrence relation for the Fibonacci numbers is given by $F_n = F_{n-1} + F_{n-2}$ with $F_0 = 0$ and $F_1 = F_2 = 1$. One can solve for the closed form using the linear difference equation that describes the sequence $$\begin{bmatrix} F_{n+1} \\
F_n \\
\end{bmatrix}
= \begin{bmatrix} 1 & 1 \\
1 & 0 \\
\end{bmatrix}\begin{bmatrix} F_{n} \\ F_{n-1} \\ \end{bmatrix}$$




This can be done using some linear algebra, namely eigenvalue and eigenvector decomposition.

Finally, the closed form for the sequence is $$ F_n = \frac{1}{\sqrt{5}} \left(\frac{1+\sqrt{5}}{2}\right)^n - \frac{1}{\sqrt{5}} \left(\frac{1- \sqrt{5}}{2}\right)^n$$

The golden ratio is present in this closed form so it is aesthetically pleasing in its own right. Its applications are numerous as they show up throughout combinatorics, computer science and even population models in ecology.
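Both forms can be compared directly; a small sketch (not from the post) checks the closed form against the recurrence:

```python
import math

def fib_closed(n):
    """Binet's closed form; rounding absorbs floating-point error."""
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2
    psi = (1 - sqrt5) / 2
    return round((phi ** n - psi ** n) / sqrt5)

def fib_recurrence(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The two definitions agree on the first thirty Fibonacci numbers.
match = all(fib_closed(n) == fib_recurrence(n) for n in range(30))
```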

tthorn

My favorite function is actually a set of equations, namely Maxwell's Equations.

$$ \nabla \cdot \textbf{E} = 0 \; \; \; \; \; \; \; \nabla \times \textbf{E} = - \frac{1}{c} \frac {\partial \textbf{H}}{\partial t} $$ $$\nabla \cdot \textbf{H} = 0 \;\;\;\;\;\;\; \nabla \times \textbf{H} = \frac{1}{c} \frac {\partial \textbf{E}}{\partial t}$$

The set of four equations is beautiful to look at just at a glance in the way it is written out, but the sheer amount of information contained within them is what really blows me away. From this starting point, one may derive basically any equation dealing with classical electromagnetism. With just a bit of manipulation involving taking the curl of both sides and simplifying, the equation for $\nabla \times \textbf{E}$ can be rewritten in the form $\frac{\partial^2 \textbf{E}}{\partial t^2}=c^2\nabla^2\textbf{E}$, a version of the wave equation, which demonstrates the wave behavior of electromagnetic energy. This is hardly even the tip of the iceberg of the detail with which these phenomena are described by four brief statements in the language of vector calculus.
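Written out, the manipulation goes roughly as follows, using the vector identity $\nabla\times(\nabla\times\textbf{E})=\nabla(\nabla\cdot\textbf{E})-\nabla^2\textbf{E}$ together with the source-free equations above:

\begin{align*}
\nabla\times(\nabla\times\textbf{E}) &= -\frac{1}{c}\frac{\partial}{\partial t}(\nabla\times\textbf{H}) = -\frac{1}{c^2}\frac{\partial^2 \textbf{E}}{\partial t^2}, \\
\nabla\times(\nabla\times\textbf{E}) &= \nabla(\nabla\cdot\textbf{E}) - \nabla^2\textbf{E} = -\nabla^2\textbf{E},
\end{align*}

and equating the two right-hand sides gives $\frac{\partial^2 \textbf{E}}{\partial t^2} = c^2\nabla^2\textbf{E}$.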

complexcharacter

My favorite equation is the Feynman path integral formula,
\begin{align*}
\langle x_f,t_f | x_i,t_i\rangle &= \int e^{i \int_{t_i}^{t_f} \mathcal{L}(x,\dot x) \; dt} \; \mathcal{D}x .
\end{align*}


This is something called a functional integral. I like it because a while ago I spent longer than I'd care to admit trying to figure out the way that the functional measure, $\mathcal{D}x = \lim_{N\rightarrow \infty} \prod_{i=1}^N dx_i$ is properly defined. It turns out it doesn't exist in many cases in physics. Basically, it's just weird.

P.S. Jack Hendrix really, really wanted to weigh in on this discussion: I like the Cauchy Criterion, because it allows you to prove that the sum of Cauchy sequences is Cauchy without extraneous steps.

DPR

One of my favorite formulas is the time-independent Schrödinger equation for a single particle in an electric field, $ E\Psi(r)=\left[-\frac{\hbar^2}{2\mu}\nabla^2+V(r)\right]\Psi(r)$. This equation is the backbone of several fields of study, including computational chemistry and quantum physics. It can be expanded to include many electrons. The many-electron time-independent equation is
$$\begin{equation}\begin{split} E\Psi&=[\hat{T}+\hat{V}+\hat{U}]\Psi\\ &= \Big[\sum_{i}^{N}\Big(-\frac{\hbar^2}{2m_i}\nabla_i^2\Big)+\sum_{i}^{N}V(r_i)+\sum_{i<j}^{N}U(r_i,r_j)\Big]\Psi\end{split}\end{equation}$$
Adding in extra electrons greatly increases the number of perturbations to approximate. Luckily we have computers for that!
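A standard way to see the single-particle equation in action numerically (a sketch, not from the post) is to discretize the one-dimensional particle-in-a-box problem, with $V=0$ and units chosen so that $\hbar^2/2\mu = 1$, and diagonalize the resulting matrix; the exact energies are $(n\pi)^2$:

```python
import numpy as np

# Discretize -psi''(x) = E psi(x) on (0, 1) with psi(0) = psi(1) = 0,
# i.e. a particle in a box in units with hbar^2 / (2*mu) = 1.
# The exact eigenvalues are (n*pi)^2 for n = 1, 2, 3, ...
N = 500                      # interior grid points (an arbitrary choice)
h = 1.0 / (N + 1)            # grid spacing
H = (np.diag(np.full(N, 2.0))
     + np.diag(np.full(N - 1, -1.0), 1)
     + np.diag(np.full(N - 1, -1.0), -1)) / h**2

energies = np.linalg.eigvalsh(H)[:3]            # three lowest levels
exact = np.array([(n * np.pi) ** 2 for n in (1, 2, 3)])
rel_err = np.max(np.abs(energies - exact) / exact)
```

The finite-difference spectrum matches the exact levels to a few parts in $10^5$ at this resolution.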

emoles

One of my favorite equations has to be the very essential convolution equation. It's great because it has a built-in joke... why do they call it a convolution? Because it's convoluted! Anyways, the equation illustrates the overlapping of two functions, $f$ and $g$, and can be simply shown as:
$$f(t)*g(t) = \int_{0}^{t}f(t-\tau)g(\tau) \, d\tau $$
This equation is very useful when translating between the Laplace domain and the time domain as $\mathscr{L} \{f(t)*g(t)\} = F(s)G(s)$. And guess what? These equations are even reversible! How nifty!
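A quick numerical sketch (made-up functions, not from the post): for $f(t)=e^{-t}$ and $g(t)=e^{-2t}$ the convolution integral works out to $e^{-t}-e^{-2t}$, which a midpoint-rule approximation can confirm:

```python
import math

def convolve_at(f, g, t, n=10000):
    """Midpoint-rule approximation of (f*g)(t) = int_0^t f(t - tau) g(tau) dtau."""
    dtau = t / n
    return dtau * sum(f(t - (k + 0.5) * dtau) * g((k + 0.5) * dtau)
                      for k in range(n))

f = lambda t: math.exp(-t)
g = lambda t: math.exp(-2 * t)

t = 1.5
numeric = convolve_at(f, g, t)
exact = math.exp(-t) - math.exp(-2 * t)   # closed form of (f*g)(t) for this pair
```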

cdunn

My favorite formula might be old and simple, but it is still very important. It is the Pythagorean theorem, which of course states that:

$$ a^2 + b^2 = c^2 \Leftrightarrow c=\sqrt{a^2+b^2}.$$

Not only does this give us a mathematical way to calculate a graphical representation of length in physics, it also gives us an idea of $|z|$, or the length of $z$, where $z\in\mathbb{C}$. This allows us to interpret $z$ in a graphical way, which makes computations and conceptualization simpler. It also allows us to have a sense of length in higher dimensions where graphical representations are impossible.
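In Python, for instance, the modulus of $z = 3 + 4i$ computed from the Pythagorean theorem agrees with the built-in `abs` (a trivial sketch, not from the post):

```python
import math

z = 3 + 4j

# |z| via the Pythagorean theorem applied to the real and imaginary parts.
modulus = math.sqrt(z.real ** 2 + z.imag ** 2)
```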

felyahia

If $ F'=f$, then

$$\int _ a^b f(x)dx=F(b)-F(a).$$
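A numerical sketch of the theorem (illustrative choice of $f$, not from the post): with $f(x)=3x^2$ and $F(x)=x^3$, a Riemann sum over $[1,2]$ approaches $F(2)-F(1)=7$:

```python
# Midpoint Riemann sum for f(x) = 3x^2 on [1, 2]; its antiderivative
# is F(x) = x^3, so the integral should equal F(2) - F(1) = 7.
a, b, n = 1.0, 2.0, 100_000
h = (b - a) / n
riemann = h * sum(3 * (a + (k + 0.5) * h) ** 2 for k in range(n))
exact = b ** 3 - a ** 3
```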

dmcmurra

My favorite equation is probably just the basic quadratic formula, $x = \frac{-b\pm\sqrt{b^{2}-4ac}}{2a}$. It has a cool jingle that gets stuck in my head every so often. I remember first learning the equation in middle school and thinking "Woah, this is crazy, who even understands this stuff?!?"
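As a small sketch (not from the post), the formula translates directly into code:

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of a x^2 + b x + c = 0 via the quadratic formula.

    Returns the two roots (larger first for a > 0), or () if none are real.
    """
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()
    r = math.sqrt(disc)
    return ((-b + r) / (2 * a), (-b - r) / (2 * a))

roots = quadratic_roots(1, -3, 2)   # x^2 - 3x + 2 = (x - 1)(x - 2)
```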

mark

@dmcmurra Very good question! Perhaps, you should raise it on the main site as a question under the Course questions category?

opernie

My favorite formula is Faraday’s Law of Electromagnetic Induction.

$$\varepsilon = -N\frac{d\varphi}{dt}.$$

A relationship between electromotive force and flux allows for most of our electronics. Here $\varepsilon$ is the electromotive force, $N$ is the number of turns in the coil, and $d\varphi/dt$ is the rate of change of the flux through the coil. The negative sign comes from Lenz's Law, stating that the induced electromotive force opposes the change that produced it.

mark

@opernie Please typeset your math with LaTeX, rather than well, that. :slight_smile:

wschierh

I don't know if it's possible to have a favorite formula per se, but since I've been interested in nuclear reactors/physics for a long time, one of the most interesting/fascinating ones that I've encountered is the differential equation $\frac{dN}{dt} = \frac{\alpha N}{\tau}$. This equation describes the rate of fission in a nuclear fission reactor, where $\alpha$ (also known as the void coefficient) is a measure of the expected number of neutrons after a single neutron lifetime has elapsed, $\frac{dN}{dt}$ is the rate of change of the core's neutron count, and $\tau$ is the average lifetime of a neutron. More explicitly, $$\alpha = P_{impact} P_{fission} n_{average} - P_{absorb} - P_{escape},$$ which measures the probability that a neutron will strike a fissile nucleus, induce fission, and continue the chain reaction, instead of just escaping off into space or not inducing fission upon impact.

This differential equation has 3 states, based on the values of $\alpha$:
1: $\alpha < 0$, meaning that the overall reaction is losing more neutrons than it is gaining them, making it subcritical and shrinking the rate of reaction until it reaches equilibrium at 0.
2: $\alpha > 0$, meaning that the overall reaction is gaining more neutrons than it is losing them, and causing the reaction rate to increase without bound unless checked; in this state it is known as supercritical and is on track to becoming a nuclear bomb rather than a power source. The Chernobyl plant disaster, for example, was a result of the void coefficient staying positive for too long and letting the reaction get out of control.
3: $\alpha = 0$, meaning that the neutron exchange rate is stable (as $\frac{dN}{dt} = 0$) and energy is being produced at a constant rate. In this state the reaction is known as critical, and this is the desired reaction rate for nuclear power plants.
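Since $N(t)=N_0\,e^{\alpha t/\tau}$ solves the equation, the three regimes above can be sketched directly (illustrative numbers only, not reactor data):

```python
import math

def neutron_count(N0, alpha, tau, t):
    """Solution N(t) = N0 * exp(alpha * t / tau) of dN/dt = alpha*N/tau."""
    return N0 * math.exp(alpha * t / tau)

N0, tau, t = 1000.0, 1e-3, 0.05   # illustrative numbers only
subcritical   = neutron_count(N0, -0.01, tau, t)   # alpha < 0: decay
critical      = neutron_count(N0,  0.00, tau, t)   # alpha = 0: steady
supercritical = neutron_count(N0,  0.01, tau, t)   # alpha > 0: growth
```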


In short, I like this (apparently simple) differential equation because it's a nice real world example of differential equations in physics, and relevant to my interests.

mark

@wschierh W00T!!