
3.3.1

Introduction to Asymptotics

Introduction

In the quest for larger and larger numbers, we eventually discover that it rests on a quest for faster and faster functions. For this reason we'll want to know how "fast" a function grows in order to judge its potential for generating large numbers. It therefore becomes important for us to understand what it means to say one function "grows faster" than another.

In computer science a theory of growth rates is used to compare the efficiency of various algorithms. For example, a computer program built to compute the nth prime number might take roughly n^2 steps to do so. This may not be the most efficient algorithm, so it's important to compare the computational growth rates of various computer programs designed for the same task to determine which is the most efficient. In the case of program analysis, the slowest growing function is the most desirable. Although we will be borrowing some of the theory of growth rates from computer science, for our purposes the fastest function is the most desirable: these will produce large numbers the quickest. But before we can do this, we first need to formulate a proper definition of what it means for a function to be "fast" or "slow", and how such functions should be compared to each other. The theory of growth rates will have some important upshots for our studies in googology.

Asymptotic Growth Rates

In computer science, a function's end behavior is known as its asymptotic growth rate. What is "end behavior"? The end behavior of a function is how the output behaves as the input approaches infinity (either positive or negative, although for our purposes we're only interested when it approaches positive infinity). In Calculus this is known as "Limits at Infinity". Many different things can happen to a function, f(x), as x approaches infinity. Sometimes it will shrink down to zero very rapidly, even though it might never be equal to zero anywhere. Other times it might approach some constant value, from either above or below, or it might vacillate between the two and cross the constant infinitely often, but always narrowing its range of values. For our purposes, any of these situations would be an example of a "slow growing function", as its values become narrower rather than broader. There are plenty of functions however that simply diverge towards positive infinity rapidly as x goes to infinity. It is these sorts of functions that will be of interest to us, and we will want to develop a way to classify different rates at which a function may diverge to infinity. The study of the end behavior of functions is known as asymptotic analysis, or asymptotics for short.

Permanent Leads

The simplest notion of an asymptotic growth rate is based on a fairly natural idea. We compare the end behavior of two functions and determine which function ultimately wins out as the input approaches infinity. For any x, we can say g(x) leads f(x) if g(x)>f(x). As x approaches infinity, f and g may exchange their lead a number of times, however if one function is "faster" than the other then we would expect that eventually one will gain a permanent lead on the other. If that happens we'll say that the function that gains the permanent lead dominates over the other function. In other words it has a greater asymptotic growth rate.

It isn't difficult to codify this mathematically. The tipping point at which one function gains a permanent lead over the other we'll refer to by the variable m. With this in mind here is our first definition of "grows faster":

If there exists a real number, m, such that...

f(x) < g(x)

provided x>m

Then g(x) "grows faster" than f(x)

We'll notate g(x) "grows faster" than f(x) as:

f(x) [<] g(x)

We can also write g(x) [>] f(x) to mean the same thing.

Note that " [<] " does not mean the same thing as " < ". The statement:

f(x) < g(x)

implies that f(x) is less than g(x) for all x. That is typically not the case, even when g(x) is the faster growing function. When we state that f(x) [<] g(x) there may be values at which f(x) > g(x), but eventually as x increases without bound (goes towards infinity), the statement will permanently switch to f(x) < g(x).

With some additional mathematical notation we can write the above definition more compactly.

Let "Ex" stand for "there exists an x",

Let "Ax" stand for "for all x"

Let "A --> B" stand for "if A then B"

Let " : " stand for "provided that" or "by condition",

and let " | " stand for "such that".

We can then write the following statement:

Em | f(x) < g(x) : x>m --> f(x) [<] g(x)

This reads:

"if there exists an m such that, f(x) is less than g(x) whenever x is greater than m, then f(x) grows slower than g(x)"

This definition leads immediately to a way to classify functions according to growth rate, but with one caveat: what if neither function gains a "permanent lead"? What if they simply continue to trade places? In this case we will say that the two functions are of equal growth rate. In other words:

If

f(x) [<] g(x) is False

&

f(x) [>] g(x) is False

Then

f(x) [=] g(x)

With this definition it is now possible to organize functions into a hierarchy based on growth rate. (Note: we will assume that all the functions of interest are total functions, or at least total over an interval (c,∞), where c is a real constant, or total over the integers greater than some integer constant c.)

Let's look at a simple example. Let f(x) = 2x+5 and g(x) = 3x. Which of the two functions grows faster? Intuitively we understand that 3x would grow faster, since its "rate of increase", called its derivative in calculus, is 3, while the derivative of 2x+5 is 2. Although 2x+5 starts out with a lead, since f(0) = 5 and g(0) = 0, we also know that g(x) will be catching up to f(x) and must eventually surpass it and dominate. How do we prove this using our definition? We need to find the m such that f(x) < g(x) whenever x>m. To do this, simply begin with f(x) < g(x) and solve for x:

f(x) < g(x)

2x+5 < 3x

5 < x

So we find that whenever x is greater than 5, g(x) will be greater than f(x). Therefore by the definition, g(x) grows faster than f(x):

2x+5 [<] 3x

We can get some direct confirmation of this by substituting a value greater than 5 for x:

Let x = 6:

2x+5 < 3x --> 2(6)+5 < 3(6) --> 12+5 < 18 --> 17 < 18

"5" is in fact the only place where f(x) = g(x). Again we can confirm this simply enough by solving for x:

f(x) = g(x)

2x+5 = 3x

5 = x

When x is less than 5, f(x) leads despite being the slower function, but when x is greater than 5, g(x) leads. What if we gave f(x) an even greater head start? Is g(x) still the faster function? Let f(x) = 2x+999,999. Again, to prove that f(x) [<] g(x) we simply determine m:

2x+999,999 < 3x

999,999 < x

So in this case m=999,999. Note that it doesn't matter how big m gets. As long as it exists, then it establishes that one function is faster than the other.
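For those who like to automate the algebra, the same tipping points can be recovered symbolically. Here is a small sketch assuming the SymPy library is available; it simply re-solves the inequalities worked out above:

from sympy import symbols, solveset, S

x = symbols('x', real=True)

# Solve f(x) < g(x) for x to find the tipping point m.
print(solveset(2*x + 5 < 3*x, x, S.Reals))        # Interval.open(5, oo), so m = 5
print(solveset(2*x + 999999 < 3*x, x, S.Reals))   # Interval.open(999999, oo), so m = 999999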

From these definitions and examples it becomes immediately apparent that:

2x+N [<] 3x

No matter how large N is. We can also surmise that:

... [<] x [<] 2x [<] 3x [<] 4x [<] 5x [<] ...

In each of these instances m=0, and this is where the faster growing function trades the lead with the slower growing one. Are there any functions faster than all nx, where n is a positive integer? Yes: x^2. Proving this is only marginally more difficult:

Let f(x) = nx where n ∈ Z+

and g(x) = x^2

nx < x^2

0 < x^2 - nx

0 < x(x-n)

In the factored form we can see that x^2-nx = 0 when x=0 or x=n. To see this just substitute into the factored form:

x=0 --> x(x-n) = 0(0-n) = 0*(-n) = 0

x=n --> x(x-n) = n(n-n) = n*0 = 0

Everywhere else the function takes on either negative or positive values. It in fact changes sign twice, but we aren't concerned with this; we're only interested in the sign after the last sign change. Now let x>n. If this is the case then x(x-n) must be positive, since x must be positive and (x-n) must be positive, and the product of positives is positive. Thus we can say:

nx < x^2 : x>n

Thus we can provide an "m" and therefore we can say:

nx [<] x^2 for all n ∈ Z+
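A quick numerical spot check of this argument (the particular slopes and the sample range are arbitrary choices of mine, and a finite check only illustrates, it does not prove):

# For several slopes n, confirm that n*x < x**2 for every sampled integer x > n.
for n in (1, 2, 10, 1000):
    assert all(n*x < x**2 for x in range(n + 1, n + 10000))
print("n*x < x**2 held for all sampled x > n")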

The concept of a permanent lead works pretty well to classify functions, but it is not without its problems. Take for example x and x+3. For all x we have x < x+3, since 0 < 3. But this means we can choose any real value for m and say x < x+3 when x>m. By our definition then:

x [<] x+3

This would be the equivalent of two runners who run at exactly the same speed, but because one has a head start (an "advantage"), it wins by default. But when we think of "growth" rate intuitively, it seems that both of these functions "grow" at the same rate. In fact, come to think of it, functions would almost never be of the same growth rate using this definition. One function will almost always win out over another one eventually.

This shortcoming is addressed by a different definition proposed by G.H. Hardy.

Quotients at Infinity

G.H. Hardy proposed a simple and natural definition to determine whether one function grows faster, slower or the same as another. We simply look at the ratio of the end behaviors.

Let f(x) and g(x) be two functions. Now take the limit as x goes to infinity of g(x)/f(x):

If lim(x-->∞) g(x)/f(x) = ∞ Then f(x) [<] g(x)

If lim(x-->∞) g(x)/f(x) = 0 Then f(x) [>] g(x)

&

If lim(x-->∞) g(x)/f(x) = c , where c>0 Then f(x) [=] g(x)

This definition is not completely unrelated to the first, since if an m exists such that f(x)<g(x) whenever x>m, it means that the ratio g(x)/f(x) either approaches a constant greater than or equal to 1 or diverges to infinity (assuming the limit exists at all). In other words, either the functions will be of equal strength or g(x) will be said to grow faster in Hardy's system. We'll see that this addresses the problem cited earlier.

For example let's look at x and x+3 again. This time we consider and solve the limit:

lim(x-->∞) (x+3)/x = 1

Since the ratio of x+3 to x approaches 1 we say these functions are of equal growth rate. So we can say:

x+3 [=] x

What about x versus 2x? We simply "compute" the limit. For convenience we'll simply assume all limits are limits at positive infinity and use lim for short:

lim 2x/x = 2

Since the limit is a constant we have:

2x [=] x

In fact, from this it shouldn't be too hard to see that:

nx [=] x

So is every function asymptotic to x? No: x^2 is still a faster function under this definition. To see this consider the limit:

lim x^2/x = lim x = ∞

Since the ratio grows without bound, we can say that:

x [<] x^2

Since x^a/x^b = x^(a-b), it follows that as long as a>b, x^a grows faster than x^b. So we have:

x [<] x^2 [<] x^3 [<] x^4 [<] x^5 [<] ...
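Hardy's test is easy to automate with a computer algebra system. Here is a minimal sketch assuming SymPy is available; the helper name hardy_compare is my own, and the bracketed labels just echo the notation used above:

from sympy import symbols, limit, oo

x = symbols('x', positive=True)

def hardy_compare(f, g):
    # Classify f against g by the limit of g/f as x goes to infinity.
    L = limit(g / f, x, oo)
    if L == oo:
        return "f [<] g"
    if L == 0:
        return "f [>] g"
    return "f [=] g"   # the limit is some positive constant c

print(hardy_compare(x, x + 3))    # f [=] g
print(hardy_compare(x, 2*x))      # f [=] g
print(hardy_compare(x, x**2))     # f [<] g
print(hardy_compare(x**3, x**2))  # f [>] g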

Hardy's definition has some interesting consequences. It allows us to classify various "growth classes", such as "linear" (ax+b), "quadratic" (ax^2+bx+c), "cubic" (x^3), "quartic" (x^4), etc.

This allows for a much simpler classification system, whereas with the first proposal almost every function ends up in a class all its own, at least among the elementary functions. But Hardy's definition means that x, 2x, 3x, etc. are all of the same "growth rate". Again this doesn't seem to completely capture the intuitive sense of "grows faster". Would we say a runner who runs twice as fast as another is in the same running class? The word "grows" seems to imply a relation to speed.

Are there any other definitions for growth rate? Yes. The next one I'll present is the most common in computer science...

Big-O Notation

In computer science the most common way to specify growth rates (or rather upper-bounds on growth rates as I'll explain) is to use something called Big O Notation. It differs from our previous two definitions in that it doesn't so much tell us when functions are of equal strength, but rather tells us when one function either grows faster or is the same as another function. In other words, it lets us know which function is an "upper bound" on the other.

The definition is as follows:

Let f(x) and g(x) be two functions.

If there exists two real constants "M" and "m" such that:

f(x) < Mg(x) : x>m

Then f(x) is bounded by g(x).

In computer science parlance we say f(x) is Big Oh of g(x). This is notated:

f(x) = O(g(x))

However, technically the idea of an equality doesn't make sense here, because O(g(x)) is really the set of all functions f(x) satisfying the definition of being bounded by g(x). In other words, Big Oh of g(x) is simply the set of all functions which grow just as fast as or slower than g(x). So really a proper statement would be that f(x) is an element of O(g(x)). I will therefore use the alternative notation:

Og(x) ∩ {f(x)} = {f(x)}

That is to say, "the intersection of Og(x) and {f(x)} is the set {f(x)}". If f(x) were not in the set Og(x), then their intersection would be the empty set, {}. So this statement implies the same thing (without requiring the "element of" symbol). Consequently if:

Og(x) ∩ {f(x)} = {}

then f(x) must grow faster than g(x) (it outgrows every constant multiple of g(x)).

We can also carry over our asymptotic growth symbols from the previous sub-headings. We'll say:

If

Og(x) ∩ {f(x)} = {f(x)}

then

f(x) [<=] g(x)

If

Og(x) ∩ {f(x)} = {}

then

f(x) [>] g(x)

In Big Oh notation we can also establish that two functions are in the same growth class as follows:

If f(x) is Og(x)

and

g(x) is Of(x)

then

f(x) [=] g(x)

Essentially, if f(x) grows slower than or equal to g(x) and g(x) grows slower than or equal to f(x), then the only way for both statements to be consistent is if f(x) and g(x) grow at the same rate. What kinds of growth classes arise from this definition? Let's look at some simple cases. We'll begin with x and x+3.

We simply need an M and m that satisfy the necessary conditions. We'll first show that x is O(x+3). Since:

x < x+3 for all x, we can immediately establish the following

x < M(x+3) : x>m

where M=1, and m=0

So x is O(x+3). Now we attempt the reverse:

x+3 < Mx : x>m

Again this is fairly easy. Simply let M=2 and m=3.

So x+3 is also O(x), thus:

x+3 [=] x
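Here is a small Python sketch (the helper name is_big_oh_witness is my own invention) that checks a proposed pair of witnesses M and m against the definition on a finite sample; it merely confirms the witnesses found above, it does not replace the algebra:

def is_big_oh_witness(f, g, M, m, xs):
    # Verify f(x) < M*g(x) for every sampled x greater than m.
    return all(f(x) < M * g(x) for x in xs if x > m)

xs = range(0, 10000)
print(is_big_oh_witness(lambda x: x,     lambda x: x + 3, M=1, m=0, xs=xs))  # True: x is O(x+3)
print(is_big_oh_witness(lambda x: x + 3, lambda x: x,     M=2, m=3, xs=xs))  # True: x+3 is O(x)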

We can use the notation O(f(x)) (note the bold face "O") to represent the growth class of all functions of equal growth to f(x). It can be shown, without too much difficulty, that:

ax+b is O(x)

We can therefore use O(x) to represent the set of all linear functions. Are quadratic functions of a higher class than linear functions in Big Oh? Yes. Let's look at x^2 versus 2x. To prove x^2 grows faster we need to show that 2x is Ox^2 but that x^2 is not O2x. To establish the first we observe:

2x < 2(x^2) : x>1

So 2x is Ox^2

On the other hand if we have:

x^2 < M(2x) : x>m

we have a problem. To understand why we can manipulate the inequality:

x^2 < M(2x)

x < 2M

Regardless of what M is chosen, as x increases without bound it will eventually pass the constant 2M. Therefore there can be no m such that x<2M holds for all x>m. So x^2 is not O2x. Therefore:

2x [<] x^2
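We can spot-check both halves of that argument numerically with the same kind of witness test (again only a finite-sample illustration, not a proof):

def is_big_oh_witness(f, g, M, m, xs):
    # Is f(x) < M*g(x) for every sampled x greater than m?
    return all(f(x) < M * g(x) for x in xs if x > m)

xs = range(0, 100000)

# 2x is Ox^2: the witnesses M=2 and m=1 from the argument above check out.
print(is_big_oh_witness(lambda x: 2*x, lambda x: x**2, M=2, m=1, xs=xs))   # True

# x^2 is not O2x: whatever M we try fails once x passes 2M on the sample.
for M in (1, 10, 1000):
    print(M, is_big_oh_witness(lambda x: x**2, lambda x: 2*x, M=M, m=0, xs=xs))  # False each time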

Notice the amount of work we have to do to show that x^2 is of a higher order than 2x. We have to apply the definition of Big O twice. We also need to apply it twice to show that two functions are of equal order. This is a downside of the notation for us, since we are mostly interested in showing that one function grows faster than another, not that it is simply bounded by another function. The boundedness property makes sense in computer science, because you would want a worst-case estimate. In googology however, what we really want is to show that one function does in fact grow faster than another, to prove that it produces larger values for "small" inputs (anything you can write out in decimal is "small" by googology standards). The second problem is that Big Oh makes no distinction between x and 2x, even though it seems natural to think that 2x "grows faster".

Both of these problems are solved by yet a fourth possible definition of asymptotic growth rates...

Dominance over Translations

The next definition I am about to present is one of my own devising. I devised it as a way to compare functional growth rates before I had learned how Big-Oh notation worked. I'll refer to my definition as the "Dominance over translations" definition (DOT) of an asymptotic growth rate. The name will become clear when I explain the definition.

As we saw in the first definition, if we are given two functions, defined over the positive integers, call them f(n) and g(n), then saying g "grows faster" than f generally means that g eventually takes a lead on f that f never overcomes.

This definition works well in the case of g(x) = x^2 and f(x) = 2x:

While f(x) < g(x) is not always true, since f(x)>g(x) over the interval (0,2), eventually g(x) gains on f(x) and f(x) never recovers. This occurs at x=2.

While this works, as stated before it has certain limitations. Consider the parallel lines, j and k, for example:

j is always greater than k. Because of this we could pick any value for m and show that j grows faster than k. For example, let m=0. We can then say k(x) < j(x) when x > 0. So according to this definition, j "grows faster" than k. However common sense would say "they grow at the same rate". They should be considered to be growing at the same rate; it's just that j has a bit of a head start. This provides the motivation for devising a better definition. Notice that if we moved j so that it overlapped with k, the two functions would match. Moving a function is known as a "translation". So in our new definition we want all translations of any function to be of the same growth rate. When functions have a different "shape", however, it is possible that one grows faster than the other. Note that, in our previous example, no matter how we "translate" f(x) = 2x, it will always be overtaken by g(x) = x^2 eventually (we'll prove this shortly). This is because, regardless of how great a head start f is given, since g is accelerating while f is increasing at a constant rate, g must eventually catch up and then overtake f. We therefore use the following revised definition to say a function "grows faster" than another:

Given f and g, g is said to grow faster than f if and only if, for every translation of f, there exists an m such that the translation of f(x) is less than g(x) whenever x>m.

I'll use the notation T[f(x)] for a translation of f(x). Using this we can formally define "grows faster" as follows:

Dominance over Translations Definition

AT[f(x)] Em | T[f(x)] < g(x) : x>m --> f(x) [<] g(x)

This reads:

"If for all translations of f there exists an m such that T[f(x)] < g(x) whenever x>m, then f(x) is of lower order than g(x) "

Given this definition, it follows that to disprove that g grows faster than f it is sufficient to give a single translation of f for which no m exists. Returning to j and k, we can see that no m exists when k is translated to a position above j, so j does not grow faster than k. By the same reasoning, k does not grow faster than j. If neither function can be established to grow faster than the other, they are declared of an "equivalent growth class". That is, they grow at the same rate.

In other words:

If

f(x) [<] g(x) is False

&

f(x) [>] g(x) is False

Then

f(x) [=] g(x)

This definition provides similar growth classes to the other methods but allows us to distinguish between different linear growth rates. Like the other methods we can use the definition to show when one function grows faster than another. Let's look at 2x and x2 to illustrate how this would be done in DOT.

The first thing that needs to be understood is that all translations are composed of a vertical shift and a horizontal shift. Let f(x) be any function. A vertical shift of f(x) involves adding a quantity to the output of the function. This quantity will be represented by "dy" (change in y). So we can say " f(x)+dy " is a vertical translation of f(x). A horizontal shift of f(x) involves making a shift to the input of the function. We will do this by adding the quantity "dx" (change in x) to the input. So we can say " f(x+dx) " is a horizontal translation of f(x). Combining both of these shifts we can obtain any translation. Thus we say:

T[f(x)] = f(x+dx)+dy

For some constants dx and dy. To prove that one function grows faster than another in DOT we simply have to show that no matter what constants are chosen, the faster growing function will eventually overtake the slower growing one and maintain its dominance.
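In code a translation is nothing more than a wrapper that shifts the input and output. The sketch below is my own construction, purely to make the notation concrete:

def translate(f, dx, dy):
    # Return the translation T[f](x) = f(x + dx) + dy.
    return lambda x: f(x + dx) + dy

f = lambda x: 2*x
Tf = translate(f, dx=3, dy=100)   # T[f](x) = 2(x+3) + 100 = 2x + 106
print(Tf(0), Tf(10))              # 106 126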

Let f(x) = 2x and g(x) = x^2. dx and dy will be known as "advantages". The idea is that f(x) is going to get a head start, and x^2 is still going to overtake it. We first set up the inequality f(x+dx)+dy < g(x) and then determine m by attempting to solve for x:

f(x+dx)+dy < g(x)

2(x+dx)+dy < x^2

2x+2dx+dy < x^2

2dx+dy < x^2-2x

Since lim(x-->∞) x^2-2x = ∞, and 2dx+dy is a constant, no matter what constants are chosen, x^2-2x will eventually overtake this constant, and therefore there must be some "m" for every translation. Actually computing the m's is not necessary, as long as we can show every translation will always be overtaken at some point.
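As a sanity check, we can hand 2x a few generous head starts and watch x^2 overtake every one of them on a finite sample (the particular advantages chosen below are arbitrary, and this only illustrates the algebraic argument):

g = lambda x: x**2

def overtake_point(Tf, g, limit=10**7):
    # First sampled x at which g takes the lead over the translated copy of f.
    return next(x for x in range(limit) if g(x) > Tf(x))

for dx, dy in [(0, 0), (100, 0), (0, 10**6), (500, 10**6)]:
    Tf = lambda x, dx=dx, dy=dy: 2*(x + dx) + dy   # T[f](x) = 2(x+dx)+dy
    x0 = overtake_point(Tf, g)
    # check the lead persists on the next ten thousand samples
    assert all(Tf(x) < g(x) for x in range(x0, x0 + 10000))
    print("advantage dx =", dx, "dy =", dy, "-> overtaken by x^2 at x =", x0)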

The DOT definition works pretty well, but sometimes we will want to group the functions into larger growth classes. This can be achieved by generalizing DOT to "transformations" and not just "translations". A transformation involves a scaling factor in addition to translations. For example, we might multiply the output by some constant. This is known as a "vertical stretch", and can be represented by kf(x). We can also multiply the input by some constant. This is known as a "horizontal stretch", and can be represented by f(cx). Combining all of this together with translations, we get that kf(cx+dx)+dy is a transformation of f(x). I'll use T{f(x)} to represent a transformation of f(x). One important pitfall however needs to be addressed: c and k must be positive real constants. The reason is that if k<=0, then no matter how fast a function grows, it would become either a constant function or a function with negative growth. This would result in some transformations of a "faster growing" function failing to dominate. For this reason c,k>0 for all T{f(x)}. With that in mind we can then use the following alternative definition:

AT{f(x)} Em | T{f(x)} < g(x) : x>m --> f(x) {<} g(x)

This can be thought of as a fifth possible definition of "grows faster" or a "growth class".
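The broader transformation can be coded up the same way. Here is a minimal sketch, again my own construction, which builds T{f} from the four constants and insists that c and k be positive, as the definition requires:

def transform(f, k=1.0, c=1.0, dx=0.0, dy=0.0):
    # Return the transformation T{f}(x) = k*f(c*x + dx) + dy, with c, k > 0.
    if k <= 0 or c <= 0:
        raise ValueError("scaling factors k and c must be positive")
    return lambda x: k * f(c*x + dx) + dy

f = lambda x: x**2
Tf = transform(f, k=3, c=2, dx=1, dy=-5)   # 3*(2x+1)^2 - 5
print(Tf(0), Tf(1))                        # -2 22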

Before we move on, it will be important to come up with some general results comparing the translation definition and the transformation definition. For the translation definition we will use the notation [<] , [>], [=], and for the transformation definition we will use the notation {<}, {>}, {=}. What we want to show is that order relations in one definition are equivalent to order relations in the other. Here is a list of conjectures we will prove:

f(x) {<} g(x) --> f(x) [<] g(x)

f(x) [=] g(x) --> f(x) {=} g(x)

f(x) [<] g(x) --> f(x) {<=} g(x)

f(x) {=} g(x) --> T{f(x)} {=} g(x)

These statements have not been chosen arbitrarily. The basic idea is that the translation definition defines finer classes, but that both basically preserve the same order. So if f and g belong to different transformation classes they also belong to different translation classes. If f and g are of equivalent translation classes then they are of equivalent transformation classes, and if g is of a greater translation class than f, then g is either of a greater or equal transformation class.

To prove the first conjecture we simply observe that if all transformations of f(x) are dominated by g(x), then the subset of translations of f(x) must by necessity also be dominated.

To prove the second conjecture we simply observe that if some translations of f(x) dominate g(x) while others don't, then the super set of transformations must contain at least some cases of each, and therefore some transformations of f(x) dominate g(x) while others don't.

To prove the third conjecture we first need to understand that for every transformation of f(x) there is an inverse transformation. If a transformation of f(x), call it T{f(x)}, dominates g(x), then the inverse transformation applied to g(x), call it T^-1{g(x)}, is dominated by f(x).

By definition, let:

T^-1{T{f(x)}} = f(x)

It can be shown that every transformation has an inverse transformation. We simply write the inverse in the same general form, with its own constants k', c', dx', dy', and solve for them:

T^-1{h(x)} = k'h(c'x+dx') + dy'

Let h(x) = T{f(x)} = kf(cx+dx)+dy

T^-1{h(x)} = k'[ kf(c(c'x+dx')+dx)+dy ] + dy'

= k'k·f(cc'x + c·dx' + dx) + k'dy + dy'

For this to reduce to f(x) for every function f, we need:

k'k = 1, so k' = 1/k

cc' = 1, so c' = 1/c

c·dx' + dx = 0, so dx' = -dx/c

k'dy + dy' = 0, so dy' = -dy/k

Thus we have:

If

T{f(x)} = kf(cx+dx)+dy

then

T^-1{f(x)} = f(x/c - dx/c)/k - dy/k
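We can sanity-check this inverse formula numerically: composing T^-1 with T should hand back the original function values. The constants below are arbitrary, and this is only a spot check of the algebra, not a proof:

import math

f = math.sin                          # any function will do for the check
k, c, dx, dy = 3.0, 2.0, 1.5, -4.0    # arbitrary transformation constants

T  = lambda x: k * f(c*x + dx) + dy                   # T{f}(x)
Ti = lambda h: (lambda x: h(x/c - dx/c) / k - dy/k)   # the inverse, applied to any function h

for x in (0.0, 0.7, 5.0, -3.2):
    assert abs(Ti(T)(x) - f(x)) < 1e-12
print("T^-1{T{f}} matches f at the sampled points")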

If f(x) [<] g(x) then every T[f(x)] is dominated by g(x). Given this we can show that f(x) {>} g(x) must be false. In order for f(x) {>} g(x) to be true, every transformation of g(x) would need to be dominated by f(x), or equivalently, every inverse transformation of f(x) would need to dominate g(x). But not every inverse transformation of f(x) can dominate g(x), since a subset of these are the translations of f(x), which are all dominated by g(x). Therefore f(x) {>} g(x) must be false, and f(x) {<=} g(x) must be true, since either all transformations of f(x) are dominated by g(x), or some of them are but not all of them.

The fourth conjecture is very important to prove because, when combined with the other 3, it amounts to showing that the sets of the transformation class and the sets of the translation class preserve the same order, but that the translation sets are subsets of the transformation classes.

We begin with the assumption f(x) {=} g(x). This means some transformations of f(x) dominate g(x), while others do not. If we apply yet another transformation to f(x), it doesn't change the set of transformations, so again some transformations of T{f(x)} will dominate g(x), while others will not. Thus the second statement follows from the first.

We can use the transformation definition when we are interested in broad growth classes (linear, quadratic, cubic, exponential, hyper-exponential, tetrational, etc.), and use the translation definition when we are interested in the more natural idea of "grows faster".

Now let's consider how the transformation definition compares to Big-O.

Transformation Definition Vs. Big-O Notation

Since Big-O Notation is the standard in computer science and professional mathematics I will compare my transformation definition to this recognized standard. Let's say we have two functions again. Let's further say that:

f(x) {<} g(x)

according to the transformation definition. Does it follow that f(x) is Og(x)? That is, is it true that:

f(x) {<} g(x) --> f(x) is Og(x)

To prove this we first consider what the first statement means. It means that every transformation of f(x) is overtaken by g(x) at some value. This naturally includes the transformation which does not change f(x), i.e. 1f(1x+0)+0. So by definition there exists an m such that f(x)<g(x) : x>m. Let M=1 and we obtain f(x) < 1g(x) : x>m. So it is true that the first statement implies the second. Also note that f(x) [<] g(x) --> f(x) is Og(x), since the default translation must also have an m, and we can simply let M=1. So we have:

f(x) {<} g(x) --> f(x) is Og(x)

f(x) [<] g(x) --> f(x) is Og(x)

This is an important connection which suggests that similar growth classes are obtained with either DOT definition. But a more vital issue is whether the O-definition of "grows faster" is identical to the dominance over transformations definition. We'll use the notation f(x) O[<] g(x) for the O-definition. Remember that:

f(x) is Og(x) & g(x) is not Of(x) --> f(x) O[<] g(x)

So our question is, does:

f(x) {<} g(x) --> f(x) O[<] g(x)

Again we can say that if g(x) overtakes every transformation of f(x), then it naturally overtakes the default transformation, and therefore there exists an m such that:

f(x) < g(x) : x>m

We know then that f(x) is Og(x). To prove the second condition true (g(x) is not Of(x)), we have to show that M and m fail to exist given f(x) {<} g(x).

To show g(x) is Of(x) we would need to find M and m that satisfies:

g(x) < Mf(x) : x>m

Note however that there can be no such M provided f(x) {<} g(x), since Mf(x) is simply a transformation of f(x), and therefore g(x) must overtake it at some point. Therefore there can be no M and m satisfying this condition. Therefore if f(x) {<} g(x), then f(x) is Og(x) and g(x) is not Of(x), which means f(x) O[<] g(x). So it's true!

The last conjecture that's important to prove is:

f(x) {=} g(x) --> f(x) O[=] g(x)

Each of these statements actually breaks down into two. We have:

f(x) {</} g(x) & g(x) {</} f(x) --> f(x) {=} g(x)

f(x) is Og(x) & g(x) is Of(x) --> f(x) O[=] g(x)

The first two statements can actually be simplified into a single assertion: some transformations of f(x) dominate g(x) while others don't. Let's then consider f(x) is Og(x). For this to be true there must be M and m such that:

f(x) < Mg(x) : x>m

Recall the idea of inverse transformations. Let T{g(x)} = Mg(x); then T^-1{f(x)} = f(x)/M. To show there is such an M such that f(x)/M is dominated by g(x), we can consider transformations of f(x).

Note that f(x)+dy always dominates f(x) when dy>0, and f(x)+dy is always dominated by f(x) when dy<0. When dy=0 the two functions are of equal strength.

f(x+dx) always dominates f(x) when dx>0 and f(x) is a strictly increasing function.