
2.7.1. Solving Recurrence Relations

Recurrence relations are often used to model the cost of recursive functions. For example, the standard Mergesort takes a list of size \(n\), splits it in half, performs Mergesort on each half, and finally merges the two sublists in \(n\) steps. The cost for this can be modeled as

\[{\bf T}(n) = 2{\bf T}(n/2) + n.\]

In other words, the cost of the algorithm on input of size \(n\) is two times the cost for input of size \(n/2\) (due to the two recursive calls to Mergesort) plus \(n\) (the time to merge the sublists together again).
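To make the cost model concrete, it can be transcribed directly as a recursive Python function. This is an illustrative sketch: the function name is mine, and it assumes \(n\) is a power of two with a base case of \({\bf T}(2) = 1\) (the base case used in Example 2.7.1 below).

```python
def mergesort_cost(n):
    """Cost model T(n) = 2*T(n/2) + n for Mergesort on a list of size n.

    Assumes n is a power of two, with base case T(2) = 1."""
    if n <= 2:
        return 1
    return 2 * mergesort_cost(n // 2) + n
```

For example, `mergesort_cost(8)` evaluates to 20, since \({\bf T}(4) = 2 \cdot 1 + 4 = 6\) and \({\bf T}(8) = 2 \cdot 6 + 8 = 20\).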


There are many approaches to solving recurrence relations, and we briefly consider three here. The first is an estimation technique: Guess the upper and lower bounds for the recurrence, use induction to prove the bounds, and tighten as required. The second approach is to expand the recurrence to convert it to a summation and then use summation techniques. The third approach is to take advantage of already proven theorems when the recurrence is of a suitable form. In particular, typical divide-and-conquer algorithms such as Mergesort yield recurrences of a form that fits a pattern for which we have a ready solution.

2.7.1.1. Estimating Upper and Lower Bounds

The first approach to solving recurrences is to guess the answer and then attempt to prove it correct. If a correct upper or lower bound estimate is given, an easy induction proof will verify this fact. If the proof is successful, then try to tighten the bound. If the induction proof fails, then loosen the bound and try again. Once the upper and lower bounds match, you are finished. This is a useful technique when you are only looking for asymptotic complexities. When seeking a precise closed-form solution (i.e., you seek the constants for the expression), this method will probably be too much work.

Example 2.7.1

Use the guessing technique to find the asymptotic bounds for Mergesort, whose running time is described by the equation

\[{\bf T}(n) = 2{\bf T}(n/2) + n; \quad {\bf T}(2) = 1.\]

We begin by guessing that this recurrence has an upper bound in \(O(n^2)\). To be more precise, assume that

\[{\bf T}(n) \leq n^2.\]

We prove this guess is correct by induction. In this proof, we assume that \(n\) is a power of two, to make the calculations easy. For the base case, \({\bf T}(2) = 1 \leq 2^2\). For the induction step, we need to show that \({\bf T}(n) \leq n^2\) implies that \({\bf T}(2n) \leq (2n)^2\) for \(n = 2^N, N \geq 1\). The induction hypothesis is

\[{\bf T}(i) \leq i^2 \quad \mbox{for all } i \leq n.\]

It follows that

\[{\bf T}(2n) = 2{\bf T}(n) + 2n \leq 2n^2 + 2n \leq 4n^2 \leq (2n)^2\]

which is what we wanted to prove. Thus, \({\bf T}(n)\) is in \(O(n^2)\).

Is \(O(n^2)\) a good estimate? In the next-to-last step we went from \(2n^2 + 2n\) to the much larger \(4n^2\). This suggests that \(O(n^2)\) is a high estimate. If we guess something smaller, such as \({\bf T}(n) \leq cn\) for some constant \(c\), it should be clear that this cannot work because \(c \cdot 2n = 2cn\) and there is no room for the extra \(n\) cost to join the two pieces together. Thus, the true cost must be somewhere between \(cn\) and \(n^2\).

Let us now try \({\bf T}(n) \leq n \log n\). For the base case, the definition of the recurrence sets \({\bf T}(2) = 1 \leq (2 \cdot \log 2) = 2\). Assume (induction hypothesis) that \({\bf T}(n) \leq n \log n\). Then,

\[{\bf T}(2n) = 2{\bf T}(n) + 2n \leq 2n \log n + 2n \leq 2n(\log n + 1) \leq 2n \log 2n\]

which is what we seek to prove. In similar fashion, we can prove that \({\bf T}(n)\) is in \(\Omega(n \log n)\). Thus, \({\bf T}(n)\) is also \(\Theta(n \log n)\).
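As a numerical sanity check (not part of the proof), for powers of two this recurrence actually solves exactly to \({\bf T}(n) = n \log_2 n - n/2\), which sits strictly between the failed guess \(cn\) and the loose bound \(n^2\). A short script can confirm this:

```python
def T(n):
    """T(n) = 2*T(n/2) + n with T(2) = 1, for n a power of two."""
    return 1 if n <= 2 else 2 * T(n // 2) + n

# Claim (easily checked by induction): T(2^m) = 2^m * m - 2^(m-1),
# i.e. T(n) = n*log2(n) - n/2 -- a Theta(n log n) function.
for m in range(1, 15):
    n = 2 ** m
    assert T(n) == n * m - n // 2
```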

Example 2.7.2

We know that the factorial function grows exponentially. How does it compare to \(2^n\)? To \(n^n\)? Do they all grow 'equally fast' (in an asymptotic sense)? We can begin by looking at a few initial terms.

\[\begin{array}{r|rrrrrrrrr}
n   & 1 & 2 & 3  & 4   & 5    & 6     & 7      & 8        & 9 \\ \hline
n!  & 1 & 2 & 6  & 24  & 120  & 720   & 5040   & 40320    & 362880 \\
2^n & 2 & 4 & 8  & 16  & 32   & 64    & 128    & 256      & 512 \\
n^n & 1 & 4 & 27 & 256 & 3125 & 46656 & 823543 & 16777216 & 387420489
\end{array}\]
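The table above is easy to reproduce with a few lines of Python (a quick illustration, not from the text):

```python
import math

# Print n, n!, 2^n, and n^n for the first few values of n.
for n in range(1, 10):
    print(f"{n}\t{math.factorial(n)}\t{2 ** n}\t{n ** n}")
```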

We can also look at these functions in terms of their recurrences.

\[n! = \left\{\begin{array}{ll}1 & n = 1 \\ n(n-1)! & n > 1\end{array}\right.\]
\[2^n = \left\{\begin{array}{ll}2 & n = 1 \\ 2(2^{n-1}) & n > 1\end{array}\right.\]
\[n^n = \left\{\begin{array}{ll}n & n = 1 \\ n(n^{n-1}) & n > 1\end{array}\right.\]

At this point, our intuition should be telling us pretty clearly the relative growth rates of these three functions. But how do we prove formally which grows the fastest? And how do we decide if the differences are significant in an asymptotic sense, or just constant factor differences?

We can use logarithms to help us get an idea about the relative growth rates of these functions. Clearly, \(\log 2^n = n\). Equally clearly, \(\log n^n = n \log n\). We can easily see from this that \(2^n\) is \(o(n^n)\); that is, \(n^n\) grows asymptotically faster than \(2^n\).

How does \(n!\) fit into this? We can again take advantage of logarithms. Obviously \(n! \leq n^n\), so we know that \(\log n!\) is \(O(n \log n)\). But what about a lower bound for the factorial function? Consider the following.

\begin{eqnarray*}
n! &=& n \times (n-1) \times \cdots \times \frac{n}{2} \times (\frac{n}{2} - 1) \times \cdots \times 2 \times 1 \\
   &\geq& \frac{n}{2} \times \frac{n}{2} \times \cdots \times \frac{n}{2} \times 1 \times \cdots \times 1 \times 1 \\
   &=& (\frac{n}{2})^{n/2}
\end{eqnarray*}

Therefore

\[\log n! \geq \log (\frac{n}{2})^{n/2} = (\frac{n}{2}) \log (\frac{n}{2}).\]

In other words, \(\log n!\) is in \(\Omega(n \log n)\). Thus, \(\log n! = \Theta(n \log n)\).
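Both bounds can be checked numerically. The sketch below (an illustration, not part of the argument) uses `math.lgamma`, where \(\mathrm{lgamma}(n+1) = \ln n!\), to confirm that \(\ln n!\) is squeezed between \((n/2)\ln(n/2)\) and \(n \ln n\):

```python
import math

for n in [10, 100, 1000, 10000]:
    log_fact = math.lgamma(n + 1)          # ln(n!)
    upper = n * math.log(n)                # from n! <= n^n
    lower = (n / 2) * math.log(n / 2)      # from n! >= (n/2)^(n/2)
    assert lower <= log_fact <= upper
    print(n, log_fact / upper)             # ratio creeps slowly toward 1
```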

Note that this does not mean that \(n! = \Theta(n^n)\). Because \(\log n^2 = 2 \log n\), it follows that \(\log n = \Theta(\log n^2)\) but \(n \neq \Theta(n^2)\). The log function often works as a 'flattener' when dealing with asymptotics. That is, whenever \(f(n)\) is in \(O(g(n))\) we know that \(\log f(n)\) is in \(O(\log g(n))\). But knowing that \(\log f(n) = \Theta(\log g(n))\) does not necessarily mean that \(f(n) = \Theta(g(n))\).

Example 2.7.3

What is the growth rate of the Fibonacci sequence? We define the Fibonacci sequence as \(f(n) = f(n-1) + f(n-2)\) for \(n \geq 2\); \(f(0) = f(1) = 1\).

In this case it is useful to compare the ratio of \(f(n)\) to \(f(n-1)\). The following table shows the first few values.

\[\begin{array}{c|llllllll}
n           & 1 & 2 & 3   & 4     & 5   & 6     & 7     & 8 \\ \hline
f(n)        & 1 & 2 & 3   & 5     & 8   & 13    & 21    & 34 \\
f(n)/f(n-1) & 1 & 2 & 1.5 & 1.666 & 1.6 & 1.625 & 1.615 & 1.619
\end{array}\]
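Extending the table is straightforward; the helper below (a hypothetical name, not from the text) generates \((n, f(n), f(n)/f(n-1))\) for the indexing \(f(0) = f(1) = 1\) used here:

```python
def fib_ratios(count):
    """Yield (n, f(n), f(n)/f(n-1)) for the sequence f(0) = f(1) = 1."""
    prev, cur = 1, 1
    for n in range(1, count + 1):
        yield n, cur, cur / prev
        prev, cur = cur, prev + cur
```

Running `list(fib_ratios(40))` shows the ratio settling to 1.6180339887..., the value derived below.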

If we continue for more terms, the ratio appears to converge on a value slightly greater than 1.618. Assuming \(f(n)/f(n-1)\) really does converge to a fixed value as \(n\) grows, we can determine what that value must be.

\[\frac{f(n)}{f(n-2)} = \frac{f(n-1)}{f(n-2)} + \frac{f(n-2)}{f(n-2)} \rightarrow x + 1\]

for some value \(x\). This follows from the fact that \(f(n) = f(n-1) + f(n-2)\). We divide by \(f(n-2)\) to make the second term go away, and we also get something useful in the first term. Remember that the goal of such manipulations is to give us an equation that relates \(f(n)\) to something without recursive calls.

For large \(n\), we also observe that:

\[\frac{f(n)}{f(n-2)} = \frac{f(n)}{f(n-1)} \cdot \frac{f(n-1)}{f(n-2)} \rightarrow x^2\]

as \(n\) gets big. This comes from multiplying \(f(n)/f(n-2)\) by \(f(n-1)/f(n-1)\) and rearranging.

If \(x\) exists, then \(x^2 - x - 1 \rightarrow 0\). Using the quadratic equation, the only solution greater than one is

\[x = \frac{1 + \sqrt{5}}{2} \approx 1.618.\]

This expression also has the name \(\phi\). What does this say about the growth rate of the Fibonacci sequence? It is exponential, with \(f(n) = \Theta(\phi^n)\). More precisely, \(f(n)\) converges to

\[\frac{\phi^{n+1} - (1 - \phi)^{n+1}}{\sqrt{5}}.\]
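A quick numerical check of this closed form (an illustration; note that for the indexing \(f(0) = f(1) = 1\) used in this example, the exponent in the closed form is \(n+1\)):

```python
import math

PHI = (1 + math.sqrt(5)) / 2

def fib(n):
    """f(0) = f(1) = 1; f(n) = f(n-1) + f(n-2)."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def closed_form(n):
    # Exponent n+1 matches the f(0) = f(1) = 1 indexing of this example.
    return (PHI ** (n + 1) - (1 - PHI) ** (n + 1)) / math.sqrt(5)

for n in range(25):
    assert abs(fib(n) - closed_form(n)) < 1e-6
```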

2.7.1.2. Expanding Recurrences

Estimating bounds is effective if you only need an approximation to the answer. More precise techniques are required to find an exact solution. One approach is called expanding the recurrence. In this method, the smaller terms on the right side of the equation are in turn replaced by their definition. This is the expanding step. These terms are again expanded, and so on, until a full series with no recurrence results. This yields a summation, and techniques for solving summations can then be used.


Example 2.7.4

Our next example models the cost of the algorithm to build a heap. You should recall that to build a heap, we first heapify the two subheaps, then push down the root to its proper position. The cost is:

\[f(n) \leq 2f(n/2) + 2 \log n.\]

Let us find a closed-form solution for this recurrence. We can expand the recurrence a few times to see that

\begin{eqnarray*}
f(n) &\leq& 2f(n/2) + 2 \log n \\
     &\leq& 2[2f(n/4) + 2 \log n/2] + 2 \log n \\
     &\leq& 2[2(2f(n/8) + 2 \log n/4) + 2 \log n/2] + 2 \log n
\end{eqnarray*}

We can deduce from this expansion that this recurrence is equivalent to the following summation and its derivation:

\begin{eqnarray*}
f(n) &\leq& \sum_{i=0}^{\log n - 1} 2^{i+1} \log(n/2^i) \\
     &=& 2 \sum_{i=0}^{\log n - 1} 2^i (\log n - i) \\
     &=& 2 \log n \sum_{i=0}^{\log n - 1} 2^i - 4 \sum_{i=0}^{\log n - 1} i 2^{i-1} \\
     &=& 2n \log n - 2 \log n - 2n \log n + 4n - 4 \\
     &=& 4n - 2 \log n - 4.
\end{eqnarray*}
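The closed form can be checked against a direct evaluation of the summation. The sketch below (an independent numeric check, not part of the derivation) assumes \(n\) is a power of two and takes all logs base 2:

```python
import math

def heap_sum(n):
    """Evaluate sum_{i=0}^{log n - 1} 2^(i+1) * log(n / 2^i), logs base 2."""
    m = int(math.log2(n))
    return sum(2 ** (i + 1) * (m - i) for i in range(m))

# The summation agrees with the closed form 4n - 2 log n - 4.
for m in range(1, 16):
    n = 2 ** m
    assert heap_sum(n) == 4 * n - 2 * m - 4
```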

2.7.1.3. Divide-and-Conquer Recurrences

The third approach to solving recurrences is to take advantage of known theorems that provide the solution for classes of recurrences. Of particular practical use is a theorem that gives the answer for a class known as divide-and-conquer recurrences. These have the form

\[{\bf T}(n) = a{\bf T}(n/b) + cn^k; \quad {\bf T}(1) = c\]

where \(a\), \(b\), \(c\), and \(k\) are constants. In general, this recurrence describes a problem of size \(n\) divided into \(a\) subproblems of size \(n/b\), while \(cn^k\) is the amount of work necessary to combine the partial solutions. Mergesort is an example of a divide-and-conquer algorithm, and its recurrence fits this form. So does binary search. We use the method of expanding recurrences to derive the general solution for any divide-and-conquer recurrence, assuming that \(n = b^m\).

\begin{eqnarray*}
{\bf T}(n) &=& a{\bf T}(n/b) + cn^k \\
 &=& a(a{\bf T}(n/b^2) + c(n/b)^k) + cn^k \\
 &=& a(a[a{\bf T}(n/b^3) + c(n/b^2)^k] + c(n/b)^k) + cn^k \\
 &=& a^m{\bf T}(1) + a^{m-1}c(n/b^{m-1})^k + \cdots + ac(n/b)^k + cn^k \\
 &=& a^mc + a^{m-1}c(n/b^{m-1})^k + \cdots + ac(n/b)^k + cn^k \\
 &=& c\sum_{i=0}^{m} a^{m-i} b^{ik} \\
 &=& ca^m\sum_{i=0}^{m} (b^k/a)^i.
\end{eqnarray*}


So, we are left with this result:

\[{\bf T}(n) = ca^m\sum_{i=0}^{m} (b^k/a)^i.\]

At this point, it is useful to note that

\[a^m = a^{\log_b n} = n^{\log_b a}.\]

This gives us

\[{\bf T}(n) = c n^{\log_b a} \sum_{i=0}^{m} (b^k/a)^i.\]

The summation part of this equation is a geometric series whose sum depends on the ratio \(r = b^k/a\). There are three cases.

  1. \(r < 1\). From Equation (4) of Module summation,

    \[\sum_{i=0}^{m} r^i < 1/(1-r), \mbox{ a constant.}\]

    Thus,

    \[{\bf T}(n) = \Theta(a^m) = \Theta(n^{\log_b a}).\]
  2. \(r = 1\). Because \(r = b^k/a\), we know that \(a = b^k\). From the definition of logarithms it follows immediately that \(k = \log_b a\). Also note that since we defined \(n = b^m\), then \(m = \log_b n\). Thus,

    \[{\bf T}(n) = ca^m \sum_{i=0}^{m} 1 = ca^m(m+1).\]

    Because \(a^m = n^{\log_b a} = n^k\), we have

    \[{\bf T}(n) = \Theta(n^{\log_b a} \log_b n) = \Theta(n^k \log_b n).\]
  3. \(r > 1\). From Equation (5) of Module summation,

    \[\sum_{i=0}^{m} r^i = \frac{r^{m+1} - 1}{r - 1} = \Theta(r^m).\]

    Thus,

    \[{\bf T}(n) = \Theta(a^m r^m) = \Theta(a^m(b^k/a)^m) = \Theta(b^{km}) = \Theta(n^k).\]

We can summarize the above derivation as the following theorem, sometimes referred to as the Master Theorem.

Theorem 2.7.1

The Master Theorem: For any recurrence relation of the form \({\bf T}(n) = a{\bf T}(n/b) + cn^k\), \({\bf T}(1) = c\), the following relationships hold.

\[{\bf T}(n) = \left\{\begin{array}{ll}
\Theta(n^{\log_b a}) & \mbox{if } a > b^k \\
\Theta(n^k \log_b n) & \mbox{if } a = b^k \\
\Theta(n^k) & \mbox{if } a < b^k.
\end{array}\right.\]

This theorem may be applied whenever appropriate, rather than re-deriving the solution for the recurrence.
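Since the three cases are mechanical to apply, they are easy to encode. The function below is an illustrative sketch (the name and output format are mine, not the text's):

```python
import math

def master_theorem(a, b, k):
    """Asymptotic solution of T(n) = a*T(n/b) + c*n^k, as a string."""
    if a > b ** k:
        exponent = math.log(a, b)        # Theta(n^(log_b a))
        return f"Theta(n^{exponent:g})"
    if a == b ** k:
        return f"Theta(n^{k} log n)"     # Theta(n^k log n)
    return f"Theta(n^{k})"               # Theta(n^k)
```

For example, `master_theorem(3, 5, 2)` returns `"Theta(n^2)"` (case 3) and `master_theorem(2, 2, 1)` returns `"Theta(n^1 log n)"` (case 2), matching the two worked examples that follow.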


Example 2.7.5

Apply the Master Theorem to solve

\[{\bf T}(n) = 3{\bf T}(n/5) + 8n^2; \quad {\bf T}(1) = 8.\]

Because \(a=3\), \(b=5\), \(c=8\), and \(k=2\), we find that \(3 < 5^2\). Applying case (3) of the theorem, \({\bf T}(n) = \Theta(n^2)\).

Example 2.7.6

Use the Master Theorem to solve the recurrence relation for Mergesort:

\[{\bf T}(n) = 2{\bf T}(n/2) + n; \quad {\bf T}(1) = 1.\]

Because \(a=2\), \(b=2\), \(c=1\), and \(k=1\), we find that \(2 = 2^1\). Applying case (2) of the theorem, \({\bf T}(n) = \Theta(n \log n)\).

2.7.1.4. Average-Case Analysis of Quicksort

In Module Quicksort, we determined that the average-case analysis of Quicksort had the following recurrence:

\[{\bf T}(n) = cn + \frac{1}{n}\sum_{k=0}^{n-1} [{\bf T}(k) + {\bf T}(n-1-k)], \qquad {\bf T}(0) = {\bf T}(1) = c.\]

The \(cn\) term is an upper bound on the findpivot and partition steps. This equation comes from assuming that the partitioning element is equally likely to occur in any position \(k\). It can be simplified by observing that the two recurrence terms \({\bf T}(k)\) and \({\bf T}(n-1-k)\) are equivalent, because one simply counts up from \({\bf T}(0)\) to \({\bf T}(n-1)\) while the other counts down from \({\bf T}(n-1)\) to \({\bf T}(0)\). This yields

\[{\bf T}(n) = cn + \frac{2}{n}\sum_{k=0}^{n-1} {\bf T}(k).\]

This form is known as a recurrence with full history. The key to solving such a recurrence is to cancel out the summation terms. The shifting method for summations provides a way to do this. Multiply both sides by \(n\) and subtract the result from the formula for \((n+1){\bf T}(n+1)\):

\begin{eqnarray*}
n{\bf T}(n) &=& cn^2 + 2\sum_{k=0}^{n-1} {\bf T}(k) \\
(n+1){\bf T}(n+1) &=& c(n+1)^2 + 2\sum_{k=0}^{n} {\bf T}(k).
\end{eqnarray*}

Subtracting the first equation from the second yields:

\begin{eqnarray*}
(n+1){\bf T}(n+1) - n{\bf T}(n) &=& c(n+1)^2 - cn^2 + 2{\bf T}(n) \\
(n+1){\bf T}(n+1) - n{\bf T}(n) &=& c(2n+1) + 2{\bf T}(n) \\
(n+1){\bf T}(n+1) &=& c(2n+1) + (n+2){\bf T}(n) \\
{\bf T}(n+1) &=& \frac{c(2n+1)}{n+1} + \frac{n+2}{n+1}{\bf T}(n).
\end{eqnarray*}

At this point, we have eliminated the summation and can now use our normal methods for solving recurrences to get a closed-form solution. Note that \(\frac{c(2n+1)}{n+1} < 2c\), so we can simplify the result. Expanding the recurrence, we get

\begin{eqnarray*}
{\bf T}(n+1) &\leq& 2c + \frac{n+2}{n+1}{\bf T}(n) \\
 &=& 2c + \frac{n+2}{n+1}\left(2c + \frac{n+1}{n}{\bf T}(n-1)\right) \\
 &=& 2c + \frac{n+2}{n+1}\left(2c + \frac{n+1}{n}\left(2c + \frac{n}{n-1}{\bf T}(n-2)\right)\right) \\
 &=& 2c + \frac{n+2}{n+1}\left(2c + \cdots + \frac{4}{3}(2c + \frac{3}{2}{\bf T}(1))\right) \\
 &=& 2c\left(1 + \frac{n+2}{n+1} + \frac{n+2}{n+1}\frac{n+1}{n} + \cdots + \frac{n+2}{n+1}\frac{n+1}{n}\cdots\frac{3}{2}\right) \\
 &=& 2c\left(1 + (n+2)\left(\frac{1}{n+1} + \frac{1}{n} + \cdots + \frac{1}{2}\right)\right) \\
 &=& 2c + 2c(n+2)\left({\cal H}_{n+1} - 1\right)
\end{eqnarray*}

for \({\cal H}_{n+1}\), the Harmonic Series. From Equation (10) of Module summation, \({\cal H}_{n+1} = \Theta(\log n)\), so the final solution is \(\Theta(n \log n)\).
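The algebra above can be spot-checked numerically: compute \({\bf T}(n)\) directly from the full-history recurrence and verify that the telescoped one-step recurrence reproduces it. This is an illustrative sketch with \(c = 1\); the identity holds for \(n \geq 2\), since the base cases do not themselves satisfy the recurrence.

```python
def quicksort_avg_costs(n_max, c=1.0):
    """T(n) = c*n + (2/n) * sum_{k=0}^{n-1} T(k), with T(0) = T(1) = c."""
    T = [c, c]
    running = 2 * c                        # running sum of T(0..len(T)-1)
    for n in range(2, n_max + 1):
        T.append(c * n + 2 * running / n)
        running += T[-1]
    return T

T = quicksort_avg_costs(200)
# Telescoped form: (n+1) T(n+1) = c(2n+1) + (n+2) T(n), for n >= 2.
for n in range(2, 199):
    lhs = (n + 1) * T[n + 1]
    rhs = (2 * n + 1) + (n + 2) * T[n]
    assert abs(lhs - rhs) < 1e-6 * lhs
```

The ratio `T[n] / (n * math.log(n))` also flattens out as `n` grows, consistent with the \(\Theta(n \log n)\) conclusion.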




