**Clever student:**

I know!

x^0 = x^(1-1) = x^1 * x^(-1) = x/x = 1.

Now we just plug in x=0, and we see that zero to the zero is one!

**Cleverer student:**

No, you’re wrong! You’re not allowed to divide by zero, which you did in the last step. This is how to do it:

0^x = 0^(1+x-1) = 0^1 * 0^(x-1) = 0 * 0^(x-1) = 0

which is true since anything times 0 is 0. That means that

0^0 = 0.

**Cleverest student:**

That doesn’t work either, because if x = 0 then

0^(x-1)

is

0^(0-1) = 0^(-1) = 1/0,

so your third step also involves dividing by zero, which isn’t allowed! Instead, we can think about the function x^x and see what happens as x > 0 gets small. We have:

lim (x->0+) x^x

= lim (x->0+) e^(ln(x^x))

= lim (x->0+) e^(x ln x)

= e^(lim (x->0+) x ln x)

= e^(lim (x->0+) (ln x)/(1/x))

= e^(lim (x->0+) (1/x)/(-1/x^2)) [by L’Hopital’s rule]

= e^(lim (x->0+) -x)

= e^0

= 1

So, since lim (x->0+) x^x = 1, that means that 0^0 = 1.

**High School Teacher:**

Showing that x^x approaches 1 as the positive value x gets arbitrarily close to zero does not prove that 0^0 = 1. The variable x having a value close to zero is different from its having a value of exactly zero. It turns out that 0^0 is undefined. 0^0 does not have a value.

**Calculus Teacher:**

For all x > 0, we have

0^x = 0.

Hence,

lim (x->0+) 0^x = 0.

That is, as x gets arbitrarily close to 0 (but remains positive), 0^x stays at 0.

On the other hand, for real numbers y such that y != 0, we have that

y^0 = 1.

Hence,

lim (y->0) y^0 = 1.

That is, as y gets arbitrarily close to 0, y^0 stays at 1.

Therefore, we see that the function f(x, y) = y^x has a discontinuity at the point (x, y) = (0, 0). In particular, when we approach (0,0) along the line with x=0 we get

lim (y->0) y^0 = 1

but when we approach (0,0) along the line segment with y=0 and x>0 we get

lim (x->0+) 0^x = 0.

Therefore, the value of 0^0 is going to depend on the direction in which we take the limit. This means that there is no way to define 0^0 that will make the function y^x continuous at the point (x, y) = (0, 0).

**Mathematician:** Zero raised to the zero power is one. Why? Because mathematicians said so. No really, it’s true.

Let’s consider the problem of defining the function y^x for positive integers y and x. There are a number of definitions that all give identical results. For example, one idea is to use repeated multiplication for our definition:

y^x := 1 * y * y * … * y

where the y is repeated x times. In that case, when x is one, the y is repeated just one time, so we get

y^1 = 1 * y = y.

However, this definition extends quite naturally from the positive integers to the non-negative integers, so that when x is zero, y is repeated zero times, giving

y^0 = 1

which holds for any y. Hence, when y is zero, we have

0^0 = 1.

Look, we’ve just proved that 0^0 = 1! But this is only for one possible definition of y^x. What if we used another definition? For example, suppose that we decide to define y^x as

y^x := lim (z->x+) y^z.

In words, that means that the value of y^x is whatever y^z approaches as the real number z gets smaller and smaller, approaching the value x arbitrarily closely.

*[Clarification:* a reader asked how it is possible that we can use y^z in our definition of y^x, which seems to be recursive. The reason it is okay is because we are working here only with z > x, and everyone agrees about what y^z equals in this case. Essentially, we are using the known z > x cases to construct a function that has a value for the more difficult x=0 and y=0 case.]

Interestingly, using this definition, we would have

0^0 = lim (z->0+) 0^z = lim (z->0+) 0 = 0.

Hence, we would find that 0^0 = 0 rather than 0^0 = 1. Granted, this definition we’ve just used feels rather unnatural, but it does agree with the common-sense notion of what y^x means for all positive real numbers x and y, and it does preserve continuity of the function as we approach x=0 and y=0 along a certain line.

So which of these two definitions (if either of them) is right? What is 0^0 *really*? Well, for x>0 and y>0 we know what we mean by y^x. But when x=0 and y=0, the formula doesn’t have an obvious meaning. The value of 0^0 is going to depend on our preferred choice of definition for what we mean by that statement, and our intuition about what y^x means for positive values is not enough to conclude what it means for zero values.

But if this is the case, then how can mathematicians claim that 0^0 = 1? Well, merely because it is useful to do so. Some very important formulas become less elegant to write down if we instead use 0^0 = 0 or if we say that 0^0 is undefined. For example, consider the binomial theorem, which says that:

(a+b)^x = Sum (k=0->x) (x k) * a^k * b^(x-k)

where (x k) means the binomial coefficient “x choose k”.

Now, setting a=0 on both sides and assuming b != 0, we get

b^x = (0+b)^x

= Sum (k=0->x) (x k) * 0^k * b^(x-k)

= (x 0) * 0^0 * b^x + (x 1) * 0^1 * b^(x-1) + (x 2) * 0^2 * b^(x-2) + …

= 0^0 * b^x

where I’ve used that 0^k = 0 for k > 0, and that (x 0) = 1. Now, it so happens that the right-hand side has the magical factor 0^0. Hence, if we do not use 0^0 = 1 then the binomial theorem (as written) does not hold when a=0, because then (0+b)^x does not equal b^x.

If mathematicians were to use 0^0 = 0, or to say that 0^0 is undefined, then the binomial theorem would continue to hold (in some form), though not as written above. In that case, though, the theorem would be more complicated, because it would have to handle the special case of the term corresponding to k=0. We gain elegance and simplicity by using 0^0 = 1.

There are some further reasons why using is preferable, but they boil down to that choice being more useful than the alternative choices, leading to simpler theorems, or feeling more “natural” to mathematicians. The choice is not “right”, it is merely nice.

Most people are talking about limits and things…

Let’s take it at the point.

0^0=0^(1-1)=0^1/0^1=0/0=indeterminate

because any indeterminate number times 0 is 0. So 0/0 can be anything, like 0^0.

you need something other than numbers

https://en.wikipedia.org/wiki/Kronecker_delta


In response to Ruvian, who wrote at Oct 9, 6:41:

“Most people are talking about limits and things…

Let’s take it at the point.

0^0=0^(1-1)=0^1/0^1=0/0=indeterminate

because any indeterminate number times 0 is 0. So 0/0 can be anything, like 0^0.”

All this means is that you can’t use the exponent laws to determine what 0^0 is. It doesn’t preclude using other methods, many of which have already been posted. Here’s a summary.

0. Limits don’t matter. Consider the indeterminate form 1^oo.

lim (h->0) (1+h)^(1/h) = e

This doesn’t mean that 1^oo = e. (!)

1. Empty product

3^2 = 1*3*3

3^1 = 1*3

3^0 = 1

3^-1 = 1/3

0^2 = 1*0*0

0^1 = 1*0

0^0 = 1

0^-1 = 1/0 = “undefined”

So when we attempt to define 0^-1, we write, as above, 1/0. Not 7/0 or -3/0 — but 1/0. Why? Because any number to the zeroth power is 1, which is why we write a^-1 = a^(0-1) = a^0/a^1 = 1/a. So when we write 0^-1 = 1/0 we are assuming 0^0 = 1.

2. Mapping

y^x is the number of ways to map x objects to y objects (the number of functions from an x-element set to a y-element set). This gives 0^0 = 1.

3. Series written using summation notation

Every math book I’ve ever read that gives e^x in summation notation says that it is good for _all_ x:

e^x = Sum(n=0->oo) x^n/n! = 1 + x + x^2/2 + . . .

This can only be true for x = 0 if 0^0 = 1. So you have a choice. Either 0^0 = 1, or the summation notation version of power series, including e^x, is invalid for x = 0. There’s no way out of it! Do you really want to make an exception here for x = 0? I think not!

And this is needed for other power series that start with 1, like cos x. And it makes the binomial theorem,

(a+b)^n = Sum (k=0->n) (n k) * a^k * b^(n-k)

work for the following cases

A) a = 0

B) b = 0

C) n = 0 and a+b = 0.

This greatly simplifies programming for series or sums that contain a term of the form x^0. It’s also elegant.

And 0^0 = 1 works fine with the exponent laws.

Bottom line: 0^0 = 1

Alan E. Feldman

0^0 = 0

infinite root of( 0)=0

because the number which is multiplied infinite times to get 0 is 0

so infinite root of( 0)=0 can be written as

0^(1÷infinite)

0^(1÷infinite) can be written as 0^0

0^0 is equal to infinite root of (0)

so 0^0 is 0

lim (h->oo) ((2h+1)/(2h))^h = ?

lim (h->oo) (2h/(2h+1))^h = ?

what’s the scoop ? one does not , the other intangible

https://en.wikipedia.org/wiki/Parity_%28mathematics%29

This question has got me crazy…

how can we bring the concept of the limit of a function into indices? I thought the limit of a function was related to differential calculus.

3^4 = 3*3*3*3= 81

3^3 = 3*3*3= 27

3^2 = 3*3 = 9

3^1 = 3

3^0 = undefined

but 81 ÷ 3 = 27….27 ÷ 3 = 9…. 9 ÷ 3 = 3…. 3÷3 = 1…

Therefore 3^ 0 = 1

Can 0^0 be = 1 ?

0^4 = 0

0^3 = 0

0^2 = 0

0^1 = 0

0^0 = Indeterminate

betaneptune,

My comment saying 0^0 is indeterminate, I think, did not prove what it is, but proved what it is not.

“All this means is that you can’t use the exponent laws to determine what 0^0 is.”

That’s not what I meant to say. I meant that the exponent laws reveal that it’s indeterminate.

I could try to prove that 0^0 is 0, as tried by jeremy.

In fact, it can be, since 0^0=e^(ln(0^0))=e^(0*ln(0))=e^(0*(-infinity)), which is an indeterminate form and can lead to any result (including 1).

I searched the web for more quotes to make my post seem more “powerful” (lol), and I found the following:

“It is commonly taught that any number to the zero power is 1, and zero to any power is 0. […] Well, it is undefined (since x^y as a function of 2 variables is not continuous at the origin).”

Finally, it’s not a good thing (I’ve been thinking about the most common problems of undefined points and their derivatives) for a function to have a point where its derivative is undefined while the point itself is defined. You can say x^(1/3) is not differentiable at x=0 (but 0^(1/3) still exists), but if you define x^(1/3) as a set of functions like [if x>=0 then x^(1/3), if x<0 then 0 (for the real part)] you could take the derivative of one part of the function. Then you achieve a defined point with a derivative defined on that part. The function itself just does not have a defined derivative for x=0, because it is naturally composed of 2 functions joined at x=0.

But x^y would not be this kind of case. x^y would be [1 if y goes to 0 faster than x, and 0 if x goes to 0 faster than y]. Plotting x^0 and 0^y will reveal that the first is always 1 and the other is always 0, except at x=0 and y=0. That’s the point. It is a set composed of different functions that differ from one another, naturally. They don’t share a common point. That’s why you can’t have even a partial derivative at the point x=0, y=0.

I understand that it seems to appear in reality as equal to one, but, mathematically, it’s anything. And I know that in infinite sums it’s considered to be 1, but that’s just a common convention. It’s like telling everyone that the thing that grows and has wood and leaves is called a tree. So now everyone knows what a tree is. But still, it doesn’t mean that “tree” is its true name.

(x)^(x)=sin(x*pi+sqrt2+?) 🙂

approximation ineffective -> but it is the image and graph and equation

http://oi64.tinypic.com/333wo07.jpg

put a= 0

put b=0

#include <iostream>

using namespace std;

int main()
{
    float a, b;
    cout << "enter the value of a" << endl;
    cin >> a;
    cout << "enter the value of b" << endl;
    cin >> b;
    float x;
    x = 1;
    for (int i = 1; i <= b; i++)
    {
        x = x * a;
    }
    cout << "a^b:" << " " << x << endl;
    return 0;
}

I get 1… then I am shocked… I checked my calculator… it gives a math error… how funny mathematics is…

@Wadut Shaikh

There are many different algorithms that reach the same solutions for most common cases of a particular problem, but algorithms that USUALLY work as expected can reach unexpected results when certain cases are used, and those cases can be different between many different approaches to the exact same problem.

For instance, while your algorithm works as expected for positive ‘a’ and positive integers ‘b’, and despite the fact that you declare your ‘a’ and ‘b’ variables as float type, which is a type that allows negatives and decimals, your algorithm returns an incorrect solution for:

all non-integer ‘b’ except for a=0 or a=1

all “b<0" except for a=1

An algorithm that works for significantly more cases would use the limit definition of the natural logarithm and its inverse function (stopped at an arbitrarily high number of trials, i.e., 10000) to solve "ln(a)", and then e^(ln(a)*b). Multiplication is safe in general, so as long as "ln(a)" solves, and so long as "e^x" is written correctly, the solution a^b should be both real and correct. However, despite the fact that this algorithm works for most cases, it returns an incorrect/no solution for:

"a=0"

Notice that these two completely different approaches (yours and mine) to the same problem return correct solutions for most cases as expected, but return incorrect solutions for completely different sets of cases. Specifically, in most of the cases mentioned where my algorithm fails to return a solution that DOES exist, yours succeeds, and vice versa. This is except for when they BOTH fail, when:

"a<0" and integer "b<0" (but not when "a=-1" and 'b' is even)

or "a=0" and "b<1"

TLDR: Just because an algorithm returns many predictable solutions, does not mean that ALL its solutions are correct or predictable. Your calculator is evidently using a more sophisticated method, which is no surprise since it's likely being sold to millions.

Tip: In C code, you can compress the lines "float x" and "x=1" into a single line, "float x=1".

of course you knew the answers before I knew the results

there is something strange in the square root 😉

http://oi68.tinypic.com/spdqug.jpg

The proof for 0/0 (or 0^0) which I have read is:

Let 0/0=x

0=0*x

Thus x can be any number

I extended it a bit as:

0-0*x=0

0-0*x is the remainder of 0/0 (Dividend - Quotient*Divisor)

0/0=0

x=0

I didn’t have time to read through all of the comments posted, but in your second starting point, the attempt to define 0^0 by first defining y^x := lim (z -> x+) y^z has in it what seems to me an error.

It is the line which goes:

0^0 = lim (x->0+) 0^x = lim(x->0+) 0 = 0

It seems more natural to me to have that line say:

0^0 = lim (x->0+) 0^x = lim(x->0+) 0^0 = 1

I am perfectly happy defining 0^0 := 1, since any other positive number raised to the zeroth power is one (1); i.e., x^0 = 1 for all positive x (I just haven’t thought through the negative-x aspect). Exponents, as you point out, are defined to represent multiplication of a number a number of times determined by the exponent: x^1 = x, x^2 = x times x, x^3 = x times x times x, and so forth, which tells me that it is natural to think of x^0 = 1. I think you showed it as y^x = 1 times y times y times y…, with y multiplied x times, so when the exponent reaches 0, there are zero ys to multiply against the 1, and y^x = 1.

0^0 = 0, is it right?

Take

25^0=1

Proof:

25^0.00001 = 1.000032189

25^1 is still 25, so the test with 0.00001 as ~0 seems correct, since the result is close enough to 1.

Now take

0.00001^0.00001 = 0.999884877

0.0000001^0.0000001 = 0.999998388

Seems it approaches 1, that means 0^0 can really be 1 🙂

0 to the 0 power is 0, because no matter whether you multiply 0 one time, 2 times, 3 times, 4 times, 5 times, and so on, it is 0.

https://en.wikipedia.org/wiki/Volume_of_an_n-ball 😀

when the sine is itself and its hyperbolic brother 😉

http://oi63.tinypic.com/1183apy.jpg

Declaring or defining a value for a troublesome expression is not proof. How you define the value of 0^0 does have far reaching implications for other established systems. And there may be a value that seems to be the best fit in many of those scenarios. Use what you like when you find it “helpful”, but understand that, in its most rudimentary arithmetic sense, it arises from 0^n/0^n = 0^(n-n) = 0^0 = 0/0 which I hope we are still declaring to be indeterminate. Else, we begin to neglect the meaning of equality.

There are several very key points that have been grossly overlooked.

First, definitions cannot be proved nor can they be disproved. Definitions are true, by definition. If 0^0 is defined to be 1, then it is 1–there is nothing to prove or disprove. Likewise, if 0^0 is regarded as undefined, then it is so–there is nothing to prove or disprove.

Definitions are made to be useful. If a concept is consistently useful (and 0^0 = 1 is very useful, as some posters have already noted in terms of the null product, binomial theorem, polynomial expressions, power series expressions, etc.), then the definition will likely be made to correspond to such uses, so many mathematicians regard 0^0 as 1 by definition.

Definitions are made to be used. The definition of a function is made in terms of its value for various input values–nothing more, nothing less. Exponentiation is a function: ^(x,y) = x^y, with various restrictions on the domain as to what x and y can simultaneously be, along with what values should be considered as candidate results (codomain). The definition of the value of a function generally does not depend on limits. A function can be defined to have a value at a point, and the function might have a limit at that point that agrees with the defined function value, it might have a limit that disagrees with the defined function value, or the limit might not exist at that point. The lack of a limit, or having a variety of potential limits, does not mean that we are unable to define a value for the function at that point. Likewise, if a limit does exist, there is no requirement to define the function to have a value at that point and, even if we do define the function to have a value at that point, there is no requirement that it match the limit value. The independence of the defined value of a function and the limiting value of a function is commonly a topic that students struggle with in the early weeks of introductory calculus courses. The two concepts are combined via the concept of continuity, but there is no requirement for a function to be continuous in order to have a definition or to be useful. This was the problem in the 1820s, when the idea of 0^0 being undefined became the dominant position: real analysts were still struggling to set firm foundations for their field and at that time conflated the ideas of definitions of functions with limits of functions. If 0^0 is defined to be 1, then use it happily and productively, while recognizing that limits appearing to have the form 0^0 do not necessarily have 0^0 = 1 as a result.

On a related note, many people think that because 0^0 is called an indeterminate form, that means that 0^0 must be undefined. That is not necessarily true. 0^0 being called indeterminate is properly used only in the context of limits and it means that x^y can have different limiting values for x and y both going to 0 depending on how quickly each approaches 0 relative to the other. Again, just because there are different limiting values, that does not mean that the value 0^0 as a particular point in a function domain must itself be indeterminate or undefined.

The argument is nonsense that 0^0 must be interpreted as 0^1 / 0^1, which is 0/0 and, therefore, involves division by 0, which is meaningless, so 0^0 is meaningless. I can equally validly argue that 0^2 must be undefined because 0^2 = 0^3 / 0^1 which involves division by zero. This argument is clearly nonsense for 0^2 and it is equally nonsensical for 0^0. 0^0 is not defined as 0^1 / 0^1.

It is erroneously argued that there is a contradiction between “anything (with the possible exception of the 0 currently in question) raised to an exponent of 0 is 1” and “0 raised to any exponent (except possibly 0) is 0”, and that such a contradiction clearly dispels any meaning for 0^0 itself. The error in such a statement is that z^0 = 1 is indeed true for all nonzero complex z, but 0^y fails miserably to be meaningful for negative real y and all non-real y. That is a huge difference in applicable domain. If one thing is true for only positive real numbers and another is true for all complex numbers, I will side with the latter where the two options meet. (This is different from the 0/0 case, where 0/z = 0 for all nonzero complex z and z/z = 1 for all nonzero complex z–there is no dominating domain, so to regard 0^0 as a comparable situation with 0/0 is failing to see the key distinction.)

The best that the “undefined” side can hope to accomplish meaningfully is that there is no one useful value for 0^0, so it is pointless to try to assign a defined value to 0^0.

However, there are many useful cases to regard 0^0 = 1, and everything I have read indicates there is no truly meaningful situation where 0^0 should be thought of as anything other than 1.

* The null product: a^n = 1 × a × … × a, where there are n copies of a multiplied, for n a nonnegative integer. If n = 0, then a^n = 1. There is nothing in this description that depends on the value of a–it is true for all a for which multiplication is valid, and there is nothing mentioned (nor needing to be mentioned) about division being valid, so it works for a = 0 as well. This is the same argument as 0! = 1, which tends to drive some students nutty as well, so 0! is defined to be 1. For any commutative, power-associative binary operation, the result of applying that operation to a null set of arguments is ALWAYS the identity element for that operation (0 for complex addition, the 0-matrix for addition of a particular size of matrices, 1 for complex multiplication, the identity matrix for multiplication of a particular size of square matrices, the empty set for union, the universal set for intersection, T for conjunction, F for disjunction, …). In fact, this should be regarded as the basis for the definition 0^0 = 1.

* Polynomials: P(x) = Sum(i=0..n, c_i x^i) for all complex c_i and x. P(0) = c_0 × 0^0, for which 0^0 needs to be 1 to get the desired result of c_0.

* Binomial theorem: (a+b)^n = Sum(i=0..n, C(n,i) a^i b^(n-i)) is true for all complex a and b and all nonnegative integers n (i.e., including n=0) if 0^0 = 1; otherwise, one has to place oddball restrictions on a, b, and n or rewrite the formula in an inconvenient manner. Restricting n to be positive does not avoid the problem, because there are still a^0 b^n and a^n b^0 terms and, if a or b is 0, one still has 0^0, which works when treated as 1.

* Number of functions from finite domain D to finite codomain C: The number of functions is |C|^|D|. It is permitted that D be empty for a function, in which case there is always 1 such function, the null function (which is the empty set when expressed as a set of ordered pairs), regardless of whether C is empty or non-empty. (If D is non-empty, then C needs to be non-empty as well; otherwise, there are 0^n = 0 such functions, where n = |D| > 0.)

* Multiplicative laws of exponents: 1 = 0^0 = 0^(0+0) = 0^0 × 0^0 = 1 × 1; 1 = 0^0 = (0×0)^0 = 0^0 × 0^0 = 1 × 1.

There are many other situations where 0^0 is useful to regard as 1.

In summary, it is useful to regard 0^0 as 1. It is not useful to regard 0^0 as any value other than 1. It is not useful to regard 0^0 as undefined, unless you really insist on the basic operations of real arithmetic being continuous everywhere–but you already run into an issue with division by 0, so why not allow a discontinuity for 0^0 as well? The typical arguments for regarding 0^0 as undefined based on limits, 0^y versus x^0, division laws of exponents, etc. are all irrelevant at best and fallacious at worst. Functions, including arithmetic operators, can be defined in whatever way you wish as along as it is useful (and even usefulness is, strictly speaking, not a requirement) regardless of the behavior of limits, etc. and the definition cannot be disproved.

In reply to

Howard Ludwig commented on Q: What does 0^0 (zero raised to the zeroth power) equal? Why do mathematicians and high school teachers disagree?.

[…]

Excellent write-up! The only thing I would add is the e^x power series (and other similar power series). We often see it written in summation notation as

e^x = Sum(n=0->oo) x^n/n!

and we are always told that this formula is good for _all_ x. But this would not be true for x = 0 unless 0^0 = 1. So you “undefiners” have a choice: 0^0 = 1, or the above formula is good for all x except 0.

Yes, Alan, power series, which I regard as a generalization of polynomials offer a variety of examples, such as your e^x as to why 0^0 should be regarded as 1. Thank you for that reminder.

Oddly enough, I have encountered comments from people who see these examples and recognize their value but still cling to the idea that 0^0 must be undefined, because it has been in-grained in their minds so long. Their response is: “Yes, there are many cases where it is useful to treat 0^0 as 1 for shorthand purposes, but we must remember that such is merely a convenience and 0^0 is not really 1.” This is really wishy-washy word-gaming, and such people do not truly understand the concept of “definition”. Most mathematical terms are defined in a certain way simply because it is convenient to do so. It is convenient to have 0^0 = 1, and there is nothing wrong with defining it to be so.

One comment I forgot to include previously is that several people offered various fallacious arguments that 0^0 itself (not just limiting forms trending to the indeed indeterminate form commonly referred to as 0^0–those are two distinct concepts) is indeterminate, so that 0^0 can have any value. One property we would want 0^0 to satisfy is the pair of multiplicative laws of exponents. Let’s see what value, if any, for 0^0 could possibly satisfy the pair of laws. Let z be a possible value of 0^0. Then:

z = 0^0 = 0^(0 + 0) = 0^0 × 0^0 = z × z = z², based on rule for common bases;

z = 0^0 = (0 × 0)^0 = 0^0 × 0^0 = z × z = z², based on rule for common exponents.

In both cases z = z², which has 0 and 1 as the only possible solutions. This is additional evidence of the distinction between defining a function at a point and evaluating limits of functions at a point. The only possibilities are: (1) to regard 0^0 to be undefined; (2) to define 0^0 = 0; or (3) to define 0^0 = 1. Choice (2) can be immediately rejected because it has neither usefulness nor proponents. Choice (1) leads to nonsensical statements like “I am intending for you to interpret this expression in this manner but do not think the expression actually means this” or to ridiculously long and obfuscated expressions of definitions, theorems, and formulas to cover all cases. Choice (3) leads to a consistently applicable simplification that works for all relevant definitions, theorems, and formulas.

Anything to the power zero (0) gives 1.

For eg:

5^0=1

Similarly,

when zero is raised to the zeroth power,

0^0 is always one (1)

0/0=2?

0/0=(4-4)/(4-4)

0/0=(2^2-2^2)/(2*2-2*2)

0/0=(2+2)(2-2)/2(2-2)

0/0=(2+2)/2

0/0=4/2

0/0=2……………

0 times 0, 0 times… I’m not going to pretend like I know what you guys are saying, but ANYTHING times 0 is 0.

Junior Higher writes on September 22, 2016 at 8:12 pm: “0 times 0, 0 times… I’m not going to pretend like I know what you guys are saying, but ANYTHING times 0 is 0.”

Yes, anything times zero is zero. But here we are multiplying by zero zero times, as you yourself say. That means no multiplication has taken place, so your premise is irrelevant.

Based on the argument, 0^0 must equal undefined, 1, or 0. In fact, ALL of these arguments are invalid, because it doesn’t equal any of them. It’s indeterminate.

X^0=1

0^X=0

Which makes it indeterminate. There’s also another reason why it’s indeterminate.

The quotient of powers property states that

X^Z/X^Y=X^(Z-Y)

We know that

0^1 = 0. Divide both sides by 0^1:

0^1/0^1 = 0/0

Applying X^Z/X^Y=X^(Z-Y): 0^1/0^1 = 0^(1-1) = 0^0. Hence

0^0 = 0/0

Since 0/0=indeterminate, 0^0=indeterminate

+Math guy: You say x^0=1 and 0^x=0. The former is true in all cases. The latter is true only for x >= 0. If you exclude x=0, the former applies to more values of x than the latter. So by the sheer power of democracy, it wins! 0^0 = 1

The term “indeterminate”, as I and Howard Ludwig have gone to great pains to explain, normally applies to a collection of limiting _forms_ which are used to classify various limits. I shan’t repeat the argument here except to say that even though 1^oo is a limiting form (as in a particular definition of e), its actual value is 1. It itself is not indeterminate, but it is used to represent a limiting form that is. It is a _symbolic_ representation, as is 0^0. You need look only a few posts back to see the full argument.

Then you give the quotient argument: x^z / x^y = x^(z-y), which both Howard and I have already refuted. But I’ll do it again:

By this argument you can prove that any power of zero is “indeterminate” (I would call it: not sensibly definable [almost always, if not always, shortened to “undefined”].)

Example: 0^2 = 0^(4-2) = 0^4 / 0^2 = 0 / 0. Therefore, 0^2 is undefined. This is exactly according to your argument. So you would have to say that 0^2 is also “indeterminate”. You can’t have it both ways.

What we do instead is calculate 0^2 by another means: 0^2 = 0*0 = 0.

(Actually, the exponent law you give isn’t valid for x = 0 anyway.)

So we need to find another means to calculate 0^0. Your argument notwithstanding, the exponent laws _are not violated_ for 0^0 = 0 or 0^0 = 1. But 0^0 = 0 is not useful, while 0^0 = 1 is immensely useful and quite sensible by several arguments (see Howard’s posts and my posts).

Consider 0!. We can’t do this with the simple standard definition of n! = 1*2*3*… n times. So we use alternate methods! One is similar to your division trick:

n! = n * (n-1)!

(n-1)! = n!/n

Set n = 1 and you get

0! = 1!/1 = 1/1 = 1

Another is to define 0! using the gamma function: Gamma(n) = (n-1)!

So why can’t we use alternate methods for 0^0? Your arguments simply mean that we must calculate 0^0 by yet another means, and Howard and I have given plenty. Actually, you yourself are using _two_ alternate means! Neither of which helps, but they are still alternate means. So what’s wrong with trying a third alternate means? Are you only allowed to use two, neither of which rule anything out?

You can think of this as extending or generalizing y^x to the case of 0^0. In fact, we do exactly such a thing for any instance in which the exponent is not a positive integer. We use the concept of division to give meaning to nonpositive exponents. We use the concept of a root for rational exponents. We use the definition of the log function for irrational exponents. And we use the formula

e^(ix) = cos x + i sin x

for imaginary exponents.

I mean, what does 4^3.7 mean? How do you multiply 4 by itself 3.7 times? What about i^i (the primary value of which is e^(-pi/2) ~= 0.2079)? What exactly does it mean to take any number to the i-th power? To multiply something i times?

Now why can’t we extend or generalize the definition of y^x for the case x = y = 0, especially since there are so many ways to do it, all of which come up with the same value of 1?

By the way, math man, here is a specific example of the inappropriateness of your argument that 0¹ / 0¹ = 0⁰. I think we would agree that 0¹ = 0, 0² = 0 × 0 = 0, and 0⁴ = (0²)² = (0)² = 0. By your interpretation and use of the division laws of powers, I can say 0³ = 0⁴/0¹ = 0/0, which is indeterminate but equal to 0³, so 0³ must be indeterminate. This is clearly ridiculous but it is the same argument you used to declare 0⁰ indeterminate. The use of this argument to declare 0⁰ indeterminate is just as ridiculous as using this argument to declare 0³ indeterminate. I’m confident you agree that this argument is ridiculous for 0³, especially since we know that 0³ = 0 × 0 × 0 = 0, which is well-defined.

The point is that there are some commonly used concepts and constructs that usually work but cannot be applied in certain cases, but that does not mean that we have no other approaches to determine a value. Just because one approach fails to yield a meaningful value for 0³, that does not mean that 0³ is undefined or indeterminate; the same is true for 0⁰. The approach for evaluating 0⁰ is called the nullary operation principle, which says that if you have an associative binary operation on a set S with a standard identity element for that operation, then an n-ary version for the operator can be defined (such as Σ for addition +, Π for multiplication ×, and similarly for set unions and intersections and logical conjunctions and disjunctions) to combine n values using that operation, and when n is 0 the result is always the identity element for the operation. There are no other restrictions. Thus, adding zero values together yields the additive identity 0 (which seems obvious to most people) and multiplying zero values yields the multiplicative identity 1 (which stuns most people, who think that multiplying nothing together should yield nothing, which would mean 0, but they are confusing zero versus nothing, which are very distinct concepts as demonstrated by this example.) This principle is the fundamental reason that 0! = 1. It is also the reason that 𝑥⁰ = 1 for ALL complex numbers 𝑥 (and even further, all quaternions). Remember I said there are no exceptions as long as the operation satisfies the stated required characteristics–the result applies regardless of operation and values under consideration: 0 is NOT an exception case for multiplication. Therefore, 0⁰ = 1.

I mentioned that with at least one approach, the multiplicative laws of powers, 0 is a potentially meaningful result for 0⁰. This is not a contradiction, because 1 is also a potentially meaningful value. Those are the only two candidates because, in essence, we are solving for 𝑥 in 𝑥² = 𝑥 (by those laws, (0⁰)² = 0⁰, so any proposed value must equal its own square). Because the other approaches yield 1 as the only possible meaning for 0⁰, we in essence regard 0 here as an extraneous solution.

From a practical standpoint, 1 is a very useful value to have for 0⁰. It simplifies expressions for polynomials and power series, lets us state the binomial theorem more succinctly and generally, and helps in numerous other situations.
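As a small illustration of that practical point (my own sketch, not from the thread): a polynomial written naively as Σ aᵢ 𝑥ⁱ evaluates correctly at 𝑥 = 0 only because 0⁰ = 1 makes the constant term survive:

```python
def poly(coeffs, x):
    # Evaluate p(x) = sum of coeffs[i] * x**i, written the naive way.
    # At x = 0 every term with i >= 1 vanishes; the constant term
    # survives only because 0**0 == 1.
    return sum(a * x ** i for i, a in enumerate(coeffs))

# p(x) = 3 + 2x + 5x^2, so p(0) should be the constant term 3.
print(poly([3, 2, 5], 0))   # 3
```

If 0⁰ were treated as 0 or as an error, every such formula would need a special case for the constant term.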

Now, as a caveat, the concept of n-ary operations involves combining n values together, and that is meaningful only when n is a nonnegative integer. Therefore, when we say that 𝑥⁰ = 1 for all 𝑥, we are really looking at 𝑥ⁿ with n restricted to nonnegative integers, though there is no restriction on 𝑥. There are some applications in real analysis where continuity is important (and we said there is a discontinuity at (0, 0)) and where the exponent is allowed to be real, not just an integer. Researchers in such areas tend to prefer to keep (0, 0) out of the domain of the power function, which means they regard 0⁰ as undefined for their work. This leads to a dilemma, with some mathematicians choosing to go one way and others another. As described above, 0⁰ needs to be regarded as 1 in the context of integer exponents. The choice is whether to regard 0 in the context of real exponents as different from 0 in the context of integer exponents (and any situation in which an integer 0 is not the same as a real 0 is grating), or to say 0⁰ = 1 in the context of real exponents to match the behavior with integer exponents, thereby forcing the real analysts to complicate the expression of their concepts.

This is not the only situation in which powers cause grief. The laws of powers come with alternative sets of restrictions. To allow exponents to be any real numbers, the bases must be positive real numbers; if we restrict exponents to be integers, the bases can be any nonzero real numbers; and other pairings of restrictions on exponents and bases are possible. These restrictions are required to avoid dividing by 0, as well as cases like 1 = √[(−1) × (−1)] ≠ √(−1) × √(−1) = −1. Another situation that arises is the value to be assigned to ∛−8. Since 𝑦 = 𝑥³ is a bijection on the real numbers, it is invertible; since (−2)³ = −8, the inverse gives ∛−8 = −2. However, in the context of complex numbers the principal cube root of −8 is 1 + i√3. Why should ∛−8 have two distinct principal values, one in the context of real numbers and another in the context of complex numbers? Is a real −8 not the same as a complex −8, and if it is, why should their principal cube roots differ? While 𝑦 = ∛𝑥 is continuous on the real numbers, it is not continuous on the complex numbers (there being a branch cut, usually placed along the negative real axis in the complex plane). As a result, some mathematics textbooks regard ∛𝑥 as undefined for negative real values of 𝑥 in the context of real numbers, very similarly to why many want to regard 0⁰ as undefined. It seems odd to me that many people who insist that 0⁰ must be undefined are perfectly happy saying ∛−8 = −2.
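Both principal values of ∛−8 can be produced in Python (a sketch; the helper name `real_cbrt` is mine). The `**` operator with a negative base and fractional exponent follows the complex principal branch, while the real cube root has to be built by hand:

```python
import cmath
import math

# Principal complex cube root: exp(log(-8)/3) = 2*exp(i*pi/3) = 1 + i*sqrt(3).
z = cmath.exp(cmath.log(-8) / 3)
print(z)              # approximately 1 + 1.732j

# Python's ** also uses the complex principal branch for negative bases:
w = (-8) ** (1 / 3)
print(w)              # approximately 1 + 1.732j as well, NOT -2

# The real cube root, constructed explicitly (helper name is mine):
def real_cbrt(x):
    return math.copysign(abs(x) ** (1 / 3), x)

print(real_cbrt(-8))  # approximately -2.0
```

So the real-analysis convention (∛−8 = −2) and the complex principal-branch convention (∛−8 = 1 + i√3) really are two different, internally consistent choices, which is exactly the dilemma described above.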

Thanks, Alan, for your posting. I tried posting something to math guy last night and did not see your message. Then I decided to add more this morning and saw my posting from last night, but still did not see yours. Now I finally see your post from yesterday afternoon and my post from this morning, but not my post from last night. We seem to be in a time warp. Anyway, you did a fantastic job of summarizing what I was trying to post with varying degrees of success. I was not trying to repeat you or act as if I disagreed with you in any way.