**Clever student:**

I know!

x^0 = x^(1-1) = x^1 * x^(-1) = x/x = 1.

Now we just plug in x=0, and we see that zero to the zero is one!

**Cleverer student:**

No, you’re wrong! You’re not allowed to divide by zero, which you did in the last step. This is how to do it:

0^x = 0^(1+x-1) = 0^1 * 0^(x-1) = 0 * 0^(x-1) = 0

which is true since anything times 0 is 0. That means that

0^0 = 0.

**Cleverest student :**

That doesn’t work either, because if x = 0 then

0^(x-1)

is

0^(-1) = 1/0,

so your third step also involves dividing by zero, which isn’t allowed! Instead, we can think about the function x^x and see what happens as x>0 gets small. We have:

lim(x->0+) x^x

= lim(x->0+) e^(ln(x^x))

= lim(x->0+) e^(x * ln x)

= e^(lim(x->0+) x * ln x)

= e^(lim(x->0+) (ln x)/(1/x))

= e^(lim(x->0+) (1/x)/(-1/x^2))    [by L'Hopital's rule]

= e^(lim(x->0+) -x)

= e^0

= 1

So, since lim(x->0+) x^x = 1, that means that 0^0 = 1.
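The limit above is easy to check numerically; here is a minimal Python sketch (an illustration of the trend, not a proof):

```python
# x**x for ever-smaller positive x drifts toward 1,
# matching the limit computed above.
for x in [0.1, 0.01, 0.001, 1e-6, 1e-9]:
    print(x, x ** x)
```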

**High School Teacher:**

Showing that x^x approaches 1 as the positive value x gets arbitrarily close to zero does not prove that 0^0 = 1. The variable x having a value close to zero is different from its having a value of exactly zero. It turns out that 0^0 is undefined. 0^0 does not have a value.

**Calculus Teacher:**

For all x > 0, we have

0^x = 0.

Hence,

lim(x->0+) 0^x = 0.

That is, as x gets arbitrarily close to 0 (but remains positive), 0^x stays at 0.

On the other hand, for real numbers y such that y ≠ 0, we have that

y^0 = 1.

Hence,

lim(y->0) y^0 = 1.

That is, as y gets arbitrarily close to 0, y^0 stays at 1.

Therefore, we see that the function x^y has a discontinuity at the point (x,y) = (0,0). In particular, when we approach (0,0) along the line with x=0 we get

lim(y->0+) 0^y = 0,

but when we approach (0,0) along the line segment with y=0 and x>0 we get

lim(x->0+) x^0 = 1.

Therefore, the value of lim((x,y)->(0,0)) x^y is going to depend on the direction that we take the limit. This means that there is no way to define 0^0 that will make the function x^y continuous at the point (x,y) = (0,0).
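A minimal Python sketch of the two directional limits (illustrative only):

```python
# Along x=0 the function is 0**y, which stays 0 for every positive y;
# along y=0 it is x**0, which stays 1 for every positive x.
for t in [0.1, 0.01, 0.001]:
    print(0 ** t, t ** 0)
```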

**Mathematician: **Zero raised to the zero power is one. Why? Because mathematicians said so. No really, it’s true.

Let’s consider the problem of defining the function y^x for positive integers y and x. There are a number of definitions that all give identical results. For example, one idea is to use repeated multiplication for our definition:

y^x := 1 * y * y * ... * y

where the y is repeated x times. In that case, when x is one, the y is repeated just one time, so we get

y^1 = 1 * y = y.

However, this definition extends quite naturally from the positive integers to the non-negative integers, so that when x is zero, y is repeated zero times, giving

y^0 = 1

which holds for any y. Hence, when y is zero, we have

0^0 = 1.
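The repeated-multiplication definition translates directly into code; here is a minimal Python sketch (the function name is just for illustration):

```python
def pow_by_repetition(y, x):
    """Compute y^x for non-negative integer x, literally as
    '1 multiplied by y, x times', as in the definition above."""
    result = 1          # the empty product: nothing multiplied yet
    for _ in range(x):
        result *= y
    return result

print(pow_by_repetition(2, 3))  # 8
print(pow_by_repetition(0, 0))  # 1, since y is repeated zero times
```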

Look, we’ve just proved that 0^0 = 1! But this is only for one possible definition of y^x. What if we used another definition? For example, suppose that we decide to define y^x as

y^x := lim(z->x+) y^z.

In words, that means that the value of y^x is whatever y^z approaches as the real number z gets smaller and smaller, approaching the value x arbitrarily closely.

*[Clarification: *a reader asked how it is possible that we can use y^z in our definition of y^x, which seems to be recursive. The reason it is okay is because we are working here only with z > 0, and everyone agrees about what y^z equals in this case. Essentially, we are using the known z > 0 cases to construct a function that has a value for the more difficult x=0 and y=0 case.]

Interestingly, using this definition, we would have

0^0 = lim(z->0+) 0^z = lim(z->0+) 0 = 0.

Hence, we would find that 0^0 = 0 rather than 0^0 = 1. Granted, this definition we’ve just used feels rather unnatural, but it does agree with the common-sense notion of what y^x means for all positive real numbers x and y, and it does preserve continuity of the function x^y as we approach x=0 and y=0 along a certain line.

So which of these two definitions (if either of them) is right? What is 0^0 *really*? Well, for x>0 and y>0 we know what we mean by y^x. But when x=0 and y=0, the formula doesn’t have an obvious meaning. The value of 0^0 is going to depend on our preferred choice of definition for what we mean by that statement, and our intuition about what y^x means for positive values is not enough to conclude what it means for zero values.

But if this is the case, then how can mathematicians claim that 0^0 = 1? Well, merely because it is useful to do so. Some very important formulas become less elegant to write down if we instead use 0^0 = 0 or if we say that 0^0 is undefined. For example, consider the binomial theorem, which says that:

(a+b)^x = Sum(k=0->x) (x choose k) * a^k * b^(x-k)

where (x choose k) means the binomial coefficients.

Now, setting a=0 on both sides and assuming b ≠ 0 we get

b^x = (0+b)^x = Sum(k=0->x) (x choose k) * 0^k * b^(x-k)

= (x choose 0) * 0^0 * b^x + (x choose 1) * 0^1 * b^(x-1) + (x choose 2) * 0^2 * b^(x-2) + ...

= (x choose 0) * 0^0 * b^x

= 0^0 * b^x

where I’ve used that 0^k = 0 for k>0, and that (x choose 0) = 1. Now, it so happens that the right hand side has the magical factor 0^0. Hence, if we do not use 0^0 = 1 then the binomial theorem (as written) does not hold when a=0 because then b^x does not equal 0^0 * b^x.
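The collapse of that sum can be checked in a few lines of Python, which happens to evaluate the integer power 0**0 as 1, i.e. exactly the convention being argued for (sample values b = 3 and x = 5 are arbitrary):

```python
from math import comb

# Binomial theorem with a = 0: every k > 0 term vanishes, and the
# k = 0 term is (x choose 0) * 0**0 * b**x = 0**0 * b**x.
b, n = 3, 5
total = sum(comb(n, k) * 0**k * b**(n - k) for k in range(n + 1))
print(total, b**n)  # both print 243
```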

If mathematicians were to use 0^0 = 0, or to say that 0^0 is undefined, then the binomial theorem would continue to hold (in some form), though not as written above. In that case, though, the theorem would be more complicated because it would have to handle the special case of the term corresponding to k=0. We gain elegance and simplicity by using 0^0 = 1.

There are some further reasons why using is preferable, but they boil down to that choice being more useful than the alternative choices, leading to simpler theorems, or feeling more “natural” to mathematicians. The choice is not “right”, it is merely nice.

0^0=1 because i^2=-1 🙂

http://oi57.tinypic.com/2qmzyvk.jpg

(1-1)^(1-1) view 😉

http://oi57.tinypic.com/v3d0rk.jpg

http://oi61.tinypic.com/zvbnv4.jpg

omg! 0-0^0=-1

http://oi62.tinypic.com/f3i6oz.jpg

the shape of the square root

http://oi61.tinypic.com/23tjms1.jpg

The argument is a construct for mathematical convenience.

It is not “true” for all abstractions. But it does work for most models, which is why it is so widely used.

e.g.

If (in the form a^x)

1^1=1

and

1^0=1

then

x=1=0

which is not “true”

0^0 is an indeterminate quantity: proof

let 0^0 = k; taking log on both sides,

0 * loge(0) = loge(k)

0 * (-infinity) = loge(k)

hence proved: no definite value of k exists

Rao KD,

Limits are irrelevant. Consider lim(h->0) (1+h)^(1/h). This is an indeterminate form of the type 1^infinity. Clearly 1^infinity = 1, not e. By changing the above to lim(h->0) (1+2h)^(1/h) we get e^2. So limits are not unique, not equal to the correct answer of 1, and therefore are of no help. In fact, we already know that the indeterminate forms produce different answers in different cases, which is why we have, for example, L’Hopital’s rule for the 0/0 cases. (Which is why they’re called “indeterminate”.) So you can’t determine 0^0 via limits any more than you can 0/0.
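The two limits mentioned here are easy to reproduce numerically; a minimal Python sketch:

```python
import math

# Two different limits, both of the symbolic form 1^oo:
# (1+h)^(1/h) -> e, while (1+2h)^(1/h) -> e^2, as h -> 0+.
h = 1e-6
print((1 + h) ** (1 / h), math.e)         # close to e
print((1 + 2 * h) ** (1 / h), math.e**2)  # close to e^2
```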

As for 3^7 > 7^3 and such: You say that if x,y > 2, the rule (smaller base raised to the larger exponent gives the larger result) holds. The actual boundary is e, not 2. Example: 2.1 ^ 2.2 = 5.1154… whereas 2.2 ^ 2.1 = 5.2370…

You need a more sophisticated approach:

https://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule

I am of the view that 0^0 =0 for the reason that 0^n=0. 0^n=0 x 0 x 0 x 0x…….n times. You are multiplying NOTHING so you get NOTHING. In the case of 0^0, you get nothing and you do not even dare to multiply, you get NOTHING.

0^0 = 1. Every math book I’ve seen that gives the following formula,

e^x = Sum(n=0->oo) x^n/n!

says that it is good for all real numbers. I have never seen an exception given. Now, for this formula to work for x=0 you have to have 0^0=1. So you have to choose: either 0^0=1 and the formula is good for all numbers, or 0^0!=1 and the formula for e^x above is good for all numbers except zero. You can’t have it both ways.

You also need 0^0=1 for other formulas that use the capital sigma notation for polynomials (e.g., the binomial theorem) or infinite series (like the one above) to work.
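A minimal Python sketch of the x = 0 case of that series (Python’s float power 0.0**0 happens to evaluate to 1.0, matching the convention being described):

```python
from math import factorial

# Partial sum of e^x = Sum(n=0->oo) x^n/n! at x = 0. The n = 0 term
# is x**0 / 0!, which is 1 only if 0**0 is taken to be 1.
x = 0.0
series_value = sum(x**n / factorial(n) for n in range(20))
print(series_value)  # 1.0, which is indeed e**0
```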

On top of this there’s the argument that y^x is the number of ways to map x objects to y objects, which gives 0^0=1. And Euler says so. And limits are no good because lim(h->0) (1+h)^1/h is e, not 1, as 1^oo would be. And lim(h->0) (1+2h)^1/h is e^2 — yet another answer! And then there’s also the empty product bit.
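The counting argument can be made concrete with a short Python sketch (the function name is illustrative):

```python
import itertools

def count_functions(domain_size, codomain_size):
    """Enumerate every function from a domain of the given size to a
    codomain of the given size; there are codomain_size**domain_size."""
    return sum(1 for _ in itertools.product(range(codomain_size),
                                            repeat=domain_size))

print(count_functions(2, 3))  # 9, i.e. 3**2
print(count_functions(0, 0))  # 1: the single empty function
```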

0^0 = 1.

I agree, 1. Because e = 2.71828…, defined by 1/0! + 1/1! + 1/2! + …, which

resolves to 2.71828…, not 1.71828…. That’s my 2¢.

Probably many of you were taught that any number to the 0th power = 1 and that 0 to any power = 0

The question here is whether or not zero is a number.

I have not understood the answer to this; I want a specific answer.

I think zero means nothing,

so in both cases zero means nothing,

so 0^0 = 0.

Most of people are talking about limits and things…

Let’s take it at the point.

0^0=0^(1-1)=0^1/0^1=0/0=indeterminate

because any indeterminate number times 0 is 0. So 0/0 can be anything, like 0^0.

you need something other than numbers

https://en.wikipedia.org/wiki/Kronecker_delta


In response to Ruvian, who wrote at Oct 9, 6:41:

“Most of people are talking about limits and things…

Let’s take it at the point.

0^0=0^(1-1)=0^1/0^1=0/0=indeterminate

because any indeterminate number times 0 is 0. So 0/0 can be anything, like 0^0.”

All this means is that you can’t use the exponent laws to determine what 0^0 is. It doesn’t preclude using other methods, many of which have already been posted. Here’s a summary.

0. Limits don’t matter. Consider the indeterminate form 1^oo.

lim (h->0) (1+h)^(1/h) = e

This doesn’t mean that 1^oo = e. (!)

1. Empty product

3^2 = 1*3*3

3^1 = 1*3

3^0 = 1

3^-1 = 1/3

0^2 = 1*0*0

0^1 = 1*0

0^0 = 1

0^-1 = 1/0 = “undefined”

So when we attempt to define 0^-1, we write, as above, 1/0. Not 7/0 or -3/0 — but 1/0. Why? Because any number to the zeroth power is 1, which is why we write a^-1 = a^(0-1) = a^0/a^1 = 1/a. So when we write 0^-1 = 1/0 we are assuming 0^0 = 1.

2. Mapping

y^x is the number of ways to map x objects to y objects. This gives 0^0 = 1.

3. Series written using summation notation

Every math book I’ve ever read that gives e^x in summation notation says that it is good for _all_ x:

e^x = Sum(n=0->oo) x^n/n! = 1 + x + x^2/2 + . . .

This can only be true for x = 0 if 0^0 = 1. So you have a choice. Either 0^0 = 1, or the summation notation version of power series, including e^x, is invalid for x = 0. There’s no way out of it! Do you really want to make an exception here for x = 0? I think not!

And this is needed for other power series that start with 1, like cos x. And it makes the binomial theorem,

(a+b)^n = Sum(k=0->n) (n k) * a^k * b^(n-k)

work for the following cases

A) a = 0

B) b = 0

C) n = 0 and a+b = 0.

This greatly simplifies programming for series or sums that contain a term of the form x^0. It’s also elegant.

And 0^0 = 1 works fine with the exponent laws.

Bottom line: 0^0 = 1

Alan E. Feldman

0^0 = 0

infinite root of( 0)=0

because the number which is multiplied infinite times to get 0 is 0

so infinite root of( 0)=0 can be written as

0^(1÷infinite)

0^(1÷infinite) can be written as 0^0

0^0 is equal to infinite root of (0)

so 0^0 is 0

This question has got me crazy…

how can we bring the concept of the limit of a function into indices? I know the limit of a function to be related to differential calculus.

3^4 = 3*3*3*3= 81

3^3 = 3*3*3= 27

3^2 = 3*3 = 9

3^1 = 3

3^0 = undefined

but 81 ÷ 3 = 27….27 ÷ 3 = 9…. 9 ÷ 3 = 3…. 3÷3 = 1…

Therefore 3^ 0 = 1
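That division pattern is easy to verify with a couple of lines of Python (illustration only):

```python
# Powers of 3 from 3^4 down to 3^0: each entry is the previous one
# divided by 3, and the pattern bottoms out at 3**0 = 1.
values = [3**n for n in range(4, -1, -1)]
print(values)  # [81, 27, 9, 3, 1]
ratios = [values[i] / values[i + 1] for i in range(len(values) - 1)]
print(ratios)  # [3.0, 3.0, 3.0, 3.0]
```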

Can 0^0 be = 1 ?

0^4 = 0

0^3 = 0

0^2 = 0

0^1 = 0

0^0 = Indeterminate

betaneptune,

My comment saying 0^0 is indeterminate, I think, did not prove what it is, but proved what it’s not.

“All this means is that you can’t use the exponent laws to determine what 0^0 is.”

It’s not what I meant to say. It means the exponent laws reveal that it’s indeterminate.

I could try to prove that 0^0 is 0, as tried by jeremy.

In fact, it can be, since 0^0 = e^(ln(0^0)) = e^(0*ln(0)) = e^(0*(-infinity)), which is an indeterminate form and can lead to any result (including 1).

I searched on the web for more quotes to make my post seem more “powerful” (lol) and I found the following:

“It is commonly taught that any number to the zero power is 1, and zero to any power is 0. […] Well, it is undefined (since x^y as a function of 2 variables is not continuous at the origin).”

Finally, it’s not a good thing (and I’ve been thinking about the most common problems of undefined points and their derivatives) for a function to have a point where its derivative is undefined but the point itself is defined. You can say x^(1/3) is not differentiable at x=0 (but still there’s 0^(1/3)), but if you define x^(1/3) as a set of functions like [if x>=0 then x^(1/3), if x<0 then 0 (for the real part)] you could take the derivative of a part of the function. Then you achieve a defined point with a defined part derivative. The function itself just doesn’t have a defined derivative for x=0 because it’s naturally composed of 2 functions being united at x=0.

But x^y would not be this kind of case. x^y would be [1 if y goes to 0 faster than x, and 0 if x goes to 0 faster than y]. Plotting x^0 and 0^y will reveal that the first is always 1 and the other is always 0, except at x=0 and y=0. That’s the point. It is a set composed of different functions that differ from one another, naturally. It doesn’t have a common point. That’s why you can’t have even a part derivative at the point x=0, y=0.

I understand that it seems to appear in reality as equal to one, but, mathematically, it’s anything. And I know that in infinite sums it’s considered to be 1, but that’s just a convention. It’s like telling everyone that that thing that grows and has wood and leaves is called a tree. So now everyone knows what a tree is. But still, it doesn’t mean that “tree” is its true name.

put a = 0

put b = 0

#include <iostream>

using namespace std;

int main()

{ float a, b;

cout << "enter the value of a" << endl; cin >> a;

cout << "enter the value of b" << endl; cin >> b;

float x;

x = 1;

for (int i = 1; i <= b; i++)

{ x = x * a;

}

cout << "a^b:" << " " << x << endl;

return 0;

}

I get 1… then I am shocked… I checked my calculator… it gives a math error… how funny mathematics is…

@Wadut Shaikh

There are many different algorithms that reach the same solutions for most common cases of a particular problem, but algorithms that USUALLY work as expected can reach unexpected results when certain cases are used, and those cases can be different between many different approaches to the exact same problem.

For instance, while your algorithm works as expected for positive ‘a’ and positive integers ‘b’, and despite the fact that you declare your ‘a’ and ‘b’ variables as float type, which is a type that allows negatives and decimals, your algorithm returns an incorrect solution for:

all non-integer ‘b’ except for a=0 or a=1

all “b<0" except for a=1

An algorithm that works for significantly more cases would use the limit definition of the natural logarithm and its inverse function (stopped at an arbitrarily high number of trials, i.e., 10000) to solve "ln(a)", and then e^(ln(a)*b). Multiplication is safe in general, so as long as "ln(a)" solves, and so long as "e^x" is written correctly, then the solution a^b should be both real and correct. However, despite the fact that this algorithm works for most cases, it returns an incorrect/no solution for:

"a=0”

Notice that these two completely different approaches (yours and mine) to the same problem return correct solutions for most cases as expected, but return incorrect solutions for completely different sets of cases. Specifically, in most of the cases mentioned where my algorithm fails to return a solution that DOES exist, yours succeeds, and vice versa. This is except for when they BOTH fail, when:

“a<0" and integer "b<0", (but not when "a=-1" and 'b' is even)

or "a=0" and "b<1"

TLDR: Just because an algorithm returns many predictable solutions, does not mean that ALL its solutions are correct or predictable. Your calculator is evidently using a more sophisticated method, which is no surprise since it's likely being sold to millions.

Tip: In C code, you can compress the lines "float x" and "x=1" into a single line, "float x=1".

of course you knew the answers before I knew the results

The proof for 0/0 (or 0^0) which I have read is:

Let 0/0=x

0=0*x

Thus x can be any number

I extended it a bit as:

0 - 0*x = 0

0 - 0*x is the remainder of 0/0 (Dividend - Quotient*Divisor)

0/0=0

x=0

I didn’t have time to read through all of the comments posted, but in your second starting point, the attempt to define 0^0 by first defining y^x := lim (z -> x+) y^z has in it what seems to me an error.

It is the line which goes:

0^0 = lim (x->0+) 0^x = lim(x->0+) 0 = 0

It seems more natural to me to have that line say:

0^0 = lim (x->0+) 0^x = lim(x->0+) 0^0 = 1

I am perfectly happy defining 0^0 := 1 since any other positive number raised to the zeroth power is the number one (1). i.e., x^0 = 1, for all positive x (just haven’t thought through negative xes aspect), because I am taught that exponents, as you point out, are defined to represent multiplication of a number a number of times determined by the exponent number, as you said, x^1 = x, x^2 = x times x, x^3 = x times x times x, and so forth, which tells me that it is natural to think of x^0 = 1. I think you showed it as y^x = 1 times y times y times y. . . with y multiplied x times, so when the exponent reaches 0, there are zero ys to multiply against the 1. y^x = 1.

0^0 = 0, is it right?

Take

25^0=1

Prove

25^0.00001 = 1.000032189

25^1 is still 25, so the test with 0.00001 as ~0 seems correct, while the result is close enough to 1.

Now take

0.00001^0.00001 = 0.999884877

0.0000001^0.0000001 = 0.999998388

Seems it approaches 1, that means 0^0 can really be 1 🙂
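Those calculator experiments are easy to reproduce; a minimal Python sketch:

```python
# x**x for the small values tried above; the results drift toward 1.
for x in [1e-5, 1e-7, 1e-9]:
    print(x, x ** x)
```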

0 to the 0 power is 0 because no matter how many times you multiply 0 (1 time, 2 times, 3 times, 4 times, 5 times and so on) it is 0.

https://en.wikipedia.org/wiki/Volume_of_an_n-ball 😀

Declaring or defining a value for a troublesome expression is not proof. How you define the value of 0^0 does have far-reaching implications for other established systems. And there may be a value that seems to be the best fit in many of those scenarios. Use what you like when you find it “helpful”, but understand that, in its most rudimentary arithmetic sense, it arises from 0^n/0^n = 0^(n-n) = 0^0 = 0/0, which I hope we are still declaring to be indeterminate. Else, we begin to neglect the meaning of equality.

There are several very key points that have been grossly overlooked.

First, definitions cannot be proved nor can they be disproved. Definitions are true, by definition. If 0^0 is defined to be 1, then it is 1–there is nothing to prove or disprove. Likewise, if 0^0 is regarded as undefined, then it is so–there is nothing to prove or disprove.

Definitions are made to be useful. If a concept is consistently useful (and 0^0 = 1 is very useful, as some posters have already noted in terms of the null product, binomial theorem, polynomial expressions, power series expressions, etc.), then the definition will likely be made to correspond to such uses, so many mathematicians regard 0^0 as 1 by definition.

Definitions are made to be used. The definition of a function is made in terms of its value for various input values–nothing more, nothing less. Exponentiation is a function: ^(x,y) = x^y with various restrictions on the domain as to what x and y can simultaneously be, along with what values should be considered as candidate results (codomain). The definition of the value of a function generally does not depend on limits.

A function can be defined to have a value at a point, and the function might have a limit at that point that agrees with the defined function value, it might have a limit that disagrees with the defined function value, or the limit might not exist at that point. The lack of a limit or having a variety of potential limits does not mean that we are unable to define a value for the function at that point. Likewise, if a limit does exist, there is no requirement to define the function to have a value at that point and, even if we do define the function to have a value at that point, there is no requirement that it match the limit value. The independence of the defined value of a function and the limiting value of a function is commonly a topic that students struggle with in the early weeks of introductory calculus courses.

The two concepts are combined via the concept of continuity, but there is no requirement for a function to be continuous in order to have a definition or to be useful. This was the problem in the 1820s, when the idea of 0^0 being undefined became the dominant position: real analysts were still struggling to set firm foundations for their field and at that time conflated the ideas of definitions of functions with limits of functions. If 0^0 is defined to be 1, then use it happily and productively, while recognizing that limits appearing to have the form 0^0 do not necessarily have 0^0 = 1 as a result.

On a related note, many people think that because 0^0 is called an indeterminate form, that means that 0^0 must be undefined. That is not necessarily true. 0^0 being called indeterminate is properly used only in the context of limits and it means that x^y can have different limiting values for x and y both going to 0 depending on how quickly each approaches 0 relative to the other. Again, just because there are different limiting values, that does not mean that the value 0^0 as a particular point in a function domain must itself be indeterminate or undefined.

The argument is nonsense that 0^0 must be interpreted as 0^1 / 0^1, which is 0/0 and, therefore, involves division by 0, which is meaningless, so 0^0 is meaningless. I can equally validly argue that 0^2 must be undefined because 0^2 = 0^3 / 0^1 which involves division by zero. This argument is clearly nonsense for 0^2 and it is equally nonsensical for 0^0. 0^0 is not defined as 0^1 / 0^1.

The argument is erroneously stated that there is a contradiction between anything (with the possible exception of the 0 currently in question) raised to an exponent of 0 is 1 and 0 raised to any exponent (except possibly 0) is 0, so such a contradiction clearly dispels any meaning for 0^0 itself. The error in such a statement is that z^0 = 1 is indeed true for all nonzero complex z, but 0^y fails miserably to be meaningful for negative real y and all non-real y. That is a huge difference in applicable domain. If one thing is true for only positive real numbers and another is true for all complex numbers, I will side with the latter where the two options meet. (This is different from the 0/0 case where 0/z = 0 for all nonzero complex z and z/z = 1 for all nonzero complex z–there is no dominating domain, so to regard 0^0 as a comparable situation with 0/0 is failing to see the key distinction.)

The best that the “undefined” side can hope to accomplish meaningfully is that there is no one useful value for 0^0, so it is pointless to try to assign a defined value to 0^0.

However, there are many useful cases to regard 0^0 = 1, and everything I have read indicates there is no truly meaningful situation where 0^0 should be thought of as anything other than 1.

* The null product: a^n = 1 × a × … × a, where there are n copies of a multiplied for n a nonnegative integer. If n = 0, then a^n = 1. There is nothing in this description that depends on the value of a–it is true for all a for which multiplication is valid and there is nothing mentioned (nor needed to be mentioned) about division being valid, so it works for a = 0 as well. This is the same argument as 0! = 1, which tends to drive some students nutty as well, so 0! is defined to be 1. For any commutative, power-associative binary operation, the result of applying that operation to a null set of arguments is ALWAYS the identity element for that operation (0 for complex addition, the 0-matrix for addition of a particular size of matrices, 1 for complex multiplication, the identity matrix for multiplication of a particular size of square matrices, the empty set for union, the universal set for intersection, T for conjunction, F for disjunction, …). In fact, this should be regarded as the basis for the definition 0^0 = 1.

* Polynomials: P(x) = Sum(i=0..n, c_i x^i) for all complex c_i and x. P(0) = c_0 × 0^0, for which 0^0 needs to be 1 to get the desired result of c_0.

* Binomial theorem: (a+b)^n = Sum(i=0..n, C(n,i) a^i b^(n-i)) is true for all complex a and b and all nonnegative integers n (i.e., including n=0) if 0^0 = 1; otherwise, one has to place oddball restrictions on a, b, and n or rewrite the formula in an inconvenient manner. Restricting n to be positive does not avoid the problem, because there are still a^0 b^n and a^n b^0 terms and, if a or b is 0, one still has 0^0, which works when treated as 1.

* Number of functions from finite domain D to finite codomain C: The number of functions is |C|^|D|. It is permitted that D be empty for a function, in which case there is always 1 such function, the null function (which is the empty set when expressed as a set of ordered pairs), regardless of whether C is empty or non-empty. (If D is non-empty, then C needs to be non-empty as well; otherwise, there are 0^n = 0 such functions for |C| = n > 0.)

* Multiplicative laws of exponents: 1 = 0^0 = 0^(0+0) = 0^0 × 0^0 = 1 × 1; 1 = 0^0 = (0×0)^0 = 0^0 × 0^0 = 1 × 1.
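The empty-operation behavior described in the null product bullet is built into Python’s standard library, which makes for a quick sanity check:

```python
import math

# Applying an operation to zero operands yields its identity element.
print(sum([]))            # 0, the additive identity
print(math.prod([]))      # 1, the multiplicative identity
print(math.factorial(0))  # 1, itself an empty product
```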

There are many other situations where 0^0 is useful to regard as 1.

In summary, it is useful to regard 0^0 as 1. It is not useful to regard 0^0 as any value other than 1. It is not useful to regard 0^0 as undefined, unless you really insist on the basic operations of real arithmetic being continuous everywhere–but you already run into an issue with division by 0, so why not allow a discontinuity for 0^0 as well? The typical arguments for regarding 0^0 as undefined based on limits, 0^y versus x^0, division laws of exponents, etc. are all irrelevant at best and fallacious at worst. Functions, including arithmetic operators, can be defined in whatever way you wish as long as it is useful (and even usefulness is, strictly speaking, not a requirement) regardless of the behavior of limits, etc., and the definition cannot be disproved.

In reply to

Howard Ludwig commented on Q: What does 0^0 (zero raised to the zeroth power) equal? Why do mathematicians and high school teachers disagree?.

[…]

Excellent write-up! The only thing I would add is the e^x power series (and other similar power series). We often see it written in summation notation as

e^x = Sum(n=0->oo) x^n/n!

and we are always told that this formula is good for _all_ x. But this would not be true for x = 0 unless 0^0 = 1. So you “undefiners” have a choice: 0^0 = 1, or the above formula is good for all x except 0.

Yes, Alan, power series, which I regard as a generalization of polynomials offer a variety of examples, such as your e^x as to why 0^0 should be regarded as 1. Thank you for that reminder.

Oddly enough, I have encountered comments from people who see these examples and recognize their value but still cling to the idea that 0^0 must be undefined, because it has been ingrained in their minds so long. Their response is: “Yes, there are many cases where it is useful to treat 0^0 as 1 for shorthand purposes, but we must remember that such is merely a convenience and 0^0 is not really 1.” This is really wishy-washy word-gaming, and such people do not truly understand the concept of “definition”. Most mathematical terms are defined in a certain way simply because it is convenient to do so. It is convenient to have 0^0 = 1, and there is nothing wrong with defining it to be so.

One comment I forgot to include previously is that several people offered various fallacious arguments that 0^0 itself (not just limiting forms trending to the indeed indeterminate form commonly referred to as 0^0–those are two distinct concepts) is indeterminate, so that 0^0 can have any value. One property we would want 0^0 to satisfy is the pair of multiplicative laws of exponents. Let’s see what value, if any, for 0^0 could possibly satisfy the pair of laws. Let z be a possible value of 0^0. Then:

z = 0^0 = 0^(0 + 0) = 0^0 × 0^0 = z × z = z², based on rule for common bases;

z = 0^0 = (0 × 0)^0 = 0^0 × 0^0 = z × z = z², based on rule for common exponents.

In both cases z = z², which has 0 and 1 as the only possible solutions. This is additional evidence of the distinction between defining a function at a point and evaluating limits of functions at a point. The only possibilities are: (1) to regard 0^0 to be undefined; (2) to define 0^0 = 0; or (3) to define 0^0 = 1. Choice (2) can be immediately rejected because it has neither usefulness nor proponents. Choice (1) leads to nonsensical statements like “I am intending for you to interpret this expression in this manner but do not think the expression actually means this” or to ridiculously long and obfuscated expressions of definitions, theorems, and formulas to cover all cases. Choice (3) leads to a consistently applicable simplification that works for all relevant definitions, theorems, and formulas.
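The constraint z = z² can be spot-checked by brute force over a small range; a minimal Python sketch:

```python
# Any candidate value z for 0^0 consistent with the exponent laws must
# satisfy z = z*z; over the integers -10..10 only 0 and 1 qualify.
solutions = [z for z in range(-10, 11) if z == z * z]
print(solutions)  # [0, 1]
```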

Anything to the power zero (0) gives 1.

For eg:

5^0 = 1

Similarly,

when zero is raised to the zeroth power,

0^0 is always one (1).

0/0=2?

0/0=(4-4)/(4-4)

0/0=(2^2-2^2)/(2*2-2*2)

0/0=(2+2)(2-2)/2(2-2)

0/0=(2+2)/2

0/0=4/2

0/0=2……………

0 times 0, 0 times… I’m not going to pretend like I know what you guys are saying, but ANYTHING times 0 is 0.

Junior Higher writes on September 22, 2016 at 8:12 pm: “0 times 0, 0 times… I’m not going to pretend like I know what you guys are saying, but ANYTHING times 0 is 0.”

Yes, anything times zero is zero. But here we are multiplying by zero zero times, as you yourself say. That means no multiplication has taken place, so your premise is irrelevant.

Based on the argument, 0^0 must equal undefined, 1, or 0. In fact, ALL such arguments are invalid, because it doesn’t equal any of them. It’s indeterminate.

X^0=1

0^X=0

Which makes it indeterminate. There’s also another reason why it’s indeterminate.

The quotient of powers property states that

X^Z/X^Y=X^(Z-Y)

We know that

0^1 = 0.

Dividing both sides of the equation by 0^1 gives

0^1/0^1 = 0/0.

Applying X^Z/X^Y=X^(Z-Y): 0^1/0^1 = 0^(1-1) = 0^0. Hence

0^0 = 0/0

Since 0/0 = indeterminate, 0^0 = indeterminate

+Math guy: You say x^0=1 and 0^x=0. The former is true in all cases. The latter is true only for x > 0. If you exclude x=0, the former applies to more values of x than the latter. So by the sheer power of democracy, it wins! 0^0 = 1

The term “indeterminate”, as I and Howard Ludwig have gone to great pains to explain, normally applies to a collection of limiting _forms_ which are used to classify various limits. I shan’t repeat the argument here except to say that even though 1^oo is a limiting form (as in a particular definition of e), its actual value is 1. It itself is not indeterminate, but it is used to represent a limiting form that is. It is a _symbolic_ representation, as is 0^0. You need look only a few posts back to see the full argument.

Then you give the quotient argument: x^z / x^y = x^(z-y), which both Howard and I have already refuted. But I’ll do it again:

By this argument you can prove that any power of zero is “indeterminate” (I would call it: not sensibly definable [almost always, if not always, shortened to “undefined”].)

Example: 0^2 = 0^(4-2) = 0^4 / 0^2 = 0 / 0. Therefore, 0^2 is undefined. This is exactly according to your argument. So you would have to say that 0^2 is also “indeterminate”. You can’t have it both ways.

What we do instead is calculate 0^2 by another means: 0^2 = 0*0 = 0.

(Actually, the exponent law you give isn’t valid for x = 0 anyway.)

So we need to find another means to calculate 0^0. Your argument notwithstanding, the exponent laws _are not violated_ for 0^0 = 0 or 0^0 = 1. But 0^0 = 0 is not useful, while 0^0 = 1 is immensely useful and quite sensible by several arguments (see Howard’s posts and my posts).

Consider 0!. We can’t do this with the simple standard definition of n! = 1*2*3*… n times. So we use alternate methods! One is similar to your division trick:

n! = n * (n-1)!

(n-1)! = n!/n

Set n = 1 and you get

0! = 1!/1 = 1/1 = 1

Another is to define 0! using the gamma function: Gamma(n) = (n-1)!
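That gamma-function definition is available in Python’s math module, so the identity Gamma(n) = (n-1)! is easy to spot-check:

```python
import math

# Gamma(n) = (n-1)! for positive integers n; Gamma(1) pins down 0! = 1.
print(math.gamma(5), math.factorial(4))  # 24.0 24
print(math.gamma(1), math.factorial(0))  # 1.0 1
```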

So why can’t we use alternate methods for 0^0? Your arguments simply mean that we must calculate 0^0 by yet another means, and Howard and I have given plenty. Actually, you yourself are using _two_ alternate means! Neither of which helps, but they are still alternate means. So what’s wrong with trying a third alternate means? Are you only allowed to use two, neither of which rule anything out?

You can think of this as extending or generalizing y^x to the case of 0^0. In fact, we do exactly such a thing for any instance in which the exponent is not a positive integer. We use the concept of division to give meaning to nonpositive exponents. We use the concept of a root for rational exponents. We use the definition of the log function for irrational exponents. And we use the formula

e^(ix) = cos x + i sin x

for imaginary exponents.

I mean, what does 4^3.7 mean? How do you multiply 4 by itself 3.7 times? What about i^i (the primary value of which is e^(-pi/2) ~= 0.2079)? What exactly does it mean to take any number to the i-th power? To multiply something i times?
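
For what it’s worth, that i^i value is easy to check numerically (a Python sketch; Python’s complex power operator returns the principal value):

```python
import math

# i^i has principal value e^(-pi/2) ~= 0.2079, as quoted above:
principal = 1j ** 1j
print(principal)
print(math.exp(-math.pi / 2))
```

Both lines print approximately 0.20788.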

Now why can’t we extend or generalize the definition of y^x for the case x = y = 0, especially since there are so many ways to do it, all of which come up with the same value of 1?

By the way, math guy, here is a specific example of the inappropriateness of your argument that 0¹ / 0¹ = 0⁰. I think we would agree that 0¹ = 0, 0² = 0 × 0 = 0, and 0⁴ = (0²)² = (0)² = 0. By your interpretation and use of the division laws of powers, I can say 0³ = 0⁴/0¹ = 0/0, which is indeterminate but equal to 0³, so 0³ must be indeterminate. This is clearly ridiculous, but it is the same argument you used to declare 0⁰ indeterminate. Using this argument to declare 0⁰ indeterminate is just as ridiculous as using it to declare 0³ indeterminate. I’m confident you agree that the argument is ridiculous for 0³, especially since we know that 0³ = 0 × 0 × 0 = 0, which is well-defined.

The point is that there are some commonly used concepts and constructs that usually work but cannot be applied in certain cases; that does not mean we have no other approaches to determine a value. Just because one approach fails to yield a meaningful value for 0³, that does not mean that 0³ is undefined or indeterminate; the same is true for 0⁰.

The approach for evaluating 0⁰ is called the nullary operation principle: if you have an associative binary operation on a set S with an identity element for that operation, then an n-ary version of the operator can be defined (such as Σ for addition +, Π for multiplication ×, and similarly for set unions and intersections and logical conjunctions and disjunctions) to combine n values using that operation, and when n is 0 the result is always the identity element for the operation. There are no other restrictions.

Thus, adding zero values together yields the additive identity 0 (which seems obvious to most people), and multiplying zero values yields the multiplicative identity 1 (which stuns most people, who think that multiplying nothing together should yield nothing, meaning 0; they are confusing zero with nothing, which are very distinct concepts, as this example demonstrates). This principle is the fundamental reason that 0! = 1. It is also the reason that 𝑥⁰ = 1 for ALL complex numbers 𝑥 (and even further, all quaternions). Remember, there are no exceptions as long as the operation satisfies the stated requirements; the result applies regardless of the operation and values under consideration, and 0 is NOT an exception case for multiplication. Therefore, 0⁰ = 1.
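
The nullary operation principle is even baked into programming languages. A small Python check (nothing here beyond the standard library):

```python
import math

# An empty sum yields the additive identity; an empty product yields the
# multiplicative identity:
print(sum([]))            # 0
print(math.prod([]))      # 1

# Python follows the same convention for the two consequences discussed here:
print(0 ** 0)             # 1
print(math.factorial(0))  # 1
```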

I mentioned that with at least one approach, the multiplicative laws of powers, 0 is a potentially meaningful result for 0⁰. This is not a contradiction, because 1 is also a potentially meaningful value. Those are the only two because, in essence, we are solving for 𝑥 in 𝑥² = 𝑥. Because other approaches yield 1 as the only possible meaning for 0⁰, we in essence regard 0 here as an extraneous solution.

From a practical standpoint, 1 is a very useful value to have for 0⁰. It simplifies expressions for polynomials and power series, makes the binomial theorem more succinct and general, and helps in numerous other situations.

Now, as a caveat, the concept of n-ary operations involves combining n values together, and that is meaningful only when n is a nonnegative integer. Therefore, when we refer to 𝑥⁰ = 1 for all 𝑥, we are really looking at 𝑥ⁿ where n is restricted to nonnegative integers, though there is no restriction on 𝑥. There are some applications in real analysis where continuity is important (and we said there is a discontinuity at (0; 0)) and the exponent is allowed to be real, not just an integer. Researchers in such areas tend to prefer to keep (0; 0) out of the domain of the power function, which means they regard 0⁰ as undefined for their work. This leads to a dilemma, with some mathematicians choosing to go one way and others another. As described above, 0⁰ needs to be regarded as 1 in the context of integer exponents. The choice is whether to regard 0 in the context of real numbers as different from 0 in the context of integers (here only for the exponent, but any situation in which an integer 0 is not the same as a real 0 is grating), or to say 0⁰ = 1 in the context of real exponents to match the behavior with integer exponents and force the real analysts to complicate the expression of their concepts.

This is not the only situation in which powers cause some grief. The laws of powers have alternative sets of restrictions: to allow exponents to be any real numbers, the bases must be positive real numbers; if we restrict exponents to be integers, the bases can be any nonzero real numbers; and other pairings of restrictions on exponents and bases are possible. These are required to avoid dividing by 0 as well as cases like 1 = √[(−1) × (−1)] ≠ √(−1) × √(−1) = −1. Another situation that arises is the value to be assigned to ∛−8. Since 𝑦 = 𝑥³ is a bijection on the real numbers, it is invertible; since (−2)³ = −8, the inverse applies: ∛−8 = −2. However, in the context of complex numbers the principal cube root of −8 is 1 + i√3. Why should ∛−8 have two distinct principal values, one in the context of real numbers and another in the context of complex numbers? Is a real −8 not the same as a complex −8, and if so, why should their principal cube roots be different? While 𝑦 = ∛𝑥 is continuous on the real numbers, it is not continuous on the complex numbers (there being a branch cut, usually placed along the negative real axis in the complex plane). As a result, some mathematics textbooks regard ∛𝑥 as being undefined for negative real values of 𝑥 in the context of real numbers, very similarly to why many want to regard 0⁰ as undefined. It seems odd to me that many people who insist that 0⁰ must be undefined are totally happy calling ∛−8 = −2.

Thanks, Alan, for your posting. I tried posting something to math guy last night and did not see your message. Then I decided to add more this morning and saw my posting from last night but still did not see yours. Now I finally see your post from yesterday afternoon and my post from this morning, but not my post from last night. We seem to be in a time warp. Anyway, you did a fantastic job of summarizing what I was trying to post with varying degrees of success. I was not trying to repeat you or act as if I disagreed with you in any way.

No problem, Howard, and thanks for the compliment! And you’ve brought up some excellent points I didn’t know about.

I made a serious error with

“+Math guy: You say x^0=1 and 0^x=1. The former is true in all cases. The latter is true only for x >= 0. If you exclude x=0, the former applies to more values of x than the latter. So by the sheer power of democracy, it wins! 0^0 = 1”

I meant 0^x = 0, of course! And “the latter is true only for” x > 0.

I was recently wondering if any useful meaning could be given to raising zero to an imaginary power, e.g. 0^i. After playing with the laws of exponents, I have decided there’s no useful definition. You get

(0^i)^i = 0^(i·i) = 0^(-1) = 1/0.

Not promising! From

(0*0)^i = 0^i * 0^i

0^i = 0^i * 0^i

you find that 0^i can be only 0 or 1, neither of which works with the first equation. So I have concluded that the useful domain for 0^x is x >= 0 (with x real, of course).
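
Those dead ends are easy to reproduce (a Python sketch; both the log route and the built-in power operator refuse to assign 0^i a value):

```python
import cmath

# The usual definition y^x = exp(x * log y) breaks immediately: log 0 has
# no value.
try:
    cmath.log(0)
except ValueError as err:
    print("cmath.log(0) failed:", err)

# Python's power operator likewise rejects zero raised to a complex power:
try:
    0 ** 1j
except ZeroDivisionError as err:
    print("0 ** 1j failed:", err)
```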

The problem with

1 = √[(−1) × (−1)] ≠ √(−1) × √(−1) = −1 ,

I think, is that √ normally means positive square root, and there is no such thing for -1. The number i is neither positive nor negative. You must go into the complex plane and use the exponent 1/2. Then you get to play with multi-value functions, which makes life tough for the exponent rules! But you can make it work by expressing the complex numbers in polar form and judiciously choosing the values of the angles.

I have no problem with ∛−8 = −2. It’s the real solution of x^3 + 8 = 0. It’s the real number that when cubed gives -8. Are you going to say that x^3 + 8 = 0 has no real solution because it’s not the “principal value”? On second thought, √ does mean _positive_ square root, so ∛ should also be similarly qualified, I guess. Well, you can define it to be the real root. And why not?

Sorry about the length of this, but there are some important but subtle issues to cover dealing with properties of powers, and I would rather give substantive rationale instead of just spouting assertions like so many of the “0⁰ is undefined” supporters do, showing they do not really understand. Unfortunately, that takes some space, and I hope it helps.

The symbol √ means “𝐭𝐡𝐞 𝐩𝐫𝐢𝐧𝐜𝐢𝐩𝐚𝐥 __ root of”. The blank to fill in corresponds to allowing for a superscript prepended to the √ to indicate the index of the root (so default 2/square but could be 3/cube, …, 𝑛/𝑛-th). I bolded 𝐭𝐡𝐞 to emphasize there is only one result to make the root operation well-defined. I bolded 𝐩𝐫𝐢𝐧𝐜𝐢𝐩𝐚𝐥 to emphasize that there is a conventional mechanism for establishing only one result when there are multiple alternatives to consider—this word is commonly left out in speaking because it is “too long and too much trouble to say, and everybody knows what we mean anyway” even though people commonly forget that little detail and arrive at incorrect conclusions.

In the context of real numbers (for both radicand and resulting root), the term “principal” does not do anything when the index is odd, as there is only one alternative to consider anyway, so that one must be both the real root and the principal root. When the index is even, there is no [real] root for a negative radicand; 0 is the only alternative for a root of radicand 0; and for a positive radicand there are two alternatives, a positive value and a negative value, of which the positive value is conventionally selected to be the principal root.

In the context of complex numbers, there are always 𝑛 distinct 𝑛-th roots for any nonzero radicand, so one needs to be preferred. The principal root is determined by writing the radicand in exponential form 𝑟 exp(i𝜃) with 𝑟 > 0 and –π < 𝜃 ≤ +π, in which case the principal root is ⁿ√𝑟 exp(i𝜃/𝑛). The geometric interpretation of this is that all the 𝑛-th roots lie equally spaced around a circle of radius ⁿ√𝑟 centered on the origin and the principal root is the one closest to the positive real axis; if there are two equally close (one above the positive real axis and one equally far below), take the one above.
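
That recipe translates directly into code. A Python sketch (`principal_root` is my own name for it), using `cmath.polar` to get 𝑟 and 𝜃 with 𝜃 in (−π, π]:

```python
import cmath

def principal_root(z, n):
    # Write z = r * exp(i*theta) with -pi < theta <= pi, then take
    # r**(1/n) * exp(i*theta/n): the root closest to the positive real axis.
    r, theta = cmath.polar(z)
    return r ** (1.0 / n) * cmath.exp(1j * theta / n)

print(principal_root(-8, 3))   # ~1 + 1.732j, i.e. 1 + i*sqrt(3), not -2
print(principal_root(16, 4))   # ~2
```

(Incidentally, Python 3 itself returns a complex number for `(-8) ** (1/3)`, siding with the complex principal value rather than −2.)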

In the case of roots of positive real numbers, the same answer results whether the principal root is determined in the context of real numbers or in the context of complex numbers (all real numbers being complex). Everybody is happy. The real number radicand behaves the same way with a complex domain/codomain as it does with a real domain/codomain. We like it very much (and often expect it, at least subconsciously) when an operation on two operands in a set yields the same result when applied in the context of a superset of that set. This shows up in algebra as the concepts of subgroup/group, subring/ring, and subfield/field, as well as the concept of embedding.

For negative real numbers as radicands, even-indexed roots are not defined in the context of real numbers, but they are in the context of complex numbers, a distinction that does not bother anyone, since the issue merely involves the “size” of the codomain. However, the behavior of odd-indexed roots of negative real numbers is fundamentally different in the context of real numbers versus the context of complex numbers. According to the definitions/rules above, ∛−8 has the value −2 in the context (domain/codomain) of real numbers and 1 + i√3 in the context of complex numbers. This tends to surprise many people when they first encounter it, and they think something must be wrong: an analysis, a definition, … They want (hey, I want) the result of an operation on operands in the context of real numbers to still be the result when the same operation is applied to the same operands in the context of a superset domain/codomain. Addition, subtraction, multiplication, and division of any two rational numbers (except division by 0) yield a result in the context of rational numbers, of real numbers, and of complex numbers, and all three results match. Exponentiation violates that.

Now, there might not be a result in the first context because that codomain is too small to contain the appropriate result while the superset is large enough to contain it; that is fine, as, for example, the division of two nonzero integers might not be an integer. What is not fine is when the subset context contains an appropriate result, the superset contains (by definition of superset) that same value, but there is an even better result in the superset that was not in the subset, so the first result is not “good enough”. One way of handling this mismatch is to say there is in fact no mismatch, because we are going to say that ∛−8 is undefined in the context of the reals, so that it cannot be −2 and yield a mismatch.

I think I might have confused you before into thinking that I do not regard 𝑥³ + 8 = 0 as having a solution, or at least not a real solution. No, this is a cubic equation, so it has 3 roots: −2, 1 + i√3, and 1 − i√3. The question is which one, if any, of these should be regarded as the principal cube root of −8. In the context of complex numbers it is conventionally defined to be 1 + i√3, which is not a real number and, therefore, cannot be the principal root in the context of real numbers. The question is whether we want to allow a mismatching choice (−2) in the context of the subset case, the reals, or to say that none of the 3 roots is principal in order to prevent a mismatch.

I might have you now wondering what my position is on how to treat ∛−8. I sympathize with the above viewpoint; however, 𝑓: 𝐑→𝐑 such that 𝑓(𝑥) = 𝑥³ is a continuous bijection and, therefore, has a continuous inverse. Since (−2)³ = −8, the inverse function applied to −8 yields −2. What else would we call this inverse of the cubing function other than the [principal] cube root function? If ∛−8 is mandated to be undefined so as to avoid the mismatch with the complex world, then our inverse function cannot be the cube root function. This inverse concept is too important to give up in spite of the troubles it causes, so ∛−8 = −2 in my mind. Most mathematicians and theoretical physicists agree with this viewpoint, but there are some (mostly Germans, as I understand) who emphasize the importance of the matching and regard the cube root function, when restricted to real numbers, as being defined only for non-negative input values.

The mismatching troubles are caused by the complex cubing function being many-to-one (specifically 3-to-1 almost everywhere), so it is not injective and, thus, not bijective. In the standard treatment of functions this would mean that the cubing function has no inverse. This kind of thing happens so often in complex analysis that functions are typically allowed to be multivalued. Such functions typically have branch cuts; a branch cut is a line of discontinuity, because the value of the limit as you approach a point on the cut depends on the direction of approach, so the limit does not exist (though the function is usually defined to have a value at the point on the branch cut itself). As an example, the complex cubing function is defined and continuous at all complex numbers, whereas the multivalued inverse function, though defined for all complex numbers, is discontinuous along the negative real axis where the branch cut is. This sort of thing does not occur in the context of real numbers. In my mind this distinction between real functions and complex functions is enough to explain and make acceptable the mismatch in the value of ∛−8 in the context of real numbers versus complex numbers.

Now this has gotten to be a very long discussion, and what does ∛−8 have to do with 0⁰? The primary argument used against defining a value for 0⁰, instead regarding it as undefined or indeterminate, is that the limit of 𝑥^𝑦 as (𝑥; 𝑦)→(0; 0) does not exist, because the value depends on the path of approach. However, for a base and exponent that are analytic functions (“nicely” behaved) of 𝑥 approaching 0 positively as 𝑥 goes to 0, the limit of the power is 1. It is “pathological” cases that act contrarily and cause people to refuse to regard 0⁰ as having value 1. Yet these same people wonder why anyone would question ∛−8 having value −2, even though in the context of complex numbers the principal value is different and the limit of ∛𝑥 as 𝑥 approaches −8 does not exist, because the value depends on the path of approach. Not only that, the only two values that ∛𝑥 can approach are 1 + i√3 and 1 − i√3, neither of which is the “expected” −2. Why is ∛−8 = −2 so easily accepted by people while 0⁰ = 1 is so hard to accept, even though ∛−8 has all the same issues that 0⁰ has, plus some?
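
The contrast between “nice” and “pathological” paths to (0; 0) is easy to see numerically (a Python sketch; the second path, base exp(−1/x) with exponent x, is one standard pathological example of my choosing):

```python
import math

# "Nice" path: base and exponent are both x, and x**x -> 1 as x -> 0+:
for x in [0.1, 1e-3, 1e-6, 1e-9]:
    print(x, x ** x)

# Pathological path: base exp(-1/x) -> 0+ and exponent x -> 0+, yet the
# power stays at exp(-1) ~= 0.3679 the whole way, so the two-variable
# limit of x^y at (0, 0) cannot exist:
for x in [0.5, 0.1, 0.01]:
    print(x, math.exp(-1.0 / x) ** x)
```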

I was using the ∛−8 argument (∛−8 being the power (−8)^(1/3)) and the 1 = √[(−1) × (−1)] ≠ √(−1) × √(−1) = −1 argument to show that powers do not have all the nice properties that the more basic arithmetic operations do. A lot of people have not realized that, or its ramifications. You can’t have it all; something must be given up. We typically do not want to give up defining ∛−8, even though there are consistency issues between real numbers and complex numbers as well as limit and continuity issues. Let’s not throw out the usefulness of 0⁰ = 1 because of even fewer issues with limits and continuity.

OH MY GOD!?

REALLY!?

IT IS SIMPLE

0^0 has no meaning like 1/0

how can people not see it!?

how!?

it’s as simple as 0=0 or 1+1=2

Yes, Aydin. It is really very simple, based on the nullary operation principle, so 0⁰ = 1, just like the 0th power of every number is 1.

So, yes, 0⁰ has a meaning, not like 1/0.

Dear Howard Ludwig

it is a very simple thing, and 0^0 has some differences from 1/0. By the time I was writing the previous comment I was so disturbed that people don’t know such a simple thing; I saw limits and other things being used to try to explain 0^0.

let me make it as simple as it gets: if you look at the first-year mathematics book of Iran’s high school, you will see that it says (and I quote) 0^0 has no meaning!

why is any number (not 0) to the power of 0 equal to 1?

because of the power rule n^0 = n^a * n^(-a) = 1 (n still is not 0!)

why is 0 to the power of any number 0?

0^n = 0 multiplied by itself n times = 0 (it simply can be shown for any n and, you guessed it, not 0!)

but 0^0 has no meaning. It is very basic mathematics; you can’t use limits and other stuff for it, because it is more fundamental than that: it is adding, multiplying, …

Aydin,

Sorry, but your book, and as well as many others, is wrong. Is it a calculus book? Many math books, especially calculus books, give the series for e^x using summation notation:

e^x = SUM(n=0->oo) x^n/n!

Every book I’ve ever seen that gives this formula says it is valid for _all_ x. They also say that 0^0 is undefined. But if that were so, the formula wouldn’t work for x = 0. If you admit that 0^0 = 1, then the formula will work for _all_ x. Therefore, 0^0 = 1. Add to that all the other reasons that Howard Ludwig and I have already given, and it’s pretty much a done deal:

0^0 = 1
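
That series argument checks out numerically (a Python sketch; `exp_series` is my own illustrative name):

```python
import math

def exp_series(x, terms=30):
    # e^x = SUM(n=0->oo) x^n/n!; at x = 0 the n = 0 term is 0**0/0!,
    # so the formula works for all x only because 0**0 == 1, which is
    # exactly what Python evaluates it to.
    return sum(x ** n / math.factorial(n) for n in range(terms))

print(exp_series(0))   # 1.0, as e^0 should be
print(exp_series(1))   # ~2.718281828...
```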

You wrote:

“why is 0 to the power of any number 0?

0^n = 0 multiplied by itself n times = 0 (it simply can be shown for any n and, you guessed it, not 0!)”

As you say, y^x is y times itself with a total of x factors. That only works when x is a positive integer. You can generalize that by defining how negative exponents work and setting y^0 = 1. You can use the integral definition of the log function to generalize further to real exponents, but you have to limit y to positive real numbers. All this can be done while maintaining the three basic exponent rules and the rule that defines what negative exponents do.

The log function is needed, because what does 2^3.7 mean without it? 2 times itself with 3.7 factors? It makes no sense! So you come up with the log definition, restrict y to positive real numbers, and all is well again.
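
A one-line check of that log-based definition (nothing assumed beyond the standard library):

```python
import math

# For y > 0, y**x is defined as exp(x * ln y); "2 multiplied by itself
# 3.7 times" is meaningless, but exp(3.7 * ln 2) is perfectly definite.
print(math.exp(3.7 * math.log(2)))   # ~12.996
print(2 ** 3.7)                      # the built-in power agrees
```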

For complex numbers you need to generalize even more, and you get into trouble because in general log y, and hence y^x, is multi-valued. (It is single-valued for integral exponents.)

So if we need to do all this generalizing to extend the definition of y^x to numbers that aren’t positive integers, why can’t we try to extend the definition to 0^0 in a useful way? With a value of 1, it satisfies the exponent rules, makes summation notation work for all x, and has all the other niceties that Howard and I have described in previous posts.

By the way, 0^(any negative number) is undefined.

0^0 is still an empty product, so in set theory at least, 0^0 = 1.

Another way of looking at it is the zero-dimensional “size” of a single point is simply the number of points. This is equal to 1. Thus 0^0 = 1.
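
On the set-theory reading, y^x counts the functions from an x-element set to a y-element set, and there is exactly one function from the empty set to the empty set: the empty function. A brute-force Python check (`num_functions` is my own illustrative name):

```python
from itertools import product

def num_functions(domain, codomain):
    # A function assigns each domain element an image in the codomain,
    # so there are |codomain| ** |domain| of them; enumerate and count.
    return sum(1 for _ in product(codomain, repeat=len(domain)))

print(num_functions(set(), set()))             # 1: the empty function, so 0^0 = 1
print(num_functions({1, 2}, {'a', 'b', 'c'}))  # 3^2 = 9
print(num_functions({1}, set()))               # 0: nowhere to send the element 1
```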