

15

What does $\log^{O(1)} n$ mean?

I am aware of big-O notation, but this notation makes no sense to me. I also can't find anything about it, because there is no way for a search engine to interpret it correctly.

For a bit of context, the sentence where I found it reads "[...] we call a function [efficient] if it uses $O(\log n)$ space and at most $\log^{O(1)} n$ per item".


1
I agree that one shouldn't write things like this, unless one is very clear about what it is supposed to mean (and tells the reader about it) and uses the same rules consistently.
Raphael

1
Yes, one should write it as $(\log(n))^{O(1)}$.

1
@RickyDemer That's not the point Raphael is making. $\log^{\mathit{blah}} n$ means exactly $(\log n)^{\mathit{blah}}$.
David Richerby

44
@Raphael This is standard notation in the field. Anybody in the know would know what it means.
Yuval Filmus

1
@YuvalFilmus I think the variety of disagreeing answers is conclusive proof that your claim is false, and that one should refrain from using such notation.
Raphael

Answers:


16

You should ignore for a moment the strong feeling that the "$O$" is in the wrong place and plow on with the definition regardless. $f(n) = \log^{O(1)} n$ means that there exist constants $k$ and $n_0$ such that, for all $n \geq n_0$, $f(n) \leq \log^{k \cdot 1} n = \log^k n$.

Note that $\log^k n$ means $(\log n)^k$. Functions of the form $\log^{O(1)} n$ are often called polylogarithmic, and you might hear people say, "$f$ is polylog $n$."

You'll notice that it's easy to prove that $2n = O(n)$, since $2n \leq kn$ for all $n \geq 0$, where $k = 2$. You might be wondering whether $2\log n = \log^{O(1)} n$. The answer is yes since, for large enough $n$, $\log n \geq 2$, so $2\log n \leq \log^2 n$ for large enough $n$.
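As a numeric sanity check (not a proof) of the claim above, one can verify $2\log n \leq \log^2 n$ for a few sample values of $n$; the particular values below are arbitrary illustrations:

```python
import math

# Sanity check: once log n >= 2, we have 2*log n <= (log n)**2,
# which is why 2*log n is in log^{O(1)} n.
for n in [8, 100, 10**6, 10**12]:
    L = math.log(n)
    assert L >= 2, n
    assert 2 * L <= L ** 2, (n, 2 * L, L ** 2)

print("2*log n <= log^2 n held for all tested n >= 8")
```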

On a related note, you'll often see polynomials written as $n^{O(1)}$: same idea.
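The same kind of numeric illustration works for $n^{O(1)}$: for a concrete polynomial (the choice $f(n) = 7n^3 + n$ here is just an example, not from the thread), the "exponent" $\log f(n) / \log n$ stays bounded by a constant, which is exactly what $n^{O(1)}$ expresses:

```python
import math

# Example polynomial (an arbitrary choice for illustration).
def f(n):
    return 7 * n**3 + n

# log f(n) / log n tends to the degree 3, so it stays below a constant.
for n in [10, 10**3, 10**6, 10**9]:
    exponent = math.log(f(n)) / math.log(n)
    assert exponent < 5, (n, exponent)

print("log f(n) / log n stayed bounded, so f(n) = n^{O(1)}")
```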


This is not supported by the common placeholder convention.
Raphael

I retract my comment: you write $\leq$ in all the important places, which is enough.
Raphael

@Raphael OK. I hadn't had time to check it yet, but I think you may be ordering the quantifiers differently from how I am. I'm not sure we're defining the same class of functions.
David Richerby

I think you are defining my (2), and Tom defines $\bigcup_{c \in \mathbb{R}_{>0}} \{\log^c n\}$.
Raphael

9

This is an abuse of notation that can be made sense of by the generally accepted placeholder convention: whenever you find a Landau term $O(f)$, replace it (in your mind, or on paper) by some arbitrary function $g \in O(f)$.

So if you find

$$f(n) = \log^{O(1)} n$$

you should read

$$f(n) = \log^{g(n)} n \text{ for some } g \in O(1). \quad (1)$$

Note the difference from saying "$\log$ to the power of some constant": $g(n) = n^{1/n}$ is a distinct possibility.
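A quick numeric illustration of this point, using the $g(n) = n^{1/n}$ example from above: the function is bounded (it tends to $1$), hence in $O(1)$, yet it is not a constant, so the placeholder really does allow non-constant exponents:

```python
# g(n) = n**(1/n) is in O(1): bounded above by e**(1/e) ~ 1.4447.
g = lambda n: n ** (1.0 / n)

values = [g(n) for n in (2, 10, 100, 10**4, 10**8)]
assert all(1.0 <= v <= 1.5 for v in values)        # bounded, hence in O(1)
assert len({round(v, 6) for v in values}) > 1      # but not a constant function
print(values)
```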

Warning: The author may be employing even more abuse of notation and want you to read

$$f(n) \in O\bigl(\log^{g(n)} n\bigr) \text{ for some } g \in O(1). \quad (2)$$

Note the difference between (1) and (2); while it works out to define the same set of positive-valued functions here, this does not always work. Do not move $O$ around in expressions without care!


3
I think what makes it tick is that $x \mapsto \log^x(n)$ is monotonic and sufficiently surjective for each fixed $n$. Monotonicity makes the position of the $O$ equivalent and gives you (2) ⇒ (1); going the other way requires $g$ to exist, which could fail if $f(n)$ is outside the range of the function. If you want to point out that moving $O$ around is dangerous and doesn't cover "wild" functions, fine, but in this specific case it's ok for the kind of functions that represent costs.
Gilles 'SO- stop being evil'

@Gilles I weakened the statement to a general warning.
Raphael

1
This answer has been heavily edited, and now I am confused: do you now claim that (1) and (2) are effectively the same?
Oebele

@Oebele As far as I can tell, they are not the same in general, but they are here.
Raphael

But something like $3\log^2 n$ does not match (1) but does match (2), right? Or am I just being silly now?
Oebele

6

It means that the function grows at most as $\log$ to the power of some constant, i.e. $\log^2(n)$ or $\log^5(n)$ or $\log^{99999}(n)$...


This can be used when the function's growth is known to be bounded by some constant power of the $\log$, but the particular constant is unknown or left unspecified.
Yves Daoust

This is not supported by the common placeholder convention.
Raphael

2

"At most $\log^{O(1)} n$" means that there is a constant $c$ such that what is being measured is $O(\log^c n)$.

In a more general context, $f(n) \in \log^{O(1)} n$ is equivalent to the statement that there exist (possibly negative) constants $a$ and $b$ such that $f(n) \in O(\log^a n)$ and $f(n) \in \Omega(\log^b n)$.

It is easy to overlook the $\Omega(\log^b n)$ lower bound. In a setting where that would matter (which would be very uncommon if you're exclusively interested in studying asymptotic growth), you shouldn't have complete confidence that the author actually meant the lower bound, and would have to rely on the context to make sure.


The literal meaning of the notation $\log^{O(1)} n$ is doing arithmetic on the family of functions, resulting in the family of all functions $\log^{g(n)} n$, where $g(n) \in O(1)$. This works in pretty much the same way as how multiplying $O(g(n))$ by $h(n)$ results in $O(g(n) h(n))$, except that you get a result that isn't expressed so simply.


Since the details of the lower bound are probably unfamiliar territory, it's worth looking at some counterexamples. Recall that any $g(n) \in O(1)$ is bounded in magnitude: there is a constant $c$ such that, for all sufficiently large $n$, $|g(n)| < c$.

When looking at asymptotic growth, usually only the upper bound $g(n) < c$ matters, since, e.g., you already know the function is positive. However, in full generality you have to pay attention to the lower bound $g(n) > -c$.

This means, contrary to more typical uses of big-O notation, functions that decrease too rapidly can fail to be in $\log^{O(1)} n$; for example,
$$\frac{1}{n} = \log^{-(\log n)/(\log \log n)} n \notin \log^{O(1)} n$$
because
$$-\frac{\log n}{\log \log n} \notin O(1).$$
The exponent here grows in magnitude too rapidly to be bounded by $O(1)$.
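As a numeric sanity check of this counterexample (the sample values of $n$ are arbitrary), one can verify both that this exponent really does give $1/n$ and that its magnitude keeps growing:

```python
import math

# The exponent e(n) needed to write 1/n as log^{e(n)} n (natural log throughout).
def exponent(n):
    return -math.log(n) / math.log(math.log(n))

for n in [10, 10**3, 10**6, 10**12]:
    e = exponent(n)
    # (log n)^{e(n)} really equals 1/n, up to floating-point error.
    assert math.isclose(math.log(n) ** e, 1.0 / n, rel_tol=1e-6)

# The magnitude of the exponent grows without bound, so it is not in O(1).
mags = [abs(exponent(n)) for n in [10, 10**3, 10**6, 10**12]]
assert mags == sorted(mags)
print([round(m, 2) for m in mags])
```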

A counterexample of a somewhat different sort is that $-1 \notin \log^{O(1)} n$.


Can't I just take $b = 0$ and make your claimed lower bound go away?
David Richerby

1
@DavidRicherby No, $b = 0$ still says that $f$ is bounded below. Hurkyl: why isn't $f(n) = 1/n$ in $\log^{O(1)} n$?
Gilles 'SO- stop being evil'

@Gilles: More content added!

@Gilles OK, sure, it's bounded below by 1. Which is no bound at all for "most" applications of Landau notation in CS.
David Richerby

1) Your "move around $O$" rule does not always work, and I don't think "at most" usually has that meaning; it's just redundant. 2) Never does $O$ imply a lower bound. That's what you use $\Theta$ for. 3) If and how negative functions are dealt with by a given definition of $O$ (even without abuse of notation) is not universally clear. Most definitions (in analysis of algorithms) exclude them. You seem to assume a definition that bounds the absolute value, which is fine.
Raphael
Licensed under cc by-sa 3.0 with attribution required.