Finding polynomial approximations of a sine wave



I want to approximate the sine wave given by sin(πx) by applying a polynomial waveshaper to a simple triangle wave, generated by the function

T(x) = 1 − 4·|1/2 − mod(x/2 + 1/4, 1)|

where mod(x, 1) is the fractional part of x:

mod(x, y) = y·(x/y − ⌊x/y⌋)
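A quick Python sketch of this triangle wave and fractional-part mod (a minimal illustration; the function names are mine):

```python
import math

def frac(x):
    # mod(x, 1): the fractional part of x
    return x - math.floor(x)

def T(x):
    # Triangle wave with T(0) = 0, T(1/2) = 1, T(3/2) = -1, period 2,
    # so its peaks and zero crossings line up with those of sin(pi * x)
    return 1.0 - 4.0 * abs(0.5 - frac(0.5 * x + 0.25))
```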

A Taylor series could be used as a waveshaper:

S1(x) = πx/2 − (πx/2)³/3! + (πx/2)⁵/5! − (πx/2)⁷/7!

Given the functions above, S1(T(x)) will give us a decent approximation of a sine wave. But we need to go up to the seventh power of the series to get a reasonable result, and the peaks are a little low and won't have exactly zero slope.
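The claim is easy to test numerically; here is a minimal Python sketch of S1(T(x)) against sin(πx) (the sampling grid is arbitrary):

```python
import math

def T(x):
    # triangle wave matching the definition above
    m = (0.5 * x + 0.25) % 1.0
    return 1.0 - 4.0 * abs(0.5 - m)

def S1(t):
    # 7th-order Taylor shaper for sin((pi/2) * t)
    u = math.pi * t / 2.0
    return u - u**3 / 6.0 + u**5 / 120.0 - u**7 / 5040.0

# maximum deviation from the target sine over one full period
err = max(abs(S1(T(i / 1000.0)) - math.sin(math.pi * i / 1000.0))
          for i in range(2001))
```

The error stays below roughly 1.6e-4, with the worst deviation at the peaks, consistent with the complaint above.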

Instead of the Taylor series, we could use a polynomial waveshaper that follows a few rules.

  • It must pass through (−1, −1) and (+1, +1).
  • The slope at (−1, −1) and (+1, +1) must be zero.
  • It must be symmetric.

A simple function that meets our requirements:

S2(x) = (3x − x³)/2

The graphs of S2(T(x)) and sin(πx) are quite close, but not as close as the Taylor series. Between the peaks and the zero crossings they visibly deviate a little. A heavier, more accurate function that meets our requirements:

S3(x) = x(x² − 5)²/16
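The two shapers can be compared numerically; a minimal sketch (grid and helper names are mine):

```python
import math

def T(x):
    # triangle wave as defined above
    m = (0.5 * x + 0.25) % 1.0
    return 1.0 - 4.0 * abs(0.5 - m)

S2 = lambda t: (3.0 * t - t**3) / 2.0          # simple shaper
S3 = lambda t: t * (t * t - 5.0)**2 / 16.0     # heavier shaper

def max_err(S):
    # worst deviation of S(T(x)) from sin(pi x) over one period
    return max(abs(S(T(i / 1000.0)) - math.sin(math.pi * i / 1000.0))
               for i in range(2001))

e2, e3 = max_err(S2), max_err(S3)   # roughly 2e-2 and 2e-3
```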

That's probably close enough for my purposes, but I wonder whether there's another function that comes closer to the sine wave and is computationally cheaper. I have a good handle on how to find functions that meet the three requirements above, but not on how to find functions that meet those requirements and also best match a sine wave.

What methods are there for finding polynomials that mimic a sine wave (when applied to a triangle wave)?


To clarify, I'm not necessarily looking only for odd-symmetric polynomials, although those are the simplest option.

Something like the following function could also meet my needs:

S4(x) = 3x/2 + x²/4 + x⁴/4

This meets the requirements on the negative range, and a piecewise solution could also be used to apply it to the positive range; for example

3x/2 − P(x, 2)/4 − P(x, 4)/4

where P is the signed power function, P(x, p) = sign(x)·|x|^p.

I would also be interested in solutions that use the signed power function to allow fractional exponents, since this gives us another "knob to turn" without adding another coefficient.

a0·x + a1·P(x, p1)

Given the right constants, this could achieve very good accuracy without the heaviness of fifth- or seventh-order polynomials. Here's an example that meets the requirements described here, using some hand-picked constants: a0 = 1.666..., a1 = −0.666..., p1 = 2.5.

(5x − 2·P(x, 5/2))/3
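A small Python sketch of the signed power function and this hand-tuned shaper (illustrative only):

```python
import math

def P(x, p):
    # signed power: sign(x) * |x|**p
    return math.copysign(abs(x)**p, x)

def S(t):
    # hand-picked example: (5t - 2 P(t, 5/2)) / 3
    return (5.0 * t - 2.0 * P(t, 2.5)) / 3.0
```

It passes through (±1, ±1) with near-zero slope there, and tracks sin((π/2)t) to within about 0.01 in between.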

In fact, those constants are very close to π/2, 1 − π/2, and e. Plugging those in gives something that looks extremely close to a sine wave:

(π/2)·x + (1 − π/2)·P(x, e)

To put it another way, x − x^e/6 comes very close to sin(x) between (0, 0) and (π/2, 1). Any thoughts on the significance of this? Perhaps a tool like Octave can help discover the "best" constants for this approach.


So what's your definition of the error term for "closer"? From what I can tell, the Taylor series you quoted is the approximate minimum L² error for a finite number of coefficients. (I think.)
Marcus Müller

What is your goal, by the way? It would really help to tell us why you're looking for a polynomial waveshaper, on what technological basis, and what your main goals for the approximation are.
Marcus Müller

@MarcusMüller I'm willing to sacrifice the Taylor series' accuracy for something significantly cheaper, if it's indistinguishable from a sine wave to the human ear. The peaks of the Taylor-series approximation also bother me. I'm interested in finding something "closer" than the other two functions I listed. I suspect it won't be cheaper than S2.
Guest

"For the human ear" is critical here :) Why do the peaks "bother" you? Again: give us an idea of why / for what purpose and under which constraints you're doing this. Without enough background, your question is simply too broad to answer properly!
Marcus Müller

Why are you starting with a triangle wave? Sine generators are simple and common, square waves are trivially filtered down to the fundamental harmonic, etc.
Carl Witthoft

Answers:



About a decade ago I did this for an unnamed music-synthesizer company that had R&D not far from my condo in Waltham, MA. (I can't imagine who they are.) I don't have the coefficients,

but try this:

f(x) ≜ sin((π/2)x) for −1 ≤ x ≤ +1
     ≈ (π/2)x(a0 + a1x² + a2x⁴)

this guarantees that f(−x) = −f(x).

To guarantee that f′(x)|x=±1 = 0, then

f′(x) = (π/2)(a0 + 3a1x² + 5a2x⁴)

(1)  a0 + 3a1 + 5a2 = 0

That's the first constraint. To guarantee that |f(±1)| = 1, then

(2)  a0 + a1 + a2 = 2/π

That's the second constraint. Eliminating and solving Eqs. (1) and (2) for a0 and a2 in terms of a1 (which is left adjustable):

a0 = 5/(2π) − a1/2

a2 = −1/(2π) − a1/2

Now you have just one coefficient, a1, left to tweak for best performance:

f(x) = (π/2)x((5/(2π) − a1/2) + a1x² − (1/(2π) + a1/2)x⁴)

This is how I would tweak a1 for best performance of a sine-wave oscillator. Using the above and the symmetry of the sine wave about x = 1, I would place exactly one full cycle in a buffer with a power-of-two number of points (say 128, I don't care) and run the FFT on that perfect cycle.

The FFT result in bin 1 will be the strength of the sine and should be approximately N/2. Now you can adjust a1 to move your 3rd-harmonic distortion up and down. That distortion shows up in bin 3 of the FFT results, but the 5th-harmonic distortion (the value in bin 5) will be consequential (it goes up as you bring the 3rd harmonic down). I would adjust a1 so that the strength of the 5th harmonic equals that of the 3rd harmonic. It will be around −70 dB below the 1st harmonic (as I recall). That will be the best-sounding sine wave from a cheap, 3-coefficient, 5th-order, odd-symmetric polynomial.

Someone else can write the MATLAB code. How does that sound?


I definitely won't have time to do the MATLAB to hunt for the optimum a1 so that the 3rd harmonic equals the 5th harmonic, about 70 dB below the fundamental (1st harmonic). someone else needs to do that. sorry.
robert bristow-johnson

Great answer, still digesting it. Actually starting to wonder whether it must be a 3-coefficient, 5th-order, odd-symmetric polynomial... Could your f′(x) actually be f(x), with a piecewise operation around 0? Rough sketch here. Maybe this is what Ced has in mind? Still catching up with you all.
Guest

This is a beautiful approach. I wonder if, instead of taking the FFT and solving iteratively, you could form the 3rd- and 5th-order Chebyshev polynomials from your f(x), then equate the two and solve for a1?
Speedy

I must have been half asleep when I posted that "sketch". I meant something like this, but corrected so that it passes through ±1 and has zero slope there (you can take the derivative, fiddle with it, and integrate it back). I'm not sure whether there's any advantage over the 5th order; it's something I hadn't considered yet.
Guest

This really is a brilliant solution, it just took me a while to take it all in. I hope accepting it won't keep someone else from coming along and writing the code.
Guest


What is usually done is an approximation that minimizes some norm of the error, often the L∞ norm (where the maximum error is minimized) or the L2 norm (where the mean squared error is minimized). L∞ approximation is done using the Remez exchange algorithm. I'm sure you can find some open-source code implementing that algorithm. In this case, however, I think a very simple (discrete) l2 optimization is sufficient. Let's look at some Matlab/Octave code and the results:

x = linspace(0,pi/2,300);   % grid on [0,pi/2]
x = x(:);
% overdetermined system of linear equations
% (using odd powers only)
A3 = [x, x.^3];
A5 = [x, x.^3, x.^5];
b = sin(x);
% solve in l2 sense
c3 = A3 \ b;
c5 = A5 \ b;
f3 = A3 * c3;   % 3rd-order approximation
f5 = A5 * c5;   % 5th-order approximation
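For anyone without Octave, the same least-squares fit in Python/NumPy (a sketch; variable names mirror the Octave code):

```python
import numpy as np

x = np.linspace(0.0, np.pi / 2.0, 300)   # grid on [0, pi/2]
b = np.sin(x)

# overdetermined system, odd powers only
A3 = np.column_stack([x, x**3])
A5 = np.column_stack([x, x**3, x**5])

# solve in the l2 sense
c3, *_ = np.linalg.lstsq(A3, b, rcond=None)
c5, *_ = np.linalg.lstsq(A5, b, rcond=None)

e3 = np.max(np.abs(A3 @ c3 - b))   # max error, 3rd order
e5 = np.max(np.abs(A5 @ c5 - b))   # max error, 5th order
```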

The figure below shows the approximation errors of the 3rd-order and of the 5th-order approximations. The maximum approximation errors are 8.8869e-03 and 1.5519e-04, respectively.

[figure: approximation error curves of the 3rd- and 5th-order fits]

The optimal coefficients are

c3 =
   0.988720369237930
  -0.144993929056091

and

c5 =
   0.99976918199047515
  -0.16582163562776930
   0.00757183954143367

So the third-order approximation is

(1)  sin(x) ≈ 0.988720369237930·x − 0.144993929056091·x³,  x ∈ [−π/2, π/2]

and the fifth-order approximation is

(2)  sin(x) ≈ 0.99976918199047515·x − 0.16582163562776930·x³ + 0.00757183954143367·x⁵,  x ∈ [−π/2, π/2]

EDIT:

I took a look at approximations using the signed power function, as suggested in the question, but the best such approximation is barely better than the third-order approximation shown above. The approximating function is

(3)  f(x) = x − (1/p)·(π/2)^(1−p)·x^p,  x ∈ [0, π/2]

where the constants were chosen such that f′(0) = 1 and f′(π/2) = 0. The power p was optimized to achieve the smallest maximum error over the range [0, π/2]. The optimal value was found to be p = 2.774. The figure below shows the approximation errors of the third-order approximation (1) and of the new approximation (3):
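A quick numerical check of approximation (3) with p = 2.774 (a sketch):

```python
import math

p = 2.774  # optimized exponent

def f(x):
    # f(x) = x - (1/p) * (pi/2)**(1 - p) * x**p on [0, pi/2]
    return x - (math.pi / 2.0)**(1.0 - p) * x**p / p

# maximum deviation from sin(x) over [0, pi/2]
err = max(abs(f(i * math.pi / 2000.0) - math.sin(i * math.pi / 2000.0))
          for i in range(1001))
```

The maximum error comes out around 4.5e-3, the value quoted below.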

[figure: approximation error curves of approximations (1) and (3)]

The maximum approximation error of the approximation (3) is 4.5e-3, but note that the third-order approximation only exceeds that error close to π/2 and that for the most part its approximation error is actually smaller than the one of the signed power function.

EDIT 2:

If you don't mind division you could also use Bhaskara I's sine approximation formula, which has a maximum approximation error of 1.6e-3:

(4)  sin(x) ≈ 16x(π − x) / (5π² − 4x(π − x)),  x ∈ [0, π]
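A quick check of Bhaskara I's formula (sketch):

```python
import math

def bhaskara(x):
    # Bhaskara I's sine approximation on [0, pi]
    return 16.0 * x * (math.pi - x) / (5.0 * math.pi**2 - 4.0 * x * (math.pi - x))

# maximum deviation from sin(x) over [0, pi]
err = max(abs(bhaskara(i * math.pi / 1000.0) - math.sin(i * math.pi / 1000.0))
          for i in range(1001))
```

The formula is exact at 0, π/2, and π, and its worst error is the 1.6e-3 quoted above.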

That's very helpful, thanks. This is the first time I've used Octave. I followed most of it, but how did you get the approximation error plots and maximum values?
Guest

@Guest: The errors are just b-f3 and b-f5, respectively. Use the plot command to plot them.
Matt L.

@Guest: And the maxima you get from max(abs(b-f3)) and max(abs(b-f5)).
Matt L.

@Guest: I played around with the signed power function, but the result is not significantly better than the third-order approximation I had before. Check out my edited answer. As for complexity, would it make such a big difference?
Matt L.

Thanks for looking into it. Complexity isn't a huge deal, just curious how accurate the approximation can get with relatively low complexity. I'm not quite sure how you came up with (3), but it works nicely. I'd need to use 2.752 instead for p, since anything above that will send the peaks over 1 (clipping).
Guest


Start with an otherwise general, odd-symmetry 5th-order parameterized polynomial:

f(x) = a0x + a1x³ + a2x⁵ = x(a0 + a1x² + a2x⁴) = x(a0 + x²(a1 + a2x²))

Now we place some constraints on this function. Amplitude should be 1 at the peaks, in other words f(1)=1. Substituting 1 for x gives:

(1)  a0 + a1 + a2 = 1

That's one constraint. The slope at the peaks should be zero, in other words f′(1) = 0. The derivative of f(x) is

a0 + 3a1x² + 5a2x⁴

and substituting 1 for x gives our second constraint:

(2)  a0 + 3a1 + 5a2 = 0

Now we can use our two constraints to solve for a1 and a2 in terms of a0.

(3)  a1 = 5/2 − 2a0,  a2 = a0 − 3/2

All that's left is to tweak a0 to get a nice fit. Incidentally, a0 (and with it the slope at the origin) ends up being approximately π/2, as we can see from a plot of the function.
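The two constraints can be checked with a few lines of Python (a minimal sketch; taking a0 = π/2 is just the value noted above):

```python
import math

def coeffs(a0):
    # Eq. (3): a1 and a2 in terms of a0
    a1 = 5.0 / 2.0 - 2.0 * a0
    a2 = a0 - 3.0 / 2.0
    return a1, a2

a0 = math.pi / 2.0        # slope at the origin, as noted above
a1, a2 = coeffs(a0)

value_at_1 = a0 + a1 + a2            # f(1), should be 1 for any a0
slope_at_1 = a0 + 3 * a1 + 5 * a2    # f'(1), should be 0 for any a0
```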

Parameter optimization

Below are a number of optimizations of the coefficients, which result in these relative amplitudes of the harmonics compared to the fundamental frequency (1st harmonic):

[figure: comparison of the harmonic levels of the approximations]

In the complex Fourier series:

Σ_(k=−∞)^(∞) c_k e^(i(2π/P)kx),

of a real P-periodic waveform with P = 4 and time symmetry about x = 1, and with half a period defined by the odd function f(x) over −1 ≤ x ≤ 1, the coefficient of the kth complex exponential harmonic is:

c_k = (1/P) ∫₋₁^(−1+P) ( f(x) if x < 1; −f(x−2) if x ≥ 1 ) e^(−i(2π/P)kx) dx.

Because of the relationship 2cos(x) = e^(ix) + e^(−ix) (see: Euler's formula), the amplitude of a real sinusoidal harmonic with k > 0 is 2|c_k|, which is twice the magnitude of the complex exponential of the same frequency. This can be massaged into a form that makes it easier for some symbolic mathematics software to simplify the integral:

2|c_k| = (2/4)·|∫₋₁³ ( f(x) if x < 1; −f(x−2) if x ≥ 1 ) e^(−i(2π/4)kx) dx|
       = (1/2)·|∫₋₁¹ f(x) e^(−i(π/2)kx) dx − ∫₁³ f(x−2) e^(−i(π/2)kx) dx|
       = (1/2)·|∫₋₁¹ f(x) e^(−i(π/2)kx) dx − ∫₋₁¹ f((x+2)−2) e^(−i(π/2)k(x+2)) dx|
       = (1/2)·|∫₋₁¹ f(x) (e^(−i(π/2)kx) − e^(−i(π/2)k(x+2))) dx|
       = (1/2)·|e^(i(π/2)k) ∫₋₁¹ f(x) (e^(−i(π/2)kx) − e^(−i(π/2)k(x+2))) dx|
       = (1/2)·|∫₋₁¹ f(x) (e^(−i(π/2)k(x−1)) − e^(−i(π/2)k(x+1))) dx|

The above takes advantage of the fact that |e^(ix)| = 1 for real x. It is easier for some computer algebra systems to simplify the integral by assuming k is real, and to simplify to integer k at the end. Wolfram Alpha can integrate individual terms of the final integral corresponding to the terms of the polynomial f(x). For the coefficients given in Eq. 3 we get amplitude:

= |48((−1)^k − 1)(16a0(π²k² − 10) − 5(5π²k² − 48)) / (π⁶k⁶)|

5th order, continuous derivative

We can solve for the value of a0 that gives equal amplitude 2|c_k| of the 3rd and the 5th harmonic. There will be two solutions corresponding to the 3rd and the 5th harmonic having equal or opposite phases. The best solution is the one that minimizes the maximum amplitude of the 3rd and above harmonics and equivalently the maximum relative amplitude of the 3rd and above harmonics compared to the fundamental frequency (1st harmonic):

a0 = 3(132375π² − 130832) / (16(15885π² − 16354)) ≈ 1.569778813,
a1 = 5/2 − 2a0 = −(79425π² − 65416) / (8(15885π² − 16354)) ≈ −0.6395576276,
a2 = a0 − 3/2 = 15885π² / (16(15885π² − 16354)) ≈ 0.06977881382.

This gives the fundamental frequency at amplitude 13679616/(15885π⁶ − 16354π⁴) ≈ 1.000071420 and both the 3rd and the 5th harmonic at relative amplitude 1/8906, or about −78.99 dB, compared to the fundamental frequency. A kth harmonic has relative amplitude (1 − (−1)^k)|8177k² − 79425|/(142496k⁶).
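The closed form for the harmonic levels can be checked numerically (sketch; rel_amp is an illustrative name):

```python
# relative amplitude of the kth harmonic for the 5th-order,
# continuous-derivative solution, per the closed form above
def rel_amp(k):
    return (1 - (-1)**k) * abs(8177 * k**2 - 79425) / (142496.0 * k**6)

r1, r3, r5 = rel_amp(1), rel_amp(3), rel_amp(5)
```

At k = 1 this is exactly 1, and the 3rd and 5th harmonics come out equal at about 1/8906.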

7th order, continuous derivative

Likewise, the optimal 7th order polynomial approximation with the same initial constraints and the 3rd, 5th, and 7th harmonic at the lowest possible equal level is:

f(x) = a0x + a1x³ + a2x⁵ + a3x⁷ = x(a0 + a1x² + a2x⁴ + a3x⁶) = x(a0 + x²(a1 + x²(a2 + a3x²)))

a0 = (2a2 + 4a3 + 3)/2 ≈ 1.570781972,
a1 = −(4a2 + 6a3 + 1)/2 ≈ −0.6458482979,
a2 = (347960025π⁴ − 405395408π²) / (16(281681925π⁴ − 405395408π² + 108019280)) ≈ 0.07935067784,
a3 = −16569525π⁴ / (16(281681925π⁴ − 405395408π² + 108019280)) ≈ −0.004284352588.

This is the best of four possible solutions corresponding to equal/opposite phase combinations of the 3rd, 5th, and 7th harmonic. The fundamental frequency has amplitude 2293523251200/(281681925π⁸ − 405395408π⁶ + 108019280π⁴) ≈ 0.9999983752, and the 3rd, 5th, and 7th harmonics have relative amplitude 1/1555395 ≈ −123.8368 dB compared to the fundamental. A kth harmonic has relative amplitude (1 − (−1)^k)|1350241k⁴ − 50674426k² + 347960025|/(597271680k⁸) compared to the fundamental.

5th order

If the requirement of a continuous derivative is dropped, the 5th order approximation will be more difficult to solve symbolically, because the amplitude of the 9th harmonic will rise above the amplitude of the 3rd, 5th, and the 7th harmonic if those are constrained to be equal and minimized. Testing 16 different solutions corresponding to different subsets of three harmonics from {3,5,7,9} being of equal amplitude and of equal or opposite phases, the best solution is:

f(x) = a0x + a1x³ + a2x⁵
a0 = 1 − a1 − a2 ≈ 1.570034357
a1 = 3(2436304π² − 2172825π⁴) / (8(1303695π⁴ − 1827228π² + 537160)) ≈ −0.6425216143
a2 = 1303695π⁴ / (16(1303695π⁴ − 1827228π² + 537160)) ≈ 0.07248725712

The fundamental frequency has amplitude 1080430592/(1303695π⁶ − 1827228π⁴ + 537160π²) ≈ 0.9997773320. The 3rd, 5th, and 9th harmonics have relative amplitude 7/263777 ≈ −91.52 dB, and the 7th harmonic has relative amplitude 726083/31033100273 ≈ −92.6 dB compared to the fundamental. A kth harmonic has relative amplitude (1 − (−1)^k)|67145k⁴ − 2740842k² + 19555425|/(33763456k⁶).

This approximation has a slight corner at the half-cycle boundaries, because the polynomial has zero derivative not at x = ±1 but at x ≈ ±1.002039940. At x = 1 the value of the derivative is about 0.004905799828. This results in slower asymptotic decay of the amplitudes of the harmonics at large k, compared to the 5th-order approximation that has a continuous derivative.

7th order

A 7th order approximation without continuous derivative can be found similarly. The approach requires testing 120 different solutions and was automated by the Python script at the end of this answer. The best solution is:

f(x) = a0x + a1x³ + a2x⁵ + a3x⁷
a0 = 1 − a1 − a2 − a3 ≈ 1.5707953785726114835
a1 = −5(4374085272375π⁶ − 6856418226992π⁴ + 2139059216768π²) / (16(2124555703725π⁶ − 3428209113496π⁴ + 1336912010480π² − 155807094720)) ≈ −0.64590724797262922190
a2 = (2624451163425π⁶ − 3428209113496π⁴) / (16(2124555703725π⁶ − 3428209113496π⁴ + 1336912010480π² − 155807094720)) ≈ 0.079473610232926783079
a3 = −124973864925π⁶ / (16(2124555703725π⁶ − 3428209113496π⁴ + 1336912010480π² − 155807094720)) ≈ −0.0043617408329090447344

The fundamental frequency has amplitude 16991801282396160/(2124555703725π⁸ − 3428209113496π⁶ + 1336912010480π⁴ − 155807094720π²) ≈ 1.0000024810802368487. The largest relative amplitude of the harmonics above the fundamental is about 2.0827×10⁻⁷, or −133.627 dB, compared to the fundamental. A kth harmonic has relative amplitude (1 − (−1)^k)|−162299057k⁶ + 16711400131k⁴ − 428526139187k² + 2624451163425|/(4424948250624k⁸).

Python source

from sympy import symbols, pi, solve, factor, binomial

numEq = 3 # Number of equations
numHarmonics = 6 # Number of harmonics to evaluate

a1, a2, a3, k = symbols("a1, a2, a3, k")
coefficients = [a1, a2, a3]
harmonicRelativeAmplitude = (2*pi**4*a1*k**4*(pi**2*k**2-12)+4*pi**2*a2*k**2*(pi**4*k**4-60*pi**2*k**2+480)+6*a3*(pi**6*k**6-140*pi**4*k**4+6720*pi**2*k**2-53760)+pi**6*k**6)*(1-(-1)**k)/(2*k**8*(2*pi**4*a1*(pi**2-12)+4*pi**2*a2*(pi**4-60*pi**2+480)+6*a3*(pi**6-140*pi**4+6720*pi**2-53760)+pi**6))

harmonicRelativeAmplitudes = []
for i in range(0, numHarmonics) :
    harmonicRelativeAmplitudes.append(harmonicRelativeAmplitude.subs(k, 3 + 2*i))

numCandidateEqs = 2**numHarmonics
numSignCombinations = 2**numEq
useHarmonics = list(range(numEq + 1))

bestSolution = []
bestRelativeAmplitude = 1
bestUnevaluatedRelativeAmplitude = 1
numSolutions = binomial(numHarmonics, numEq + 1)*2**numEq
solutionIndex = 0

for i in range(0, numCandidateEqs) :
    temp = i
    candidateNumHarmonics = 0
    j = 0
    while (temp) :
        if (temp & 1) :
            if candidateNumHarmonics < numEq + 1 :
                useHarmonics[candidateNumHarmonics] = j
            candidateNumHarmonics += 1
        temp >>= 1
        j += 1
    if (candidateNumHarmonics == numEq + 1) :
        for j in range(0,  numSignCombinations) :
            eqs = []
            temp = j
            for n in range(0, numEq) :
                if temp & 1 :
                    eqs.append(harmonicRelativeAmplitudes[useHarmonics[0]] - harmonicRelativeAmplitudes[useHarmonics[1+n]])
                else :
                    eqs.append(harmonicRelativeAmplitudes[useHarmonics[0]] + harmonicRelativeAmplitudes[useHarmonics[1+n]])
                temp >>= 1
            solution = solve(eqs, coefficients, manual=True)
            solutionIndex += 1
            print("Candidate solution %d of %d" % (solutionIndex, numSolutions))
            print(solution)
            solutionRelativeAmplitude = harmonicRelativeAmplitude
            for n in range(0, numEq) :                
                solutionRelativeAmplitude = solutionRelativeAmplitude.subs(coefficients[n], solution[0][n])
            solutionRelativeAmplitude = factor(solutionRelativeAmplitude)
            print(solutionRelativeAmplitude)
            solutionWorstRelativeAmplitude = 0
            for n in range(0, numHarmonics) :
                solutionEvaluatedRelativeAmplitude = abs(factor(solutionRelativeAmplitude.subs(k, 3 + 2*n)))
                if (solutionEvaluatedRelativeAmplitude > solutionWorstRelativeAmplitude) :
                    solutionWorstRelativeAmplitude = solutionEvaluatedRelativeAmplitude
            print(solutionWorstRelativeAmplitude)
            if (solutionWorstRelativeAmplitude < bestRelativeAmplitude) :
                bestRelativeAmplitude = solutionWorstRelativeAmplitude
                bestUnevaluatedRelativeAmplitude = solutionRelativeAmplitude                
                bestSolution = solution
                print("That is a new best solution!")
            print()

print("Best Solution is:")
print(bestSolution)
print(bestUnevaluatedRelativeAmplitude)
print(bestRelativeAmplitude)

This is a variation on Robert's answer, and is the route I eventually took. I'm leaving it here in case it helps anyone else.
Guest

wow, solving it analytically. i woulda just used MATLAB and an FFT and sorta hunt around for the answer.
you did very well.
robert bristow-johnson

actually @OlliNiemitalo, i think -79 dB is good enough for the implementation of a digital synth sine wave oscillator. it can be driven by a triangle wave, which is generated easily from the abs value of a sawtooth, which is most easily generated with a fixed-point phase accumulator.
no one will hear a difference between that 5th-order polynomial sine wave and a pure sine.
robert bristow-johnson

Polynomials in general as f have the advantage that by increasing the order, the error can be made arbitrarily small. Rational functions have the same advantage, but a division is typically more costly to compute than multiplication. For example in Intel i7, a single thread can do 7-27 times as many multiplications and additions than divisions in the same time. Approximating some alternative f means decomposing it to elementary ops, typically multiplications and additions which always amount to polynomials. Those could be optimized to approximate sine directly versus via f.
Olli Niemitalo

@OlliNiemitalo, I see what you mean... if division is that much slower than multiplication (and I guess things like roots / fractional exponents will be even worse), then an approach like the above with a "good, fast f0" is going to wind up factoring out to a Taylor-series-like-polynomial anyway. I guess since it's an approximation anyway, some kind of cheap root approximation could potentially overtake the polynomial approach at some level of accuracy, but that's kinda off in the weeds for what was essentially supposed to be a math question.
Guest


Are you asking this for theoretical reasons or a practical application?

Usually, when you have an expensive to compute function over a finite range the best answer is a set of lookup tables.

One approach is to use best fit parabolas:

n = floor( x * N + .5 );

d = x * N - n;

i = n + N/2;

y = L_0[i] + L_1[i] * d + L_2[i] * d * d;

By finding the parabola at each point that meets the values for d being -1/2, 0, and 1/2, rather than using the derivatives at 0, you ensure a continuous approximation. You could also shift the x value, rather than the array index to deal with your negative x values.
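The three-point parabola fit can be sketched as follows (illustrative; the interval half-width 0.1 is arbitrary):

```python
import math

def parab_coeffs(ym, y, yp):
    # parabola a*d^2 + b*d + c through (-1/2, ym), (0, y), (1/2, yp)
    c = y
    b = yp - ym
    a = 2.0 * (yp + ym - 2.0 * y)
    return a, b, c

# fit one piece of sin() centered at x0, sampled at x0 +/- h,
# so d = -1/2, 0, +1/2 map to x0 - h, x0, x0 + h
h = 0.1
x0 = math.pi / 4.0
a, b, c = parab_coeffs(math.sin(x0 - h), math.sin(x0), math.sin(x0 + h))
```

By construction the parabola reproduces the three samples exactly, and between them it tracks the sine to a few times 1e-5 at this step size.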

Ced

=================================================

Followup:

The amount of effort, and the results, that have gone into finding good approximations is very impressive. I was curious as to how my boring and bland piecewise parabolic solution would compare. Not surprisingly, it does much better. Here are the results:

   Method    Minimum    Maximum     Mean       RMS
  --------   --------   --------   --------   --------
     Power   -8.48842    1.99861   -4.19436    5.27002
    OP S_3   -2.14675    0.00000   -1.20299    1.40854
     Bhask   -1.34370    1.63176   -0.14367    0.97353
     Ratio   -0.24337    0.22770   -0.00085    0.16244
     rbj 5   -0.06724    0.15519   -0.00672    0.04195
    Olli5C   -0.16367    0.20212    0.01003    0.12668
     Olli5   -0.26698    0.00000   -0.15177    0.16402
    Olli7C   -0.00213    0.00000   -0.00129    0.00143
     Olli7   -0.00005    0.00328    0.00149    0.00181
    Para16   -0.00921    0.00916   -0.00017    0.00467
    Para32   -0.00104    0.00104   -0.00001    0.00053
    Para64   -0.00012    0.00012   -0.00000    0.00006

The values represent 1000x the error between the approximation and the actual evaluated every .0001 from a scale of 0 to 1 (inclusive), so 10001 points in all. The scale is converted to evaluate the functions from 0 to π/2, except for Olli Niemitalo's equations which use the 0 to 1 scale. The columns values should be clear from the headers. The results don't change with a .001 spacing.

The "Power" line is the equation: x − x^e/6.

The rbj 5 line is the same as Matt L's c5 solution.

The 16, 32, and 64 are the number of intervals that have parabolic fits. Of course there are insignificant discontinuities in the first derivative at each interval boundary. The values of the function are continuous though. Increasing the number of intervals only increases the memory requirements (and initialization time), it does not increase the amount of calculation needed for the approximation, which is less than any of the other equations. I chose powers of two because a fixed point implementation could save a division by using an AND in such cases. Also, I didn't want the count to be commensurate with the test sampling.

I did run Olli Niemitalo's python program and got this as part of the printout: "Candidate solution 176 of 120" I thought that was odd, so I am mentioning it.

If anybody wants me to include any of the other equations, please let me know in the comments.

Here is the code for the piecewise parabolic approximations. The entire test program is too long to post.

#=============================================================================
from math import pi, sin
from numpy import zeros

def FillParab( argArray, argPieceCount ):

#  y = a d^2 + b d + c

#  ym = a .25 - b .5 + c
#  y  =                c
#  yp = a .25 + b .5 + c

#  c = y
#  b = yp - ym
#  a = ( yp + ym - 2y ) * 2

#---- Calculate Lookup Arrays

        theStep = pi * .5 / float( argPieceCount - 1 )
        theHalf = theStep * .5

        theL0 = zeros( argPieceCount )
        theL1 = zeros( argPieceCount )
        theL2 = zeros( argPieceCount )

        for k in range( 0, argPieceCount ):
         x  = float( k ) * theStep

         ym = sin( x - theHalf )
         y  = sin( x )
         yp = sin( x + theHalf )

         theL0[k] = y
         theL1[k] = yp - ym
         theL2[k] = ( yp + ym - 2.0 * y ) * 2

#---- Do the Fill

        theN = len( argArray )

        theFactor = pi * .5 / float( theN - 1 )

        for i in range( 0, theN ):
         x  = float( i ) * theFactor

         kx = x / theStep
         k  = int( kx + .5 )
         d  = kx - k

         argArray[i] = theL0[k] + ( theL1[k] + theL2[k] * d ) * d

#=============================================================================

=======================================

Addendum

I have included Guest's S3 function from the original post as "OP S_3" and Guest's two parameter formula from the comments as "Ratio". Both are on the 0 to 1 scale. I don't think the Ratio one is suitable for either calculation at runtime or for building a lookup table. After all, it is significantly more computation for the CPU than just a plain sin() call. It is interesting mathematically though.


Good work! I fixed that bug ("176 of 120").
Olli Niemitalo

Nice update, this makes more sense to me now. The x − x^e/6 probably doesn't need to be tested, I just threw it out there because I was trying to figure out the significance of e, which seemed to keep popping up while I was playing with this. A better rational expression to test might be something like this: f0(x) = |x|^a·sign(x); b = f0′(1); f1(x) = f0(x) − b·x; c = 1/f1(1); f2(x) = f1(x)·c ... now a should be set to about 2 2/3...
Guest

...or f0(x) can be pretty much any other odd-symmetric function; sigmoids seem to work well, like (a^x − 1)/(a^x + 1) (but then the right value for a needs to be found, of course). Here's a plot... as Olli mentions, this probably isn't practical for on-the-fly computation, but I guess it could be useful for building a lookup table.
Guest

Or a more accurate 2-param version of that, (a0^x − a1^(−x))/(a0^x + a1^(−x)), looks pretty good with a0 ≈ 13 and a1 ≈ 109
Guest
Licensed under cc by-sa 3.0 with attribution required.