I'm writing Python code to generate and plot 'super-Gaussian' functions, like this:
def supergaussian(x, A, mu, sigma, offset, N=8):
    """Super-Gaussian: amplitude A, centroid mu, std dev sigma, exponent N, constant offset."""
    return A * (1 / (2**(1 + 1/N) * sigma * 2 * scipy.special.gamma(1 + 1/N))) \
        * numpy.exp(-numpy.absolute(numpy.power(x - mu, N)) / (2 * sigma**N)) + offset
init_x = numpy.arange(-100, 100, 1.0)
init_y = supergaussian(init_x, 1, 0, 25, 0, N=12)
The code that follows just makes a plot of it. For a reason I cannot fathom, this works fine with the default value of 8 for N, and for values of N up to 13. When N is 14 or higher, the function crashes at the return line with:

AttributeError: 'float' object has no attribute 'exp'

Any ideas? Since the only thing on that line that uses .exp is numpy.exp, the error message seems to imply that numpy is being interpreted as a float, but only for large values of N...
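One thing I noticed while poking at this (it may or may not be relevant): with sigma = 25, the pure-Python integer denominator 2*sigma**N outgrows 64-bit integer range exactly between N = 13 and N = 14, which is where the crash starts:

```python
# with sigma = 25, the denominator 2*sigma**N is computed in pure Python ints,
# and it crosses the 64-bit integer limits right at the N where the crash begins
print(2 * 25**13 <= 2**63 - 1)  # True: still fits in a signed 64-bit int
print(2 * 25**14 > 2**64 - 1)   # True: too big even for an unsigned 64-bit int
```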
I'm running Python 3.3.2 with numpy 1.7.1 and scipy 0.12.0.
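In case it helps with diagnosis: casting sigma to float before the power seems to sidestep the crash in my testing. Here is a sketch; the float(sigma) line is the only change from the function above, and I'm not certain it's the right fix rather than a workaround:

```python
import numpy
import scipy.special

def supergaussian(x, A, mu, sigma, offset, N=8):
    """Same super-Gaussian, but with sigma forced to float so that 2*sigma**N
    stays a float instead of growing into a huge Python integer."""
    sigma = float(sigma)  # the only change: keep the denominator floating-point
    return A * (1 / (2**(1 + 1/N) * sigma * 2 * scipy.special.gamma(1 + 1/N))) \
        * numpy.exp(-numpy.absolute(numpy.power(x - mu, N)) / (2 * sigma**N)) + offset

x = numpy.arange(-100, 100, 1.0)
y = supergaussian(x, 1, 0, 25, 0, N=14)  # no AttributeError with float sigma
```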