python - using an undetermined number of parameters in scipy's curve_fit -
First question: I'm trying to fit experimental data with a function of the following form:
f(x) = m_0*(1-exp(-t_0*x)) + ... + m_j*(1-exp(-t_j*x))
Currently, I can't find a way to have an undetermined number of parameters m_j, t_j, so I'm forced to do this:
def fitting_function(x, m_1, t_1, m_2, t_2):
    return m_1*(1. - numpy.exp(-t_1*x)) + m_2*(1. - numpy.exp(-t_2*x))

parameters, covariance = curve_fit(fitting_function, xexp, yexp, maxfev=100000)
(xexp and yexp are the experimental points)
Is there a way to write the fitting function like this:
def fitting_function(x, li):
    res = 0.
    for idx in range(len(li) // 2):
        res += li[2*idx]*(1 - numpy.exp(-li[2*idx+1]*x))
    return res
where li is the list of fitting parameters, and then use curve_fit? I don't know how to tell curve_fit the number of fitting parameters. When I try this kind of form for fitting_function, I get the error "ValueError: Unable to determine number of fit parameters."
Second question: is there a way to force the fitting parameters to be positive?
Any help appreciated :)
See the question and answer here. I've made a minimal working example demonstrating how it can be done for your application. I make no claims that this is the best way - I'm muddling through this myself, so critiques or simplifications are appreciated.
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as pl

def wrapper(x, *args):
    # take a flat list of arguments and break it down into the two
    # lists that the fit function understands
    n = len(args) // 2
    amplitudes = list(args[0:n])
    timeconstants = list(args[n:2*n])
    return fit_func(x, amplitudes, timeconstants)

def fit_func(x, amplitudes, timeconstants):
    # the actual fit function
    fit = np.zeros(len(x))
    for m, t in zip(amplitudes, timeconstants):
        fit += m*(1.0 - np.exp(-t*x))
    return fit

def gen_data(x, amplitudes, timeconstants, noise=0.1):
    # generate fake data
    y = np.zeros(len(x))
    for m, t in zip(amplitudes, timeconstants):
        y += m*(1.0 - np.exp(-t*x))
    if noise:
        y += np.random.normal(0, noise, size=len(x))
    return y

def main():
    x = np.arange(0, 100)
    amplitudes = [1, 2, 3]
    timeconstants = [0.5, 0.2, 0.1]
    y = gen_data(x, amplitudes, timeconstants, noise=0.01)
    p0 = [1, 2, 3, 0.5, 0.2, 0.1]
    # call curve_fit through a lambda so it sees a variable-length signature
    popt, pcov = curve_fit(lambda x, *p0: wrapper(x, *p0), x, y, p0=p0)
    yfit = gen_data(x, popt[0:3], popt[3:6], noise=0)
    pl.plot(x, y, x, yfit)
    pl.show()
    print(popt)
    print(pcov)

if __name__ == "__main__":
    main()
A word of warning, though. A linear sum of exponentials is going to make the fit extremely sensitive to noise, particularly with a large number of parameters. You can test this by adding even a small amount of noise to the data generated in the script - small deviations can cause an entirely wrong answer while the fit still looks valid to the eye (test with noise=0, 0.01, and 0.1). Be careful interpreting your results even if the fit looks good. It's also a form that allows for variable swapping: the best fit solution is the same if you swap any pair (m_i, t_i) with (m_j, t_j), meaning your chi-square has multiple identical local minima, so your variables might get swapped around during fitting, depending on your initial conditions. This is unlikely to be a numerically robust way to extract these parameters.
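One way to make the swapping degeneracy harmless when comparing results across runs (a sketch of my own, not part of the original answer) is to sort the fitted (amplitude, time constant) pairs by time constant after the fit, so that every equivalent solution maps to one canonical ordering:

```python
import numpy as np

def canonicalize(params):
    """Sort fitted (amplitude, time-constant) pairs by time constant.

    `params` is the flat parameter array used above: the first half
    holds the amplitudes, the second half the matching time constants.
    """
    n = len(params) // 2
    # pair each time constant with its amplitude, then sort by time constant
    pairs = sorted(zip(params[n:], params[:n]))
    timeconstants = [t for t, _ in pairs]
    amplitudes = [m for _, m in pairs]
    return np.array(amplitudes + timeconstants)

# two fits that differ only by pair order map to the same canonical form
print(canonicalize([2, 1, 0.2, 0.5]))
print(canonicalize([1, 2, 0.5, 0.2]))
```

Comparing canonicalized parameter vectors (rather than the raw curve_fit output) makes it easy to check whether two runs actually converged to the same solution.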
To your second question: yes, you can, by defining your exponentials like so:
m_0**2*(1.0-np.exp(-t_0**2*x)) + ...
Basically, square them in the actual fit function, fit them, and then square the results (which may come out negative or positive) to get the actual parameters. You can also constrain variables to a given range by using different proxy forms.
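For completeness: SciPy 0.17 and later also support bound constraints directly through curve_fit's bounds argument, which avoids the proxy-variable trick entirely. A minimal sketch with a single exponential term (the model function and values here are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def single_exp(x, m, t):
    # one saturating exponential term of the same form as above
    return m * (1.0 - np.exp(-t * x))

x = np.linspace(0, 10, 50)
y = single_exp(x, 2.0, 0.5)  # noiseless synthetic data

# constrain both parameters to be non-negative
popt, pcov = curve_fit(single_exp, x, y, p0=[1.0, 1.0],
                       bounds=(0, np.inf))
```

With bounds, curve_fit switches to a trust-region solver internally, so the starting point p0 must lie inside the bounds.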