2017-03-20

How do I use Apache Commons Math optimization in Jython?

I want to port Matlab code to Jython and have found that Matlab's fminsearch can be replaced by the Apache Commons Math optimizers. I am writing scripts in the Mango medical image script manager, which uses Jython 2.5.3 as its scripting language; the Commons Math version is 3.6.1. Here is my code:

import sys

def f(x, y):
    return x^2 + y^2

sys.path.append('/home/shujian/APPs/Mango/lib/commons-math3-3.6.1.jar')

sys.add_package('org.apache.commons.math3.analysis')
from org.apache.commons.math3.analysis import MultivariateFunction
sys.add_package('org.apache.commons.math3.optim.nonlinear.scalar.noderiv')
from org.apache.commons.math3.optim.nonlinear.scalar.noderiv import NelderMeadSimplex, SimplexOptimizer
sys.add_package('org.apache.commons.math3.optim.nonlinear.scalar')
from org.apache.commons.math3.optim.nonlinear.scalar import ObjectiveFunction
sys.add_package('org.apache.commons.math3.optim')
from org.apache.commons.math3.optim import MaxEval, InitialGuess
sys.add_package('org.apache.commons.math3.optimization')
from org.apache.commons.math3.optimization import GoalType

initialSolution = [2.0, 2.0]
simplex = NelderMeadSimplex([2.0, 2.0])
opt = SimplexOptimizer(2**(-6), 2**(-10))
solution = opt.optimize(MaxEval(300), ObjectiveFunction(f), simplex,
                        GoalType.MINIMIZE, InitialGuess([2.0, 2.0]))

skewParameters2 = solution.getPointRef()
print skewParameters2

And I get the error below:

TypeError: optimize(): 1st arg can't be coerced to

I am quite confused about how to use the optimizer from Jython, and the examples are all for Java.
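For reference, two likely issues in the snippet above: `f(x, y)` takes two scalar arguments, while the `MultivariateFunction` interface expects a single `value(double[])` method (also note that `^` is bitwise XOR in Python, not exponentiation); and `GoalType` is imported from the deprecated `org.apache.commons.math3.optimization` package rather than `org.apache.commons.math3.optim.nonlinear.scalar`, so it may not be accepted by the new `optimize()` API. A plain-Python sketch of the adapted objective (no jar needed; the hypothetical Jython interface subclass is shown only as a comment):

```python
def f(point):
    # take one sequence argument, matching MultivariateFunction.value(double[]);
    # use '**' for exponentiation ('^' would compute bitwise XOR)
    return point[0] ** 2 + point[1] ** 2

# In Jython the objective would then be wrapped roughly like this
# (sketch only, assuming the commons-math3 jar is on sys.path):
#
#     class Objective(MultivariateFunction):
#         def value(self, point):
#             return f(point)
#
#     solution = opt.optimize(MaxEval(300), ObjectiveFunction(Objective()),
#                             simplex, GoalType.MINIMIZE,
#                             InitialGuess([2.0, 2.0]))

print(f([2.0, 2.0]))  # 8.0
```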

Answer

I gave up on this approach and found another way to do fminsearch in Jython. Below is the Jython version of the code:

import sys
sys.path.append('.../jnumeric-2.5.1_ra0.1.jar')  # add the JNumeric path
import Numeric as np

def nelder_mead(f, x_start,
                step=0.1, no_improve_thr=10e-6,
                no_improv_break=10, max_iter=0,
                alpha=1., gamma=2., rho=-0.5, sigma=0.5):
    '''
    @param f (function): function to optimize, must return a scalar score
        and operate over a numpy array of the same dimensions as x_start
    @param x_start (float list): initial position
    @param step (float): look-around radius in initial step
    @no_improv_thr, no_improv_break (float, int): break after no_improv_break
        iterations with an improvement lower than no_improv_thr
    @max_iter (int): always break after this number of iterations.
        Set it to 0 to loop indefinitely.
    @alpha, gamma, rho, sigma (floats): parameters of the algorithm
        (see Wikipedia page for reference)

    return: tuple (best parameter array, best score)
    '''
    # init
    dim = len(x_start)
    prev_best = f(x_start)
    no_improv = 0
    res = [[np.array(x_start), prev_best]]

    for i in range(dim):
        x = np.array(x_start)
        x[i] = x[i] + step
        score = f(x)
        res.append([x, score])

    # simplex iter
    iters = 0
    while 1:
        # order
        res.sort(key=lambda x: x[1])
        best = res[0][1]

        # break after max_iter
        if max_iter and iters >= max_iter:
            return res[0]
        iters += 1

        # break after no_improv_break iterations with no improvement
        print '...best so far:', best

        if best < prev_best - no_improve_thr:
            no_improv = 0
            prev_best = best
        else:
            no_improv += 1

        if no_improv >= no_improv_break:
            return res[0]

        # centroid
        x0 = [0.] * dim
        for tup in res[:-1]:
            for i, c in enumerate(tup[0]):
                x0[i] += c / (len(res)-1)

        # reflection
        xr = x0 + alpha*(x0 - res[-1][0])
        rscore = f(xr)
        if res[0][1] <= rscore < res[-2][1]:
            del res[-1]
            res.append([xr, rscore])
            continue

        # expansion
        if rscore < res[0][1]:
            xe = x0 + gamma*(x0 - res[-1][0])
            escore = f(xe)
            if escore < rscore:
                del res[-1]
                res.append([xe, escore])
                continue
            else:
                del res[-1]
                res.append([xr, rscore])
                continue

        # contraction
        xc = x0 + rho*(x0 - res[-1][0])
        cscore = f(xc)
        if cscore < res[-1][1]:
            del res[-1]
            res.append([xc, cscore])
            continue

        # reduction
        x1 = res[0][0]
        nres = []
        for tup in res:
            redx = x1 + sigma*(tup[0] - x1)
            score = f(redx)
            nres.append([redx, score])
        res = nres
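As a side note, the reflection, expansion, and contraction steps in the code above all share the form `x_new = x0 + c * (x0 - worst)`, with `c` equal to `alpha`, `gamma`, and `rho` respectively. A minimal pure-Python check of the reflection step (illustrative only, using list arithmetic instead of JNumeric arrays):

```python
def centroid(points):
    # centroid of a list of equal-length vectors
    n = len(points)
    dim = len(points[0])
    return [sum(p[i] for p in points) / n for i in range(dim)]

def reflect(x0, worst, alpha=1.0):
    # reflect the worst vertex through the centroid: x0 + alpha*(x0 - worst)
    return [x0[i] + alpha * (x0[i] - worst[i]) for i in range(len(x0))]

best, good, worst = [0.0, 0.0], [1.0, 0.0], [0.0, 1.0]
x0 = centroid([best, good])   # centroid of all vertices except the worst
xr = reflect(x0, worst)
print(xr)  # [1.0, -1.0]
```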

And the test example is as follows:

def f(x):
    return x[0]**2 + x[1]**2 + x[2]**2

print nelder_mead(f, [3.4, 2.3, 2.2])

The original version is actually written for Python; the link below is the source: https://github.com/fchollet/nelder-mead