The next example will be to minimise a fairly simple two-dimensional function: f(x,y) = (1−x)² + 100(y−x²)², the well-known Rosenbrock function. This is actually a tricky function to minimise because the minimum point (1,1) is in a flat region of the valley and is fairly hard to pin down exactly. To perform the search we need a Python function, testFunc, that evaluates the function (which can be imagined as the ‘height’) for each of the test points. The coordinates are simply accepted as arguments and the value at that point is passed back.
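The definition of testFunc is not shown in this excerpt, so the lines below are only a minimal sketch, assuming the Rosenbrock form stated above; the name testFunc simply matches the calls made in the search code that follows.

def testFunc(point):
  # Unpack the (x, y) coordinates of the test point
  x, y = point
  # Rosenbrock function: the smallest value (zero) is at x=1, y=1,
  # which sits in a long, flat, curved valley
  return (1.0 - x)**2 + 100.0 * (y - x*x)**2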
We will start with a simplistic approach to minimising this function, which will then be
built upon to give something resembling a more normal Monte Carlo search. For the most
basic approach we will define a range of values for the coordinates (here between −5 and +5) within which random test points will be generated. When setting up
an optimisation problem it helps to know something about the range of sensible answers.
It is important that the range of tested points spans the optimum answer but smaller ranges
result in quicker searches. For combinatorial problems (such as travelling salesman) the
question is generally well bounded but for an arbitrary function there may be no known
limits, so it is up to the programmer to choose sensible bounds for variables and perhaps
perform preliminary tests to establish these.
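As an illustration of such a preliminary test (not part of the original example), the function could be evaluated on a coarse grid to check that the chosen bounds really do contain the low-valued region; the 1.0 grid spacing used here is an arbitrary choice.

from numpy import arange

# Coarse scan of the search area: print the 'height' at widely spaced grid
# points so that low-valued regions, and hence sensible bounds, can be spotted
for x in arange(-5.0, 6.0, 1.0):
  for y in arange(-5.0, 6.0, 1.0):
    print('x:%5.1f y:%5.1f value:%.3f' % (x, y, testFunc((x, y))))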
The search begins by initialising the best point (the one with the smallest value) as a random coordinate. Here we do this with the uniform() function from NumPy’s random module and record the initial bestValue as the value of the function at this point.
from numpy import random
uniform = random.uniform

bestPoint = uniform(-5, 5, 2)
bestValue = testFunc(bestPoint)
Next there’s a loop, for a given number of steps, which defines a random point between
the pre-set limits. The value of the function at that point is tested. If the value of the latest
point is smaller than the current best value then the best point and corresponding best
value are redefined. In this example we print a line to the screen when a better point is
found so that the progress can be visualised.
numSteps = 100000

for i in range(numSteps):
  point = uniform(-5, 5, 2)  # A completely random point within the bounds
  value = testFunc(point)

  if value < bestValue:
    bestPoint = point
    bestValue = value
    x, y = point
    print('%5d x:%.3f y:%.3f value:%.3f' % (i, x, y, value))
The next example illustrates an improvement on the above loop. Rather than choosing a completely random point to test next, each subsequent point is based on the best point found so far; this is done using the normal() function, which selects the next point from a Gaussian (normal) spread. The variable bestPoint is passed to this function as the centre of the distribution, i.e. the test point is based on the previous best. The values 0.5 and 2 represent the spread (standard deviation) of the distribution and the number of values (dimensions) respectively. The rest of the loop remains the same, testing whether the value is an improvement. Because the test points now follow the good solutions, this example tends to reach the optimum point more efficiently than the previous one.
normal = random.normal
numSteps = 100000

for i in range(numSteps):  # could use xrange in Python 2
  point = normal(bestPoint, 0.5, 2)  # Gaussian spread centred on the best point so far
  value = testFunc(point)

  if value < bestValue:
    bestPoint = point
    bestValue = value
    x, y = point
    print('%5d x:%.3f y:%.3f value:%e' % (i, x, y, value))