.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples\ask-and-tell.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_ask-and-tell.py>`
        to download the full example code or to run this example in your
        browser via Binder

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_ask-and-tell.py:

=======================
Async optimization loop
=======================

Bayesian optimization is used to tune parameters for walking robots or other
experiments that are not a simple (expensive) function call.

Tim Head, February 2017.
Reformatted by Holger Nahrstaedt 2020

.. currentmodule:: skopt

Such experiments often follow a pattern a bit like this:

1. ask for a new set of parameters
2. walk to the experiment and program in the new parameters
3. observe the outcome of running the experiment
4. walk back to your laptop and tell the optimizer about the outcome
5. go to step 1

A setup like this is difficult to implement with the ``*_minimize()``
function interface. This is why **scikit-optimize** has an ask-and-tell
interface that you can use when you want to control the execution of the
optimization loop. This notebook demonstrates how to use the ask-and-tell
interface.

.. GENERATED FROM PYTHON SOURCE LINES 27-38

.. code-block:: Python


    print(__doc__)

    import numpy as np

    np.random.seed(1234)
    import matplotlib.pyplot as plt

    from skopt import Optimizer
    from skopt.plots import plot_gaussian_process

.. GENERATED FROM PYTHON SOURCE LINES 39-44

The Setup
---------
We will use a simple 1D problem to illustrate the API. This is a little bit
artificial as you normally would not use the ask-and-tell interface if you
had a function you could call to evaluate the objective.

.. GENERATED FROM PYTHON SOURCE LINES 44-48

.. code-block:: Python


    noise_level = 0.1
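As an aside, the five-step ask-and-tell pattern from the introduction can be
written as a plain Python loop. The sketch below is schematic only and is not
scikit-optimize code: ``RandomOptimizer`` and ``run_experiment`` are made-up
stand-ins for a real optimizer and a real (expensive) experiment.

```python
import random


class RandomOptimizer:
    """Toy stand-in for an optimizer, illustrating the ask/tell protocol.

    It suggests uniform random points in a 1D interval; a real optimizer
    would instead fit a surrogate model to the observations it is told.
    """

    def __init__(self, low, high, seed=0):
        self.low, self.high = low, high
        self.rng = random.Random(seed)
        self.xs, self.ys = [], []

    def ask(self):
        # Step 1: suggest a new set of parameters.
        return [self.rng.uniform(self.low, self.high)]

    def tell(self, x, y):
        # Step 4: record the observed outcome.
        self.xs.append(x)
        self.ys.append(y)


def run_experiment(params):
    # Hypothetical experiment; here just a cheap quadratic with minimum at 0.5.
    return (params[0] - 0.5) ** 2


opt_toy = RandomOptimizer(-2.0, 2.0)
for _ in range(20):                    # step 5: go back to step 1
    params = opt_toy.ask()             # step 1: ask for parameters
    outcome = run_experiment(params)   # steps 2-3: run and observe
    opt_toy.tell(params, outcome)      # step 4: report the outcome

print(min(opt_toy.ys))
```

The point of the protocol is that the loop body, not the optimizer, decides
when and how each suggestion is evaluated.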
.. GENERATED FROM PYTHON SOURCE LINES 49-51

Our 1D toy problem: this is the function we are trying to minimize.

.. GENERATED FROM PYTHON SOURCE LINES 51-61

.. code-block:: Python


    def objective(x, noise_level=noise_level):
        return (
            np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2))
            + np.random.randn() * noise_level
        )


    def objective_wo_noise(x, noise_level=0):
        return objective(x, noise_level=0)

.. GENERATED FROM PYTHON SOURCE LINES 62-63

Here is a quick plot to visualize what the function looks like:

.. GENERATED FROM PYTHON SOURCE LINES 63-85

.. code-block:: Python


    # Plot f(x) with a 95% interval for the observation noise
    plt.set_cmap("viridis")
    x = np.linspace(-2, 2, 400).reshape(-1, 1)
    fx = np.array([objective(x_i, noise_level=0.0) for x_i in x])
    plt.plot(x, fx, "r--", label="True (unknown)")
    plt.fill(
        np.concatenate([x, x[::-1]]),
        np.concatenate(
            (
                [fx_i - 1.9600 * noise_level for fx_i in fx],
                [fx_i + 1.9600 * noise_level for fx_i in fx[::-1]],
            )
        ),
        alpha=0.2,
        fc="r",
        ec="None",
    )
    plt.legend()
    plt.grid()
    plt.show()

.. image-sg:: /auto_examples/images/sphx_glr_ask-and-tell_001.png
   :alt: ask and tell
   :srcset: /auto_examples/images/sphx_glr_ask-and-tell_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 86-89

Now we set up the :class:`Optimizer` class. The arguments follow the meaning
and naming of the ``*_minimize()`` functions. An important difference is that
you do not pass the objective function to the optimizer.

.. GENERATED FROM PYTHON SOURCE LINES 89-104

.. code-block:: Python


    opt = Optimizer(
        [(-2.0, 2.0)],
        "GP",
        acq_func="EI",
        acq_optimizer="sampling",
        initial_point_generator="lhs",
    )

    # To obtain a suggestion for the point at which to evaluate the objective
    # you call the ask() method of opt:

    next_x = opt.ask()
    print(next_x)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    [-0.7315058981975282]

.. GENERATED FROM PYTHON SOURCE LINES 105-109

In a real-world use case you would probably go away and use this parameter
in your experiment and come back a while later with the result.
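One simple way to bridge that gap is to persist the suggested point, run the
experiment out of process, and only call ``tell()`` once the outcome is
available. The sketch below is not part of the original example; the file
name ``suggested-params.json`` is made up, and the suggestion is hard-coded
for illustration.

```python
import json

# The point suggested by opt.ask() above (hard-coded here for illustration).
next_x = [-0.7315058981975282]

# Persist the suggestion so the experiment can be run elsewhere ...
with open("suggested-params.json", "w") as f:
    json.dump({"x": next_x}, f)

# ... and later, once the outcome has been observed, load it back:
with open("suggested-params.json") as f:
    x_loaded = json.load(f)["x"]

assert x_loaded == next_x
# opt.tell(x_loaded, observed_outcome)  # report back, as shown next
```

Because ask and tell are decoupled, nothing in the optimizer cares how much
wall-clock time passes between the two calls.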
In this example we can simply evaluate the objective function and report the
value back to the optimizer:

.. GENERATED FROM PYTHON SOURCE LINES 109-113

.. code-block:: Python


    f_val = objective(next_x)
    opt.tell(next_x, f_val)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    fun: 0.20718649236432957
    x: [-0.7315058981975282]
    func_vals: [ 2.072e-01]
    x_iters: [[-0.7315058981975282]]
    models: []
    space: Space([Real(low=-2.0, high=2.0, prior='uniform', transform='normalize')])
    random_state: RandomState(MT19937)
    specs:
        args:
            dimensions: [(-2.0, 2.0)]
            base_estimator: GP
            n_random_starts: None
            n_initial_points: 10
            initial_point_generator: lhs
            n_jobs: 1
            acq_func: EI
            acq_optimizer: sampling
            random_state: None
            model_queue_size: None
            space_constraint: None
            acq_func_kwargs: None
            acq_optimizer_kwargs: None
            avoid_duplicates: True
        function: Optimizer

.. GENERATED FROM PYTHON SOURCE LINES 114-117

Like ``*_minimize()``, the first few points are suggestions from the initial
point generator as there is no data yet with which to fit a surrogate model.

.. GENERATED FROM PYTHON SOURCE LINES 117-124

.. code-block:: Python


    for i in range(9):
        next_x = opt.ask()
        f_val = objective(next_x)
        res = opt.tell(next_x, f_val)

.. GENERATED FROM PYTHON SOURCE LINES 125-127

We can now plot the random suggestions and the first model that has been
fit:

.. GENERATED FROM PYTHON SOURCE LINES 127-135

.. code-block:: Python


    _ = plot_gaussian_process(
        res,
        objective=objective_wo_noise,
        noise_level=noise_level,
        show_next_point=False,
        show_acq_func=True,
    )
    plt.show()

.. image-sg:: /auto_examples/images/sphx_glr_ask-and-tell_002.png
   :alt: x* = -0.3201, f(x*) = -0.9482
   :srcset: /auto_examples/images/sphx_glr_ask-and-tell_002.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 136-137

Let us sample a few more points and plot the optimizer again:

.. GENERATED FROM PYTHON SOURCE LINES 137-152

.. code-block:: Python


    for i in range(10):
        next_x = opt.ask()
        f_val = objective(next_x)
        res = opt.tell(next_x, f_val)

    _ = plot_gaussian_process(
        res,
        objective=objective_wo_noise,
        noise_level=noise_level,
        show_next_point=True,
        show_acq_func=True,
    )
    plt.show()

.. image-sg:: /auto_examples/images/sphx_glr_ask-and-tell_003.png
   :alt: x* = -0.3201, f(x*) = -0.9482
   :srcset: /auto_examples/images/sphx_glr_ask-and-tell_003.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 153-160

By using the :class:`Optimizer` class directly you get control over the
optimization loop.

You can also pickle your :class:`Optimizer` instance if you want to end the
process running it and resume it later. This is handy if your experiment
takes a very long time and you want to shut down your computer in the
meantime:

.. GENERATED FROM PYTHON SOURCE LINES 160-168

.. code-block:: Python


    import pickle

    with open('my-optimizer.pkl', 'wb') as f:
        pickle.dump(opt, f)

    with open('my-optimizer.pkl', 'rb') as f:
        opt_restored = pickle.load(f)

.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 1.249 seconds)

.. _sphx_glr_download_auto_examples_ask-and-tell.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: binder-badge

      .. image:: images/binder_badge_logo.svg
        :target: https://mybinder.org/v2/gh/holgern/scikit-optimize/master?urlpath=lab/tree/notebooks/auto_examples/ask-and-tell.ipynb
        :alt: Launch binder
        :width: 150 px

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: ask-and-tell.ipynb <ask-and-tell.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: ask-and-tell.py <ask-and-tell.py>`

.. only:: html

  .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_