diff --git a/docs/source/release_notes.rst b/docs/source/release_notes.rst
index f8abc2ebde2442c93e600228dbc8e33836dfb841..89d3f5eeff56a043b38e7725ab2c846203813cf4 100644
--- a/docs/source/release_notes.rst
+++ b/docs/source/release_notes.rst
@@ -4,6 +4,229 @@
Release Notes
*************
+pyMOR 2020.1 (July ??, 2020)
+----------------------------
+We are proud to announce the release of pyMOR 2020.1! Highlights of this release
+are support for non-intrusive model order reduction using artificial neural networks,
+the subspace accelerated dominant pole algorithm (SAMDP) and the implicitly restarted
+Arnoldi method for eigenvalue computation. Parameter handling in pyMOR has been
+simplified, and a new series of hands-on tutorials makes getting started with pyMOR
+easier.
+
+Over 600 single commits have entered this release. For a full list of changes
+see `here `__.
+
+pyMOR 2020.1 contains contributions by Linus Balicki, Tim Keil, Hendrik Kleikamp
+and Luca Mechelli. We are also happy to welcome Linus as a new main developer!
+See `here `__ for more
+details.
+
+
+Release highlights
+^^^^^^^^^^^^^^^^^^
+
+Model order reduction using artificial neural networks
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+With this release, we introduce a simple approach for non-intrusive model order
+reduction to pyMOR that makes use of artificial neural networks
+`[#1001] `_. The method was first
+described in [HU18]_ and only requires the ability to compute solution snapshots of
+the full-order |Model|. Thus, it can be applied to arbitrary (nonlinear) |Models| even when no
+access to the model's |Operators| is possible.
+
+Our implementation internally wraps `PyTorch `_ for the training and evaluation of
+the neural networks. No knowledge of PyTorch or neural networks is required to apply the method.
+
+
+New system analysis and linear algebra algorithms
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The new :meth:`~pymor.algorithms.eigs.eigs` method
+`[#880] `_ computes the
+smallest/largest eigenvalues of an arbitrary linear real |Operator|
+using the implicitly restarted Arnoldi method [RL95]_. It can also
+be used to solve generalized eigenvalue problems.
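To give an idea of what this kind of algorithm computes, here is a minimal NumPy sketch of a plain (unrestarted) Arnoldi iteration. This is a conceptual illustration only, not pyMOR's implementation, which adds implicit restarting and operates on arbitrary Operators; the test matrix and all names below are made up for the example.

```python
import numpy as np

def arnoldi(A, b, k):
    """Build an orthonormal Krylov basis V and the Hessenberg matrix H = V^T A V.
    The eigenvalues of H (Ritz values) approximate extremal eigenvalues of A."""
    n = b.shape[0]
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # happy breakdown: invariant subspace found
            return V[:, :j + 1], H[:j + 1, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :k], H[:k, :k]

# test matrix with one well-separated eigenvalue at 200
A = np.diag(np.concatenate([np.linspace(1., 99., 99), [200.]]))
rng = np.random.default_rng(42)
V, H = arnoldi(A, rng.standard_normal(100), 25)
largest_ritz = np.max(np.linalg.eigvals(H).real)
```

After only 25 Krylov steps the largest Ritz value matches the well-separated dominant eigenvalue to high accuracy, which is why Krylov methods are attractive for large sparse problems.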
+
+So far, computing the poles of an |LTIModel| was only supported by its
+:meth:`~pymor.models.iosys.LTIModel.poles` method, which uses a dense eigenvalue
+solver and converts the operators to dense matrices.
+The new :meth:`~pymor.algorithms.samdp.samdp` method
+`[#834] `_ implements the
+subspace accelerated dominant pole (SAMDP) algorithm [RM06]_,
+which can be used to compute the dominant poles of an
+|LTIModel| with arbitrary (in particular sparse) system |Operators|
+without relying on dense matrix operations.
+
+
+Improved parameter handling
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+While pyMOR always had a powerful and flexible system for handling parameters,
+understanding this system was often a challenge for pyMOR newcomers. Therefore,
+we have completely overhauled parameter handling in pyMOR, removing some unneeded
+complexities and making the nomenclature more straightforward. In particular:
+
+- The `Parameter` class has been renamed to :class:`~pymor.parameters.base.Mu`.
+- `ParameterType` has been renamed to |Parameters|. The items of a |Parameters|
+  dict are the individual *parameters* of the corresponding |ParametricObject|.
+  The items of a :class:`~pymor.parameters.base.Mu` dict are the associated
+  *parameter values*.
+- All parameters are now one-dimensional NumPy arrays.
+- Instead of manually calling `build_parameter_type` in `__init__`, the parameters
+  of a |ParametricObject| are now automatically inferred from the object's `__init__`
+  arguments. The process can be customized using the new `parameters_own` and
+  `parameters_internal` properties.
+- `CubicParameterSpace` was renamed to |ParameterSpace| and is created using
+  `parametric_object.parameters.space(ranges)`.
+
+Further details can be found in `[#923] `_.
+Also see `[#949] `_ and
+`[#998] `_.
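The idea of inferring parameters from `__init__` arguments can be illustrated with a toy base class that inspects its subclasses' constructor signatures. This is only a conceptual sketch, not pyMOR's actual mechanism (which, for instance, also collects the parameters of parametric constructor arguments); the class and attribute names below are hypothetical.

```python
import inspect

class ParametricObjectSketch:
    """Toy base class: each subclass gets a hypothetical `inferred_parameters`
    attribute listing the names of its __init__ arguments."""
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        sig = inspect.signature(cls.__init__)
        # every named __init__ argument except `self` is treated as a parameter
        cls.inferred_parameters = [name for name in sig.parameters if name != 'self']

class DiffusionOperator(ParametricObjectSketch):
    def __init__(self, diffusion, reaction):
        self.diffusion, self.reaction = diffusion, reaction

print(DiffusionOperator.inferred_parameters)   # ['diffusion', 'reaction']
```

The appeal of this pattern is that parameter declarations can never drift out of sync with the constructor signature, which is the kind of bookkeeping the overhaul removes.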
+
+
+pyMOR tutorial collection
+~~~~~~~~~~~~~~~~~~~~~~~~~
+Hands-on tutorials provide a good opportunity for new users to get started with
+a software library. In this release, a variety of tutorials have been added which
+introduce important pyMOR concepts and basic model order reduction methods. In
+particular, users can now learn about:
+
+- :doc:`tutorial_builtin_discretizer`
+- :doc:`tutorial_basis_generation`
+- :doc:`tutorial_bt`
+- :doc:`tutorial_mor_with_anns`
+- :doc:`tutorial_external_solver`
+
+
+Additional new features
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Improvements to ParameterFunctionals
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Several improvements have been made to pyMOR's |ParameterFunctionals|:
+
+- `[#934] [parameters/functionals] Add derivative of products `_
+- `[#950] [parameters/functionals] Add LincombParameterFunctional `_
+- `[#959] verbose name for d_mu functionals `_
+- `[#861] Min-theta approach `_
+- `[#952] add BaseMaxThetaParameterFunctional to generalize max-theta approach `_
+
+
+Extended Newton algorithm
+~~~~~~~~~~~~~~~~~~~~~~~~~
+Finding a proper step size for the Newton algorithm can be a difficult
+task. In this release, an Armijo line search algorithm has been added which allows computing
+adequate step sizes in every step of the iteration. Details about the line search
+implementation in pyMOR can be found in `[#925] `_.
+
+Additionally, new options for determining convergence of the Newton method have been added.
+It is now possible to choose between the norm of the residual and the norm of the update vector
+as the error measure. Information about other noteworthy improvements related to
+this change can be found in `[#956] `_, as well as
+`[#932] `_.
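The idea behind the Armijo line search can be sketched in a few lines of generic NumPy code. This is a conceptual sketch of damped Newton with residual-based backtracking, not pyMOR's implementation; the function and parameter names are illustrative.

```python
import numpy as np

def newton_armijo(F, jac, x0, tol=1e-10, maxiter=50, alpha=1e-4, beta=0.5):
    """Newton's method with Armijo backtracking: the step size t is halved
    (factor `beta`) until the residual norm shows sufficient decrease."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for _ in range(maxiter):
        r = np.atleast_1d(F(x))
        rnorm = np.linalg.norm(r)
        if rnorm < tol:
            return x
        dx = np.linalg.solve(np.atleast_2d(jac(x)), -r)   # Newton direction
        t = 1.0
        # backtrack until sufficient decrease of the residual norm
        while np.linalg.norm(F(x + t * dx)) > (1 - alpha * t) * rnorm and t > 1e-10:
            t *= beta
        x = x + t * dx
    return x

# root of x^2 - 2, starting far away from the solution
root = newton_armijo(lambda x: x**2 - 2, lambda x: 2 * x, x0=100.0)
```

Without such a safeguard, a full Newton step from a poor starting point can increase the residual; the backtracking loop makes each accepted step strictly reduce it.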
+
+
+initial_guess parameter for apply_inverse
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The :meth:`~pymor.operators.interface.Operator.apply_inverse` and
+:meth:`~pymor.operators.interface.Operator.apply_inverse_adjoint` methods of the |Operator| interface
+have gained an additional `initial_guess` parameter that can be passed to iterative linear solvers.
+For nonlinear |Operators| the initial guess is passed to the :meth:`~pymor.algorithms.newton.newton`
+algorithm `[#941] `_.
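Why an initial guess matters for iterative solvers can be seen with a self-contained conjugate gradient sketch. This is generic NumPy code, not pyMOR's solver machinery; the `x0` argument plays the role of `initial_guess`, and the test problem is invented for the example.

```python
import numpy as np

def cg(A, b, x0=None, tol=1e-10, maxiter=1000):
    """Plain conjugate gradient for SPD A with an optional initial guess.
    Returns the approximate solution and the number of iterations used."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for it in range(maxiter):
        if np.sqrt(rs) < tol:
            return x, it
        Ap = A @ p
        a = rs / (p @ Ap)
        x += a * p
        r -= a * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, maxiter

rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)        # symmetric positive definite test matrix
x_true = rng.standard_normal(50)
b = A @ x_true

x_cold, it_cold = cg(A, b)                                         # zero initial guess
x_warm, it_warm = cg(A, b, x0=x_true + 1e-6 * rng.standard_normal(50))  # warm start
```

A good initial guess (e.g. the solution for a nearby parameter value) leaves the solver with a much smaller initial residual, so it converges in fewer iterations.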
+
+
+manylinux 2010+2014 wheels
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+In addition to `manylinux1 `_ wheels we are now also shipping wheels
+conforming with the `manylinux2010 `_ and
+`manylinux2014 `_ standards. The infrastructure for this was added in
+`[#846] `_.
+
+
+Debugging improvements
+~~~~~~~~~~~~~~~~~~~~~~
+The :meth:`~pymor.core.defaults.defaults` decorator has been refactored to make stepping through it
+with a debugger faster `[#864] `_. Similar improvements
+have been made to :meth:`RuleTable.apply `. The new
+:meth:`~pymor.algorithms.rules.RuleTable.breakpoint_for_obj` and
+:meth:`~pymor.algorithms.rules.RuleTable.breakpoint_for_name` methods allow setting conditional
+breakpoints in :meth:`RuleTable.apply ` that match
+specific objects to which the table might be applied `[#945] `_.
+
+
+WebGL-based visualizations
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+This release enables our `pythreejs `_-based visualization module
+for Jupyter Notebook environments by default. It acts as a drop-in replacement for the previous,
+matplotlib-based default. This new module improves interactive performance for visualizations
+with a large number of degrees of freedom by utilizing the user's graphics card via the browser's WebGL API.
+The old behaviour can be reactivated using
+
+.. jupyter-execute::
+
+ from pymor.basic import *
+ set_defaults({'pymor.discretizers.builtin.gui.jupyter.get_visualizer.backend': 'MPL'})
+
+
+Backward incompatible changes
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Renamed interface classes
+~~~~~~~~~~~~~~~~~~~~~~~~~
+The names of pyMOR's interface classes have been shortened
+`[#859] `_. In particular:
+
+- `VectorArrayInterface`, `OperatorInterface`, `ModelInterface` were renamed to
+  |VectorArray|, |Operator|, |Model|. The corresponding modules were renamed from
+  `pymor.*.interfaces` to `pymor.*.interface`.
+- `BasicInterface`, `ImmutableInterface`, `CacheableInterface` were renamed to
+  |BasicObject|, |ImmutableObject|, |CacheableObject|. `pymor.core.interfaces` has
+  been renamed to :mod:`pymor.core.base`.
+
+The base classes `OperatorBase`, `ModelBase`, `FunctionBase` were merged into
+their respective interface classes `[#859] `_,
+`[#867] `_.
+
+
+Module cleanup
+~~~~~~~~~~~~~~
+Modules associated with pyMOR's builtin discretization toolkit were moved to the
+:mod:`pymor.discretizers.builtin` package `[#847] `_.
+The `domaindescriptions` and `functions` packages were made subpackages of
+:mod:`pymor.analyticalproblems` `[#855] `_,
+`[#858] `_. The obsolete code in
+`pymor.discretizers.disk` was removed `[#856] `_.
+Further, the `playground` package was removed `[#940] `_.
+
+
+State ids removed and caching simplified
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The unnecessarily complicated concept of *state ids*, which was used to build cache keys
+based on the actual state of a |CacheableObject|, has been completely removed from pyMOR.
+Instead, a `cache_id` now has to be specified manually when persistent caching over multiple
+program runs is desired `[#841] `_.
+
+
+Further API changes
+~~~~~~~~~~~~~~~~~~~
+- `[#938] Fix order of parameters in thermalblock_problem `_
+- `[#980] Set gram_schmidt tolerances in POD to 0 to never truncate pod modes `_
+- `[#1012] Change POD default rtol and fix analyze_pickle demo for numpy master `_
+
+
+Further notable improvements
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+- `[#885] Implement VectorArrayOperator.apply_inverse `_
+- `[#888] Implement FenicsVectorSpace.from_numpy `_
+- `[#895] Implement VectorArray.__deepcopy__ via VectorArray.copy(deep=True) `_
+- `[#905] Add from_files method to SecondOrderModel `_
+- `[#919] [reductors.coercive] remove unneccessary initialization in SimpleCoerciveReductor `_
+- `[#926] [Operators] Speed up apply methods for LincombOperator `_
+- `[#937] Move NumpyListVectorArrayMatrixOperator out of the playground `_
+- `[#943] [logger] adds a ctx manager that restores effective level on exit `_
+
+
+
+
+
pyMOR 2019.2 (December 16, 2019)
--------------------------------
We are proud to announce the release of pyMOR 2019.2! For this release we have
diff --git a/docs/source/substitutions.py b/docs/source/substitutions.py
index 6c78be0fd0b9e78c74abe082fe94ce78c5669f8f..019b76e60e2f43fa8a70f41036ca745ecea42a97 100644
--- a/docs/source/substitutions.py
+++ b/docs/source/substitutions.py
@@ -53,6 +53,7 @@ common = '''
.. |defaults| replace:: :mod:`~pymor.core.defaults`
.. |CacheRegion| replace:: :class:`~pymor.core.cache.CacheRegion`
+.. |CacheableObject| replace:: :class:`~pymor.core.cache.CacheableObject`
.. |StationaryProblem| replace:: :class:`~pymor.analyticalproblems.elliptic.StationaryProblem`
.. |InstationaryProblem| replace:: :class:`~pymor.analyticalproblems.instationary.InstationaryProblem`
diff --git a/docs/source/tutorial_basis_generation.rst b/docs/source/tutorial_basis_generation.rst
index 47c7bdf265802c562e46a0dab67bf2d8a2e95744..f350430b8e76bba6e9f2b55cc2eee05631cc641d 100644
--- a/docs/source/tutorial_basis_generation.rst
+++ b/docs/source/tutorial_basis_generation.rst
@@ -7,7 +7,7 @@ In this tutorial we will learn more about |VectorArrays| and how to
construct a reduced basis using pyMOR.
A reduced basis spans a low-dimensional subspace of a |Model|'s
-:attr:`~pymor.models.interface.Model.solution_space`, in which the
+:attr:`~pymor.models.interface.Model.solution_space`, in which the
:meth:`solutions ` of the |Model|
can be well approximated for all parameter values. In this context,
time is treated as an additional parameter. So for time-dependent problems,
@@ -20,8 +20,8 @@ Kolmogorov :math:`N`-width :math:`d_N` given as
.. math::
    d_N := \inf_{\substack{V_N \subseteq V\\ \operatorname{dim}(V_N) \leq N}}\,
-    \sup_{\mu \in \mathcal{P}}\,
-    \inf_{v \in V_N}\,
+    \sup_{\mu \in \mathcal{P}}\,
+    \inf_{v \in V_N}\,
    \|u(\mu) - v\|.
In this formula :math:`V` denotes the
@@ -309,7 +309,7 @@ only 25 basis vectors.
Now, the Euclidean norm will just work fine in many cases.
However, when the full-order model comes from a PDE, it will usually not be the norm
we are interested in, and you may get poor results for problems with
-strongly anisotropic meshes.
+strongly anisotropic meshes.
For our diffusion problem with homogeneous Dirichlet boundaries,
the Sobolev seminorm (of order one) is a natural choice. Among other useful products,
@@ -321,7 +321,7 @@ as
fom.h1_0_semi_product
-.. note::
+.. note::
The `0` in `h1_0_semi_product` refers to the fact that rows and columns of
Dirichlet boundary DOFs have been cleared in the matrix of the Operator to
@@ -342,7 +342,7 @@ projection error, we can simply pass it as the optional `product` argument to
R = trivial_basis[:10].inner(V, product=fom.h1_0_semi_product)
lambdas = np.linalg.solve(G, R)
V_h1_proj = trivial_basis[:10].lincomb(lambdas.T)
-
+
fom.visualize((V, V_h1_proj, V - V_h1_proj), separate_colorbars=True)
As you might have guessed, we have additionally opted here to only use the
@@ -387,7 +387,7 @@ and then extract appropriate submatrices:
trivial_errors = compute_proj_errors(trivial_basis, V, fom.h1_0_semi_product)
Here we have used the fact that we can form multiple linear combinations at once by passing
-multiple rows of linear coefficients to
+multiple rows of linear coefficients to
:meth:`~pymor.vectorarrays.interface.VectorArray.lincomb`. The
:meth:`~pymor.vectorarrays.interface.VectorArray.norm` method returns a
NumPy array of the norms of all vectors in the array with respect to
@@ -452,7 +452,7 @@ We compute the approximation errors for the validation set as before:
.. jupyter-execute::
greedy_errors = compute_proj_errors(greedy_basis, V, fom.h1_0_semi_product)
-
+
plt.figure()
plt.semilogy(trivial_errors, label='trivial')
plt.semilogy(greedy_errors, label='greedy')
@@ -518,7 +518,7 @@ numbers should be near 1:
G_trivial = trivial_basis.gramian(fom.h1_0_semi_product)
G_greedy = greedy_basis.gramian(fom.h1_0_semi_product)
-
+
print(f'trivial: {np.linalg.cond(G_trivial)}, '
f'greedy: {np.linalg.cond(G_greedy)}')
@@ -535,26 +535,26 @@ compute these errors now more easily by exploiting orthogonality:
V_proj = basis[:N].lincomb(v)
errors.append(np.max((V - V_proj).norm(product)))
return errors
-
+
trivial_errors = compute_proj_errors_orth_basis(trivial_basis, V, fom.h1_0_semi_product)
greedy_errors = compute_proj_errors_orth_basis(greedy_basis, V, fom.h1_0_semi_product)
-
+
plt.figure()
plt.semilogy(trivial_errors, label='trivial')
plt.semilogy(greedy_errors, label='greedy')
plt.ylim(1e-1, 1e1)
plt.legend()
plt.show()
-
+
Proper Orthogonal Decomposition
-------------------------------
Another popular method to create a reduced basis out of snapshot data is the so-called
Proper Orthogonal Decomposition (POD), which can be seen as a non-centered version of
-Principal Component Analysis (PCA). First we build a snapshot matrix
+Principal Component Analysis (PCA). First we build a snapshot matrix
-.. math::
+.. math::
A :=
\begin{bmatrix}
@@ -590,7 +590,7 @@ the following error identity holds:
Thus, the linear spaces produced by the POD are actually optimal, albeit in a different
error measure: instead of looking at the worst-case best-approximation error over all
-parameter values, we minimize the :math:`\ell^2`-sum of all best-approximation errors.
+parameter values, we minimize the :math:`\ell^2`-sum of all best-approximation errors.
So in the mean squared average, the POD spaces are optimal, but there might be parameter values
for which the best-approximation error is quite large.
@@ -628,7 +628,7 @@ best-approximation error:
.. jupyter-execute::
pod_errors = compute_proj_errors_orth_basis(pod_basis, V, fom.h1_0_semi_product)
-
+
plt.figure()
plt.semilogy(trivial_errors, label='trivial')
plt.semilogy(greedy_errors, label='greedy')
@@ -641,7 +641,7 @@ As it turns out, the POD spaces perform even slightly better than the greedy spa
Why is that the case? Note that for finite training or validation sets, both considered error measures
are equivalent. In particular:
-.. math::
+.. math::
    \sup_{k = 1,\ldots,K} \inf_{v \in V_N} \|u(\mu_k) - v\| \leq
    \left[\sum_{k = 1}^K \inf_{v \in V_N} \|u(\mu_k) - v\|^2\right]^{1/2} \leq
    \sqrt{K} \cdot \sup_{k = 1,\ldots,K} \inf_{v \in V_N} \|u(\mu_k) - v\|.
@@ -668,7 +668,7 @@ approximating the solutions at the subdomain interfaces.
Weak greedy algorithm
---------------------
-Both POD and the strong greedy algorithm require the computation of all
+Both POD and the strong greedy algorithm require the computation of all
:meth:`solutions ` :math:`u(\mu)`
for all parameter values :math:`\mu` in the `training_set`. So it is
clear right from the start that we cannot afford very large training sets.
@@ -707,11 +707,11 @@ We won't go into any further details in this tutorial, but for nice problem clas
(linear coercive problems with an affine dependence of the system matrix on the |Parameters|),
one can derive a posteriori error estimators for which the equivalence with the best-approximation
error can be shown and which can be computed efficiently, independently from the size
-of the full-order model. Here we will only give a simple example of how to use the
+of the full-order model. Here we will only give a simple example of how to use the
:meth:`weak greedy ` algorithm for our problem at hand.
In order to do so, we need to be able to build a reduced-order
-model with an appropriate error estimator. For the given (linear coercive) thermal block problem
+model with an appropriate error estimator. For the given (linear coercive) thermal block problem
we can use :class:`~pymor.reductors.coercive.CoerciveRBReductor`:
we can use :class:`~pymor.reductors.coercive.CoerciveRBReductor`:
.. jupyter-execute::
@@ -757,7 +757,7 @@ Let's see how the weak-greedy basis performs:
.. jupyter-execute::
weak_greedy_errors = compute_proj_errors_orth_basis(weak_greedy_basis, V, fom.h1_0_semi_product)
-
+
plt.figure()
plt.semilogy(trivial_errors, label='trivial')
plt.semilogy(greedy_errors, label='greedy')