diff --git a/README.md b/README.md
index 3c0ddeab568fa38e419d671422520e2b79a9534a..6a63d99b4016145a1f0964d33b89b10cbbc36805 100644
--- a/README.md
+++ b/README.md
@@ -59,6 +59,93 @@ The full build system is described in the dune-common/doc/buildsystem (Git versi
 
 Here are the [meeting notes](doc/meeting-notes.md).
 
+# Program component structure
+
+- `main()`, in [solution01.cc](src/solution01.cc), reads the configuration
+  file and constructs the grid (either as a structured grid from the
+  information in the configuration file, or by reading a `.msh` file
+  containing the description of an unstructured mesh).  It then passes the
+  configuration and a view of the constructed grid to the `driver()` function.
+- `driver()`, in [driver.hh](src/driver.hh), is responsible for creating most
+  of the components needed to solve the problem, in the right order.  It then
+  invokes `newton.apply()` to solve the problem, and finally writes the
+  output to a `.vtu` file.
+- `class Newton`, in [newton.hh](src/newton.hh), is the nonlinear solver.  It
+  uses the `GridOperator` to compute the initial residual, as well as to
+  create coefficient vectors and the *linear operator*s that represent
+  linearized versions of the problem.  It uses the *linear solver* to
+  repeatedly solve the linearized problems.
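+
+  As a rough, self-contained sketch (a toy 2x2 problem and invented names,
+  not the actual implementation), the structure of such a Newton loop --
+  compute the residual, assemble the Jacobian at the current iterate, solve
+  for the correction, update, check the defect -- looks like this:
+
+  ```cpp
+  #include <array>
+  #include <cassert>
+  #include <cmath>
+  #include <cstdio>
+
+  using Vec = std::array<double, 2>;
+  using Mat = std::array<std::array<double, 2>, 2>;
+
+  // Residual F(z) = 0 for the toy system x^2 + y^2 = 4, x = y.
+  Vec residual(const Vec& z) {
+    return {z[0] * z[0] + z[1] * z[1] - 4.0, z[0] - z[1]};
+  }
+
+  // Analytic Jacobian of F at z (the role of GridOperator::jacobian()).
+  Mat jacobian(const Vec& z) {
+    return {{{2.0 * z[0], 2.0 * z[1]}, {1.0, -1.0}}};
+  }
+
+  // Direct 2x2 solve of J*dz = r (the role of the linear solver backend).
+  Vec solve(const Mat& J, const Vec& r) {
+    double det = J[0][0] * J[1][1] - J[0][1] * J[1][0];
+    return {(r[0] * J[1][1] - r[1] * J[0][1]) / det,
+            (r[1] * J[0][0] - r[0] * J[1][0]) / det};
+  }
+
+  double norm(const Vec& v) { return std::hypot(v[0], v[1]); }
+
+  // Newton iteration: linearize at z, solve for the correction, update,
+  // and stop once the residual ("defect") has been reduced enough.
+  Vec newtonSolve(Vec z, double reduction = 1e-12, int maxIt = 20) {
+    double defect0 = norm(residual(z));
+    for (int it = 0; it < maxIt; ++it) {
+      Vec r = residual(z);
+      if (norm(r) < reduction * defect0) break;
+      Vec dz = solve(jacobian(z), r);
+      z[0] -= dz[0];
+      z[1] -= dz[1];
+    }
+    return z;
+  }
+
+  int main() {
+    Vec z = newtonSolve({1.0, 2.0});
+    std::printf("solution: (%g, %g)\n", z[0], z[1]);
+    assert(std::fabs(z[0] - std::sqrt(2.0)) < 1e-8);
+    assert(std::fabs(z[1] - std::sqrt(2.0)) < 1e-8);
+  }
+  ```
+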
+- The *linear solver*s, in [linearsolver.hh](src/linearsolver.hh), solve the
+  linearized problem represented by the *linear operator*s.
+  - `class ISTLBackend_SEQ_AMG` is used as a reference solver, i.e. we will
+    not be able to run it on the GPU.  It solves the linearized problem using
+    an iterative solver (CG a.k.a. Conjugate Gradients in our case)
+    preconditioned with an AMG (algebraic multigrid), which in turn uses a
+    smoother (SSOR in our case).  AMG operates on a matrix, and so only works
+    with a *linear operator* that can provide one.
+  - `class ISTLBackend_Richardson` is the solver we want to use on the GPU.
+    It also uses CG as the iterative solver, but Richardson as the
+    preconditioner -- which is just another way to say that there is no
+    preconditioner.  Richardson does not require a matrix, and thus it is
+    sufficient if the *linear operator* can compute the result of applying the
+    "matrix" to a vector.
+- The *linear operator*s, in [linearsolver.hh](src/linearsolver.hh), represent
+  the linearized problem.  They can be thought of as something like a matrix
+  that can be right-multiplied with a vector, except that they don't
+  necessarily assemble a matrix internally.  The operators used here have a
+  method `linearizeAt(z)`, which informs them about the linearization point
+  to use when they are applied to a vector.
+  - `class AssembledLinearizedOperator` uses the `GridOperator`'s
+    `jacobian()` method to actually assemble a matrix at the linearization
+    point `z`.  Applying the operator to a vector then becomes a matrix-vector
+    multiplication.  The matrix format used is sparse, i.e. most entries are
+    not stored and are implicitly zero.  Linear solvers that require a matrix
+    can extract it and operate on it directly, for instance to examine
+    individual entries.
+  - `class LinearizedMatrixFreeOperator` uses the `GridOperator`'s
+    `jacobian_apply()` to compute the result of applying the operator to a
+    vector.  The linearization point is stored internally as a coefficient
+    vector.  As this operator has no matrix, it cannot be used with advanced
+    preconditioners like AMG.
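+
+  The split between the two operator classes can be sketched as follows (a
+  toy diagonal "Jacobian" and invented names, purely for illustration): both
+  satisfy the same apply interface, but only one of them ever holds matrix
+  entries:
+
+  ```cpp
+  #include <cassert>
+  #include <cmath>
+  #include <cstddef>
+  #include <cstdio>
+  #include <vector>
+
+  using Vec = std::vector<double>;
+
+  // Common interface: an operator only promises y = A(z) * x, where z is
+  // the linearization point (the toy Jacobian here is diag(1 + 2*z_i)).
+  struct LinearOperator {
+    virtual void linearizeAt(const Vec& z) = 0;
+    virtual Vec apply(const Vec& x) const = 0;
+    virtual ~LinearOperator() = default;
+  };
+
+  // Assembles the (here: diagonal) matrix explicitly; a solver such as
+  // AMG could inspect its entries.
+  struct AssembledOperator : LinearOperator {
+    Vec diag;
+    void linearizeAt(const Vec& z) override {
+      diag.resize(z.size());
+      for (std::size_t i = 0; i < z.size(); ++i) diag[i] = 1.0 + 2.0 * z[i];
+    }
+    Vec apply(const Vec& x) const override {
+      Vec y(x.size());
+      for (std::size_t i = 0; i < x.size(); ++i) y[i] = diag[i] * x[i];
+      return y;
+    }
+  };
+
+  // Stores only the linearization point and computes the product on the
+  // fly; no matrix ever exists, so AMG-style preconditioners can't use it.
+  struct MatrixFreeOperator : LinearOperator {
+    Vec z;
+    void linearizeAt(const Vec& point) override { z = point; }
+    Vec apply(const Vec& x) const override {
+      Vec y(x.size());
+      for (std::size_t i = 0; i < x.size(); ++i)
+        y[i] = (1.0 + 2.0 * z[i]) * x[i];
+      return y;
+    }
+  };
+
+  int main() {
+    Vec z = {0.5, 1.0, 2.0}, x = {1.0, 2.0, 3.0};
+    AssembledOperator a;
+    MatrixFreeOperator m;
+    a.linearizeAt(z);
+    m.linearizeAt(z);
+    Vec ya = a.apply(x), ym = m.apply(x);
+    for (std::size_t i = 0; i < x.size(); ++i) {
+      std::printf("%g %g\n", ya[i], ym[i]);
+      assert(std::fabs(ya[i] - ym[i]) < 1e-14);  // both agree
+    }
+  }
+  ```
+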
+- `class GridOperator`, in [gridoperator.hh](src/gridoperator.hh), is
+  responsible for iterating over the grid and, for each grid element,
+  gathering data from global containers, performing some action defined by a
+  *local operator*, and then scattering the result back to global containers.
+  It also applies constraints on the result in a post-processing step.
+  
+  Besides grid elements ("volume"), it would also be responsible for iterating
+  over intersections between grid elements ("skeleton"), and between grid
+  elements and the boundary of the computational domain ("boundary") -- though
+  that should no longer be necessary for our sample problem pending
+  resolution of issue #30.
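+
+  The gather/local-kernel/scatter pattern can be sketched on a toy 1D
+  discretization of -u'' = 1 with P1 finite elements (invented names; the
+  real class is far more general):
+
+  ```cpp
+  #include <cassert>
+  #include <cmath>
+  #include <cstdio>
+  #include <vector>
+
+  using Vec = std::vector<double>;
+
+  // Toy "local operator": residual contribution of one interval element
+  // of size h for -u'' = 1 with linear P1 shape functions
+  // (element stiffness [[1,-1],[-1,1]]/h, element load h/2 per vertex).
+  void localResidual(double h, double uL, double uR, double& rL, double& rR) {
+    rL = (uL - uR) / h - h / 2.0;
+    rR = (uR - uL) / h - h / 2.0;
+  }
+
+  // Toy "grid operator": loop over elements, gather the element's DOFs,
+  // run the local kernel, scatter-add the result, then post-process by
+  // enforcing the Dirichlet constraint r = 0 at both boundary vertices.
+  Vec assembleResidual(const Vec& u, int elements) {
+    double h = 1.0 / elements;
+    Vec r(u.size(), 0.0);
+    for (int e = 0; e < elements; ++e) {
+      double rL, rR;
+      localResidual(h, u[e], u[e + 1], rL, rR);  // gather + local kernel
+      r[e] += rL;                                // scatter
+      r[e + 1] += rR;
+    }
+    r.front() = 0.0;  // constraints post-processing
+    r.back() = 0.0;
+    return r;
+  }
+
+  int main() {
+    const int elements = 4;
+    double h = 1.0 / elements;
+    // Exact solution of -u'' = 1, u(0) = u(1) = 0 is u(x) = x(1-x)/2; on a
+    // uniform grid its nodal interpolant makes the P1 residual vanish.
+    Vec u(elements + 1);
+    for (int i = 0; i <= elements; ++i) {
+      double x = i * h;
+      u[i] = 0.5 * x * (1.0 - x);
+    }
+    Vec r = assembleResidual(u, elements);
+    for (double ri : r) {
+      std::printf("%g\n", ri);
+      assert(std::fabs(ri) < 1e-12);
+    }
+  }
+  ```
+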
+- The *local operator*s are collections of kernels that operate on grid
+  elements (or intersections).  They determine the particular numerical method
+  that is used to solve a problem (e.g. continuous finite elements,
+  discontinuous Galerkin, finite volume) and the general class of problem
+  (e.g. heat equation, Richards equation, Stokes flow, etc.).  They are
+  typically parameterized by a *problem parameter* class that supplies
+  parameters such as boundary condition values or source terms to make the
+  abstract problem concrete.
+  - `class NonlinearPoissonFEM` in
+    [nonlinearpoissonfem.hh](src/nonlinearpoissonfem.hh) is a *local operator*
+    that solves a Poisson equation that includes a non-linear term using the
+    (continuous) finite element method ("fem").  It describes a range of
+    physical problems.  One of them would be heat transport, for instance in a
+    water boiler: *u* would then denote the temperature, *f(x)* would be the
+    heat supplied by the heating elements, and *g(x)* would be 100°C at the
+    boundary where the heat-conducting material meets the boiling water (at
+    least in the brief steady state when the water has reached boiling
+    temperature but has not yet fully evaporated...).
+    
+    (*TODO:* find some equivalent in the water boiler for the nonlinearity
+    *q(u)*)
+- The *problem parameter* class supplies concrete parameters to local
+  operators.  Typically, local operators that solve the same physical
+  problem, but perhaps with different numerical methods, would use the same
+  interface for this class.
+  - `class NonlinearPoissonProblem` in [problem.hh](src/problem.hh) is our
+    test problem's *problem parameter* class.
+
 # How to inspect the program output
 
 The test programs produce VTK files (`*.vtu`, "Visualization Toolkit