\section{Global constraint solver}
\label{sec-global-solver}
A common problem in computing interpolating splines is solving a
global system of constraints. For example, in the framework of
two-parameter extensional splines, the constraints are curvature
continuity across control points. Other splines may demand a more
complex set of constraints, such as the generalized four-parameter
splines of Section \ref{sec-constraints}.

Most generally, the goal of the solver is to produce a vector of
parameters, such that all the stated constraints are met. Most often,
the relationship between the parameters and the corresponding
constraints is \emph{nearly linear}. For example, as demonstrated in
Section \ref{solve-euler}, in the small angle approximation the
relationship between endpoint tangent angles and curvature is given by
linear Equations \ref{solveeuler-linear-approx}. Thus, one viable
approach is to express the relationship between the parameters and
constraints as a matrix, and solve it numerically. Hobby's spline
\cite{Hobby85} uses this approach, and achieves approximate
$G^2$-continuity. The resulting matrix is tridiagonal, which admits a
particularly fast $O(n)$ and simple solution.
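
Such a tridiagonal system can be solved in linear time by forward
elimination followed by back-substitution (the Thomas algorithm). The
following sketch illustrates the idea; it is not taken from any of the
spline implementations discussed here:

```python
def solve_tridiagonal(sub, diag, sup, rhs):
    """Solve a tridiagonal linear system in O(n) (Thomas algorithm).

    sub:  sub-diagonal    (length n; sub[0] is unused)
    diag: main diagonal   (length n)
    sup:  super-diagonal  (length n; sup[n-1] is unused)
    rhs:  right-hand side (length n)
    """
    n = len(diag)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    # Forward elimination sweep.
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    # Back-substitution sweep.
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

This sketch omits pivoting, which suffices for diagonally dominant
systems such as those arising here; the \texttt{bandec} and
\texttt{banbks} routines of Numerical Recipes handle the general
banded case.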

However, this approach yields only an approximate solution, and as the
bending angles increase, so do the curvature discontinuities in the
resulting spline. A good general approach is to use Newton iteration
on the entire vector of parameters, using the Jacobian matrix of
partial derivatives to refine the estimate at each iteration. When a
solution exists,
convergence is quadratic, and experience shows that three or four
iterations tend to suffice. This general technique resurfaces a
number of times in the literature, including Malcolm \cite{Malcolm1977},
Stoer \cite{Stoer82}, and Edwards \cite{Edwards92}, the latter being an
especially clear and general explication of using Newton-style
iteration to solve nonlinear splines.

Newton iteration is one of the most efficient techniques, but far from
the only workable approach. Since many of the splines are expressed in
terms of minimizing an energy functional, a number of researchers use
a gradient descent approach, such as Glass \cite{Glass66}, and, more
recently, Moreton \cite{Moreton92}. However, gradient descent solvers
tend to be much slower than Newton approaches. For example, Moreton's
conjugate gradient solver for MVC curves took 137 iterations, and 75
seconds, to compute the optimal solution to a 7-point spline
\cite[p. 99]{Moreton92}. By contrast, the techniques in this chapter
easily achieve interactive speeds even in a JavaScript implementation
that runs in standard Web browsers.
\subsection{Two-parameter solver}
For two-parameter splines, the solution is particularly easy. The
vector of parameters is the tangent angle at each control point, and
the constraints represent $G^2$-continuity across control
points. Thus, the error vector to be driven to zero consists of the
difference in curvature across each control point.

The vector of tangent angles is initialized to some reasonable values
(in our implementation, the initial angles are those assigned by
S\'equin, Lee, and Yen's circle spline
\cite{DBLP:journals/cad/SequinLY05}, although likely a simpler approach
would suffice). At each iteration, the curvatures at the endpoints of
each segment are computed ($\kappa^l_i$ representing the curvature at
the left endpoint of segment $i$, and $\kappa^r_i$ the curvature at
the right). The differences in curvature $\Delta \kappa_i = \kappa^r_i
- \kappa^l_{i+1}$ form the error vector. At the same time, the
Jacobian matrix represents the partial derivatives of the error vector
with respect to the tangent angles,
\begin{equation}
J_{ij} = \frac{\partial \Delta\kappa_i}{\partial \theta_j}\:.
\end{equation}
Then, a correction vector ${\bf \Delta\theta}$ is obtained by solving
the linear system defined by the Jacobian matrix and the error vector;
since the Jacobian matrix is tridiagonal, this solution is very fast:
\begin{equation}
{\bf \Delta\theta} = \textbf{J}^{-1}{\bf \Delta\kappa}\:.
\end{equation}
Finally, the $\theta_i$ values are updated for the next
iteration. In all cases but the most extreme, the new value is just
$\theta_i + \Delta\theta_i$. When bending angles are very large, this
iteration can fail; in that case a second pass is attempted, using
only half of $\Delta\theta_i$ as the update. Convergence is then only
linear rather than quadratic, but the iteration is more robust.
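
The structure of this loop can be sketched on a model problem. Here
the spiro curvature computations are replaced by a hypothetical
discretized boundary-value problem with the same tridiagonal Newton
structure; the function names and the model residual are illustrative,
not taken from \texttt{spiro.js}:

```python
import math

def residual(theta):
    # Stand-in for the curvature-mismatch vector Delta kappa: a
    # discretized nonlinear two-point boundary-value problem whose
    # Jacobian is tridiagonal, mirroring the G2 constraint structure.
    n = len(theta)
    r = []
    for i in range(n):
        left = theta[i - 1] if i > 0 else 0.0
        right = theta[i + 1] if i < n - 1 else 0.0
        r.append(left - 2.0 * theta[i] + right
                 + 0.1 * math.sin(theta[i]) + 0.05)
    return r

def jacobian(theta):
    # Tridiagonal partial derivatives J_ij = d r_i / d theta_j.
    sub = [1.0] * len(theta)
    diag = [-2.0 + 0.1 * math.cos(t) for t in theta]
    sup = [1.0] * len(theta)
    return sub, diag, sup

def solve_tridiag(sub, diag, sup, rhs):
    # O(n) forward elimination and back-substitution.
    n = len(diag)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def newton_solve(theta, tol=1e-12, max_iter=20):
    for _ in range(max_iter):
        r = residual(theta)
        err = max(abs(v) for v in r)
        if err < tol:
            break
        sub, diag, sup = jacobian(theta)
        delta = solve_tridiag(sub, diag, sup, [-v for v in r])
        trial = [t + d for t, d in zip(theta, delta)]
        # Fallback pass: if the full Newton step made things worse,
        # retry with half the step (linear but more robust convergence).
        if max(abs(v) for v in residual(trial)) > err:
            trial = [t + 0.5 * d for t, d in zip(theta, delta)]
        theta = trial
    return theta
```

On this model problem, convergence from a zero initial vector takes
only a few iterations, consistent with the three or four reported
above for the spline solvers.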

While each segment is specified by two parameters, this solver is
written in terms of one parameter per control point, the tangent
angle. This approach guarantees $G^1$-continuity by construction, as
the same tangent is used on both sides of a control point.

One approach to determining the second parameter is to use an
additional local solver, determining the two spiral parameters from
the two tangent endpoints. For the Euler spiral spline, this local
solver is described in Section \ref{solve-euler}. Another approach
(described in more detail in Section \ref{sec-lookup}) is
to precompute a two-dimensional lookup table with the values.

In the software distribution, the clearest and most robust
implementation of this solver is in \texttt{spiro.js}. The top-level
Newton iteration is contained in \texttt{refine\_euler}, the
computation of the coefficients of the Jacobian matrix is in
\texttt{get\_jacobian\_g2}, and the solution of the matrix uses the
generic band diagonal solver \texttt{bandec} and \texttt{banbks},
adapted from Numerical Recipes \cite{NR}.
\subsection{Four-parameter solver}
The same general approach is used for four-parameter splines, but the
layout of parameters is a bit different. For one, the solver directly
computes all parameters of each segment of the spline, rather than
using a two-stage approach. This solver implements the general
constraints as described in Section \ref{sec-constraints}.

The inner loop for computing the error vector is essentially a spiro
integral (see Section \ref{sec-numint}). For the Jacobian matrix, for
simplicity we use central differencing to compute the partial
derivatives, even though an analytical technique would also be
feasible.
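
As a concrete illustration, central differencing of a vector-valued
function can be written generically; the function \texttt{f} below
stands in for the constraint evaluation, and this sketch is not the
libspiro code itself:

```python
def central_diff_jacobian(f, x, h=1e-5):
    # Approximate the Jacobian of f: R^n -> R^m by central differences.
    # Column j is (f(x + h e_j) - f(x - h e_j)) / (2 h), accurate to
    # O(h^2) in the step size h.
    n = len(x)
    m = len(f(x))
    jac = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        fp, fm = f(xp), f(xm)
        for i in range(m):
            jac[i][j] = (fp[i] - fm[i]) / (2.0 * h)
    return jac
```

Central differencing costs two constraint evaluations per parameter,
which is acceptable here because the Jacobian is small and banded.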

Each constraint gets one element in the error vector and one row in
the Jacobian matrix. For a constraint of the form, say, $\kappa''_2 =
\kappa''_3$, the entry in the error vector is the deviation from that
equality, in this case $\kappa''_2 - \kappa''_3$. Each column in the
Jacobian matrix represents one parameter of the polynomial spline,
i.e. there are four times as many columns as curve segments. For the
system to be well-determined, the number of (linearly independent)
constraints must equal the number of parameters, i.e. the Jacobian
matrix must be square. Thus, the constraints described in
Section \ref{sec-constraints} are carefully designed to make the count come
out even. In particular, for a sequence of $G^2$-continuous curve
points, additional constraints zero out the higher-order derivatives
of the polynomial spline, so that the resulting curve segments are
segments of the Euler spiral.
\begin{figure*}
\begin{verbatim}
Set up structure of matrix:
    column = parameter of spline
    row = constraint
Initialize parameter vector to all zero
Repeat until done:
    Compute error vector of deviations from constraints
    Compute norm of error vector
    Below total error threshold?
        If so, done
    Compute matrix of partial derivatives of constraints wrt spline parameters
    Solve band-diagonal matrix for error vector
    Add resulting correction to parameter vector
\end{verbatim}
\caption{\label{solvefour-pseudo}Pseudocode for computing
four-parameter splines.}
\end{figure*}

The algorithm for the four-parameter solver is presented
in pseudocode form in Figure \ref{solvefour-pseudo}. Complete working
code for the solver is included in the open-source libspiro package.
This package contains an efficient C implementation of this solver,
the numerical methods being contained in \texttt{spiro.c}. The outer
loop of the Newton iteration is contained in the \texttt{spiro\_iter}
function, with the computation of the Jacobian matrix in
\texttt{compute\_pderivs}, and the solution of the matrix in
\texttt{bandec11} and \texttt{banbks11}, again adapted from Numerical
Recipes, specialized to the size of the band.
\begin{figure*}
\begin{center}
\includegraphics[scale=.8]{figs/global_iter}
\caption{\label{four-solver-example}Example of four-parameter solver.}
\end{center}
\end{figure*}

An example of the four-parameter solver at work is shown in Figure
\ref{four-solver-example}. The first iteration (i.e. the linear
approximation) is the thin red curve, and the second is in green. By
the third iteration, the constraints are satisfied to within visual
tolerance.

In practice, both solvers run efficiently and robustly. For the same
problem, the two-parameter solver is a bit more robust than the
four-parameter, but of course the latter is much more general and can
solve a much wider class of spline problems.
\section{2-D lookup table}
\label{sec-lookup}
The family of two-parameter, extensional splines admits a particularly
simple and efficient implementation based on 2-D lookup tables. Since
the curve segments are defined by two parameters, for any given
generator curve it is practical to precompute a lookup table indexed
by the tangent angles at the endpoints. The lookup table stores the
curvatures at the endpoints (used for enforcing curvature continuity
across control points), as well as the position and tangent angle of a
midpoint, used for subdivision while rendering the curve. With this
precomputation done only once, drawing an arbitrary spline segment is
only slightly more complex than the standard de Casteljau subdivision
for rendering B\'ezier curves; the only difference is the use of a
lookup table rather than an algebraic formula to determine the
midpoint.
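
The subdivision rendering scheme can be sketched as follows. A
circular arc serves as a stand-in generator curve, with its midpoint
computed analytically where the real implementation would consult the
lookup table; the function names are illustrative:

```python
import math

def arc_midpoint(p0, p1, theta):
    # Midpoint of a circular arc from p0 to p1 turning by angle theta.
    # Stands in for the lookup-table query of the real implementation:
    # the offset from the chord midpoint is (L/2) tan(theta/4) along
    # the chord normal, where L is the chord length.
    mx, my = (p0[0] + p1[0]) / 2.0, (p0[1] + p1[1]) / 2.0
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    s = 0.5 * math.tan(theta / 4.0)  # offset as a fraction of chord length
    return (mx + dy * s, my - dx * s)

def render(p0, p1, theta, depth=5):
    # Recursive midpoint subdivision, analogous to de Casteljau
    # rendering of Beziers: each half of the arc turns by half the angle.
    if depth == 0:
        return [p0, p1]
    pm = arc_midpoint(p0, p1, theta)
    left = render(p0, pm, theta / 2.0, depth - 1)
    right = render(pm, p1, theta / 2.0, depth - 1)
    return left + right[1:]  # drop the duplicated shared midpoint
```

For a general two-parameter spline segment the midpoint position and
tangent come from the table instead of a closed-form expression, but
the recursion is otherwise the same.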
\begin{figure*}
\begin{center}
\includegraphics[scale=1.3]{figs/euler_lut}
\caption{\label{euler-lut}Contour plot of curvature lookup table for Euler spiral.}
\end{center}
\end{figure*}

An example curvature lookup table, for the Euler spiral, is shown in
Figure \ref{euler-lut}. The horizontal axis is the tangent angle at
the left endpoint, in radians; the vertical axis is the tangent angle
at the right endpoint; and the value is the resulting curvature at the
left endpoint. The curvature at the right endpoint follows by
symmetry. Moreover, the table itself is
anti-symmetric, which can further reduce storage requirements. Note
that the relationship is nearly linear for small angles. The slope of
that linear relationship is closely related to locality. Because these
values are so smoothly behaved, a simple representation as a
two-dimensional lookup table with $32 \times 32$ entries, and using
bicubic interpolation for lookups, gives a maximum curvature error of
$10^{-4}$ across the range of the function. This error scales as the
inverse fourth power of the linear size of the table; thus, for
example, if higher precision is desired, a $64\times 64$ table could
reduce the error to about $6 \times 10^{-6}$.
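
The fourth-power scaling can be illustrated with a one-dimensional
analogue: four-point (cubic Lagrange) interpolation of a smooth
function on a uniform grid, the 1-D building block of a bicubic
lookup. This is a toy check, not the spline lookup table itself;
doubling the grid density should shrink the maximum error by roughly
a factor of $2^4 = 16$:

```python
import math

def cubic_interp(samples, h, x):
    # Four-point Lagrange (cubic) interpolation on a uniform grid with
    # spacing h; exact for cubics, hence O(h^4) error for smooth data.
    i = int(x / h)
    i = max(1, min(i, len(samples) - 3))  # keep the 4-point stencil in range
    t = x / h - i
    p0, p1, p2, p3 = samples[i - 1], samples[i], samples[i + 1], samples[i + 2]
    return (-t * (t - 1) * (t - 2) / 6 * p0
            + (t + 1) * (t - 1) * (t - 2) / 2 * p1
            - (t + 1) * t * (t - 2) / 2 * p2
            + (t + 1) * t * (t - 1) / 6 * p3)

def max_interp_error(n):
    # Maximum interpolation error for sin sampled at n points on [0, 3].
    h = 3.0 / (n - 1)
    samples = [math.sin(i * h) for i in range(n)]
    err = 0.0
    for k in range(1000):
        x = 0.3 + 2.4 * k / 999.0  # stay away from the boundary stencils
        err = max(err, abs(cubic_interp(samples, h, x) - math.sin(x)))
    return err
```

Comparing \texttt{max\_interp\_error(33)} against
\texttt{max\_interp\_error(65)} exhibits roughly the factor-of-16
reduction predicted by the $O(h^4)$ error bound.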