Smooth parametrization of a family of lines

If you have ever needed to parametrize a family of lines, you may have stumbled upon a discontinuity problem. When the slope of two consecutive lines in your discrete family changes sign, it means that in the continuous family you are trying to fit, there is a vertical line somewhere in between. As you know, a vertical line has its slope and y-intercept both undefined, and if you were to plot them for the continuous family you would observe a singularity. So you can't use the slope-intercept form to parametrize your family of lines. Don't despair, here's a trick for you!
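
To see the problem concretely before formalizing it, here is a tiny illustration of my own (not from the original post): take lines through the fixed point (2, 1) whose direction angle rotates through the vertical. The lines themselves vary perfectly smoothly, yet their slope and y-intercept blow up and flip sign:

import numpy as np

# Lines through (2, 1) with direction angle phi crossing pi/2 (the vertical).
# The slope a and y-intercept b diverge and change sign around that point.
for phi in np.linspace(np.pi / 2 - 0.1, np.pi / 2 + 0.1, 5):
    a = np.tan(phi)       # slope
    b = 1.0 - a * 2.0     # y-intercept of the line through (2, 1)
    print(f"phi = {phi:.3f} rad  ->  a = {a:+.3e}, b = {b:+.3e}")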

The problem

Let $\mathcal{L} = \{L_i \mid i \in \{1, \ldots, N\}\}$ be a set of $N$ lines $L_i$ in the plane $\mathbb{R}^2$, which seem to obey a certain rule as $i$ progresses. For instance, $L_{i+1}$, as compared to $L_i$, could be translated and rotated a bit, by some functions of $i$. These lines are known by the values $a_i$ and $b_i$ of their slope and y-intercept respectively, so they all have the following slope-intercept form:

$$L_i : y = a_i x + b_i, \quad \forall L_i \in \mathcal{L} \tag{1}$$

An example of what the lines $L_i$ may look like.

Moreover, the slope is unconstrained, and can very well change sign from one line to the next. We would like to find a smooth parametrization $L(s)$, such that each line $L_i \in \mathcal{L}$ is generated by a specific value $s_i$ of the parameter $s$:

$$\forall L_i \in \mathcal{L}, \ \exists! \, s_i \in \mathbb{R} \mid L(s_i) \equiv L_i \tag{2}$$

Then $L(s)$ forms a continuous family of lines, and $\mathcal{L} \subset L(s)$. Note that I will use the same symbol $L(s)$ to refer to either the continuous family of lines, or the line generated by a given value $s$ of the parameter. The context will be sufficient to distinguish between both uses.

Solution

Obviously, as both the slope $a(s)$ and the y-intercept $b(s)$ of the line $L(s)$ are discontinuous functions of $s$, a very high-order series expansion would be needed to approximate $a$ and $b$ as functions of $s$ if we were to use the slope-intercept form. But let's not go there. The discontinuity, much like a coordinate singularity, originates from choosing this form in the first place, and can be removed by choosing a different one.

The Hesse normal form

When I began solving this problem, I remembered that the same singularity issue was circumvented by Duda and Hart in their implementation of the Hough transform, through the use of the Hesse normal form (see [1]). Any line $l$ admits two parameters $r$ and $\theta$ (respectively the signed distance from the origin to $l$, and the angle that the normal to $l$ makes with the x-axis) such that for every point $(x, y) \in l$ the following equation holds:

$$l : r = x\cos\theta + y\sin\theta \tag{3}$$

This is called the Hesse normal form, and is an alternative to the slope-intercept form to represent a line. Solving (3) for $y$ (which requires $\sin\theta \neq 0$, i.e. a non-vertical line) gives:

$$y = -\frac{1}{\tan\theta}\,x + \frac{r}{\sin\theta} = a x + b \tag{4}$$

Then, by identification of the coefficients, we deduce the following formulas to transform a Hesse normal form to a slope-intercept form and back:

$$\begin{cases} a = -\dfrac{1}{\tan\theta} \\[4pt] b = \dfrac{r}{\sin\theta} \end{cases} \tag{5}$$

$$\begin{cases} \theta = \arctan\left(-\dfrac{1}{a}\right) \\[4pt] r = b\sin\theta = -\dfrac{b}{a\sqrt{1 + \dfrac{1}{a^2}}} \end{cases} \tag{6}$$
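
For completeness, here is the small step (my addition) behind the expression for $r$ in (6): since $\theta = \arctan\left(-\frac{1}{a}\right)$, the identity $\sin(\arctan u) = \frac{u}{\sqrt{1 + u^2}}$ gives

$$r = b\sin\theta = b\,\frac{-\frac{1}{a}}{\sqrt{1 + \frac{1}{a^2}}} = -\frac{b}{a\sqrt{1 + \frac{1}{a^2}}}$$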

Fitting $\mathcal{L}$

Using (6) we can transform the pairs $(a_i, b_i)$ describing the lines $L_i$ into pairs $(r_i, \theta_i)$ parametrizing their respective Hesse normal forms:

$$(r_i, \theta_i) = \left(-\frac{b_i}{a_i\sqrt{1 + \frac{1}{a_i^2}}},\ \arctan\left(-\frac{1}{a_i}\right)\right) \tag{7}$$
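
As a quick numerical illustration of (7) (my own addition, using numpy a bit ahead of the implementation section below), take the single line y = 2x + 1 and check that the resulting $(r, \theta)$ satisfy (3):

import numpy as np

a, b = 2.0, 1.0                                   # the line y = 2x + 1
theta = np.arctan(-1.0 / a)                       # second component of (7)
r = -b / (a * np.sqrt(1.0 + 1.0 / a**2))          # first component of (7)
x = 3.0
y = a * x + b                                     # the point (3, 7) lies on the line
print(np.isclose(r, x * np.cos(theta) + y * np.sin(theta)))  # True: (3) holds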

Then, all that is left to do is to fit the $r_i$ and $\theta_i$, using for example an $n$-th order polynomial regression:

$$\begin{cases} r(s) = \sum_{j=0}^{n} \alpha_j s^j \\[4pt] \theta(s) = \sum_{j=0}^{n} \beta_j s^j \end{cases} \tag{8}$$

The values of the $\alpha_j$ and $\beta_j$ are computed by the regression algorithm as functions of the $r_i$ and $\theta_i$ respectively. So we have the following continuous parametrization in Hesse normal form:

$$L(s) : r(s) = x\cos\theta(s) + y\sin\theta(s) \tag{9}$$

Then of course, we can go back to a slope-intercept form using (5), giving:

$$\begin{cases} a(s) = -\dfrac{1}{\tan\theta(s)} = -\cot\theta(s) \\[4pt] b(s) = \dfrac{r(s)}{\sin\theta(s)} = r(s)\csc\theta(s) \end{cases} \tag{10}$$

And as long as $a(s)$ and $b(s)$ are defined, we have the following parametrization in slope-intercept form:

$$L(s) : y = a(s)\,x + b(s) \tag{11}$$

The advantage of going down this route is that, because $r(s)$ and $\theta(s)$ are continuous functions of $s$, there is a good chance that a low-order polynomial fit will be enough to reach sufficient precision. Then $a(s)$ and $b(s)$, as functions of these two, will be good estimators by design, at a low cost.

Implementation

Let’s do this in Python, with numpy and matplotlib:

import numpy as np
from matplotlib import pyplot as plt

Suppose we have a function get_data() that returns a 3-tuple containing multiple values $s_i$ of the parameter $s$, followed by the slopes $a_i$ and the y-intercepts $b_i$ of the lines $L_i$ for each value of $s_i$. In the full code (link below), I implemented such a function featuring the exact same experimental data I was fiddling with when this parametrization problem first came up. I also added a bit of context for those of you who are interested! Now, let's write two functions to convert coefficients from the slope-intercept form to coefficients of the Hesse normal form and vice versa:

def to_hesse_normal_form(a, b):
    # Equation (6): theta = arctan(-1/a), r = b*sin(theta) = -b / (a*sqrt(1 + 1/a^2))
    r = -b / (a * np.sqrt(1 + 1 / np.power(a, 2)))
    theta = np.arctan(-1 / a)
    return r, theta

def to_slope_intercept_form(r, theta):
    # Equation (5): a = -1/tan(theta), b = r / sin(theta)
    a = -1 / np.tan(theta)
    b = r / np.sin(theta)
    return a, b
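
As a quick sanity check (my own addition, not from the original post), the two conversions should round-trip for any line whose slope is defined and nonzero:

# Hypothetical round-trip check: slope-intercept -> Hesse normal form -> back
a_test = np.array([-3.0, -0.5, 0.25, 2.0])
b_test = np.array([1.0, -2.0, 0.5, 4.0])
r_test, theta_test = to_hesse_normal_form(a_test, b_test)
a_back, b_back = to_slope_intercept_form(r_test, theta_test)
print(np.allclose(a_test, a_back), np.allclose(b_test, b_back))  # True True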

The functions to_hesse_normal_form() and to_slope_intercept_form() are direct implementations of (6) and (5), respectively. And now, we can import some lines using the get_data() function mentioned above, convert them to Hesse normal form, fit polynomials on the $r_i$ and $\theta_i$, get our $a(s)$ and $b(s)$, and do some plotting:

def main():
    s, a, b = get_data()
    # Fit the Hesse normal form parameters using 2nd order polynomials
    r, theta = to_hesse_normal_form(a, b)
    r_coeffs = np.polyfit(s, r, 2)
    theta_coeffs = np.polyfit(s, theta, 2)
    r_fit = np.poly1d(r_coeffs)
    theta_fit = np.poly1d(theta_coeffs)

    # For multiple values of the parameter s, calculate slope and y-intercept
    sp = np.linspace(np.min(s), np.max(s), 100)
    a_fit, b_fit = to_slope_intercept_form(r_fit(sp), theta_fit(sp))

    # Plot stuff
    # ...
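
The snippet above relies on the get_data() helper from the full source code. If you would rather run it without my dataset, here is a hypothetical stand-in (my own invention, not the original data) that synthesizes a family of lines whose Hesse parameters vary smoothly with $s$ and which passes close to a vertical line along the way:

def get_data():
    # Hypothetical replacement for the original helper: build 20 lines whose
    # normal angle and distance to the origin vary linearly with s, then
    # return the parameter values together with the slope-intercept
    # coefficients. The angle sweeps through 0, so the family contains a
    # nearly vertical line and the slope changes sign.
    s = np.linspace(0.0, 1.0, 20)
    theta = -1.0 + 2.5 * s
    r = 1.0 + 0.5 * s
    return s, -1.0 / np.tan(theta), r / np.sin(theta)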

Finally we can plot our fits. The code for this task is a bit lengthy so it was omitted, but you can check the full source code if needed.
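
Still, for reference, a minimal version of that plotting step (a sketch of my own, meant to replace the "# Plot stuff" placeholder inside main() above, so it assumes s, a, b, r, theta, sp, a_fit, b_fit, r_fit and theta_fit are in scope) could look like this:

    # Left column, green: fits of r(s) and theta(s); right column, blue:
    # estimated a(s) and b(s); data points shown as red crosses.
    fig, axes = plt.subplots(2, 2, figsize=(8, 6))
    axes[0, 0].plot(sp, r_fit(sp), "g-")
    axes[0, 0].plot(s, r, "rx")
    axes[0, 0].set_title("r(s)")
    axes[1, 0].plot(sp, theta_fit(sp), "g-")
    axes[1, 0].plot(s, theta, "rx")
    axes[1, 0].set_title("theta(s)")
    axes[0, 1].plot(sp, a_fit, "b-")
    axes[0, 1].plot(s, a, "rx")
    axes[0, 1].set_title("a(s)")
    axes[1, 1].plot(sp, b_fit, "b-")
    axes[1, 1].plot(s, b, "rx")
    axes[1, 1].set_title("b(s)")
    fig.tight_layout()
    plt.show()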

On the left in green: the polynomial fits for r and θ. On the right in blue: our estimators for a and b. The data points are shown as red crosses.

Notice how, given the data that I use, only second-order polynomials suffice to capture the dependency on $s$ of both $r$ and $\theta$. There is no way a second-order fit would be enough if we were to fit $a$ and $b$ directly! The discontinuity of $a$ and $b$ is quite visible, but was captured effortlessly by this method. Let's also display a few lines of our smooth parametrized family $L(s)$, together with the original lines $L_i$:

In thick solid orange: the lines $L_i$. In dotted black: some interpolated lines from the continuous family $L(s)$.

That’s pretty convincing, if I may say so myself! So it appears you can use this method to interpolate between lines of a discrete set… My Google-fu is not so strong, but the only similar result I could find was this article on Cross-Linear Interpolation, so I guess that’s the name of the problem we solved here? It seems elementary enough that a myriad of applications could exist, but I’m quite the unimaginative type as far as applications go. Maybe you have some ideas to share?

Sources

[1] R. O. Duda and P. E. Hart, "Use of the Hough Transformation to Detect Lines and Curves in Pictures," Communications of the ACM, vol. 15, no. 1, pp. 11–15, January 1972.