Pose estimation for augmented reality
Homography estimation

Introduction

A homography between coplanar points can be estimated easily and precisely with the Direct Linear Transform (DLT) algorithm [4] [7], which is based on the resolution of a linear system.

Source code

The following source code, which uses ViSP, is also available in the homography-dlt-visp.cpp file. It estimates the homography between matched coplanar points. At least 4 point correspondences are required.

#include <visp/vpColVector.h>
#include <visp/vpHomogeneousMatrix.h>
#include <visp/vpMatrix.h>

vpMatrix homography_dlt(const std::vector< vpColVector > &x1, const std::vector< vpColVector > &x2)
{
  int npoints = (int)x1.size();
  vpMatrix A(2*npoints, 9, 0.);

  // We need here to compute the SVD on a (2*n)x9 matrix (where n is
  // the number of points). If n == 4, the matrix has more columns
  // than rows. This kind of matrix is not supported by GSL for
  // SVD. The solution is to add an extra row of zeros.
  if (npoints == 4)
    A.resize(2*npoints+1, 9);

  // Since the third row of matrix A is a linear combination of the first and second rows
  // (A is rank 2) we don't need to implement this third row
  for (int i = 0; i < npoints; i++) {   // Update matrix A using eq. 33
    A[2*i][3] = -x1[i][0];              // -xi_1
    A[2*i][4] = -x1[i][1];              // -yi_1
    A[2*i][5] = -x1[i][2];              // -1
    A[2*i][6] = x2[i][1] * x1[i][0];    //  yi_2 * xi_1
    A[2*i][7] = x2[i][1] * x1[i][1];    //  yi_2 * yi_1
    A[2*i][8] = x2[i][1] * x1[i][2];    //  yi_2

    A[2*i+1][0] = x1[i][0];             //  xi_1
    A[2*i+1][1] = x1[i][1];             //  yi_1
    A[2*i+1][2] = x1[i][2];             //  1
    A[2*i+1][6] = -x2[i][0] * x1[i][0]; // -xi_2 * xi_1
    A[2*i+1][7] = -x2[i][0] * x1[i][1]; // -xi_2 * yi_1
    A[2*i+1][8] = -x2[i][0] * x1[i][2]; // -xi_2
  }

  // Add an extra row of zeros.
  if (npoints == 4) {
    for (int i = 0; i < 9; i++) {
      A[2*npoints][i] = 0;
    }
  }

  vpColVector D;
  vpMatrix V;
  A.svd(D, V);

  // Find the smallest singular value; the corresponding column of V
  // holds the solution h
  double smallestSv = D[0];
  unsigned int indexSmallestSv = 0;
  for (unsigned int i = 1; i < D.size(); i++) {
    if (D[i] < smallestSv) {
      smallestSv = D[i];
      indexSmallestSv = i;
    }
  }

#if VISP_VERSION_INT >= VP_VERSION_INT(2, 10, 0)
  vpColVector h = V.getCol(indexSmallestSv);
#else
  vpColVector h = V.column(indexSmallestSv + 1); // Deprecated since ViSP 2.10.0
#endif

  if (h[8] < 0) // tz < 0
    h *= -1;

  vpMatrix _2H1(3, 3);
  for (int i = 0; i < 3; i++)
    for (int j = 0; j < 3; j++)
      _2H1[i][j] = h[3*i+j];

  return _2H1;
}

int main()
{
  int npoints = 4;

  std::vector< vpColVector > x1(npoints);
  std::vector< vpColVector > x2(npoints);

  std::vector< vpColVector > wX(npoints);  // 3D points in the world plane
  std::vector< vpColVector > c1X(npoints); // 3D points in the camera frame 1
  std::vector< vpColVector > c2X(npoints); // 3D points in the camera frame 2

  for (int i = 0; i < npoints; i++) {
    x1[i].resize(3);
    x2[i].resize(3);
    wX[i].resize(4);
    c1X[i].resize(4);
    c2X[i].resize(4);
  }

  // Ground truth pose used to generate the data
  vpHomogeneousMatrix c1Tw_truth(-0.1, 0.1, 1.2, vpMath::rad(5), vpMath::rad(0), vpMath::rad(45));
  vpHomogeneousMatrix c2Tc1(0.01, 0.01, 0.2, vpMath::rad(0), vpMath::rad(3), vpMath::rad(5));

  // Input data: 3D coordinates of at least 4 coplanar points
  double L = 0.2;
  wX[0][0] = -L;  wX[0][1] = -L; wX[0][2] = 0; wX[0][3] = 1; // wX_0 (-L, -L, 0, 1)^T
  wX[1][0] = 2*L; wX[1][1] = -L; wX[1][2] = 0; wX[1][3] = 1; // wX_1 (2L, -L, 0, 1)^T
  wX[2][0] = L;   wX[2][1] = L;  wX[2][2] = 0; wX[2][3] = 1; // wX_2 ( L,  L, 0, 1)^T
  wX[3][0] = -L;  wX[3][1] = L;  wX[3][2] = 0; wX[3][3] = 1; // wX_3 (-L,  L, 0, 1)^T

  // Input data: 2D coordinates of the points in the image plane
  for (int i = 0; i < npoints; i++) {
    c1X[i] = c1Tw_truth * wX[i];      // Update c1X, c1Y, c1Z
    x1[i][0] = c1X[i][0] / c1X[i][2]; // x1 = c1X/c1Z
    x1[i][1] = c1X[i][1] / c1X[i][2]; // y1 = c1Y/c1Z
    x1[i][2] = 1;

    c2X[i] = c2Tc1 * c1Tw_truth * wX[i]; // Update c2X, c2Y, c2Z
    x2[i][0] = c2X[i][0] / c2X[i][2];    // x2 = c2X/c2Z
    x2[i][1] = c2X[i][1] / c2X[i][2];    // y2 = c2Y/c2Z
    x2[i][2] = 1;
  }

  vpMatrix _2H1 = homography_dlt(x1, x2);

  std::cout << "2H1 (computed with DLT):\n" << _2H1 << std::endl;

  return 0;
}

Source code explained

First of all we include the ViSP headers required to manipulate vectors and matrices.

#include <visp/vpColVector.h>
#include <visp/vpHomogeneousMatrix.h>
#include <visp/vpMatrix.h>

Then we introduce the function that does the homography estimation.

vpMatrix homography_dlt(const std::vector< vpColVector > &x1, const std::vector< vpColVector > &x2)

From a vector of planar points ${\bf x_1} = (x_1, y_1, 1)^T$ in image $I_1$ and a vector of matched points ${\bf x_2} = (x_2, y_2, 1)^T$ in image $I_2$, it estimates the homography $\bf {^2}H_1$:

\[\bf x_2 = {^2}H_1 x_1\]

The implementation of the Direct Linear Transform algorithm used to estimate $\bf {^2}H_1$ comes next. First, for each point pair we update the values of matrix $\bf A$ using equation (33). Then we solve the system ${\bf Ah}=0$ using a Singular Value Decomposition of $\bf A$. Finally, we find the smallest singular value, which identifies the singular vector (a column of $\bf V$) corresponding to the solution $\bf h$.
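Although equation (33) is not reproduced here, writing the two rows that each point pair contributes to ${\bf Ah}=0$ explicitly (consistent with the updates of $\bf A$ in the loop below, with ${\bf h} = (h_{11}, h_{12}, h_{13}, h_{21}, h_{22}, h_{23}, h_{31}, h_{32}, h_{33})^T$ the entries of $\bf {^2}H_1$ read row by row) may help:

\[
\begin{pmatrix}
0 & 0 & 0 & -x_1 & -y_1 & -1 & y_2 x_1 & y_2 y_1 & y_2 \\
x_1 & y_1 & 1 & 0 & 0 & 0 & -x_2 x_1 & -x_2 y_1 & -x_2
\end{pmatrix}
{\bf h} = {\bf 0}
\]

Both rows follow from eliminating the unknown scale in $\lambda {\bf x_2} = {\bf {^2}H_1} {\bf x_1}$.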

int npoints = (int)x1.size();
vpMatrix A(2*npoints, 9, 0.);

// We need here to compute the SVD on a (2*n)x9 matrix (where n is
// the number of points). If n == 4, the matrix has more columns
// than rows. This kind of matrix is not supported by GSL for
// SVD. The solution is to add an extra row of zeros.
if (npoints == 4)
  A.resize(2*npoints+1, 9);

// Since the third row of matrix A is a linear combination of the first and second rows
// (A is rank 2) we don't need to implement this third row
for (int i = 0; i < npoints; i++) {   // Update matrix A using eq. 33
  A[2*i][3] = -x1[i][0];              // -xi_1
  A[2*i][4] = -x1[i][1];              // -yi_1
  A[2*i][5] = -x1[i][2];              // -1
  A[2*i][6] = x2[i][1] * x1[i][0];    //  yi_2 * xi_1
  A[2*i][7] = x2[i][1] * x1[i][1];    //  yi_2 * yi_1
  A[2*i][8] = x2[i][1] * x1[i][2];    //  yi_2

  A[2*i+1][0] = x1[i][0];             //  xi_1
  A[2*i+1][1] = x1[i][1];             //  yi_1
  A[2*i+1][2] = x1[i][2];             //  1
  A[2*i+1][6] = -x2[i][0] * x1[i][0]; // -xi_2 * xi_1
  A[2*i+1][7] = -x2[i][0] * x1[i][1]; // -xi_2 * yi_1
  A[2*i+1][8] = -x2[i][0] * x1[i][2]; // -xi_2
}

// Add an extra row of zeros.
if (npoints == 4) {
  for (int i = 0; i < 9; i++) {
    A[2*npoints][i] = 0;
  }
}

vpColVector D;
vpMatrix V;
A.svd(D, V);

// Find the smallest singular value; the corresponding column of V
// holds the solution h
double smallestSv = D[0];
unsigned int indexSmallestSv = 0;
for (unsigned int i = 1; i < D.size(); i++) {
  if (D[i] < smallestSv) {
    smallestSv = D[i];
    indexSmallestSv = i;
  }
}

#if VISP_VERSION_INT >= VP_VERSION_INT(2, 10, 0)
vpColVector h = V.getCol(indexSmallestSv);
#else
vpColVector h = V.column(indexSmallestSv + 1); // Deprecated since ViSP 2.10.0
#endif

if (h[8] < 0) // tz < 0
  h *= -1;

The singular vector $\bf h$ that corresponds to the smallest singular value of matrix $\bf A$ is then used to fill the homography $\bf {^2}H_1$.
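Writing this reshaping explicitly, with $h_0, \ldots, h_8$ the entries of the vector returned by the SVD (matching the indexing h[3*i+j] used in the code):

\[
{\bf h} = (h_0, h_1, \ldots, h_8)^T
\quad \Longrightarrow \quad
{\bf {^2}H_1} = \begin{pmatrix} h_0 & h_1 & h_2 \\ h_3 & h_4 & h_5 \\ h_6 & h_7 & h_8 \end{pmatrix}
\]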

vpMatrix _2H1(3, 3);
for (int i = 0; i < 3; i++)
  for (int j = 0; j < 3; j++)
    _2H1[i][j] = h[3*i+j];

Finally, we define the main function in which we initialize the input data before calling the homography estimation function.

int main()

First, in the main, we create the data structures that will contain the 3D point coordinates wX in the world frame, their coordinates c1X in camera frame 1 and c2X in camera frame 2, and their coordinates x1 and x2 in the image plane obtained after perspective projection. Note that at least 4 coplanar points are required to estimate the 8 independent parameters of the homography.

int npoints = 4;

std::vector< vpColVector > x1(npoints);
std::vector< vpColVector > x2(npoints);

std::vector< vpColVector > wX(npoints);  // 3D points in the world plane
std::vector< vpColVector > c1X(npoints); // 3D points in the camera frame 1
std::vector< vpColVector > c2X(npoints); // 3D points in the camera frame 2

for (int i = 0; i < npoints; i++) {
  x1[i].resize(3);
  x2[i].resize(3);
  wX[i].resize(4);
  c1X[i].resize(4);
  c2X[i].resize(4);
}

For our simulation, we then initialize the input data from a ground truth pose c1Tw_truth, which is the pose of camera frame 1 with respect to the object frame. For each point, we compute its coordinates c1X = (c1X, c1Y, c1Z) in camera frame 1. These values are then used to compute its coordinates x1 = (x1, y1) in the image plane by perspective projection.

Thanks to the ground truth transformation c2Tc1 between camera frame 2 and camera frame 1, we also compute the coordinates c2X = (c2X, c2Y, c2Z) of the points in camera frame 2 and their corresponding image-plane coordinates x2 = (x2, y2).
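In equations, the projection applied below is, for camera frame $j \in \{1, 2\}$:

\[
{\bf {^{c_j}}X} = {^{c_j}}{\bf T}_w \, {\bf {^w}X}, \qquad
x_j = \frac{^{c_j}X}{^{c_j}Z}, \quad y_j = \frac{^{c_j}Y}{^{c_j}Z}
\]

These are the normalized image-plane coordinates; no camera intrinsics are involved in this simulation.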

// Ground truth pose used to generate the data
vpHomogeneousMatrix c1Tw_truth(-0.1, 0.1, 1.2, vpMath::rad(5), vpMath::rad(0), vpMath::rad(45));
vpHomogeneousMatrix c2Tc1(0.01, 0.01, 0.2, vpMath::rad(0), vpMath::rad(3), vpMath::rad(5));

// Input data: 3D coordinates of at least 4 coplanar points
double L = 0.2;
wX[0][0] = -L;  wX[0][1] = -L; wX[0][2] = 0; wX[0][3] = 1; // wX_0 (-L, -L, 0, 1)^T
wX[1][0] = 2*L; wX[1][1] = -L; wX[1][2] = 0; wX[1][3] = 1; // wX_1 (2L, -L, 0, 1)^T
wX[2][0] = L;   wX[2][1] = L;  wX[2][2] = 0; wX[2][3] = 1; // wX_2 ( L,  L, 0, 1)^T
wX[3][0] = -L;  wX[3][1] = L;  wX[3][2] = 0; wX[3][3] = 1; // wX_3 (-L,  L, 0, 1)^T

// Input data: 2D coordinates of the points in the image plane
for (int i = 0; i < npoints; i++) {
  c1X[i] = c1Tw_truth * wX[i];      // Update c1X, c1Y, c1Z
  x1[i][0] = c1X[i][0] / c1X[i][2]; // x1 = c1X/c1Z
  x1[i][1] = c1X[i][1] / c1X[i][2]; // y1 = c1Y/c1Z
  x1[i][2] = 1;

  c2X[i] = c2Tc1 * c1Tw_truth * wX[i]; // Update c2X, c2Y, c2Z
  x2[i][0] = c2X[i][0] / c2X[i][2];    // x2 = c2X/c2Z
  x2[i][1] = c2X[i][1] / c2X[i][2];    // y2 = c2Y/c2Z
  x2[i][2] = 1;
}

From here we have initialized ${\bf x_1} = (x1,y1,1)^T$ and ${\bf x_2} = (x2,y2,1)^T$. We are now ready to call the function that does the homography estimation.

vpMatrix _2H1 = homography_dlt(x1, x2);

Resulting homography estimation

If you run the previous code, it produces the following result, the homography ${\bf {^2}H_1}$ estimated from the simulated input data:

2H1 (computed with DLT):
-0.5425233874 0.04785624324 -0.03308292557
-0.04764480242 -0.5427592709 -0.005830349194
0.02550335177 0.005978041063 -0.6361649707