dsacstar
DSAC* for Visual Camera Re-Localization (RGB or RGB-D)
We describe a learning-based system that estimates the camera position and orientation from a single input image relative to a known environment. The system is flexible w.r.t. the amount of information available at test and at training time, catering to different applications. Input images can be RGB-D or RGB, and a 3D model of the environment can be utilized for training but is not necessary. In the minimal case, our system requires only RGB images and ground truth poses at training time, and it requires only a single RGB image at test time. The framework consists of a deep neural network and fully differentiable pose optimization. The neural network predicts so-called scene coordinates, i.e. dense correspondences between the input image and 3D scene space of the environment. The pose optimization implements robust fitting of pose parameters using differentiable RANSAC (DSAC) to facilitate end-to-end training. The system, an extension of DSAC++ and referred to as DSAC*, achieves state-of-the-art accuracy on various public datasets for RGB-based relocalization, and competitive accuracy for RGB-D-based relocalization.
The ability to relocalize ourselves has revolutionized our daily lives. GPS-enabled smart phones already facilitate car navigation without a co-driver sweating over giant fold-out maps, or they enable the search for a rare vegetarian restaurant in the urban jungle of Seoul. On the other hand, the limits of GPS-based relocalization are clear to anyone getting lost in vast indoor spaces or in between skyscrapers. When the satellite signals are blocked or delayed, GPS does not work or becomes inaccurate. At the same time, upcoming technical marvels, like autonomous driving [25] or impending updates of reality itself (i.e. augmented/extended/virtual reality [43]), call for reliable, high-precision estimates of camera position and orientation.
Visual camera relocalization systems offer a viable alternative to GPS by matching an image of the current environment, e.g. taken by a handheld device, with a database representation of said environment. From a single image, state-of-the-art visual relocalization methods estimate the camera position to the centimeter, and the camera orientation up to a fraction of a degree, both indoors and outdoors.
Existing relocalization approaches rely on varying types of information to solve the task, effectively catering to different application scenarios. Some use RGB-D images as input, which facilitates highest precision suitable for augmented reality [67, 75, 5, 63]. However, they require capturing devices with active or passive stereo capabilities, where the former only works indoors, and the latter requires a large stereo baseline for reliable depth estimates outdoors. Approaches based on feature matching use an RGB image as input and also offer high precision [59]. But they require a structure-from-motion (SfM) reconstruction [69, 77, 65] of the environment for relocalization. Such reconstructions might be cumbersome to obtain indoors due to textureless surfaces and repeating structures obstructing reliable feature matching [76]. Finally, approaches based on image retrieval or pose regression require only a database of RGB images and ground truth poses for relocalization, but suffer from low precision comparable to GPS [61].

In this work, we describe a versatile, learning-based framework for visual camera relocalization that covers all aforementioned scenarios. In the minimal case, it requires only a database of RGB images and ground truth poses of an environment for training, and relocalizes based on a single RGB image at test time with high precision. In such a scenario, the system automatically discovers the 3D geometry of the environment during training, see Fig. 1 for an example. If a 3D model of the scene exists, either as an SfM reconstruction or a 3D scan, we can utilize it to help the training process. The framework exploits depth information at training or test time if an RGB-D sensor is available.
We base our approach on scene coordinate regression, initially proposed by Shotton et al. [67] for RGB-D-based camera relocalization. A learnable function, a random forest in [67], regresses for each pixel of an input image the corresponding 3D coordinate in the environment's reference frame. This induces a dense correspondence field between the image and the 3D scene that serves as basis for RANSAC-based pose optimization. In our work, we replace the random forest of [67] with a fully convolutional neural network [41], and derive differentiable approximations to all steps of pose optimization. Most prominently, we derive a differentiable approximation of the RANSAC robust estimator, called differentiable sample consensus (DSAC) [6]. Additionally, we describe an efficient differentiable approximation for calculating gradients of the perspective-n-point problem [7]. These ingredients make our framework end-to-end trainable, ensuring that the neural network predicts scene coordinates that result in high-precision camera poses.

This article is a summary and extension of our previous work on camera relocalization published in [6] as DSAC, and its follow-up DSAC++ [7]. In particular, we describe an improved version under the name DSAC* with the following properties.
We extend DSAC++ to optionally utilize RGB-D inputs. The corresponding pose solver is naturally differentiable, and other components require only minor adjustments. When using RGB-D, DSAC* achieves accuracy comparable to state-of-the-art RGB-D indoor relocalization methods.
We propose a simplified training procedure which unifies the two separate initialization steps used in DSAC++. As a result, the training time of DSAC* reduces from 6 days to 2.5 days on identical hardware.
The improved initialization also leads to better accuracy. Particularly, when training without a 3D model, results improve significantly from 53.1% (DSAC++) to 71.6% (DSAC*) for indoor relocalization.
We utilize an improved network architecture for scene coordinate regression which we introduced in [9, 8]. The architecture, based on ResNet [28], reduces the memory footprint by 75% compared to the network of DSAC++. A forward pass of the new network takes 50ms instead of 150ms on identical hardware. Together with better pose optimization parameters we decrease the total inference time from 200ms for DSAC++ to 75ms for DSAC*.
In new ablation studies, we investigate the impact of the receptive field of our neural network on accuracy, as well as the impact of end-to-end training. Furthermore, we provide extensive visualizations of our pose estimates, and of the 3D geometry that the network learns from a ground truth 3D model or discovers by itself.
This article is organized as follows: We give an overview of related work in Sec. 2. In Sec. 3, we formally introduce the task of camera relocalization and how we solve it via scene coordinate regression. In Sec. 4, we discuss how to train the scene coordinate network using auxiliary losses defined on the scene coordinate output. In Sec. 5, we discuss how to train the whole system end-to-end, optimizing a loss on the estimated camera pose. We present experiments for indoor and outdoor camera relocalization, including ablation studies, in Sec. 6. We conclude this article in Sec. 7.
In the following, we discuss the main strands of research for solving visual camera relocalization. We also discuss related work on differentiable robust estimators other than DSAC.
Early examples of visual relocalization rely on efficient image retrieval [62]. The environment is represented as a collection of database images with known camera poses. Given a query image, we search for the most similar database image by matching global image descriptors, such as DenseVLAD [73], or its learned successor NetVLAD [1]. The metric to compare global descriptors can be learned as well [12]. The sampling density of database images inherently limits the accuracy of retrieval-based systems. However, they scale to very large environments, and can serve as an efficient initialization for local pose refinement [60, 72].
Absolute pose regression methods [32, 76, 33, 48, 11] aim at overcoming the precision limitation of image retrieval while preserving efficiency and scalability. Interpreting the database images as a training set, a neural network learns the relationship between image content and camera pose. In theory, the network could learn to interpolate poses of training images, or even generalize to novel view points. In practice, however, absolute pose regression fails to consistently outperform the accuracy of image retrieval methods [61].

Relative pose regression methods [3, 56] train a neural network to predict the relative transformation between the query image and the most similar database image found by image retrieval. Initial relative pose regression methods suffered from similarly low accuracy as absolute pose regression [61]. However, recent work [20] suggests that relative pose regression can achieve accuracy comparable to structure-based methods, which we discuss next.
The camera pose can be recovered by matching sparse, local features like SIFT [42] between the query image and database images [82]. For an efficient database representation, SfM tools [77, 65] create a sparse 3D point cloud of an environment, where each 3D point has one or several feature descriptors attached to it. Given a query image, feature matching establishes 2D-3D correspondences, which can be utilized in RANSAC-based pose optimization to yield a very precise camera pose estimate [59]. Work on feature-based relocalization has primarily focused on scaling to very large environments [39, 71, 57, 58, 60, 70], enabling city-scale or even world-scale relocalization. Other authors worked on efficiency to run feature-based relocalization on mobile devices with a low computational budget [40].
While sparse feature matching can achieve high relocalization accuracy, handcrafted features fail in certain scenarios. Feature detectors have difficulty finding stable points under motion blur [32] and for textureless areas [67]. Also, SfM reconstructions tend to fail in indoor environments dominated by ambiguous, repeating structures [76]. Learning-based sparse feature pipelines [79, 19, 54, 21] might ultimately be able to overcome these issues, but currently it is an open research question whether learned sparse features consistently exceed the capabilities of their handcrafted predecessors [64, 4].
Instead of relying on a feature detector to identify salient structures in images suitable for discrete matching, scene coordinate regression [67] aims at directly predicting the corresponding 3D scene point for a given 2D pixel location. In these works, the environment is implicitly represented by a learnable function that can be evaluated for any image pixel to predict a dense correspondence field between image and scene. The correspondences serve as input for RANSAC-based pose optimization, similar to sparse feature techniques.
Originally, scene coordinate regression was proposed for RGB-D-based relocalization in indoor environments [67, 26, 75, 47]. The depth channel would serve as additional input to a scene coordinate regression forest, and be used in pose optimization by allowing to establish and resolve 3D-3D correspondences [30]. Scene coordinate regression forests were later shown to also work well for RGB-based relocalization [5, 46].
Recent works on scene coordinate regression often replace the random forest regressor with a neural network while continuing to focus on RGB inputs [45, 6, 7, 38, 8]. In previous work, we have shown that the RANSAC-based pose optimization can be made differentiable to allow for end-to-end training of a scene coordinate regression pipeline [6, 7]. In particular, [6] introduced a differentiable approximation of RANSAC [22], and [7] described an efficient analytical approximation for calculating gradients of perspective-n-point solvers. Furthermore, the predecessor of the current work, DSAC++ [7], introduced the possibility to train scene coordinate regression solely from RGB images and ground truth poses, without the need for image depth or a 3D model of the scene. Li et al. [38] improved on this initial effort by adding additional multi-view and photometric consistency constraints to the network training.
In this work, we describe several improvements to DSAC++ that increase accuracy while reducing training and test time. We also demonstrate that the DSAC framework naturally exploits image depth if available, in an attempt to unify previously distinct strands of RGB- and RGB-D-based relocalization research. In summary, our method is more precise and more flexible than previous scene coordinate regression and sparse feature-based relocalization systems. At the same time, it is as simple to deploy as absolute pose regression systems, since it requires only a set of RGB images with ground truth poses for training in the minimal setting.
Orthogonal to this work, we describe a scalable variant of DSAC-based relocalization in [8]. Yang et al. explore the possibility of scene-independent coordinate regression [78], and Cavallari et al. adapt scene coordinate regression forests and networks on-the-fly for deployment as a relocalizer in simultaneous localization and mapping (SLAM) [15, 14, 13].
To allow for end-to-end training of our relocalization pipeline, we have introduced a differentiable approximation to the RANSAC [22] algorithm, called differentiable sample consensus (DSAC). DSAC relies on a formulation of RANSAC that reduces to an argmax operation over model parameters. Instead of choosing model parameters with maximum consensus, we choose model parameters randomly with a probability proportional to consensus. This allows us to optimize the expected task loss for end-to-end training. A DSAC variant using a soft argmax [16] does not work as well since it ignores potential multi-modality in the distribution of model parameters. Recently, Lee et al. proposed a kernel soft argmax as an alternative that is robust to multiple modes in the arguments [35]. However, their approximation effectively suppresses gradients of all but the main mode, while the DSAC estimator utilizes gradients of all modes.

Alternatively to making RANSAC differentiable, some authors propose to replace RANSAC by a neural network [80, 81, 53, 55, 52]. In these works, the neural network acts as a classifier for model inliers, effectively acting as a robust estimator for model parameters. However, NG-RANSAC [9] demonstrates that the combination of an inlier-scoring network and RANSAC achieves even higher accuracy. In [9], we also discuss a combination of NG-RANSAC and DSAC for camera relocalization, which leads to higher accuracy in outdoor relocalization by learning to focus on informative image areas.

In this section, we introduce the task of camera relocalization, and the principle of scene coordinate regression [67]. We explain how to estimate the camera pose from scene coordinates using RANSAC [22] when the input is a single RGB or RGB-D image, respectively.
Given an image I, which can be either RGB or RGB-D, we aim at estimating camera pose parameters h w.r.t. the reference coordinate frame of a known scene, a task called relocalization. We propose a learnable system to solve the task, which is trained for a specific scene to relocalize within that scene. The camera pose has 6 degrees of freedom (DoF) corresponding to the 3D camera position and its 3D orientation. In particular, we define the camera pose as the transformation that maps 3D points in the camera coordinate space, denoted as e_i, to 3D points in scene coordinate space, denoted as y_i, i.e.

(1)  y_i = h e_i,

where i denotes the pixel index in image I. For notational simplicity, we assume a 4x4 matrix representation of the camera pose and homogeneous coordinates for all points where convenient.
We denote the complete set of scene coordinates for a given image as Y, i.e. Y = {y_i}. See Fig. 2 for an explanation and visualization of scene coordinates. Originally proposed by Shotton et al. [67], scene coordinates induce a dense correspondence field between camera coordinate space and scene coordinate space which we can use to solve for the camera pose. To estimate Y for a given image, we utilize a neural network f with learnable parameters w:

(2)  Y = f(I; w).
Due to potential errors in the neural network prediction, we utilize a robust estimator, namely RANSAC [22], to recover the camera pose h from Y. Our RANSAC-based pose optimization consists of the following steps:
Sample a set of camera pose hypotheses.
Score each hypothesis and choose the best one.
Refine the winning hypothesis.
We show an overview of our system in Fig. 3. In the following, we describe the three aforementioned steps for the general case, while we elaborate on their concrete manifestations for RGB and RGB-D input images in Sec. 3.1 and Sec. 3.2, respectively.
1) Sample Hypotheses. Image I and scene coordinate prediction Y define a dense correspondence field over all image pixels. We will specify the concrete nature of correspondences in the subsections below because it differs for RGB and RGB-D inputs. As the first step of robust pose optimization, we randomly choose M subsets of correspondences, C_j with j = 1, ..., M. Each correspondence subset corresponds to a camera pose hypothesis h_j, which we recover using a pose solver g, i.e.

(3)  h_j = g(C_j).
The concrete manifestation of g differs for RGB and RGB-D inputs. Note that the RANSAC algorithm [22] includes a way to adaptively choose the number of hypotheses M according to an online estimate of the outlier ratio in Y, i.e. the amount of erroneous correspondences. In this work, and our previous work [5, 6, 7, 8, 9], we choose a fixed M and train the system to adapt to this particular setting. Thereby, M becomes a hyper-parameter that controls the allowance of the neural network to make inaccurate predictions.

2) Choose Best Hypothesis. Following RANSAC, we choose the hypothesis with maximum consensus among all scene coordinates Y, i.e.

(4)  ĵ = argmax_j s(h_j, Y).
We measure consensus by a scoring function s that is, by default, implemented as inlier counting:

(5)  s(h, Y) = Σ_i 𝟙[r(h, y_i) < τ].

Function r measures the residual between pose parameters h and a scene coordinate y_i; the indicator 𝟙 evaluates to one if the residual is smaller than an inlier threshold τ.
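As a minimal sketch, the inlier-counting score of Eq. 5 amounts to thresholding per-pixel residuals. The function name and threshold value below are illustrative, not taken from the implementation:

```python
import numpy as np

def score_hypothesis(residuals, tau=10.0):
    """Inlier counting (Eq. 5): the score of a pose hypothesis is the
    number of scene coordinates whose residual falls below the inlier
    threshold tau."""
    return int(np.sum(residuals < tau))

# three of the five residuals fall below the threshold
residuals = np.array([2.0, 15.0, 7.5, 30.0, 9.9])
score = score_hypothesis(residuals, tau=10.0)
```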
3) Refine Best Hypothesis. We refine the chosen hypothesis h_ĵ, which was created from a small subset of correspondences, using all scene coordinates:

(6)  ĥ = Ref(h_ĵ, Y).

We implement refinement as re-solving for the pose parameters using the complete inlier set I_ĵ of hypothesis h_ĵ, i.e.

(7)  Ref(h_ĵ, Y) = g(I_ĵ), with I_ĵ = {y_i | r(h_ĵ, y_i) < τ}.

In practice, we iterate refinement and recalculation of the inlier set until convergence. We refer to the refined, chosen hypothesis as our final camera pose estimate ĥ.
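The three steps above can be sketched as a generic sample/score/refine loop. The numpy sketch below takes the pose solver and residual function as arguments, since their concrete form differs for RGB and RGB-D; all names, defaults, and the toy translation-only demo are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def ransac_pose(corrs, solve, residual, n_hyp=64, sample_size=3,
                tau=0.1, refine_iters=10):
    """Generic sample/score/refine loop. `corrs` is a list of
    correspondences, `solve` maps a subset of correspondences to pose
    parameters, `residual` maps (pose, correspondence) to a scalar
    error."""
    rng = np.random.default_rng(0)
    best_pose, best_score = None, -1
    for _ in range(n_hyp):                          # 1) sample hypotheses
        idx = rng.choice(len(corrs), size=sample_size, replace=False)
        pose = solve([corrs[i] for i in idx])
        # 2) score by inlier counting, keep the best hypothesis
        score = sum(residual(pose, c) < tau for c in corrs)
        if score > best_score:
            best_pose, best_score = pose, score
    for _ in range(refine_iters):                   # 3) refine on inliers
        inliers = [c for c in corrs if residual(best_pose, c) < tau]
        if len(inliers) < sample_size:
            break
        best_pose = solve(inliers)
    return best_pose

# Toy demo: "pose" is a pure 3D translation, fitted from 3D-3D
# correspondences (e_i, y_i) with y_i = e_i + t; 10 of 50 are outliers.
rng = np.random.default_rng(1)
t_true = np.array([1.0, 2.0, 3.0])
e = rng.normal(size=(50, 3))
y = e + t_true
y[:10] += rng.normal(scale=5.0, size=(10, 3))       # corrupt 10 outliers
corrs = list(zip(e, y))
solve = lambda cs: np.mean([yi - ei for ei, yi in cs], axis=0)
residual = lambda t, c: np.linalg.norm(c[0] + t - c[1])
t_est = ransac_pose(corrs, solve, residual)
```

Despite 20% outliers, the estimate converges to the true translation, because hypotheses drawn from contaminated minimal sets receive low inlier counts.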
Next, we discuss particular choices for the pose optimization components in case the input image is RGB or RGB-D.
In case the input is an RGB image without a depth channel, correspondences manifest as 2D-3D correspondences between the image and 3D scene space:

(8)  C = {(p_i, y_i)},

where p_i denotes the 2D image coordinate associated with pixel i. Image coordinates and scene coordinates are related by

(9)  p_i = ψ(K h⁻¹ y_i),

where K denotes the camera calibration matrix, or internal calibration parameters of the camera, and ψ(·) denotes the perspective division that maps a 3D point in camera space to a 2D image coordinate. Using this relation, perspective-n-point (PnP) solvers [24, 27] recover the camera pose from at least four 2D-3D correspondences. In practice, we use the solver of Gao et al. [24] over minimal sets of correspondences when sampling pose hypotheses h_j, and non-linear optimization of the reprojection error with Levenberg-Marquardt [37, 44] when refining the chosen hypothesis. We utilize the implementations available in OpenCV [10] for all PnP solvers.
As residual function r for determining the score of a pose hypothesis in Eq. 5, we calculate the reprojection error:

(10)  r(h, y_i) = ‖ ψ(K h⁻¹ y_i) − p_i ‖,

i.e. the 2D distance between the observed image coordinate p_i and the scene coordinate y_i mapped into the image by the inverse pose h⁻¹, the calibration matrix K, and the perspective division ψ.
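A sketch of this residual for a 4x4 pose matrix, assuming `h_inv` holds the scene-to-camera transform h⁻¹ and `K` the calibration matrix (function and argument names are illustrative):

```python
import numpy as np

def reprojection_residual(h_inv, K, p_i, y_i):
    """Reprojection error (Eq. 10): map scene coordinate y_i back into
    camera space with the inverse pose, project it with the calibration
    matrix K, and compare to the observed pixel p_i."""
    e = h_inv @ np.append(y_i, 1.0)   # scene space -> camera space
    uv = K @ e[:3]                    # pinhole projection
    uv = uv[:2] / uv[2]               # perspective division
    return np.linalg.norm(uv - p_i)

# A point on the optical axis projects onto the principal point,
# so its residual w.r.t. that pixel is zero.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
r = reprojection_residual(np.eye(4), K,
                          np.array([320.0, 240.0]),
                          np.array([0.0, 0.0, 2.0]))
```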
In case the input is an RGB-D image, the known depth map allows us to recover the 3D coordinate corresponding to each pixel in the coordinate frame of the camera, denoted as e_i. Together with the scene coordinate prediction, we have dense 3D-3D correspondences between camera space and scene space, i.e.

(11)  C = {(e_i, y_i)}.

To recover the camera pose from 3D-3D correspondences, we utilize the Kabsch algorithm [30], sometimes also called orthogonal Procrustes, as pose solver g. For sampling pose hypotheses h_j, we use minimal sets of three correspondences; when refining the chosen hypothesis, we use the complete inlier set.
As residual function r for determining the score of a hypothesis in Eq. 5, we calculate the 3D Euclidean distance:

(12)  r(h, y_i) = ‖ h e_i − y_i ‖.
In this section, we discuss the neural network architecture for scene coordinate regression, and how to train it using auxiliary losses defined on the scene coordinate output. These auxiliary losses serve as an initialization step prior to training the whole pipeline in an end-to-end fashion, see Sec. 5. The initialization is necessary, since end-to-end training from scratch would converge to a local minimum without giving reasonable pose estimates.
We implement scene coordinate regression using a fully convolutional neural network [41] with skip connections [28] and learnable parameters w. We depict the network architecture in Fig. 3, top. The network takes a single-channel grayscale image as input, and produces a dense scene coordinate prediction subsampled by a factor of 8. Subsampling, implemented with stride-2 convolutions, increases the receptive field associated with each output pixel while also enhancing efficiency. The total receptive field of each output scene coordinate is 81px. In experiments on various datasets, we found no advantage in providing the full RGB image as input; on the contrary, conversion to grayscale slightly increases the robustness to non-linear lighting effects.
Relation to our Previous Work. In our first DSAC-based relocalization pipeline [6] and in DSAC++ [7], we utilized a VGGNet-style architecture [68]. It had a larger memory footprint and slower runtime while offering similar accuracy. The receptive field was comparable with 79px. In the experiments of Sec. 6, we conduct an empirical comparison of both architectures. We utilized our updated architecture already in our work on ESAC [8] and NG-RANSAC [9].
Setting          Training: RGB   D   poses   3D model    Test: RGB   D
RGB-D                      ✓     ✓     ✓                       ✓     ✓
RGB + 3D model             ✓           ✓        ✓              ✓
RGB                        ✓           ✓                       ✓
In the following, we discuss different strategies for initializing the scene coordinate neural network, depending on what information is available for training. In particular, we discuss training from RGB-D images for RGB-D-based relocalization, training from RGB images and a 3D model of the scene for RGB-based relocalization, as well as training from RGB images only for RGB-based relocalization. See Table I for a schematic overview. Other combinations are of course possible, e.g. training from RGB images only, but having RGB-D images at test time. However, we restrict our discussion and experiments to the most common settings found in the literature [67, 5, 7].
For RGB-D-based pose estimation, we initialize our neural network by minimizing the Euclidean distance between predicted scene coordinates y_i and ground truth scene coordinates y_i*:

(13)  ℓ(y_i) = ‖ y_i − y_i* ‖.

We obtain ground truth scene coordinates by re-projecting the depth channels of training images to obtain 3D points e_i in the camera coordinate frame, and transforming them using the ground truth pose h*, i.e. y_i* = h* e_i. We train the network using the average loss over all pixels of a training image:

(14)  L(w) = (1/|Y|) Σ_i ℓ(y_i).
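Generating the ground truth targets can be sketched as back-projecting the depth map through the inverse calibration and applying the ground truth pose. The function name and argument layout below are assumptions for illustration:

```python
import numpy as np

def gt_scene_coordinates(depth, K, h_gt):
    """Back-project a depth map to camera-space points e_i, then
    transform them into scene space with the ground truth pose h_gt
    (a 4x4 camera-to-scene matrix). Returns an (H, W, 3) array of
    ground truth scene coordinates y_i* = h* e_i."""
    hh, ww = depth.shape
    u, v = np.meshgrid(np.arange(ww), np.arange(hh))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    # camera-space points: inverse calibration scaled by depth
    e = (np.linalg.inv(K) @ pix.T).T * depth.reshape(-1, 1)
    e_h = np.concatenate([e, np.ones((e.shape[0], 1))], axis=1)
    y = (h_gt @ e_h.T).T[:, :3]       # camera space -> scene space
    return y.reshape(hh, ww, 3)

# Identity calibration and pose: pixel (u, v) at depth d maps to (u*d, v*d, d).
depth = 2.0 * np.ones((2, 3))
y = gt_scene_coordinates(depth, np.eye(3), np.eye(4))
```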
We motivate optimizing the Euclidean distance for RGB-D-based relocalization by the fact that the corresponding Kabsch pose solver optimizes the pose over squared Euclidean residuals between camera coordinates and scene coordinates. We found the plain, instead of the squared, Euclidean distance in Eq. 13 superior in [6] due to its robustness to outliers.
In practice, we pre-calculate ground truth scene coordinates once for the entire training set at the resolution of the neural network output, i.e. subsampled by a factor of 8, mainly due to memory concerns. While we randomly shift the input image by up to 8px during training of the network, the corresponding ground truth coordinate remains the same for each pixel, introducing small inaccuracies. However, this stage serves merely as an initialization for end-to-end training, see Sec. 5, where the network can learn to make accurate predictions for each pixel.
In case the camera pose is to be estimated from an RGB image, the optimization of scene coordinates w.r.t. a 3D Euclidean distance is not optimal. The PnP solver, which we utilize for pose sampling and pose refinement, optimizes the camera pose w.r.t. the reprojection error of scene coordinates. Hence, for RGB-based pose estimation, we initialize the scene coordinate regression network by minimizing the reprojection error of its predictions, i.e. r(h*, y_i), where r denotes the residual function defined for RGB in Eq. 10, and h* denotes the ground truth camera pose.

Unfortunately, optimizing this objective from scratch fails since the reprojection error is ambiguous w.r.t. the viewing direction of the camera. However, if we assume a 3D model of the environment to be available, we may render ground truth scene coordinates y_i*, optimize the RGB-D objective of Eq. 13 first, and switch to the reprojection error after a few training iterations:

(15)  ℓ(y_i) = r̂(h*, y_i) if y_i ∈ V, and ‖ y_i − y_i* ‖ otherwise,

where V denotes the set of valid scene coordinate predictions, and r̂ a robust version of the reprojection error, both defined below.
We define V as the set of valid scene coordinate predictions, for which we optimize the reprojection error. If a scene coordinate does not qualify as valid yet, we optimize the Euclidean distance instead. A prediction y_i is valid iff:

1) it lies at least 0.1m in front of the ground truth image plane;

2) it has a reprojection error below a maximum threshold;

3) it is within a maximum 3D distance w.r.t. the rendered ground truth coordinate y_i*.
The training objective is flexible w.r.t. missing ground truth scene coordinates for certain pixels, i.e. if no y_i* is available for pixel i. In this case, we only enforce constraints 1) and 2) for V. This allows us to utilize dense 3D models of the scene, sparse SfM reconstructions, as well as depth channels with missing measurements to generate y_i*. The training objective utilizes a robust version r̂ of the RGB residual function of Eq. 10, i.e.

(16)  r̂(h, y_i) = r(h, y_i) if r(h, y_i) ≤ 100px, and √(100px · r(h, y_i)) otherwise.

This formulation implements a soft clamping by using the square root of the reprojection residual after a threshold of 100px. To train the scene coordinate network, we optimize the average of Eq. 15 over all pixels of a training image, similar to Eq. 14.
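A minimal sketch of such a soft clamping, assuming the clamped branch is √(τ·r) so that both branches meet continuously at the 100px threshold (the exact form used in the implementation may differ):

```python
import numpy as np

def soft_clamped_residual(r, tau=100.0):
    """Soft clamping of a reprojection residual: keep the raw residual
    below tau, and grow only with its square root beyond it. The
    sqrt(tau * r) branch equals tau at r == tau, so the function is
    continuous at the threshold."""
    r = np.asarray(r, dtype=float)
    return np.where(r <= tau, r, np.sqrt(tau * r))
```

Large outlier residuals therefore still contribute a gradient, but a heavily dampened one.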
Relation to our Previous Work. We introduced a combined training objective based on, firstly, minimizing the 3D distance to ground truth scene coordinates, and, secondly, minimizing the reprojection error in DSAC++ [7]. However, DSAC++ uses separate initialization stages for the two objectives, 3D distance and reprojection error, which is computationally wasteful. The network might concentrate on modelling fine details of the geometry in the first initialization stage, which is potentially undone in the second initialization stage. Also, pixels without a ground truth scene coordinate would receive no training signal in the first initialization stage of DSAC++. The new, combined training objective of DSAC* in Eq. 15 switches dynamically from optimizing the 3D distance to optimizing the reprojection error on a per-pixel basis. By using one combined initialization stage instead of two, we shorten the pre-training time of DSAC* from 4 days to 2 days compared to DSAC++ on identical hardware.
The previous RGB-based training objective of Eq. 15 relies on the availability of a 3D model of the scene. When a dense 3D scan of an environment is unavailable, SfM tools like VisualSfM [77] or COLMAP [65] offer workable solutions to create a (sparse) 3D model from a collection of RGB images, e.g. from the training set of a scene. However, for some environments, particularly indoors, an SfM reconstruction might fail due to textureless areas or repeating structures. Also, despite SfM tools having matured significantly over many years since the introduction of Bundler [69], they still represent expert tools with their own set of hyper-parameters to be tuned. Therefore, it might be attractive to train a camera relocalization system from RGB images and ground truth poses alone, without resorting to an SfM tool for pre-processing. To this end, we introduce a variation of the RGB-based training objective of Eq. 15 that substitutes ground truth scene coordinates y_i* with a heuristic scene coordinate target ȳ_i:

(17)  ℓ(y_i) = r̂(h*, y_i) if y_i ∈ V̄, and ‖ y_i − ȳ_i ‖ otherwise.
We obtain heuristic targets ȳ_i from the ground truth camera pose and hallucinated 3D camera coordinates, re-projected by assuming a constant image depth of 10m. The above formulation relies on switching from the heuristic target to the reprojection error as soon as possible. Therefore, we formulate the following relaxed validity constraints for scene coordinate predictions to form the set V̄. A prediction y_i is valid iff:

1) it lies at least 0.1m in front of the ground truth image plane;

2) it lies at most 1000m in front of the ground truth image plane;

3) it has a reprojection error below a maximum threshold.
Relation to our Previous Work. DSAC++ [7] used two separate initialization stages for minimizing the distance to the heuristic targets ȳ_i, and for optimizing the reprojection error, respectively. The first initialization stage was particularly cumbersome since the heuristic targets are inconsistent w.r.t. the true 3D geometry of the scene. The neural network can easily overfit to ȳ_i, which we circumvent in DSAC++ by early stopping and by using only a fraction of the full training data for the first initialization stage. The new, combined formulation of Eq. 17 is more robust by only loosely enforcing the heuristic until the formulation adaptively switches to the reprojection error. Also, as mentioned in the previous section, the new formulation is more efficient by combining two initialization stages into one, thus reducing training time.
Our overall goal is training the complete pose estimation pipeline in an end-to-end fashion. That is, we wish to optimize the learnable parameters w of scene coordinate prediction in a way that we obtain highly accurate pose estimates as per Eq. 6 and Eq. 4. Due to the robust nature of our pose optimization, particularly due to deploying RANSAC to estimate model parameters, the relation between the quality of scene coordinates and the estimated pose is non-trivial. For example, some predictions will be removed by RANSAC as outliers, hence they have no influence on the estimated pose. We may neglect such outlier scene coordinates entirely in training without any deterioration in accuracy. On the contrary, it might be beneficial to decrease the accuracy of outlier scene coordinates further to make sure that RANSAC classifies them as outliers. However, we have no prior knowledge of which exact predictions for an image should be inliers or outliers of the estimated model.
In this work, we address this problem by making pose optimization itself differentiable, to include it in the training process. By training in an end-to-end fashion, the scene coordinate network may adjust its predictions in any way that results in accurate pose estimates. More formally, we define the following loss function on estimated poses:

(18)  ℓ_pose(ĥ, h*) = ‖ t̂ − t* ‖ + γ ∠(θ̂, θ*),

with the estimated pose ĥ consisting of translation parameters t̂ and rotation parameters θ̂. We denote the ground truth pose parameters as t* and θ*, respectively. The weighting factor γ controls the trade-off between translation and rotation accuracy; we compare rotation in degrees to translation in cm.
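With rotations given as 3x3 matrices, the pose loss can be sketched as follows (numpy; the angular error is recovered from the trace of the relative rotation, and γ = 1 is an illustrative default):

```python
import numpy as np

def pose_loss(t_est, R_est, t_gt, R_gt, gamma=1.0):
    """Pose loss sketch (Eq. 18): translation error plus gamma times
    the angular error (in degrees) between estimated and ground truth
    rotation matrices."""
    t_err = np.linalg.norm(t_est - t_gt)
    # angle of the relative rotation R_gt^T R_est, via its trace
    cos_angle = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return t_err + gamma * angle
```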
The estimated camera pose depends on the network parameters w via the network prediction, through robust pose optimization. In order to optimize the pose loss of Eq. 18, each component involved in pose optimization needs to be differentiable. In the remainder of this section, we discuss the differentiability of each component and derive approximate gradients where necessary. We discuss the differentiability of the Kabsch [30] pose solver for RGB-D images in Sec. 5.1. We give an analytical approximation for gradients of PnP solvers for RGB-based pose estimation in Sec. 5.2. In Sec. 5.3, we explain how to approximate gradients of iterative pose refinement. We discuss differentiable pose scoring via soft inlier counting in Sec. 5.4. Finally, we present a differentiable version of RANSAC, called differentiable sample consensus (DSAC), in Sec. 5.5, which also defines our overall training objective.
We utilize the Kabsch pose solver when estimating poses from RGBD inputs. In this setting, we have 3D-3D correspondences C_J = {(x_i, y_i) | i ∈ J} between 3D coordinates x_i in camera space, defined by the given depth map, and 3D coordinates y_i in scene space, predicted by our neural network. In the following, we assume that we apply the Kabsch solver over such a subset of correspondences, either when sampling a pose hypothesis from three correspondences, or when refining the final pose estimate over an inlier set found by RANSAC:
h(Y) = g(C_J)   (19)
Here, and in the following, we make the dependence of a model hypothesis on the scene coordinate prediction Y explicit, i.e. we write h(Y). The Kabsch solver returns the pose that minimizes the squared residuals over all correspondences:
g(C_J) = argmin_h Σ_{i∈J} ||y_i − h x_i||²   (20)
The optimization can be solved in closed form by the following steps [30]. Firstly, we calculate the covariance matrix over the correspondence set:
Cov = Σ_{i∈J} (x_i − x̄)(y_i − ȳ)^T   (21)
where x̄ and ȳ denote the means over all 3D coordinates in the correspondence set in camera space and scene space, respectively. Secondly, we apply a singular value decomposition (SVD) to the covariance matrix:
U S V^T = SVD(Cov)   (22)
Using the results of the SVD, we reassemble the optimal rotation R̂, and, subsequently, recover the optimal translation t̂:
R̂ = V diag(1, 1, det(V U^T)) U^T,   t̂ = ȳ − R̂ x̄   (23)
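The closed-form steps of Eqs. 21-23 can be sketched in PyTorch, through whose SVD routine gradients flow automatically. This is a sketch, not the authors' implementation; we assume correspondences as (N, 3) tensors and the convention that the pose maps camera space to scene space, y ≈ R x + t.

```python
import torch

def kabsch(x_cam, y_scene):
    """Differentiable Kabsch solver sketch (Eqs. 21-23).
    x_cam: (N,3) points in camera space; y_scene: (N,3) predicted
    scene coordinates. Returns rotation R and translation t."""
    x_mean = x_cam.mean(dim=0)
    y_mean = y_scene.mean(dim=0)
    # covariance matrix over mean-centered correspondences (Eq. 21)
    cov = (x_cam - x_mean).T @ (y_scene - y_mean)
    U, S, Vt = torch.linalg.svd(cov)  # Eq. 22
    # correct for a possible reflection so that det(R) = +1 (Eq. 23)
    d = torch.linalg.det(Vt.T @ U.T)
    D = torch.diag(torch.stack([torch.ones_like(d), torch.ones_like(d), d]))
    R = Vt.T @ D @ U.T
    t = y_mean - R @ x_mean
    return R, t
```

Since every operation above is a differentiable tensor operation, calling `backward()` on any function of R and t propagates gradients to the predicted scene coordinates.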
All operations involved in the calculation of g(C_J) are differentiable. In particular, the gradients of the SVD can be calculated according to [50], with current deep learning frameworks like PyTorch [51] offering corresponding implementations. The differentiability of the Kabsch algorithm has, e.g., recently also been utilized in [2].

Similar to the Kabsch solver of the previous section, the PnP solver calculates a pose estimate over a subset of all correspondences, i.e.
h(Y) = g(C_J)   (24)
We utilize a PnP solver when estimating camera poses from RGB images, where 2D-3D correspondences C_J = {(p_i, y_i) | i ∈ J} are given between 2D image positions p_i and 3D scene coordinates y_i. A PnP solver optimizes the pose parameters to minimize the squared reprojection errors:
g(C_J) = argmin_h Σ_{i∈J} e_i(y_i, h)²   (25)
We construct a residual vector r(h, Y) over all pixels associated with the current correspondence subset:

r(h, Y) = (…, e_i(y_i, h), …)^T,  i ∈ J   (26)
where e_i(y_i, h) denotes a pixel's reprojection error as defined in Eq. 10.
In contrast to the Kabsch optimization objective, we cannot solve the PnP objective of Eq. 25 in closed form. Different PnP solvers with different algorithmic structures have been proposed in the past, e.g. [24, 36], or the Levenberg-Marquardt-based optimization in OpenCV [37, 44, 10]. Instead of trying to propose a differentiable variant of the aforementioned PnP algorithms, we calculate an analytical approximation of the PnP gradients, derived from the objective function in Eq. 25 [23]. We introduced this way of differentiating PnP in the context of neural network training in DSAC++ [7].
Given a proper initialization, e.g. by [24, 36], we can optimize Eq. 25 iteratively using the Gauss-Newton method. Since we are interested only in the gradients of the optimal pose parameters found at the end of the optimization, we ignore the influence of the initialization itself, which avoids calculating gradients of complex minimal solvers like [24, 36]. We give the Gauss-Newton update step for the model parameters h as
h^{t+1} = h^t − J_r⁺ r(h^t, Y)   (27)
where J_r⁺ is the pseudoinverse of the Jacobian matrix J_r of the residual vector defined in Eq. 26. In particular, the Jacobian matrix is comprised of the following partial derivatives:
(J_r)_{ij} = ∂e_i(y_i, h) / ∂h_j   (28)
As mentioned before, the initial pose may be provided by an arbitrary, non-differentiable PnP algorithm [24, 36]. We define the pose estimate of the PnP solver as the pose parameters after convergence of the associated optimization problem:
g(C_J) = ĥ = lim_{t→∞} h^t   (29)
Thus, we may calculate approximate gradients of model parameters w.r.t. scene coordinates by fixing the last optimization iteration around the final model parameters:
∂ĥ/∂y_i ≈ −J_r⁺ ∂r(h, Y)/∂y_i |_{h=ĥ}   (30)
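The approximation of Eq. 30 can be reproduced with automatic differentiation on a generic residual function. The sketch below is our own illustration, not the paper's implementation: it computes the two Jacobians of the residual vector (w.r.t. the pose and w.r.t. the predicted coordinates) at the converged pose and combines them via the pseudoinverse.

```python
import torch

def approx_pose_gradient(residual_fn, h_opt, y):
    """Sketch of the fixed-point gradient approximation of Eq. 30:
    fix the last Gauss-Newton iteration around the converged pose h_opt.
    residual_fn(h, y) -> (N,) residual vector; h_opt, y: 1D tensors.
    Names are assumptions, not the authors' API."""
    J_h = torch.autograd.functional.jacobian(lambda h: residual_fn(h, y), h_opt)
    J_y = torch.autograd.functional.jacobian(lambda yy: residual_fn(h_opt, yy), y)
    # d h_opt / d y  ≈  - J_h^+  dr/dy   (implicit differentiation)
    return -torch.linalg.pinv(J_h) @ J_y
```

For a toy residual r(h, y) = h − y, the optimum is ĥ = y, and the sketch correctly recovers dĥ/dy = I.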
We refine given camera pose parameters h, denoted as R(h), by iteratively re-solving for the pose using the set of all inliers I, and updating the set of inliers with the new pose estimate:
h^{t+1} = g(C_{I^t}),   I^{t+1} = { i | e_i(y_i, h^{t+1}) < τ }   (31)
We repeat the refinement until convergence, e.g. when the inlier set ceases to change, i.e. I^{t+1} = I^t; we denote the final inlier set as I^∞. Similar to differentiating PnP in the previous section, we approximate the gradients of iterative refinement by fixing the last refinement iteration:
∂R(h)/∂y_i ≈ ∂g(C_{I^∞})/∂y_i   (32)
where the function g denotes either the Kabsch solver or the PnP solver, for RGBD and RGB inputs, respectively. We have already discussed the calculation of gradients for g in the previous sections.
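The refinement loop of Eq. 31 can be sketched generically; `solver` and `residual` below are assumed callables standing in for g and e_i, and the stopping criterion is the convergence of the inlier set, as described above.

```python
def refine(h, coords, solver, residual, tau, max_iter=100):
    """Sketch of iterative refinement (Eq. 31): alternate between
    recomputing inliers under the current pose and re-solving the pose
    on the inlier set, until the inlier set converges."""
    inliers = None
    for _ in range(max_iter):
        new_inliers = {i for i, c in enumerate(coords) if residual(h, c) < tau}
        if new_inliers == inliers:
            break  # inlier set converged
        inliers = new_inliers
        h = solver([coords[i] for i in inliers])
    return h, inliers
```

In a 1D toy example, with the mean as "solver" and the absolute difference as "residual", a gross outlier is excluded after the first iteration and the loop terminates once the inlier set is stable.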
We obtain a differentiable approximation of the inlier counting of Eq. 5 by substituting the hard comparison of a pixel's residual to the inlier threshold τ with a sigmoid function:

s(h, Y) = Σ_i sig[ β(τ − e_i(y_i, h)) ]   (33)
For the hyperparameter β, which controls the softness of the sigmoid function, we use a heuristic that couples it to the inlier threshold τ.
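The soft inlier count of Eq. 33 is a one-liner in PyTorch; the sketch below uses our own function name and treats β as a free parameter.

```python
import torch

def soft_inlier_count(residuals, tau, beta):
    """Soft inlier count of Eq. 33: replace the hard test (residual < tau)
    with a sigmoid, making the score differentiable w.r.t. the residuals.
    beta controls the softness of the transition around tau."""
    return torch.sigmoid(beta * (tau - residuals)).sum()
```

Residuals far below τ contribute close to 1, residuals far above τ close to 0, so the score approaches the hard inlier count as β grows.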
Relation to our Previous Work. In the original DSAC pipeline [6], we utilized a designated scoring CNN as a differentiable alternative to traditional inlier counting. However, our follow-up work on DSAC++ [7] revealed that a scoring CNN is prone to overfitting, and, in general, does not exceed the accuracy of the simpler soft inlier count.
We can subsume the RANSAC algorithm [22] in the following three steps, as also discussed in Sec. 3: firstly, generate model hypotheses by randomly subsampling correspondences; secondly, choose the best hypothesis according to a scoring function; lastly, refine the winning hypothesis using its inliers. We discussed the differentiability of most of the involved components in the previous subsections, e.g. calculating gradients of the pose solvers used for hypothesis sampling and refinement, and differentiating the inlier count for hypothesis scoring. However, choosing the best hypothesis according to Eq. 4 involves a non-differentiable argmax operation.
In [6], we introduced a differentiable approximation of hypothesis selection in RANSAC, called differentiable sample consensus (DSAC). In [6], we also argue, and show empirically, that a simple soft approximation of the argmax, i.e. a weighted average of arguments, does not work well. A soft argmax can be unstable when the arguments have a multimodal structure, i.e. when very different arguments have high weights in the average. A standard average might also be overly sensitive to outlier arguments.
The DSAC approximation relies on a probabilistic selection of a model hypothesis according to a probability distribution P(j) over the discrete set of sampled hypotheses with index j:

ĥ = R(h_j),  j ∼ P(j)   (34)
where h_j denotes the selected hypothesis, and ĥ = R(h_j) denotes the final, refined estimate of our pipeline. The distribution guiding hypothesis selection is a softmax distribution over hypothesis scores, i.e.
P(j) = exp[α s(h_j, Y)] / Σ_k exp[α s(h_k, Y)]   (35)
The hyperparameter α corresponds to a temperature that controls the softness of the distribution. The larger α, the more DSAC will behave like RANSAC in always selecting the hypothesis with the maximum score, while providing less signal for learning. In DSAC++ [7], we presented a scheme to adjust α automatically during learning. In this work, we treat α as a hand-tuned, fixed hyperparameter, as we found the camera relocalization problem not overly sensitive to the exact value of α, and fixing it simplifies the software architecture of our pipeline.
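The selection distribution of Eq. 35 maps directly to a softmax over scaled scores; the sketch below is our own, with the temperature behavior described above.

```python
import torch

def hypothesis_distribution(scores, alpha):
    """Softmax distribution over hypothesis scores (Eq. 35). A larger
    temperature alpha concentrates the probability mass on the
    best-scoring hypothesis, approaching hard RANSAC selection."""
    return torch.softmax(alpha * scores, dim=0)
```

Sampling a hypothesis index then amounts to `torch.multinomial(P, 1)` on the returned distribution.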
To learn the pipeline, we optimize the expectation of the pose loss of Eq. 18 w.r.t. the random selection of hypotheses:
L(w) = E_{j∼P(j)}[ ℓ_pose(ĥ_j, h*) ]   (36)
where we abbreviate the final, refined camera pose as ĥ_j = R(h_j). To minimize the expectation, the neural network should learn to predict scene coordinates that ensure the following two properties: firstly, hypotheses with a large loss after refinement should receive a low selection probability, i.e. a low soft inlier count; secondly, hypotheses with a high soft inlier count should receive a small loss after refinement. We present a schematic overview of all components involved in our DSAC-based pipeline in Fig. 4. The figure summarises dependencies between processing steps, and differentiates between deterministic functions and sampling operations. The graph structure illustrates the non-trivial relation between the scene coordinate prediction and the pose quality, since scene coordinates directly influence pose hypotheses, scoring and refinement.
The DSAC training objective of Eq. 36 is smooth and differentiable, and its gradients can be formulated as follows:
∂/∂w E_j[ ℓ_pose(·) ] = E_j[ ℓ_pose(·) ∂/∂w log P(j) + ∂/∂w ℓ_pose(·) ]   (37)
where we use (·) as a stand-in for the respective function arguments of Eq. 36, and abbreviate the expectation over j ∼ P(j) as E_j. We use Eq. 37 to learn our system in an end-to-end fashion, updating the neural network parameters w of the scene coordinate prediction.
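When the expectation of Eq. 36 is written as an explicit sum over hypotheses, automatic differentiation produces both gradient terms of Eq. 37 (the score-function term via log P(j), and the direct loss term). The sketch below illustrates this with our own function names; we assume per-hypothesis scores and losses that are differentiable functions of the network output.

```python
import torch

def dsac_objective(scores, losses, alpha):
    """Sketch of the DSAC training objective (Eq. 36): the expectation
    of the pose loss under the softmax selection distribution of Eq. 35.
    Backpropagating through this sum reproduces the two gradient terms
    of Eq. 37 automatically."""
    P = torch.softmax(alpha * scores, dim=0)
    return (P * losses).sum()  # E_j[loss_j] over the discrete hypothesis set
```

With uniform scores, the objective reduces to the mean per-hypothesis loss, and gradients flow into the scores, rewarding low-loss hypotheses with higher selection probability.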
We evaluate our camera relocalization pipeline on two indoor datasets and one outdoor dataset. Firstly, in Sec. 6.1, we discuss our experimental setup, including datasets, training schedule, hyperparameters and competitors. Secondly, we report results on the three datasets in Sections 6.2, 6.3 and 6.4, respectively. Thirdly, we provide several ablation studies in Sections 6.5, 6.6 and 6.7, as well as visualizations of the scene representations learned by our system in Sec. 6.8.
Task Variants. We deploy our system in several flavours, catering to different application scenarios where depth measurements or 3D scans of a scene might be available or not. Specifically, we analyze the following settings:
RGBD: We have RGBD images for training as well as at test time. For training, we generate ground truth scene coordinates from depth maps, and we use a Kabsch [30] pose solver for sampling hypotheses and for refining the final estimate.
RGB + 3D model: We have RGB images for training as well as at test time. We can render ground truth scene coordinates for training using a 3D model of the scene. The 3D model can either be a sparse SfM point cloud, or a dense 3D scan. We use the PnP solver of Gao et al. [24] to sample camera pose hypotheses, and the LevenbergMarquardt [37, 44] PnP optimizer of OpenCV [10] for final refinement.
RGB: Same as the previous setting, but we have no information about the 3D geometry of a scene, only RGB images and ground truth poses for training. To initialize scene coordinate regression, we optimize the heuristic objective of Eq. 13.
Hyper-Parameters. We convert input images to grayscale and rescale them such that the shortest side measures 480px. For training, we apply random adjustments of brightness and contrast of the input image within a fixed range. We use separate inlier thresholds τ for RGB-based and for RGBD-based pose optimization. We sample 64 RANSAC hypotheses. We reject a hypothesis if the corresponding minimal set of scene coordinates does not satisfy the inlier threshold [17], and sample again. We score hypotheses using the soft inlier count at training and test time. For training, we optimize the expectation over hypothesis selection according to the distribution of Eq. 35, with a temperature α chosen in dependence of the number of scene coordinates predicted, resp. the output resolution of the neural network. At test time, we resort to standard RANSAC, and choose the hypothesis with the highest score. We do at most 100 refinement iterations, but stop early if the inlier set converges, which typically takes at most 10 iterations.
We initialize the scene coordinate network for 1M iterations, using a batch size of 1 image and the Adam optimizer [34]. This stage takes approximately two days on a single Tesla K80 GPU. We train the system end-to-end for another 50k iterations, which takes 12 hours on the same hardware. We will make our implementation, based on PyTorch [51], publicly available.
Datasets. We evaluate our pipeline on three standard camera relocalization datasets, both indoor and outdoor:
7Scenes [67]: An RGBD indoor relocalization dataset of seven small indoor environments featuring difficult conditions such as motion blur, reflective surfaces, repeating structures and texture-less areas. Images were recorded using KinectFusion [29], which also provides ground truth camera poses. For each scene, several thousand frames are available, which the authors split into training and test sets. The depth channels of this dataset are not registered to the color images. We register them by projecting the depth maps to 3D points using the depth sensor calibration, and re-projecting them using the color sensor calibration, while taking the relative transformation between the depth and color sensors into account. A dense 3D scan of each scene is available for rendering ground truth scene coordinates for training RGB-based relocalization.
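The depth-to-color registration described above can be sketched as follows. This is our own illustration, not the authors' code: parameter names are assumptions, and a real implementation would additionally handle z-buffering and hole filling, which we omit.

```python
import numpy as np

def register_depth(depth, K_depth, K_color, R, t):
    """Sketch of depth-to-color registration: back-project each depth
    pixel with the depth intrinsics K_depth, map the 3D point into the
    color camera via the relative pose (R, t), and re-project with the
    color intrinsics K_color."""
    h, w = depth.shape
    out = np.zeros_like(depth)
    v, u = np.mgrid[0:h, 0:w]
    z = depth.ravel()
    valid = z > 0
    # back-project all pixels with the depth sensor calibration
    pts = np.linalg.inv(K_depth) @ np.vstack([u.ravel(), v.ravel(), np.ones(h * w)])
    pts = pts[:, valid] * z[valid]
    # transform into the color sensor frame and re-project
    pts_c = R @ pts + t[:, None]
    proj = K_color @ pts_c
    uc = np.round(proj[0] / proj[2]).astype(int)
    vc = np.round(proj[1] / proj[2]).astype(int)
    ok = (uc >= 0) & (uc < w) & (vc >= 0) & (vc < h) & (pts_c[2] > 0)
    out[vc[ok], uc[ok]] = pts_c[2][ok]
    return out
```

With identical intrinsics and an identity relative pose, the routine reproduces the input depth map, which serves as a basic sanity check.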
12Scenes [74]: An RGBD indoor relocalization dataset similar to 7Scenes, but containing twelve slightly larger indoor environments. Each scene comes with several hundred frames, split by the authors into training and test sets. The depth maps provided by the authors are registered to the color images. A dense 3D scan of each scene is available as well, which we use to render ground truth scene coordinates for training RGB-based relocalization.
Cambridge [32]: An RGB outdoor relocalization dataset of five landmarks in Cambridge, UK. Compared to the previous indoor datasets, each landmark spans an area of several hundred or thousand square meters. Each scene comes with several hundred frames, split by the authors into training and test sets. The authors also provide ground truth camera poses reconstructed using an SfM tool. The sparse SfM point cloud is also available for each scene, which we use to render sparse scene coordinate ground truth for RGB-based relocalization. The dataset contains a sixth scene, an entire street scene, which we omit in our experiments. The corresponding reconstruction is of poor quality, containing several outlier camera poses and 3D points, as well as duplicated and diverging geometry. As in our previous work [7, 9], we were unable to achieve reasonable relocalization performance on the street scene.
Competitors. We compare to the following absolute pose regression networks: PoseNet (the updated version of 2017) [33], SpatialLSTM [76], MapNet [11] and SVSPose [48]. We compare to the following relative pose estimation approaches: AnchorNet [56], and retrieval-based InLoc [72]. For feature-based competitors, we report results of the ORB baseline used in [67] and [74], as well as the SIFT baseline used in [74]. For a state-of-the-art feature-based pipeline, we compare to ActiveSearch [59]. Several early scene coordinate regression works were based on random forests. We compare to SCoRF of Shotton et al. [67], and its extension to multi-output forests (MO Forests) [26]
and forests predicting Gaussian mixture models (GMM) of scene coordinates, in the variation of Valentin et al. [75] for RGBD (GMM F. (V)) and of Brachmann et al. for RGB (GMM F. (B)). Furthermore, we compare to the adaptive forests of Cavallari et al. [15] (OtF Forests), the Back-Tracking Forests of Meng et al. [46] (BTBRF), to the Point-Line Forests of Meng et al. [47] (PLForests), and MNG forests [74]. For CNN-based scene coordinate regression, we compare to ForestNet [45], scene coordinate regression with an angle-based loss [38] (ABRLoss), and the visual descriptor learning approach of Schmidt et al. [63] (SSVDL). Finally, we compare to previous iterations of this pipeline, namely DSAC [6] and DSAC++ [7]. We denote our updated pipeline, described in this article, as DSAC*.

We train one scene coordinate regression network per scene, and accept a pose estimate for a test image if its rotation error is below 5° and its translation error below 5cm. We calculate the accuracy per scene, and report the average accuracy over all of 7Scenes; see quantitative results in Fig. 5, left.
RGB. For training from RGB images and ground truth poses only, our new training objective of Eq. 17 increases accuracy significantly compared to DSAC++ (+18.5%). DSAC* also achieves slightly higher accuracy than the angle-based loss of Li et al. [38], despite the latter incorporating multi-view constraints and a photometric loss.
RGB + 3D model. When a 3D model is available to render ground truth scene coordinates for training, both DSAC++ and DSAC* benefit, with DSAC* still achieving the highest accuracy, relocalizing 77.5% of frames. Also note that DSAC* is trained for 2.5 days, compared to 6 days for DSAC++ on identical hardware.
RGBD. When DSAC* estimates poses from RGBD images, it achieves competitive accuracy compared to state-of-the-art methods. Note that the difference in accuracy for DSAC* solely stems from the use of Kabsch as a pose solver, since our network still estimates scene coordinates from a grayscale image. The correct depth of image points allows a relocalization pipeline to trivially infer the distance between camera and scene. Note that all RGBD competitors, including the leading method, SSVDL [63], model the uncertainty of scene coordinates in some form, e.g. by predicting full distributions over image-to-scene correspondences. Compared to this, the expressiveness of our framework is limited by only predicting scene coordinate point estimates.
Qualitative Results. We visualize the estimated test trajectory, as well as the pose error, of DSAC* for all scenes and all relocalization settings in Fig. 6. Estimated trajectories are predominantly smooth, with outlier predictions concentrated on particular, presumably difficult, areas of each scene. As with previous iterations of our pipeline [6, 7], DSAC* has difficulty handling the Stairs sequence, which is dominated by ambiguous structures. To also visualize the relocalization quality in an augmented reality setup, we compare renderings of 3D models of each scene, using estimated camera poses, with the associated test image in Fig. 7. To give an unbiased impression of the general relocalization quality, we selected the test frame with the median pose error for each visualization. Interestingly, even test poses with position errors of 10cm still look visually acceptable, see the visualizations for the difficult Stairs sequence.
We report quantitative results for 12Scenes in Fig. 5, right. DSAC* achieves state-of-the-art accuracy in all settings on this dataset, consistently outperforming DSAC++. In general, we consider this dataset solved, with an average accuracy above 99% for relocalization from RGB as well as RGBD images. Only relocalization learned without a 3D model leaves some room for improvement, with DSAC* achieving 90.1% accuracy in this setting.
Table II: Median pose errors (cm/°) on the Cambridge dataset. Left: trained with a 3D model. Right: trained from RGB images and ground truth poses only. Training time in days (d) in parentheses; "-" marks settings a method does not support, "N/A" marks missing results.

                        RGB + 3D model                                    RGB
Method                  Church   Court    Hospital  College  Shop      Church   Court    Hospital  College  Shop
MapNet [11]             -        -        -         -        -         200/4.5  N/A      194/3.9   107/1.9  149/4.2
SpatialLSTM [76]        -        -        -         -        -         152/6.7  N/A      151/4.3   99/1.0   118/7.4
SVSPose [48]            -        -        -         -        -         211/8.1  N/A      150/4.0   106/2.8  63/5.7
PoseNet17 [33]          149/3.4  700/3.7  217/2.9   99/1.1   105/4.0   157/3.2  683/3.5  320/3.3   88/1.0   88/3.8
AnchorNet [56]          -        -        -         -        -         104/2.7  N/A      121/2.6   57/0.9   52/2.3
InLoc [72]              18/0.6   120/0.6  48/1.0    46/0.8   11/0.5    -        -        -         -        -
Active Search [59]      19/0.5   N/A      44/1.0    42/0.6   12/0.4    -        -        -         -        -
BTBRF [46]              20/0.4   N/A      30/0.4    39/0.4   15/0.3    -        -        -         -        -
SANet [78]              16/0.6   328/2.0  32/0.5    32/0.5   10/0.5    -        -        -         -        -
DSAC [6] (4d)           55/1.6   280/1.5  33/0.6    30/0.5   9/0.4     -        -        -         -        -
DSAC++ [7] (6d)         13/0.4   40/0.2   20/0.3    18/0.3   6/0.3     20/0.7   66/0.4   24/0.5    23/0.4   9/0.4
NGDSAC++ [9] (6d)       10/0.3   35/0.2   22/0.4    13/0.2   6/0.3     N/A      N/A      N/A       N/A      N/A
DSAC* (2.5d)            14/0.5   38/0.2   22/0.4    16/0.3   5/0.3     18/0.6   33/0.2   22/0.4    18/0.3   6/0.3
DSAC* (4.5d)            11/0.4   35/0.2   21/0.4    14/0.3   6/0.3     15/0.5   31/0.2   23/0.4    16/0.3   6/0.3
We measure the relocalization quality on the Cambridge dataset using the median pose error per scene, see Table II. Since the ground truth for this dataset was recovered using an SfM tool, we report results with centimeter precision; we find the expressiveness of millimeter precision dubious given the nature of the ground truth poses. Given a 3D model for training, DSAC* achieves only slightly better results than DSAC++, but trains significantly faster. We also report results for DSAC* trained longer, namely 4.5 days (4 days of initialization and 0.5 days of end-to-end training) instead of 2.5 days. While results improve by a small amount, it is unclear whether this benefit is worth the significantly longer training time. For many scenes, NGDSAC++ [9], i.e. DSAC++ with neural-guided RANSAC, achieves the best results. In principle, we could extend DSAC* to utilize neural guidance as well. However, neural guidance is designed to improve RANSAC in high-outlier domains, and, given the quality of the results already, we expect the benefit of coupling it with DSAC* to be rather small.
When training without a 3D model, the new training objective of DSAC* achieves higher accuracy than DSAC++ across all scenes. Notably, DSAC* trained without a 3D model achieves higher accuracy on the Great Court scene than any method (including DSAC*) trained with a 3D model. Great Court is the largest landmark in the dataset. The associated SfM reconstruction contains a high outlier ratio, and might hinder the training process due to its low quality.
We visualize the estimated test trajectories of DSAC* in Fig. 8. Due to the very different scene sizes, we derive a scene-dependent threshold to color-code pose errors. The visualizations reveal that high localization error is correlated with the distance of the camera to the scene, particularly obvious for Old Hospital, but also King’s College. In Fig. 9, we depict the median pose error per scene, and observe a high visual quality of relocalization, suitable for augmented reality applications.
Table III: Ablation of network architecture and training schedule on 7Scenes.

Architecture (Size, Time, Receptive Field)   Training schedule   Accuracy
DSAC++ (104MB, 150ms, 73px)                  DSAC++              74.4%
                                             DSAC*               76.1%
DSAC* (28MB, 50ms, 81px)                     DSAC++              73.8%
                                             DSAC*               77.5%
As explained in Sec. 4, we updated the network architecture compared to DSAC++. To disentangle the impact of the network architecture and of the updated training schedule, we conduct an ablation study, see Table III. We trained both architectures on the 7Scenes dataset using the training schedule of DSAC++ [7] as well as that of DSAC*. Both architectures achieve similar accuracy in the different training settings. However, we observe higher benefits for the DSAC* architecture when using the DSAC* training schedule. Since training time is shorter (2.5 days instead of 6 days on identical hardware), the new, efficient architecture of DSAC* undergoes more parameter updates. The new architecture is also significantly faster, with a forward pass taking 50ms. Together with the streamlined pose optimization (e.g. using 64 RANSAC hypotheses instead of 256), we achieve a total system runtime of 75ms, compared to 200ms for DSAC++, on a single Tesla K80 GPU.
One important factor when designing an architecture for scene coordinate regression is the size of the receptive field, i.e. the image area taken into account for predicting a single scene coordinate, comparable to the image patch size in sparse feature matching. The architecture of DSAC* has a receptive field of 81px. By substituting individual 3x3 convolutions with 1x1 convolutions and vice versa (cf. Fig. 3), we can increase or decrease the receptive field and study the change in accuracy. The change of the convolution kernels also affects the total count of learnable parameters of the network. To facilitate conclusions with regard to the receptive field alone, we scale the number of channels throughout the network to keep the number of free parameters constant. We report results in Fig. 10, comparing DSAC* with a receptive field of 81px (standard), 49px and 149px. We observe that the median relocalization error increases for the large receptive field of 149px. While a larger receptive field incorporates more image context for predicting a scene coordinate, it also leads to generalization problems: view point changes between training and test set have a higher impact for larger receptive fields. Making the receptive field smaller, with 49px, also decreases accuracy, but only slightly. The effect of having less image context is counteracted by better generalization w.r.t. view point changes. For a more extreme argument in favor of architectures with a limited receptive field, we conduct an experiment with an encoder-decoder architecture. Such an architecture encodes the whole image into a global descriptor, and deconvolves it to a full-resolution scene coordinate prediction. The receptive field of such an architecture is the whole image, and we again ensure that the number of learnable parameters stays identical. As depicted in Fig. 10, a scene coordinate network with a global receptive field achieves a disappointing relocalization accuracy, comparable to DenseVLAD [73], based on image retrieval, or PoseNet [33], based on absolute pose regression. This indicates that the receptive field might be another issue connected to the low accuracy of absolute pose regression methods, orthogonal to the explanations given by Sattler et al. in their study of these methods [61].
We report results before and after training our system in an end-to-end fashion in Fig. 11. For the indoor datasets 7Scenes and 12Scenes, we report accuracy using different acceptance thresholds of 5cm/5°, 2cm/2° and 1cm/1°. While the impact of end-to-end training at the coarse threshold is small, there are significant differences for the finer acceptance thresholds. End-to-end training increases the precision of successful pose estimates, but it does not necessarily decrease the failure rate. We see similar effects in outdoor relocalization on the Cambridge dataset, where the median pose error decreases by ca. 30%. We also provide a qualitative comparison of scene coordinate predictions before and after end-to-end training. In particular, we visualize areas of training images where the reprojection error increased or decreased due to end-to-end training. The system learns to focus on certain reliable structures. In general, we observe a tendency of the system to increase the scene coordinate quality for close objects. Presumably, such objects are more helpful than distant structures for estimating the camera pose precisely.
Scene coordinate regression methods utilize a learnable function to implicitly encode the map of an environment. We can generate an explicit map representation of the geometry encoded in a network. More precisely, we iterate over all training images, predicting scene coordinates to generate one point cloud of the scene. We can recover the color of each 3D point by reading out the associated color at the pixel position of the training image for which the scene coordinate was predicted. Such a point cloud will in general feature many outlier points that hinder visualization. Therefore, we generate a mesh representation using Poisson surface reconstruction [31]. We show the recovered 3D models in Fig. 12 for 7Scenes, and in Fig. 13 for Cambridge. Interestingly, our approach learns the complex 3D geometry of a scene even when training solely from RGB images and ground truth poses. Furthermore, we are able to recover a dense scene representation even when training with sparse 3D models for the Cambridge dataset.
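The map-extraction procedure above can be sketched as follows. This is our own illustration: `predict_coords` stands in for the scene coordinate network (assumed here, for simplicity, to predict at full image resolution), and the outlier filtering plus Poisson surface reconstruction steps are omitted.

```python
import numpy as np

def extract_point_cloud(images, predict_coords):
    """Sketch of explicit map extraction: run the scene coordinate
    network over every training image and pair each predicted 3D point
    with the color of the pixel it was predicted for."""
    points, colors = [], []
    for img in images:
        coords = predict_coords(img)        # HxWx3 scene coordinates
        points.append(coords.reshape(-1, 3))
        colors.append(img.reshape(-1, 3))   # color of the source pixel
    return np.concatenate(points), np.concatenate(colors)
```

The returned colored point cloud would then serve as input to a surface reconstruction tool to obtain the meshes shown in Figs. 12 and 13.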
We have presented DSAC*, a versatile pipeline for single-image camera relocalization based on scene coordinate regression and differentiable RANSAC. In this article, we have derived gradients for all steps of robust pose estimation, including PnP solvers. The resulting system supports RGBD-based as well as RGB-based camera relocalization, and can be trained with or without a 3D model of a scene. Compared to previous iterations of the system, DSAC* trains faster, needs less memory and features a low runtime. Simultaneously, DSAC* achieves state-of-the-art accuracy on various datasets, indoor and outdoor, and in various settings. We will make the code of DSAC* publicly available, and hope that it serves as a credible baseline in relocalization research.
The authors would like to thank Dehui Lin for implementing an efficient version of the differentiable Kabsch pose solver within the scope of his Master thesis.
This work was supported by the DFG grant COVMAP: Intelligente Karten mittels gemeinsamer GPS und Videodatenanalyse (RO 4804/21 and RO 2497/122). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 programme (grant No. 647769).
The computations were performed on an HPC Cluster at the Center for Information Services and High Performance Computing (ZIH) at TU Dresden.