Print version ISSN 1413-4853 · On-line version ISSN 1982-2170
http://dx.doi.org/10.1590/S1982-21702015000200019

LEAST SQUARES FITTING OF ELLIPSOID USING ORTHOGONAL DISTANCES

Ondokuz Mayis University, Faculty of Engineering, Geomatics Engineering, 55139 Samsun, [email protected]

In this paper, we present techniques for ellipsoid fitting which are based on minimizing the sum of the squares of the geometric distances between the data and the ellipsoid. The literature often uses 'orthogonal fitting' in place of 'geometric fitting' or 'best-fit'. A best-fit ellipsoid to a set of points is required for many different purposes: the problem of fitting an ellipsoid is encountered frequently in image processing, face recognition, computer games, geodesy, etc. Today, the increasing precision of GPS and satellite measurements allows us to determine a more realistic Earth ellipsoid. Several studies have shown that the Earth, other planets, natural satellites, asteroids and comets can be modeled as triaxial ellipsoids, Burša and Šima (1980), Iz et al. (2011). Determining the reference ellipsoid for the Earth is an important ellipsoid fitting application, because all geodetic calculations are performed on the reference ellipsoid. Algebraic fitting methods solve a linear least squares (LS) problem and are relatively straightforward and fast. Orthogonal ellipsoid fitting, in contrast, is a difficult problem: it is usually impossible to reach a solution with classic LS algorithms, because they often face convergence problems. Therefore it is necessary to use special algorithms, e.g. nonlinear least squares algorithms. We propose to use geometric fitting as opposed to algebraic fitting.
This is computationally more intensive, but it provides scope for placing visually apparent constraints on ellipsoid parameter estimation and is free from curvature bias, Ray and Srivastava (2008).

Key words: Fitting Ellipsoid; Orthogonal Fitting; Algebraic Fitting; Nonlinear Least Squares Problem.
1. INTRODUCTION

Fitting an ellipsoid to an arbitrary set of points is a problem of fundamental importance in many fields of applied science, ranging from astronomy, geodesy, digital image processing and robotics to metrology. Ellipsoids, though simple as representations of general 3D shapes, are the only bounded, centric quadrics that provide information on the center and orientation of an object. Ellipsoid fitting has been discussed widely and some excellent work has been done in the literature; however, most of these fitting techniques are algebraic, not orthogonal. Various 'least-squares' fitting approaches have been formulated over the years, Zhang (1997), but they all fall into two categories: (1) algebraic methods, which are extensively used due to their linear nature, simplicity and computational efficiency, and (2) geometric methods, which solve a nonlinear problem, Ray and Srivastava (2008). We could not find enough studies with numerical examples in the literature. Turner et al. (1999) gave a numerical application, but the application's data are not given. No other comparable orthogonal ellipsoid fitting application could be found in the literature. Against this background, the purpose of this study is to present orthogonal ellipsoid fitting with numerical examples. In this article, we demonstrate that the geometric fitting approach provides a more robust alternative than the algebraic fitting approach, although it is computationally more intensive. The paper has eight parts. First, it introduces the basic ellipsoid and the mathematical equations needed to explain the concepts.
Then it reviews the literature relevant to ellipsoid fitting, and discusses which estimator is used in this research. Next come the parts dealing with algebraic fitting, orthogonal fitting and a numerical example; the ellipsoid fitting application is based on both l1-norm and l2-norm methods. The paper concludes with a discussion of theoretical and practical implications and directions for further research.

1.1 Ellipsoid

An ellipsoid is a closed quadric surface that is the three-dimensional analogue of an ellipse. A triaxial ellipsoid has three different semi-axes (ax > ay > b), as in Figure 1. Mathematical literature often uses 'ellipsoid' in place of 'triaxial ellipsoid' or 'general ellipsoid'. Scientific literature (particularly geodesy) often uses 'ellipsoid' in place of 'biaxial ellipsoid', 'rotational ellipsoid' or 'ellipsoid of revolution'; older literature uses 'spheroid' in place of 'rotational ellipsoid'. The standard equation of an ellipsoid centered at the origin of a Cartesian coordinate system and aligned with the axes is

x^2/ax^2 + y^2/ay^2 + z^2/b^2 = 1

Figure 1: Ellipsoid

Although the ellipsoid equation is quite simple and smooth, computations on the ellipsoid are quite difficult. The main reason for this difficulty is the lack of symmetry. Generally, an ellipsoid is defined by 9 parameters: the 3 coordinates of the center (Xo, Yo, Zo), the 3 semi-axes (ax, ay, b) and 3 rotation angles, which represent the rotations around the x-, y- and z-axes respectively, as in Figure 2. These angles control the orientation of the ellipsoid. R1, R2, R3 are the plane rotation matrices; the rotation matrix R is obtained from R1, R2, R3 by multiplying them in reverse order.

2. FITTING ELLIPSOID

For the solution of the fitting problem, a linear or linearized relationship is written between the given data points and the unknown parameters (one equation per data point), giving a system of equations in the unknown parameters.
Ax = l     (4)

Here, A is the design matrix, x the vector of unknown parameters and l the vector of measurements (data points). For this minimization problem to have a unique solution, the necessary conditions are that n >= 9 and that the data points lie in general position (e.g., not all data points should lie in a single plane). Throughout this paper, we assume that these conditions are satisfied.

u = 9 : number of unknown parameters
n : number of given data points (or measurements)
f = n - u : degrees of freedom

- If f = 0 there is only one (exact) solution, the algebraic solution.
- If f < 0 the system is underdetermined; a unique solution can be found only by adding extra constraints.
- If f > 0, the most commonly encountered situation, the given data points (or measurements) exceed the required number and cause discrepancies. In this case the solution is not unique: the system is overdetermined, because n > u, in other words the number of equations is greater than the number of unknowns.

The system of linear equations (4) must be solved. For the system to be consistent, the rank of the design matrix and the rank of the design matrix extended with the constant terms must be equal, i.e. rank(A) = rank(A:l); whereas the system (4) is generally inconsistent, because no unknown parameter vector x satisfies it exactly. In this case rank(A) ≤ u, and the rank of the extended matrix, rank(A:l), is generally greater than rank(A). Inconsistent equations have no exact solution; only an approximate solution of the system can be derived. The approximately solvable system is obtained by adding a residual (or correction) vector ε to the right side of (4), giving Ax = l + ε. Depending on the choice of the residual vector ε, infinitely many solutions can be obtained; a unique solution can be derived only according to an estimator (objective function). For example, LS always gives a unique solution, Bektas and Sisman (2010). Here, the question arises of which estimation method to use.
3. WHICH ESTIMATOR SHOULD BE USED?

It is hoped that the residuals will be small, and the more suitable estimation method is the one that produces smaller residuals. Usually the objective functions are formed from the minimization of the corrections or of a function of the corrections. There are numerous estimators, among them l1-norm, l2-norm, lp-norm, Fair, Huber, Cauchy, Geman-McClure, Welsch and Tukey. Two estimator methods come to the forefront. The most used estimators are:

(i) [εε] = min (l2-norm): Least Squares Method (LSM)
(ii) [|ε|] = min (l1-norm): Least Absolute Values Method (LAVM)

3.1 The Comparison of l1 and l2-Norm Methods

The solution of the l2-norm method is always unique, and it is easily calculated. The l2-norm method is widely used in parameter estimation and has indisputable advantages there. Its disadvantages are that it is affected by outliers (gross errors) and that it spreads their effect over the remaining measurements; ellipsoid fitting is a good illustration of this sensitivity. With least-squares techniques, even one or two outliers in a large data set can wreak havoc! Outlying data exert such a strong effect on the minimization that the estimated parameters are distorted by them. Numerous studies have clearly shown that least-squares estimators are vulnerable to violations of these assumptions; sometimes, even when the data contain only a single bad measurement, l2-norm estimates may be completely perturbed, Zhang (1997).

The solution of the l1-norm method is not always unique; there may be several solutions. Also, the l1-norm solution is generally not obtained directly but computed iteratively, so it is not as easily calculated as the l2-norm solution. Nevertheless, given today's computational tools, computer capacity and speed, the difficulty of these calculations is no longer a practical obstacle.
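The outlier sensitivity contrasted above can be seen in a one-dimensional toy case, where the l2-norm estimate of location is the arithmetic mean and the l1-norm estimate is the median. A minimal sketch (in Python, with made-up numbers; the function names are ours, not from the paper):

```python
def l2_estimate(xs):
    """Minimizes the sum of squared residuals -> arithmetic mean."""
    return sum(xs) / len(xs)

def l1_estimate(xs):
    """Minimizes the sum of absolute residuals -> median."""
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

clean = [9.8, 10.1, 10.0, 9.9, 10.2]
dirty = clean + [100.0]          # one gross error added

print(l2_estimate(clean), l2_estimate(dirty))  # mean jumps from ~10.0 to ~25.0
print(l1_estimate(clean), l1_estimate(dirty))  # median stays near 10.0
```

A single bad measurement moves the l2 estimate far from the bulk of the data, while the l1 estimate is barely affected, which is exactly the behaviour described above.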
The advantage of the l1-norm method is its insensitivity to measurements containing gross errors: the solution is not, or only slightly, affected by such measurements. The author of this study has proposed using the l2-norm method for parameter estimation (optimization problems, adjustment calculus) after the measurements have been cleaned of gross and systematic errors using the l1-norm method; for further information see Bektas and Sisman (2010).

4. ALGEBRAIC ELLIPSOID FITTING METHODS

The general equation of an ellipsoid is given as

Ax^2 + By^2 + Cz^2 + Dxy + Exz + Fyz + Gx + Hy + Iz + K' = 0     (6)

Equation (6) contains ten parameters, but only nine of them are independent. For example, if all the coefficients in this equation are multiplied by (-1/K'), we get a new equation which contains nine unknown parameters and whose constant term equals '-1'. In this algorithm, we need to check whether the fitted shape is an ellipsoid. In theory, the conditions that ensure that a quadratic surface is an ellipsoid have been well investigated and explicitly stated in analytic geometry textbooks. An ellipsoid can degenerate into other kinds of elliptic quadrics, such as an elliptic paraboloid; therefore a proper constraint must be added. Li and Griffiths (2004) gave the relevant definitions: in terms of the quadric invariants I = A + B + C and J = AB + BC + CA - (D^2 + E^2 + F^2)/4 (written here for the cross-term convention of (6)), the constraint is 4J - I^2 > 0. However, 4J - I^2 > 0 is just a sufficient condition to guarantee that an equation of second degree in three variables represents an ellipsoid; it is not necessary. In this paper, we assume that these conditions are satisfied. The algebraic method is a linear problem; it solves the fitting problem directly and easily.
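The sufficient condition 4J - I^2 > 0 above can be checked numerically once the quadric coefficients have been estimated. A sketch (in Python for illustration; the invariant formulas are written for cross terms Dxy, Exz, Fyz as in (6), and the function name is ours):

```python
def is_ellipsoid_sufficient(A, B, C, D, E, F):
    """Li & Griffiths' sufficient ellipsoid condition 4J - I^2 > 0 for the
    quadric A x^2 + B y^2 + C z^2 + D xy + E xz + F yz + ... = 1.
    (Invariant formulas adapted to this cross-term convention.)"""
    I = A + B + C
    J = A * B + B * C + C * A - (D * D + E * E + F * F) / 4.0
    return 4 * J - I * I > 0

print(is_ellipsoid_sufficient(1, 1, 1, 0, 0, 0))    # unit sphere -> True
print(is_ellipsoid_sufficient(1, 1, -1, 0, 0, 0))   # hyperboloid -> False
```

As the text notes, the condition is sufficient but not necessary: some genuine (very elongated) ellipsoids can fail it, so a False result does not by itself prove the quadric is not an ellipsoid.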
The ellipsoid fitted to a given data set ((x,y,z)i, i = 1, 2, ..., n) is obtained by the solution in the LS sense of

A v ≅ l     (7)

where A (n×9) is the design matrix, v = [A B C D E F G H I]^T the vector of unknown conic parameters, and l = [1 1 1 ... 1]^T the right-side (unit) vector; the ith row of the design matrix is [x^2 y^2 z^2 xy xz yz x y z] evaluated at the ith data point. It is solved easily in the LS sense as

v = (A^T A)^-1 A^T l

or by MATLAB as v = A\l. If there are differences in weights or correlations between the given data points, the weight matrix P is added to the solution, v = (A^T P A)^-1 A^T P l, where P = Kll^-1 and Kll is the n×n variance-covariance matrix of the data points. The residual (or correction) vector is computed as ε = Av - l, and the LS optimization gives ||ε|| = min.

Algebraic methods all have the indisputable advantage of solving linear LS problems; the methods for this are well known and fast. However, it is intuitively unclear what we are minimizing geometrically: the quantity minimized in (7) is often referred to as the 'algebraic distance', Ray and Srivastava (2008). A geometric interpretation given by Bookstein (1979) clearly demonstrates that algebraic methods neglect points far from the center.

5. FITTING OF ELLIPSOID USING ORTHOGONAL DISTANCES

To overcome the problems with algebraic distances, it is natural to replace them by orthogonal distances, which are invariant to transformations in Euclidean space and do not exhibit the high-curvature bias. An ellipsoid of best fit in the LS sense to the given data points can be found by minimizing the sum of the squares of the geometric distances from the data to the ellipsoid, where the geometric distance is defined as the distance between a data point and its closest point on the ellipsoid. Determining the best-fit ellipsoid is a nonlinear least squares problem, which in principle can be solved using the Levenberg-Marquardt (LM) algorithm. Generally, nonlinear least squares is a complicated issue, and it is very difficult to develop methods which can find the global minimizer with certainty in this situation.
When a local minimizer has been found, we do not know whether it is the global minimizer or merely one of several local minimizers, Zisserman (2013). There is a variety of nonlinear optimization techniques, such as Newton, Gauss-Newton, gradient descent and the Levenberg-Marquardt approximation. However, these fitting techniques involve a highly nonlinear optimization procedure, which often stops at a local minimum and cannot guarantee an optimal solution, Li and Griffiths (2004). Away from the minimum, in regions of negative curvature, the Gauss-Newton approximation is not very good; in such regions, a simple steepest-descent step is probably the best plan. The Levenberg-Marquardt method is a mechanism for varying between steepest-descent and Gauss-Newton steps depending on how good the Gauss-Newton Hessian HGN approximation is locally. It uses the modified Hessian

H(x, λ) = HGN + λI     (I : identity matrix)

• When λ is small, H approximates the Gauss-Newton Hessian.
• When λ is large, H is close to the identity, causing steepest-descent steps to be taken.

This algorithm does not require explicit line searches; it typically takes more iterations than Gauss-Newton, but converges more reliably. Suppose that we have the unknown parameter set v = [A B C D E F G H I]^T of conic parameters. The general conic equation for an ellipsoid is given as

Ax^2 + By^2 + Cz^2 + Dxy + Exz + Fyz + Gx + Hy + Iz = 1     (8)

We will reach the solution by establishing relationships between variations in the conic coefficients and the orthogonal distances. The initial parameters are taken from the algebraic ellipsoid fit.
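The λ-switching scheme described above can be sketched on a toy one-parameter problem, fitting y = exp(a·x) to synthetic data (in Python for illustration; this is a hypothetical sketch of the damping logic, not the paper's nine-parameter ellipsoid code):

```python
import math

def lm_fit(xs, ys, a0=0.0, lam=1.0, iters=50):
    """Toy one-parameter Levenberg-Marquardt: fit y = exp(a * x)."""
    def sse(a):
        return sum((y - math.exp(a * x)) ** 2 for x, y in zip(xs, ys))

    a = a0
    best = sse(a)
    for _ in range(iters):
        # Residuals h_i = y_i - exp(a x_i); Jacobian entries J_i = x_i exp(a x_i)
        J = [x * math.exp(a * x) for x in xs]
        h = [y - math.exp(a * x) for x, y in zip(xs, ys)]
        # One-dimensional analogue of solving (J^T J + lam I) da = J^T h
        da = sum(j * r for j, r in zip(J, h)) / (sum(j * j for j in J) + lam)
        if sse(a + da) < best:      # improvement: accept step, relax damping
            a += da
            best = sse(a)
            lam /= 2
        else:                       # no improvement: increase damping, retry
            lam *= 2
    return a

xs = [0.0, 0.5, 1.0, 1.5]
ys = [math.exp(0.7 * x) for x in xs]   # synthetic data with true a = 0.7
print(lm_fit(xs, ys))                  # converges to approximately 0.7
```

When a step reduces the sum of squares, λ is halved and the iteration behaves like Gauss-Newton; when a step fails, λ is doubled and the update shrinks toward a cautious steepest-descent-like step, mirroring the two bullet points above.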
Let (xi', yi', zi') denote the projection coordinates (onto the ellipsoid) of the given data points Pi. Each data point contributes one row to the n×9 Jacobian matrix J and one entry hi (its orthogonal distance) to the right-side vector h (n×1), giving the linearized equation J dv = h. The fitted orthogonal ellipsoid is obtained by solving it in the LS sense with the L-M algorithm.

5.1 The Levenberg-Marquardt Algorithm

1. Solve the algebraic fit and take its result as the initial value of v; set λ = 1 (say).
2. Compute the Jacobian matrix J and the orthogonal distances hi from all given data points; minh = h^T h.
3. Solve (J^T J + λI) dv = J^T h; set v = v + dv (new conic parameters). Find again the orthogonal distances hi from all given data points; newh = h^T h.
4. If newh < minh (improvement), set minh = newh, reduce λ = λ/2 and go to step 3; else (no improvement) increase λ = 2λ and go to step 3.

5.2 Finding Orthogonal Distances from the Ellipsoid

The most time-consuming part is the computation of the orthogonal distance between each point and the ellipsoid. Our aim is to find the orthogonal distances from a shifted, oriented ellipsoid, see Figure 2. For detailed information on this subject refer to Bektas (2014).

6. NUMERICAL EXAMPLE

For the numerical application, 12 triplets of (x, y, z) Cartesian coordinates were produced:

x: [ 7 7 9 9 11 11 8 8 10 10 12 12 ]
y: [22 19 23 19 24 20 21 17 22 18 23 19 ]
z: [31 28 31 27 29 26 32 29 32 28 31 28 ]

The problem was solved with both the Least Squares Method (l2-norm) and the Least Absolute Values Method (l1-norm), with the following results.
The conic coefficients from the Least Squares Method are
v = [-0.0006 -0.0008 -0.0010 0.0005 -0.0005 0.0003 0.0092 0.0050 0.0278]
and from the Least Absolute Values Method
v = [ -0.0071 -0.0084 -0.0096 0.0047 -0.0040 0.0023 0.0880 0.0061 0.0271]

Both the algebraic and orthogonal fitting results are shown in Table 1.

Table 1: The result of algebraic and orthogonal fitting ellipsoid
*RSS: the residual sum of squares of the orthogonal distances

7. DISCUSSION

Orthogonal least squares has a much sounder basis, but is usually difficult to implement. Why are algebraic distances usually not satisfactory? The big advantage of algebraic distances is the gain in computational efficiency, because closed-form solutions can usually be obtained; in general, however, the results are not satisfactory. The function to minimize is usually not invariant under Euclidean transformations: for example, the function with the normalization K' = -1 in (6) is not invariant with respect to translations. This is an undesirable feature, because in practice we usually do not know where the best coordinate system to represent the data is. A point may contribute differently to the parameter estimation depending on its position on the conic; if a priori all points are corrupted by the same amount of noise, it is desirable for them to contribute in the same way, Zhang (1997). More importantly, algebraic methods have an inherent curvature bias: data corrupted by the same amount of noise will misfit unequally at different curvatures, Ray and Srivastava (2008). Our experience also tells us that if the coordinates of the given points are large numbers, the problem becomes badly conditioned. Therefore, before fitting, shift the given coordinates to the center of gravity; after the fitting operation, shift the computed center of the ellipsoid back to its previous position.

8. CONCLUSION

In this paper, we have studied orthogonal ellipsoid fitting.
The problem of fitting an ellipsoid is encountered frequently in image processing, face recognition, computer games and geodesy (determining a more realistic Earth ellipsoid), etc. The paper has presented a new method of orthogonal ellipsoid fitting. The new method relies on solving an overdetermined system of nonlinear equations with the L-M method. In conclusion, the presented method may be considered fast, accurate and reliable, and may be successfully used in other areas. The presented orthogonal fitting algorithm can be applied easily to the biaxial ellipsoid and sphere, and also to other surfaces such as the paraboloid, hyperboloid, etc.

REFERENCES

Bektas, S., Sisman, Y. The comparison of L1 and L2-norm minimization methods, International Journal of the Physical Sciences, 5(11), 1721-1727, 2010.
Bektas, S. Orthogonal distance from an ellipsoid, Boletim de Ciencias Geodesicas, Vol. 20, No. 4, ISSN 1982-2170, http://dx.doi.org/10.1590/S1982-217020140004000400053, 2014.
Bookstein, F. L. Fitting conic sections to scattered data, Computer Graphics and Image Processing, 9, 56-71, 1979.
Burša, M., Šima, Z. Tri-axiality of the Earth, the Moon and Mars, Stud. Geoph. et Geod., 24(3), 211-217, 1980.
Eberly, D. Least Squares Fitting of Data, Geometric Tools, LLC, http://www.geometrictools.com, 2008.
Feltens, J. Vector method to compute the Cartesian (X, Y, Z) to geodetic (φ, λ, h) transformation on a triaxial ellipsoid, Journal of Geodesy, 83, 129-137, 2009.
İz, H. B., Ding, X. L., Dai, C. L., Shum, C. K. Polyaxial figures of the Moon, J. Geod. Sci., 1, 348-354, 2011.
Li, Q., Griffiths, J. G. Least squares ellipsoid specific fitting, Geometric Modelling and Processing, 2004.
Ligas, M. Cartesian to geodetic coordinates conversion on a triaxial ellipsoid, Journal of Geodesy, 86, 249-256, 2012.
Ray, A., Srivastava, D. C.
Non-linear least squares ellipse fitting using the genetic algorithm with applications to strain analysis, Journal of Structural Geology, 30, 1593-1602, 2008.
Turner, D. A., Anderson, I. J., Mason, J. C., Cox, M. G. An algorithm for fitting an ellipsoid to data, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.36.2773, 1999.
Zhang, Z. Parameter estimation techniques: a tutorial with application to conic fitting, Image and Vision Computing, 15, 59-76, 1997.
Zisserman, A. C25 Optimization, 8 Lectures, Hilary Term 2013, 2 Tutorial Sheets, Lectures 3-6 (BK), 2013.

This is an open-access article distributed under the terms of the Creative Commons Attribution License.

Does anyone know how to make the following Matlab code approximate the exponential function more accurately when dealing with large and negative real numbers? For example, when x = 1 the code works well; when x = -100 it returns an answer of 8.7364e+31 when it should be closer to 3.7201e-44. The code is as follows:

Any assistance is appreciated, cheers.

EDIT: OK, so the question is as follows: Which mathematical function does this code approximate? (I say the exponential function.) Does it work when x = 1? (Yes.) Unfortunately, using this when x = -100 produces the answer s = 8.7364e+31. Your colleague believes that there is a silly bug in the program, and asks for your assistance. Explain the behaviour carefully and give a simple fix which produces a better result. [You must suggest a modification to the above code, or its use. You must also check your simple fix works.]

So I somewhat understand that the problem surrounds large numbers: when there are 16 (or more) orders of magnitude between terms, precision is lost. But the solution eludes me. Thanks.

EDIT: So in the end I went with this:

Not sure if it's completely correct, but it returns some good approximations.

exp(-100) = 3.720075976020836e-044
s = 3.722053303838800e-044

After further analysis (and unfortunately submitting the assignment), I realised that increasing the number of iterations, and thus the number of terms, further improves the accuracy. In fact the following was even more accurate:

Which gives:
exp(-100) = 3.720075976020836e-044
s = 3.720075976020701e-044

asked by Billy Ray Valentine

2 Answers
As John points out in a comment, you have an error inside the loop. The y = y*k line does not do what you need; look more carefully at the terms in the series for exp(x). Anyway, I assume this is why you have been given this homework assignment: to learn that series like this don't converge very well for large values. Instead, you should consider how to do range reduction. For example, can you use the identity

exp(a + b) = exp(a) * exp(b)

to your advantage? Suppose you store the value of exp(1) = 2.7182818284590452353... Now, if I were to ask you to compute the value of exp(1.3), how would you use the above information? We KNOW the value of exp(1) already. In fact, with a little thought, this will let you reduce the range for an exponential down to needing the series to converge rapidly only for abs(x) <= 0.5.

Edit: There is a second way one can do range reduction, using a variation of the same identity. Suppose you wish to compute the exponential of a large number, perhaps 12.8. Getting this to converge acceptably fast will take many terms of the simple series, and there will be a great deal of subtractive cancellation happening, so you won't get good accuracy anyway. However, if we recognize that

exp(12.8) = exp(0.8 * 2^4) = (exp(0.8))^16

then IF you could efficiently compute the exponential of 0.8, the desired value is easy to recover, perhaps by repeated squaring. Note that WHENEVER you do range reduction using methods like this, while you may incur numerical problems due to the additional computations necessary, you will usually come out way ahead due to the greatly enhanced convergence of your series.

answered by user85109

Why do you think that's the wrong answer? Look at the last term of that sequence, and its size, and tell me why you expect you should have an answer that's close to 0. My original answer stated that roundoff error was the problem.
That will be a problem with this basic approach, but why do you think 40 terms is enough for the appropriate mathematical (as opposed to computer floating point arithmetic) answer? 100^40 / 40! ~= 10^32.

woodchips has the right idea with range reduction. That's the typical approach people use to implement these kinds of functions very quickly. Once you get that all figured out, you deal with roundoff errors of alternating sequences by summing adjacent terms within the loop, stepping with k = 1 : 2 : 40 (for instance). That doesn't work here until you use woodchips's idea, because for x = -100 the summands grow for a very long time. You need |x| < 1 to guarantee that the intermediate terms are shrinking, and thus such a rewrite will work.

answered by John
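The two ideas in these answers, the correct term recurrence and range reduction with repeated squaring, can be sketched together. This is an illustration in Python rather than MATLAB, and it is not the original poster's code (which was lost from this copy of the thread):

```python
import math

def exp_taylor(x, nterms=40):
    """Naive truncated Taylor series for exp(x); fails badly for large |x|."""
    s, term = 1.0, 1.0
    for k in range(1, nterms):
        term *= x / k          # term = x**k / k!  (the recurrence the bug broke)
        s += term
    return s

def exp_reduced(x, nterms=40):
    """Range reduction: exp(x) = exp(x / 2**m) ** (2**m), with |x / 2**m| <= 0.5."""
    m = 0
    while abs(x) > 0.5:
        x /= 2.0
        m += 1
    y = exp_taylor(x, nterms)  # series converges rapidly for |x| <= 0.5
    for _ in range(m):
        y *= y                 # undo the reduction by repeated squaring
    return y

print(exp_taylor(-100.0))      # wildly wrong: catastrophic cancellation
print(exp_reduced(-100.0))     # close to math.exp(-100) ≈ 3.72e-44
```

For x = -100 the naive sum is dominated by intermediate terms of size around 10^32, so the true answer of about 3.72e-44 is buried far below the rounding error; after halving x eight times the series converges in a few terms, and squaring the result eight times recovers exp(-100) to near machine precision.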