SVD is a numerical method while PCA is an analysis approach
As @ttnphns and @nick-cox said, SVD is a numerical method and PCA is an analysis approach (like least squares). You can do PCA using SVD, or you can do PCA via the eigendecomposition of $X^\top X$ (or $XX^\top$), or you can do PCA using many other methods, just as you can solve least squares with a dozen different algorithms like Newton's method, gradient descent, or SVD.
So there is no "advantage" to SVD over PCA because it's like asking whether Newton's method is better than least squares: the two aren't comparable.
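To make the point concrete, here is a minimal sketch (my own example, assuming NumPy and scikit-learn are available) that runs the same PCA analysis through two different numerical routes and checks that they agree:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))

# Route 1: PCA as a packaged analysis (scikit-learn uses SVD internally)
pca = PCA(n_components=3).fit(X)

# Route 2: the same analysis via eigendecomposition of the covariance matrix
evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
evecs = evecs[:, np.argsort(evals)[::-1]]   # sort directions by decreasing variance

# Both routes recover the same principal directions (up to sign)
print(np.allclose(np.abs(pca.components_), np.abs(evecs.T)))   # True
```

The analysis (PCA) is identical either way; only the numerical route differs.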
SVD
Any real-valued matrix $X$ can be decomposed (factorized) into 3 matrices:

$$X = U \Sigma V^\top,$$

where $U$ is orthogonal, $V$ is again an orthogonal matrix, and $\Sigma$ is diagonal. The diagonal elements of $\Sigma$ are called singular values.
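A quick numerical check of this factorization, as a sketch on random data (not part of the original derivation):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))

U, s, Vt = np.linalg.svd(X, full_matrices=False)   # X = U Sigma V^T

print(np.allclose(X, U @ np.diag(s) @ Vt))   # the factorization reconstructs X
print(np.allclose(U.T @ U, np.eye(3)))       # columns of U are orthonormal
print(np.allclose(Vt @ Vt.T, np.eye(3)))     # V is orthogonal
print(s)                                     # singular values: non-negative, descending
```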
What is this all about?
This means that $X^\top X = V\Sigma^2 V^\top$ and $XX^\top = U\Sigma^2 U^\top$: the two matrices have the same nonzero eigenvalues but different eigenvectors ($V$ and $U$ respectively), and both sets of eigenvectors come from the SVD.
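This is easy to verify numerically. The sketch below (my own, on a random matrix) compares the nonzero eigenvalues of $X^\top X$ and $XX^\top$ with the squared singular values of $X$:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(6, 4))

# Nonzero eigenvalues of X^T X (4x4) and X X^T (6x6) coincide ...
ev_small = np.sort(np.linalg.eigvalsh(X.T @ X))[::-1]
ev_big = np.sort(np.linalg.eigvalsh(X @ X.T))[::-1][:4]   # drop the two ~0 eigenvalues
print(np.allclose(ev_small, ev_big))    # True

# ... and both equal the squared singular values of X
s = np.linalg.svd(X, compute_uv=False)
print(np.allclose(ev_small, s**2))      # True

# The eigenvectors differ: they are V (4x4) and U (6x6) from the SVD
```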
In a real example, I may want to find the "eigen people". What we need to find is the direction that represents the most variance, i.e., the most information.
If you perform the SVD on the data matrix, the first right singular vector we get here is the same as the first component of the principal component analysis.
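As a hedged illustration, here is a small NumPy sketch (my own, on random data) checking that the first right singular vector coincides, up to sign, with the first principal direction obtained from the covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 4))
X -= X.mean(axis=0)                              # centre the data

_, _, Vt = np.linalg.svd(X, full_matrices=False)
v1_svd = Vt[0]                                   # first right singular vector

evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
v1_pca = evecs[:, np.argmax(evals)]              # first principal direction

print(np.allclose(np.abs(v1_svd), np.abs(v1_pca)))   # True (up to sign)
```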
What is the difference between SVD and EVD, and between SVD and PCA?
The main difference between SVD and eigenvalue decomposition (EVD) is that EVD requires the matrix to be square and does not guarantee that the eigenvectors are orthogonal.
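A small sketch of both points, using a square but non-symmetric matrix of my own choosing:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])        # square, but not symmetric

evals, evecs = np.linalg.eig(A)
print(evecs.T @ evecs)            # off-diagonals != 0: eigenvectors not orthogonal

U, s, Vt = np.linalg.svd(A)
print(np.allclose(U.T @ U, np.eye(2)))   # True: singular vectors are orthonormal

B = np.ones((4, 2))
# np.linalg.eig(B) would raise LinAlgError because B is not square,
# while the SVD is defined for any rectangular matrix:
print(np.linalg.svd(B, compute_uv=False))
```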
The difference between SVD and PCA:
In the following example, the matrix $X$ is centred. Otherwise, for PCA, the matrix needed is the covariance matrix of $X$.

Quotation below:
PCA:
Let the data matrix $X$ be of $n \times p$ size, where $n$ is the number of samples and $p$ is the number of variables. Let us assume that it is centred, i.e. column means have been subtracted and are now equal to zero.

Then the $p \times p$ covariance matrix $C$ is given by $C = X^\top X/(n-1)$. It is a symmetric matrix and so it can be diagonalized: $$C = VLV^\top,$$ where $V$ is a matrix of eigenvectors (each column is an eigenvector) and $L$ is a diagonal matrix with eigenvalues $\lambda_i$ in decreasing order on the diagonal. The eigenvectors are called principal axes or principal directions of the data. Projections of the data on the principal axes are called principal components, also known as PC scores; these can be seen as new, transformed variables. The $j$-th principal component is given by the $j$-th column of $XV$. The coordinates of the $i$-th data point in the new PC space are given by the $i$-th row of $XV$.
SVD:
If we now perform the singular value decomposition of $X$, we obtain a decomposition $$X = USV^\top,$$ where $U$ is a unitary matrix and $S$ is the diagonal matrix of singular values $s_i$. From here one can easily see that $$C = \frac{VSU^\top USV^\top}{n-1} = V\frac{S^2}{n-1}V^\top,$$ meaning that right singular vectors $V$ are principal directions and that singular values are related to the eigenvalues of the covariance matrix via $\lambda_i = s_i^2/(n-1)$. Principal components are given by $XV = USV^\top V = US$.
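A minimal NumPy sketch of the quoted recipe (my own code, not part of the quoted answer; the names $X$, $C$, $V$, $U$, $S$ follow the quotation), checking both routes and the relation $\lambda_i = s_i^2/(n-1)$:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 5))
X -= X.mean(axis=0)                  # centre: column means are now zero
n = X.shape[0]

# PCA route: diagonalize the covariance matrix C = V L V^T
C = X.T @ X / (n - 1)
L_vals, V = np.linalg.eigh(C)
order = np.argsort(L_vals)[::-1]     # eigenvalues in decreasing order
L_vals, V = L_vals[order], V[:, order]

# SVD route: X = U S V^T
U, s, Vt = np.linalg.svd(X, full_matrices=False)

print(np.allclose(L_vals, s**2 / (n - 1)))        # lambda_i = s_i^2 / (n - 1)
print(np.allclose(np.abs(V), np.abs(Vt.T)))       # right singular vectors = principal directions
print(np.allclose(np.abs(X @ V), np.abs(U * s)))  # principal components: XV = US (up to sign)
```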
Further links
- What is the intuitive relationship between SVD and PCA -- a very popular and very similar thread on math.SE.
- Why PCA of data by means of SVD of the data? -- a discussion of the benefits of performing PCA via SVD [short answer: numerical stability].
- PCA and Correspondence analysis in their relation to Biplot -- PCA in the context of some congeneric techniques, all based on SVD.
- Is there any advantage of SVD over PCA? -- a question asking if there are any benefits in using SVD instead of PCA [short answer: ill-posed question].
- Making sense of principal component analysis, eigenvectors & eigenvalues -- my answer giving a non-technical explanation of PCA.