The provenance assignment success of the DArTcap markers was tested with assignPOP v. 1.2.2 (Chen, Marschall, et al., 2018) and rubias v. 0.3.2 (Anderson et al., 2008; Moran & Anderson, 2019). Assignment accuracy was assessed in assignPOP using both the Monte‐Carlo and K‐fold cross‐validation procedures, testing the assignment of a hold‐out data set over 1,000 iterations. To test the power of the markers, we trained the assignment model on subsets of loci with the highest FST values (the top 5%, 10%, 50%, and 100% of all loci). Similarly, the assignment accuracy of simulated mixed groups, generated from the reference data set by leave‐one‐out cross‐validation, was evaluated with rubias (Anderson et al., 2008). The known simulated proportions for each reporting unit were compared with the proportions estimated by rubias. Populations with a sample size of one (i.e., Sierra Leone, eastern Atlantic) were excluded from these analyses. Finally, we examined the minimum number of informative markers needed to assign provenance by subsampling 5–500 markers, ranked by the loading contributions of each principal component from the DAPC analysis, and testing the assignment accuracy with rubias.
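The marker-power test above (rank loci by FST, keep the top fraction, score hold-out assignment by Monte-Carlo cross-validation) can be pictured with a minimal stand-in. The sketch below is not assignPOP or rubias: it simulates genotypes for two toy populations, computes a simple per-locus FST, and scores a naive nearest-centroid classifier on random hold-out splits. All data, thresholds, and function names are illustrative assumptions, chosen only to mirror the structure of the analysis.

```python
import random

def simulate_genotypes(n_ind, freqs):
    # Genotypes coded as 0/1/2 minor-allele counts drawn from per-locus frequencies.
    return [[sum(random.random() < f for _ in range(2)) for f in freqs]
            for _ in range(n_ind)]

def per_locus_fst(pops):
    # Simple Fst = (Ht - Hs) / Ht from allele frequencies in each population.
    n_loci = len(pops[0][0])
    fst = []
    for l in range(n_loci):
        p_sub = [sum(ind[l] for ind in pop) / (2 * len(pop)) for pop in pops]
        p_tot = sum(p_sub) / len(p_sub)
        ht = 2 * p_tot * (1 - p_tot)
        hs = sum(2 * p * (1 - p) for p in p_sub) / len(p_sub)
        fst.append(0.0 if ht == 0 else (ht - hs) / ht)
    return fst

def montecarlo_accuracy(pops, loci, n_iter=50, train_frac=0.7):
    # Repeated random train/hold-out splits; assignment by nearest centroid
    # over the selected loci (a toy substitute for assignPOP's classifiers).
    correct = total = 0
    for _ in range(n_iter):
        train, test = [], []
        for k, pop in enumerate(pops):
            idx = list(range(len(pop)))
            random.shuffle(idx)
            cut = int(train_frac * len(pop))
            train.append([pop[i] for i in idx[:cut]])
            test += [(k, pop[i]) for i in idx[cut:]]
        cent = [[sum(ind[l] for ind in tr) / len(tr) for l in loci] for tr in train]
        for true_k, ind in test:
            d = [sum((ind[loci[j]] - c[j]) ** 2 for j in range(len(loci)))
                 for c in cent]
            correct += d.index(min(d)) == true_k
            total += 1
    return correct / total

random.seed(1)
n_loci = 200
# Two populations sharing most allele frequencies; the first 20 loci diverge.
base = [random.uniform(0.2, 0.8) for _ in range(n_loci)]
freq_a = base[:]
freq_b = [min(0.95, f + 0.4) if i < 20 else f for i, f in enumerate(base)]
pop_a = simulate_genotypes(60, freq_a)
pop_b = simulate_genotypes(60, freq_b)

fst = per_locus_fst([pop_a, pop_b])
ranked = sorted(range(n_loci), key=lambda l: fst[l], reverse=True)
for pct in (5, 10, 50, 100):
    k = max(1, n_loci * pct // 100)
    acc = montecarlo_accuracy([pop_a, pop_b], ranked[:k])
    print(f"top {pct:3d}% of loci by Fst: assignment accuracy {acc:.2f}")
```

In the real analysis this role is played by assignPOP's FST-based training-locus option and its Monte-Carlo and K-fold routines over 1,000 iterations; the sketch only shows why accuracy can stay high when training is restricted to the most differentiated loci.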