To determine the questions and methods folks have been interested in, we searched the Web of Science for capture-recapture papers, and found more than 5000 relevant papers published over the 2000-2025 period.
To make sense of this big corpus, we carried out bibliometric and textual analyses in the spirit of Nakagawa et al. (2018). Explanations, along with the code and results, are in the section Quantitative analyses: Bibliometric and textual analyses. We also inspected a sample of methodological and ecological papers; see the section Qualitative analyses: Making sense of the corpus of scientific papers on capture-recapture.
To carry out a bibliometric analysis of the capture-recapture literature over the 2000-2025 period, we used the R package bibliometrix. We also carried out a text analysis using topic modelling, for which we recommend the book Text Mining with R.
To collect the data, we used the following settings:
We load the packages we need:
library(bibliometrix) # bibliometric analyses
library(quanteda)     # textual data analyses
library(tidyverse)    # data manipulation and visualisation
library(tidytext)     # text handling
library(topicmodels)  # topic modelling
Let us read in and format the data:
# Loading txt or bib files into R environment
D <- c("data/2000-2025/savedrecs.txt",
       "data/2000-2025/savedrecs(1).txt",
       "data/2000-2025/savedrecs(2).txt",
       "data/2000-2025/savedrecs(3).txt",
       "data/2000-2025/savedrecs(4).txt",
       "data/2000-2025/savedrecs(5).txt",
       "data/2000-2025/savedrecs(6).txt",
       "data/2000-2025/savedrecs(7).txt",
       "data/2000-2025/savedrecs(8).txt",
       "data/2000-2025/savedrecs(9).txt",
       "data/2000-2025/savedrecs(10).txt")
# Converting the loaded files into an R bibliographic dataframe
# (takes a minute or two)
M <- convert2df(D, dbsource = "wos", format = "plaintext")
##
## Converting your wos collection into a bibliographic dataframe
##
##
## Warning:
## In your file, some mandatory metadata are missing. Bibliometrix functions may not work properly!
##
## Please, take a look at the vignettes:
## - 'Data Importing and Converting' (https://www.bibliometrix.org/vignettes/Data-Importing-and-Converting.html)
## - 'A brief introduction to bibliometrix' (https://www.bibliometrix.org/vignettes/Introduction_to_bibliometrix.html)
##
##
## Missing fields: CR
## Done!
##
##
## Generating affiliation field tag AU_UN from C1: Done!
We ended up with 10947 articles. Note that WoS only allows 1000 items to be exported at once; we therefore had to repeat the same operation 10 times (11 exports in total, hence the 11 files above).
We export the data back as a csv file for further inspection:
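The export chunk itself is not shown; here is a minimal sketch, assuming the target is the crdat.csv file we read back later in this document (the exact file name and any downstream column renaming are assumptions):
dat <- as_tibble(M)
# Write the bibliographic dataframe to csv
# (file name assumed; crdat.csv is read back later in this document)
write_csv(dat, "data/2000-2025/crdat.csv")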
WoS provides the user with a number of summary graphs; let's have a look.
Research areas:
Number of publications per year:
Countries of the first author:
Journals:
Most productive authors:
SDGs:
We also want to produce our own descriptive statistics. Let's have a look at the data in R.
Number of papers per journal:
dat <- as_tibble(M)
dat %>%
  group_by(SO) %>%
  count() %>%
  filter(n > 50) %>%
  ggplot(aes(n, reorder(SO, n))) +
  geom_col() +
  labs(title = "Nb of papers per journal", x = "", y = "")
Most common words in titles:
wordft <- dat %>%
  mutate(line = row_number()) %>%
  filter(nchar(TI) > 0) %>%
  unnest_tokens(word, TI) %>%
  anti_join(stop_words)
wordft %>%
  count(word, sort = TRUE)
wordft %>%
  count(word, sort = TRUE) %>%
  filter(n > 200) %>%
  mutate(word = reorder(word, n)) %>%
  ggplot(aes(n, word)) +
  geom_col() +
  labs(title = "Most common words in titles", x = "", y = "")
Most common words in abstracts:
wordab <- dat %>%
  mutate(line = row_number()) %>%
  filter(nchar(AB) > 0) %>%
  unnest_tokens(word, AB) %>%
  anti_join(stop_words)
wordab %>%
  count(word, sort = TRUE)
wordab %>%
  count(word, sort = TRUE) %>%
  filter(n > 1500) %>%
  mutate(word = reorder(word, n)) %>%
  ggplot(aes(n, word)) +
  geom_col() +
  labs(title = "Most common words in abstracts", x = "", y = "")
Now we turn to a more detailed analysis of the published articles.
First, we calculate the main bibliometric measures:
results <- biblioAnalysis(M, sep = ";")
options(width = 100)
S <- summary(object = results, k = 10, pause = FALSE)
##
##
## MAIN INFORMATION ABOUT DATA
##
## Timespan 2009 : 2019
## Sources (Journals, Books, etc) 808
## Documents 5022
## Annual Growth Rate % -3.7
## Document Average Age 8.98
## Average citations per doc 11.54
## Average citations per year per doc 1.014
## References 134769
##
## DOCUMENT TYPES
## article 4940
## article; book chapter 6
## article; early access 7
## article; proceedings paper 69
##
## DOCUMENT CONTENTS
## Keywords Plus (ID) 10861
## Author's Keywords (DE) 11088
##
## AUTHORS
## Authors 15128
## Author Appearances 23004
## Authors of single-authored docs 174
##
## AUTHORS COLLABORATION
## Single-authored docs 201
## Documents per Author 0.332
## Co-Authors per Doc 4.58
## International co-authorships % 33.43
##
##
## Annual Scientific Production
##
## Year Articles
## 2009 369
## 2010 401
## 2011 456
## 2012 492
## 2013 486
## 2014 526
## 2015 497
## 2016 496
## 2017 527
## 2018 512
## 2019 253
##
## Annual Percentage Growth Rate -3.7
##
##
## Most Productive Authors
##
## Authors Articles Authors Articles Fractionalized
## 1 GIMENEZ O 82 GIMENEZ O 17.95
## 2 PRADEL R 65 ROYLE JA 15.92
## 3 ROYLE JA 59 PRADEL R 12.82
## 4 CHOQUET R 44 BOHNING D 10.78
## 5 BARBRAUD C 40 CHOQUET R 9.91
## 6 BESNARD A 38 BARBRAUD C 9.12
## 7 TAVECCHIA G 34 WHITE GC 7.84
## 8 ORO D 32 SCHAUB M 7.78
## 9 NICHOLS JD 31 KING R 7.69
## 10 SCHAUB M 29 BESNARD A 7.51
##
##
## Top manuscripts per citations
##
## Paper DOI TC TCperYear NTC
## 1 CHOQUET R, 2009, ECOGRAPHY 10.1111/j.1600-0587.2009.05968.x 414 27.6 15.08
## 2 WHITEHEAD H, 2009, BEHAV ECOL SOCIOBIOL 10.1007/s00265-008-0697-y 350 23.3 12.75
## 3 LUIKART G, 2010, CONSERV GENET 10.1007/s10592-010-0050-7 289 20.6 13.89
## 4 GLANVILLE J, 2009, P NATL ACAD SCI USA 10.1073/pnas.0909775106 251 16.7 9.14
## 5 PATTERSON CC, 2012, DIABETOLOGIA 10.1007/s00125-012-2571-8 237 19.8 14.65
## 6 WALLACE BP, 2010, PLOS ONE 10.1371/journal.pone.0015465 207 14.8 9.95
## 7 GOMEZ P, 2011, SCIENCE 10.1126/science.1198767 195 15.0 10.27
## 8 MERTES PM, 2011, J ALLERGY CLIN IMMUN 10.1016/j.jaci.2011.03.003 165 12.7 8.69
## 9 ROYLE JA, 2009, ECOLOGY 10.1890/08-1481.1 158 10.5 5.76
## 10 SOMERS EC, 2014, ARTHRITIS RHEUMATOL 10.1002/art.38238 156 15.6 13.01
##
##
## Corresponding Author's Countries
##
## Country Articles Freq SCP MCP MCP_Ratio
## 1 USA 1784 0.3567 1454 330 0.185
## 2 AUSTRALIA 326 0.0652 202 124 0.380
## 3 FRANCE 318 0.0636 198 120 0.377
## 4 UNITED KINGDOM 318 0.0636 184 134 0.421
## 5 CANADA 304 0.0608 199 105 0.345
## 6 SPAIN 157 0.0314 95 62 0.395
## 7 ITALY 148 0.0296 89 59 0.399
## 8 GERMANY 146 0.0292 66 80 0.548
## 9 NEW ZEALAND 133 0.0266 74 59 0.444
## 10 BRAZIL 129 0.0258 95 34 0.264
##
##
## SCP: Single Country Publications
##
## MCP: Multiple Country Publications
##
##
## Total Citations per Country
##
## Country Total Citations Average Article Citations
## 1 USA 21915 12.28
## 2 FRANCE 4422 13.91
## 3 UNITED KINGDOM 4374 13.75
## 4 AUSTRALIA 3740 11.47
## 5 CANADA 3466 11.40
## 6 GERMANY 2003 13.72
## 7 NEW ZEALAND 1931 14.52
## 8 ITALY 1585 10.71
## 9 SWITZERLAND 1464 23.24
## 10 SPAIN 1429 9.10
##
##
## Most Relevant Sources
##
## Sources Articles
## 1 PLOS ONE 219
## 2 JOURNAL OF WILDLIFE MANAGEMENT 170
## 3 ECOLOGY AND EVOLUTION 116
## 4 ECOLOGY 101
## 5 BIOLOGICAL CONSERVATION 99
## 6 JOURNAL OF ANIMAL ECOLOGY 80
## 7 METHODS IN ECOLOGY AND EVOLUTION 77
## 8 JOURNAL OF MAMMALOGY 73
## 9 JOURNAL OF APPLIED ECOLOGY 72
## 10 NORTH AMERICAN JOURNAL OF FISHERIES MANAGEMENT 65
##
##
## Most Relevant Keywords
##
## Author Keywords (DE) Articles Keywords-Plus (ID) Articles
## 1 MARK-RECAPTURE 687 SURVIVAL 647
## 2 CAPTURE-RECAPTURE 460 CONSERVATION 525
## 3 SURVIVAL 326 CAPTURE-RECAPTURE 497
## 4 CAPTURE-MARK-RECAPTURE 246 ABUNDANCE 494
## 5 ABUNDANCE 173 POPULATION 491
## 6 POPULATION DYNAMICS 145 MARKED ANIMALS 404
## 7 DEMOGRAPHY 140 SIZE 371
## 8 DISPERSAL 138 POPULATIONS 339
## 9 CONSERVATION 131 MARK-RECAPTURE 328
## 10 POPULATION SIZE 125 DYNAMICS 302
Visualize:
The 100 most frequently cited manuscripts:
The most frequently cited first authors:
Top authors' productivity over time:
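The corresponding code chunks are not echoed above. Here is a sketch of one way to obtain these figures with standard bibliometrix helpers; the k values are our choice, and the citation counts assume the cited-references field (CR) is available, which the import warning above suggests may be incomplete:
# Summary plots for the main bibliometric measures
plot(x = results, k = 10, pause = FALSE)
# Most frequently cited manuscripts (relies on the CR field)
CR <- citations(M, field = "article", sep = ";")
cbind(CR$Cited[1:100])
# Most frequently cited first authors
CRa <- citations(M, field = "author", sep = ";")
cbind(CRa$Cited[1:10])
# Top authors' productivity over time
authorProdOverTime(M, k = 10, graph = TRUE)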
Below is an author collaboration network: nodes represent the top 30 authors in terms of the number of papers authored in our dataset, and links are co-authorships. The Louvain algorithm is used throughout for clustering:
M <- metaTagExtraction(M, Field = "AU_CO", sep = ";")
NetMatrix <- biblioNetwork(M, analysis = "collaboration", network = "authors", sep = ";")
net <- networkPlot(NetMatrix, n = 30, Title = "Collaboration network", type = "fruchterman",
                   size = TRUE, remove.multiple = FALSE, labelsize = 0.7, cluster = "louvain")
Country collaborations:
NetMatrix <- biblioNetwork(M, analysis = "collaboration", network = "countries", sep = ";")
net <- networkPlot(NetMatrix, n = 20, Title = "Country collaborations", type = "fruchterman",
                   size = TRUE, remove.multiple = FALSE, labelsize = 0.7, cluster = "louvain")
A keyword co-occurrences network:
NetMatrix <- biblioNetwork(M, analysis = "co-occurrences", network = "keywords", sep = ";")
netstat <- networkStat(NetMatrix)
summary(netstat, k = 10)
##
##
## Main statistics about the network
##
## Size 10867
## Density 0.002
## Transitivity 0.08
## Diameter 6
## Degree Centralization 0.192
## Average path length 2.772
##
net <- networkPlot(NetMatrix, normalize = "association", weighted = TRUE, n = 50,
                   Title = "Keyword co-occurrences", type = "fruchterman", size = TRUE,
                   edgesize = 5, labelsize = 0.7)
To learn everything about textual analysis, and topic modelling in particular, we recommend reading Text Mining with R.
Clean and format the data:
# dat_backup <- dat
# dat <- dat_backup %>%
#   group_by(SO) %>%
#   filter(n() > 49) %>%
#   ungroup()
# dat <- dat_backup %>%
#   filter(PY > 2002)
wordfabs <- dat %>%
  mutate(line = row_number()) %>%
  filter(nchar(AB) > 0) %>%
  unnest_tokens(word, AB) %>%
  anti_join(stop_words) %>%
  filter(str_detect(word, "[^\\d]")) %>% # drop purely numeric tokens
  # filter(!str_detect(word, "population")) %>%
  # filter(!str_detect(word, "data")) %>%
  group_by(word) %>%
  mutate(word_total = n()) %>%
  ungroup()
desc_dtm <- wordfabs %>%
  count(line, word, sort = TRUE) %>%
  ungroup() %>%
  cast_dtm(line, word, n)
Perform the analysis (this takes several minutes):
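The fitting chunk is not echoed here (this is the step that takes several minutes). A minimal sketch with topicmodels; k = 16 topics and the seed are assumptions on our part, consistent with the filter(topic < 16) used below:
# Fit the LDA topic model to the document-term matrix
# (k and seed are assumptions, not necessarily the original values)
desc_lda <- LDA(desc_dtm, k = 16, control = list(seed = 1234))
# Per-topic word probabilities (beta) in tidy format
tidy_lda <- tidy(desc_lda)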
Visualise results:
top_terms <- tidy_lda %>%
  filter(topic < 16) %>%
  group_by(topic) %>%
  top_n(10, beta) %>%
  ungroup() %>%
  arrange(topic, -beta)
top_terms %>%
  mutate(term = reorder(term, beta)) %>%
  group_by(topic, term) %>%
  arrange(desc(beta)) %>%
  ungroup() %>%
  mutate(term = factor(paste(term, topic, sep = "__"),
                       levels = rev(paste(term, topic, sep = "__")))) %>%
  ggplot(aes(term, beta, fill = as.factor(topic))) +
  geom_col(show.legend = FALSE) +
  coord_flip() +
  scale_x_discrete(labels = function(x) gsub("__.+$", "", x)) +
  labs(title = "Top 10 terms in each LDA topic",
       x = NULL, y = expression(beta)) +
  facet_wrap(~ topic, ncol = 4, scales = "free")
This is quite informative! The topics can fairly easily be interpreted: 1 is about estimating fish survival; 2 is about photo-identification; 3 is about modeling and estimation in general; 4 is about disease ecology; 5 is about estimating the abundance of marine mammals; 6 is about capture-recapture in (human) health sciences; 7 is about the conservation of large carnivores (tigers, leopards); 8 is about growth and recruitment; 9 is about prevalence estimation in humans; 10 is about the estimation of individual growth in fish; 11 is (not a surprise) about birds (migration and reproduction); and 12 is about habitat perturbations.
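One way to double-check these interpretations is to look at which topic dominates each abstract, via the per-document topic probabilities; a sketch, assuming the desc_lda object from the fitting step above:
# Per-document topic probabilities (gamma)
lda_gamma <- tidy(desc_lda, matrix = "gamma")
# Most likely topic for each document, then topic sizes
lda_gamma %>%
  group_by(document) %>%
  slice_max(gamma, n = 1) %>%
  ungroup() %>%
  count(topic, sort = TRUE)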
Our objective was to make a list of the ecological questions and methods that were addressed in these papers. The bibliometric and text analyses above were useful, but we needed to dig a bit deeper to achieve this objective. Here is how we did it.
First, we isolated the methodological journals. To do so, we restricted the search to journals that had published more than 10 papers about capture-recapture over the study period:
raw_dat <- read_csv(file = 'data/2000-2025/crdat.csv')
raw_dat %>%
  group_by(journal) %>%
  filter(n() > 10) %>%
  ungroup() %>%
  count(journal)
By inspecting this list, we ended up with these methodological journals:
methods <- raw_dat %>%
  filter(journal %in% c('BIOMETRICS',
                        'ECOLOGICAL MODELLING',
                        'JOURNAL OF AGRICULTURAL BIOLOGICAL AND ENVIRONMENTAL STATISTICS',
                        'METHODS IN ECOLOGY AND EVOLUTION',
                        'ANNALS OF APPLIED STATISTICS',
                        'ENVIRONMENTAL AND ECOLOGICAL STATISTICS'))
methods %>%
  count(journal, sort = TRUE)
We then exported the 219 papers published in these methodological journals to a csv file:
raw_dat %>%
  filter(journal %in% c('BIOMETRICS',
                        'ECOLOGICAL MODELLING',
                        'JOURNAL OF AGRICULTURAL BIOLOGICAL AND ENVIRONMENTAL STATISTICS',
                        'METHODS IN ECOLOGY AND EVOLUTION',
                        'ANNALS OF APPLIED STATISTICS',
                        'ENVIRONMENTAL AND ECOLOGICAL STATISTICS')) %>%
  write_csv('data/2000-2025/papers_in_methodological_journals.csv')
The next step was to annotate this file to determine the methods used. R could not help here, and we had to do it by hand. We read the >200 titles and abstracts and added our tags in an extra column. The task was cumbersome but very interesting; we enjoyed seeing what our colleagues have been working on. The results are in this file.
By focusing the annotation on the methodological journals, we ignored all the methodological papers published in non-methodological journals that also welcome methods, like, among others, Ecology, Journal of Applied Ecology, Conservation Biology and PLOS ONE. We address this issue below. In brief, we scanned the corpus of ecological papers and tagged all the methodological papers we found (126 in total); we added them to the file of methodological papers, with an extra column to keep track of each paper's origin (methodological vs ecological corpus).
Second, we isolated the ecological journals. To do so, we restricted the search to journals that had published more than 50 papers about capture-recapture over the study period, and we excluded the methodological journals:
ecol <- raw_dat %>%
  filter(!journal %in% c('BIOMETRICS',
                         'ECOLOGICAL MODELLING',
                         'JOURNAL OF AGRICULTURAL BIOLOGICAL AND ENVIRONMENTAL STATISTICS',
                         'METHODS IN ECOLOGY AND EVOLUTION',
                         'ANNALS OF APPLIED STATISTICS',
                         'ENVIRONMENTAL AND ECOLOGICAL STATISTICS')) %>%
  group_by(journal) %>%
  filter(n() > 50) %>%
  ungroup()
ecol %>%
  count(journal, sort = TRUE)
## [1] 4903
Again, we inspected the papers one by one, mainly focusing our reading on the titles and abstracts. We did not annotate these papers.
This work initially started as a talk we gave at the Wildlife Research and Conservation 2019 conference in Berlin. The slides can be downloaded here. There is also a video recording of the talk there, as well as a Twitter thread summarising it. We also presented a poster at the Euring 2021 conference, see here.
R version used
## R version 4.5.0 (2025-04-11)
## Platform: aarch64-apple-darwin20
## Running under: macOS Monterey 12.7.6
##
## Matrix products: default
## BLAS: /Library/Frameworks/R.framework/Versions/4.5-arm64/Resources/lib/libRblas.0.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/4.5-arm64/Resources/lib/libRlapack.dylib; LAPACK version 3.12.1
##
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
##
## time zone: Europe/Athens
## tzcode source: internal
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] topicmodels_0.2-17 tidytext_0.4.3 quanteda_4.3.1 bibliometrix_5.2.1
## [5] lubridate_1.9.4 forcats_1.0.0 stringr_1.6.0 dplyr_1.1.4
## [9] purrr_1.1.0 readr_2.1.5 tidyr_1.3.1 tibble_3.3.0
## [13] ggplot2_4.0.0 tidyverse_2.0.0
##
## loaded via a namespace (and not attached):
## [1] httr2_1.2.1 readxl_1.4.5 rlang_1.1.6
## [4] magrittr_2.0.4 compiler_4.5.0 systemfonts_1.3.1
## [7] reshape2_1.4.4 openalexR_2.0.2 vctrs_0.6.5
## [10] crayon_1.5.3 pkgconfig_2.0.3 fastmap_1.2.0
## [13] labeling_0.4.3 ca_0.71.1 promises_1.3.2
## [16] rmarkdown_2.29 tzdb_0.5.0 ragg_1.4.0
## [19] bit_4.6.0 xfun_0.52 modeltools_0.2-24
## [22] cachem_1.1.0 jsonlite_2.0.0 SnowballC_0.7.1
## [25] later_1.4.2 parallel_4.5.0 stopwords_2.3
## [28] R6_2.6.1 bslib_0.9.0 stringi_1.8.7
## [31] RColorBrewer_1.1-3 jquerylib_0.1.4 cellranger_1.1.0
## [34] Rcpp_1.1.0 knitr_1.50 base64enc_0.1-3
## [37] httpuv_1.6.16 rentrez_1.2.4 Matrix_1.7-3
## [40] igraph_2.1.4 timechange_0.3.0 tidyselect_1.2.1
## [43] rstudioapi_0.17.1 stringdist_0.9.15 pubmedR_0.0.3
## [46] yaml_2.3.10 codetools_0.2-20 qpdf_1.3.5
## [49] lattice_0.22-6 plyr_1.8.9 shiny_1.10.0
## [52] withr_3.0.2 S7_0.2.0 askpass_1.2.1
## [55] evaluate_1.0.3 zip_2.3.3 xml2_1.3.8
## [58] pillar_1.11.1 shinycssloaders_1.1.0 janeaustenr_1.0.0
## [61] DT_0.33 stats4_4.5.0 NLP_0.3-2
## [64] plotly_4.10.4 generics_0.1.4 vroom_1.6.6
## [67] hms_1.1.3 scales_1.4.0 xtable_1.8-4
## [70] contentanalysis_0.2.1 glue_1.8.0 slam_0.1-55
## [73] lazyeval_0.2.2 tools_4.5.0 tm_0.7-16
## [76] data.table_1.17.2 tokenizers_0.3.0 openxlsx_4.2.8.1
## [79] pdftools_3.6.0 visNetwork_2.1.4 XML_3.99-0.18
## [82] fastmatch_1.1-6 grid_4.5.0 rscopus_0.9.0
## [85] dimensionsR_0.0.3 bibliometrixData_0.3.0 cli_3.6.5
## [88] rappdirs_0.3.3 textshaping_1.0.1 viridisLite_0.4.2
## [91] gtable_0.3.6 sass_0.4.10 digest_0.6.37
## [94] ggrepel_0.9.6 htmlwidgets_1.6.4 farver_2.1.2
## [97] htmltools_0.5.8.1 lifecycle_1.0.4 httr_1.4.7
## [100] mime_0.13 bit64_4.6.0-1