We estimate the abundance of the Pyrenean brown bear population using data from France and Spain and robust-design capture-recapture models. We use data collected from 2008 to 2020, restricted to the period May to September each year.
A nice introduction to robust design models can be found here.
Here, we adopt a frequentist approach to explore effects on survival and detection, as well as the temporary emigration structure. Analyses were performed with the R package RMark (Laake 2013), which allows calling program MARK from R.
Load a few packages we will need.
library(tidyverse)
theme_set(theme_light(base_size=16))
library(lubridate)
library(zoo)
library(RMark)
library(R2jags)
Read in the data and inspect them:
rawdata <- read_csv2("dat/databaseCMR2021.csv", col_names = FALSE)
rawdata
The columns have no names, so let's add some. We start with the obvious ones:
dataprocessed <- rawdata %>%
  rename(id = 'X1',
         sex = 'X2',
         birth_year = 'X3',
         first_capture = 'X4',
         age_first_capture = 'X5')
dataprocessed
Now we need the dates to name the remaining columns:
dates <- seq(from = as.Date("2008/5/1"), to = as.Date("2020/9/1"), by = "month") %>%
  enframe(name = NULL) %>% # make it a tibble
  rename(date = value) %>% # rename column
  mutate(month = month(date),
         year = year(date)) %>% # select month and year
  filter(month %in% c(5,6,7,8,9)) # filter May -> September
dates
Rename the columns, at last:
dataprocessed <- dataprocessed %>%
  rename_at(vars(starts_with('X')), ~paste0(dates$month, '/', dates$year)) %>%
  mutate(id = as_factor(id),
         sex = as_factor(sex))
dataprocessed
First, we tidy the data:
tidydata <- dataprocessed %>%
  pivot_longer(-c(id, sex, birth_year, first_capture, age_first_capture), names_to = 'date', values_to = '(non-)detections')
tidydata
Now we may compute the number of detections per occasion:
tidydata %>%
  mutate(date = as.Date(as.yearmon(date, "%m/%Y")),
         month = month(date),
         year = year(date)) %>%
  group_by(year, month) %>%
  summarise(emr = sum(`(non-)detections`)) %>%
  ggplot() +
  geom_col(aes(x = as_factor(year), y = emr, fill = as_factor(month))) +
  scale_fill_viridis_d(name = 'month') +
  labs(x = 'year', y = "number of detections") +
  coord_flip() +
  theme(legend.position = "bottom")
How many individuals have been identified:
tidydata %>%
  pivot_wider(names_from = date, values_from = '(non-)detections') %>%
  pull(id) %>%
  length()
## [1] 98
How many males, females and individuals of unknown sex do we have:
tidydata %>%
  pivot_wider(names_from = date, values_from = '(non-)detections') %>%
  count(sex)
To format the data for capture-recapture analyses, we first double-check that all individuals have at least one detection:
dataprocessed <- dataprocessed %>%
  mutate(sum = rowSums(across(contains("20"))), .before = id) %>%
  filter(sum > 0)
We need to paste the columns of (non-)detections together into encounter histories, which can be achieved as follows:
ch <- dataprocessed %>%
  select(-c(sum, id, sex, birth_year, first_capture, age_first_capture)) %>%
  unite(col = 'ch', sep = '')
ch
We now define the structure of the robust design. We have 13 primary occasions (the years, from 2008 to 2020) and 5 secondary occasions (from May to September). From the help file (see ?robust), we read that ‘The 0 time intervals represent the secondary sessions in which the population is assumed to be closed. The non-zero values are the time intervals between the primary occasions.’. We therefore write:
time.intervals <- c(0,0,0,0,1, # 2008
                    0,0,0,0,1, # 2009
                    0,0,0,0,1, # 2010
                    0,0,0,0,1, # 2011
                    0,0,0,0,1, # 2012
                    0,0,0,0,1, # 2013
                    0,0,0,0,1, # 2014
                    0,0,0,0,1, # 2015
                    0,0,0,0,1, # 2016
                    0,0,0,0,1, # 2017
                    0,0,0,0,1, # 2018
                    0,0,0,0,1, # 2019
                    0,0,0,0)   # 2020
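As a quick sanity check (a small sketch, not part of the original analysis), the vector should contain \(13 \times 5 - 1 = 64\) intervals, with 12 non-zero entries separating the 13 primary occasions:
length(time.intervals) # should be 64, one fewer than the 65 occasions
sum(time.intervals > 0) # should be 12, the number of gaps between primary occasions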
Because we want age effects on survival, we need to create an age variable, bearing in mind that we know age at first capture. Cubs (< 2 years old) are coded 1, subadults (2 or 3 years old) are coded 2, and adults (> 3 years old) are coded 3:
ageclass <- dataprocessed %>%
  mutate(aged = case_when(
    age_first_capture < 2 ~ '1',
    age_first_capture == 2 | age_first_capture == 3 ~ '2',
    age_first_capture > 3 ~ '3')) %>%
  pull(aged) %>%
  as_factor() %>%
  fct_inseq() # reorder factor by numeric value of level
ageclass
## [1] 1 1 3 3 1 1 1 1 2 1 2 1 3 1 1 1 3 3 3 3 1 3 1 1 1 1 1 1 1 1 3 1 3 1 1 2 3 1
## [39] 1 1 1 1 1 1 1 1 1 1 1 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 1 1
## [77] 1 3 3 1 1 1 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## Levels: 1 2 3
For the individuals named Balou, Bercero2013, Cachou, Gribouille, Melloux, New20_01, S28Slo3, ourson_Caramellita_2020_2 and ourson_Caramellita_2020_3, we know the date of death; let's use this information and right-censor them. First, locate them:
dataprocessed %>%
  mutate(id = as.character(id)) %>%
  pull(id) -> id
mask <- which(id %in% c("Balou",
                        "Bercero2013",
                        "Cachou",
                        "Gribouille",
                        "Melloux",
                        "New20_01",
                        "S28Slo3",
                        "ourson_Caramellita_2020_2",
                        "ourson_Caramellita_2020_3"))
freq <- rep(1, nrow(dataprocessed))
freq[mask] <- -1
Now put together the encounter histories and age class:
brownbear <- data.frame(ch = ch,
                        freq = freq,
                        age = ageclass)
brownbear
Process the data, specifying the time intervals and the age structure, for both the standard robust design and the robust design with heterogeneity:
# standard RD
bear.process <- process.data(brownbear,
                             model = "Robust", # standard robust design
                             time.intervals = time.intervals, # primary/secondary occasions
                             groups = "age",
                             initial.age = c(0,2,4), # specifies age at first capture for each age class
                             begin.time = 2008)
# RD with heterogeneity
bear.process.mix <- process.data(brownbear,
                                 model = "RDHet", # robust design with heterogeneity
                                 time.intervals = time.intervals, # primary/secondary occasions
                                 groups = "age",
                                 initial.age = c(0,2,4), # specifies age at first capture for each age class
                                 begin.time = 2008)
Create the design data:
bear.ddl <- make.design.data(bear.process)
bear.ddl.mix <- make.design.data(bear.process.mix)
Create a binned age variable in the design data (see, e.g., here):
bear.ddl <- add.design.data(bear.process,
                            bear.ddl,
                            parameter = "S",
                            type = "age",
                            bins = c(0, 2, 4, 21),
                            name = "ageclass",
                            right = FALSE, # to get an interval closed on the left and open on the right
                            replace = TRUE)
# compare bear.ddl$S$age with bear.ddl$S$ageclass
bear.ddl.mix <- add.design.data(bear.process.mix,
                                bear.ddl.mix,
                                parameter = "S",
                                type = "age",
                                bins = c(0, 2, 4, 20),
                                name = "ageclass",
                                right = FALSE, # to get an interval closed on the left and open on the right
                                replace = TRUE)
Last, we specify the structure of the parameters: survival is either constant or age-dependent; detection is constant, time-dependent (with variation either between or within primary occasions) or heterogeneous; and temporary emigration is Markovian, random or absent:
S <- list(formula=~1) # survival is constant
S.age <- list(formula=~ageclass) # survival is age-dependent (3 age classes)
p <- list(formula=~1, share = TRUE) # detection is constant, share = TRUE is to force c and p to share same columns
p.time <- list(formula=~time, share = TRUE) # detection is time-varying where time is the occasions within primary occasions
p.session <- list(formula=~session, share = TRUE) # detection is time-varying where session is the primary occasion
p.mix <- list(formula=~mixture, share = TRUE) # detection is heterogeneous
GammaDoublePrimeNE <- list(formula=~1, share = TRUE, fixed = 0) # gamma' = gamma'' = 0, no emigration
GammaDoublePrimeMK <- list(formula=~1) # Markovian emigration
GammaPrimeMK <- list(formula=~1) # Markovian emigration
GammaDoublePrimeAL <- list(formula=~1, share = TRUE) # gamma' = gamma'', random emigration
Before starting the analyses, it doesn't hurt to have a look at the relevant help file:
?robust
Now let’s fit a bunch of models.
modelSp_noemig <- mark(bear.process,
                       bear.ddl,
                       model.parameters = list(S = S,
                                               GammaDoublePrime = GammaDoublePrimeNE,
                                               p = p),
                       threads = 2,
                       output = FALSE)
modelSpt_noemig <- mark(bear.process,
                        bear.ddl,
                        model.parameters = list(S = S,
                                                GammaDoublePrime = GammaDoublePrimeNE,
                                                p = p.time),
                        threads = 2,
                        output = FALSE)
modelSpprimary_noemig <- mark(bear.process,
                              bear.ddl,
                              model.parameters = list(S = S,
                                                      GammaDoublePrime = GammaDoublePrimeNE,
                                                      p = p.session),
                              threads = 2,
                              output = FALSE)
modelSph_noemig <- mark(bear.process.mix,
                        bear.ddl.mix,
                        model.parameters = list(S = S,
                                                GammaDoublePrime = GammaDoublePrimeNE,
                                                p = p.mix),
                        threads = 2,
                        output = FALSE)
modelSap_noemig <- mark(bear.process,
                        bear.ddl,
                        model.parameters = list(S = S.age,
                                                GammaDoublePrime = GammaDoublePrimeNE,
                                                p = p),
                        threads = 2,
                        output = FALSE)
modelSapt_noemig <- mark(bear.process,
                         bear.ddl,
                         model.parameters = list(S = S.age,
                                                 GammaDoublePrime = GammaDoublePrimeNE,
                                                 p = p.time),
                         threads = 2,
                         output = FALSE)
modelSapprimary_noemig <- mark(bear.process,
                               bear.ddl,
                               model.parameters = list(S = S.age,
                                                       GammaDoublePrime = GammaDoublePrimeNE,
                                                       p = p.session),
                               threads = 2,
                               output = FALSE)
modelSaph_noemig <- mark(bear.process.mix,
                         bear.ddl.mix,
                         model.parameters = list(S = S.age,
                                                 GammaDoublePrime = GammaDoublePrimeNE,
                                                 p = p.mix),
                         threads = 2,
                         output = FALSE)
modelSp_mkemig <- mark(bear.process,
                       bear.ddl,
                       model.parameters = list(S = S,
                                               GammaPrime = GammaPrimeMK,
                                               GammaDoublePrime = GammaDoublePrimeMK,
                                               p = p),
                       threads = 2,
                       output = FALSE)
modelSpt_mkemig <- mark(bear.process,
                        bear.ddl,
                        model.parameters = list(S = S,
                                                GammaPrime = GammaPrimeMK,
                                                GammaDoublePrime = GammaDoublePrimeMK,
                                                p = p.time),
                        threads = 2,
                        output = FALSE)
modelSpprimary_mkemig <- mark(bear.process,
                              bear.ddl,
                              model.parameters = list(S = S,
                                                      GammaPrime = GammaPrimeMK,
                                                      GammaDoublePrime = GammaDoublePrimeMK,
                                                      p = p.session),
                              threads = 2,
                              output = FALSE)
modelSph_mkemig <- mark(bear.process.mix,
                        bear.ddl.mix,
                        model.parameters = list(S = S,
                                                GammaPrime = GammaPrimeMK,
                                                GammaDoublePrime = GammaDoublePrimeMK,
                                                p = p.mix),
                        threads = 2,
                        output = FALSE)
modelSap_mkemig <- mark(bear.process,
                        bear.ddl,
                        model.parameters = list(S = S.age,
                                                GammaPrime = GammaPrimeMK,
                                                GammaDoublePrime = GammaDoublePrimeMK,
                                                p = p),
                        threads = 2,
                        output = FALSE)
modelSapt_mkemig <- mark(bear.process,
                         bear.ddl,
                         model.parameters = list(S = S.age,
                                                 GammaPrime = GammaPrimeMK,
                                                 GammaDoublePrime = GammaDoublePrimeMK,
                                                 p = p.time),
                         threads = 2,
                         output = FALSE)
modelSapprimary_mkemig <- mark(bear.process,
                               bear.ddl,
                               model.parameters = list(S = S.age,
                                                       GammaPrime = GammaPrimeMK,
                                                       GammaDoublePrime = GammaDoublePrimeMK,
                                                       p = p.session),
                               threads = 2,
                               output = FALSE)
modelSaph_mkemig <- mark(bear.process.mix,
                         bear.ddl.mix,
                         model.parameters = list(S = S.age,
                                                 GammaPrime = GammaPrimeMK,
                                                 GammaDoublePrime = GammaDoublePrimeMK,
                                                 p = p.mix),
                         threads = 2,
                         output = FALSE)
modelSp_rdemig <- mark(bear.process,
                       bear.ddl,
                       model.parameters = list(S = S,
                                               GammaDoublePrime = GammaDoublePrimeAL,
                                               p = p),
                       threads = 2,
                       output = FALSE)
modelSpt_rdemig <- mark(bear.process,
                        bear.ddl,
                        model.parameters = list(S = S,
                                                GammaDoublePrime = GammaDoublePrimeAL,
                                                p = p.time),
                        threads = 2,
                        output = FALSE)
modelSpprimary_rdemig <- mark(bear.process,
                              bear.ddl,
                              model.parameters = list(S = S,
                                                      GammaDoublePrime = GammaDoublePrimeAL,
                                                      p = p.session),
                              threads = 2,
                              output = FALSE)
modelSph_rdemig <- mark(bear.process.mix,
                        bear.ddl.mix,
                        model.parameters = list(S = S,
                                                GammaDoublePrime = GammaDoublePrimeAL,
                                                p = p.mix),
                        threads = 2,
                        output = FALSE)
modelSap_rdemig <- mark(bear.process,
                        bear.ddl,
                        model.parameters = list(S = S.age,
                                                GammaDoublePrime = GammaDoublePrimeAL,
                                                p = p),
                        threads = 2,
                        output = FALSE)
modelSapt_rdemig <- mark(bear.process,
                         bear.ddl,
                         model.parameters = list(S = S.age,
                                                 GammaDoublePrime = GammaDoublePrimeAL,
                                                 p = p.time),
                         threads = 2,
                         output = FALSE)
modelSapprimary_rdemig <- mark(bear.process,
                               bear.ddl,
                               model.parameters = list(S = S.age,
                                                       GammaDoublePrime = GammaDoublePrimeAL,
                                                       p = p.session),
                               threads = 2,
                               output = FALSE)
modelSaph_rdemig <- mark(bear.process.mix,
                         bear.ddl.mix,
                         model.parameters = list(S = S.age,
                                                 GammaDoublePrime = GammaDoublePrimeAL,
                                                 p = p.mix),
                         threads = 2,
                         output = FALSE)
The next step is to make sense of the models we fitted. We first collect them all:
name_models <- c('modelSp_noemig',
                 'modelSpt_noemig',
                 'modelSpprimary_noemig',
                 'modelSph_noemig',
                 'modelSap_noemig',
                 'modelSapt_noemig',
                 'modelSapprimary_noemig',
                 'modelSaph_noemig',
                 'modelSp_mkemig',
                 'modelSpt_mkemig',
                 'modelSpprimary_mkemig',
                 'modelSph_mkemig',
                 'modelSap_mkemig',
                 'modelSapt_mkemig',
                 'modelSapprimary_mkemig',
                 'modelSaph_mkemig',
                 'modelSp_rdemig',
                 'modelSpt_rdemig',
                 'modelSpprimary_rdemig',
                 'modelSph_rdemig',
                 'modelSap_rdemig',
                 'modelSapt_rdemig',
                 'modelSapprimary_rdemig',
                 'modelSaph_rdemig')
AICcvalues <- rep(NA, length(name_models))
AICcvalues[1] <- eval(parse(text = paste0(name_models[1], '$results$AICc')))
for (i in 2:length(name_models)){
  AICcvalues[i] <- eval(parse(text = paste0(name_models[i], '$results$AICc')))
}
ord <- order(AICcvalues)
model_table <- data.frame(model = name_models[ord], AICc = AICcvalues[ord])
model_table
We fitted 24 models in total, with 4 detection structures, 2 survival structures and 3 emigration structures. The models with age-dependent survival and heterogeneous detection are best supported by the data. There is temporary emigration, but it is difficult to distinguish between random and Markovian emigration (the difference in AICc between the two top-ranked models is less than 2 units). There is no need to carry out model averaging because the AICc values of these two models are much lower than those of the other models.
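To make this comparison explicit, we can add AICc differences and Akaike weights to the table above (a small sketch, not part of the original analysis):
model_table %>%
  mutate(deltaAICc = AICc - min(AICc), # difference to the top-ranked model
         weight = exp(-deltaAICc / 2) / sum(exp(-deltaAICc / 2))) # Akaike weights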
Let’s inspect the parameter estimates of the two best models:
modelSaph_rdemig$results$real[1:7,]
modelSaph_mkemig$results$real[1:8,]
The estimates of survival and detection probabilities are indistinguishable, which makes our life easier as we can rely on either model to get abundance estimates. More precisely, survival of cubs is around \(84\%\), survival of subadults is \(95\%\), and that of adults is \(96\%\). Regarding the observation process, we have a mixture of lowly and highly detectable individuals: a proportion \(0.72\) of individuals with detection probability \(42\%\) and a proportion \(0.28\) with detection probability \(85\%\).
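To see what these monthly detection probabilities imply at the scale of a primary occasion (a year), we can compute the pooled detection probability \(p^* = 1 - (1 - p)^5\) for each mixture class and average over the mixture; this is a rough back-of-the-envelope calculation based on the rounded estimates above:
p_low <- 1 - (1 - 0.42)^5  # pooled detection for the low-detection class, about 0.93
p_high <- 1 - (1 - 0.85)^5 # pooled detection for the high-detection class, essentially 1
0.72 * p_low + 0.28 * p_high # mixture-averaged yearly detection, roughly 0.95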
Clean up:
cleanup(ask = FALSE)
Read in the raw data again and format them:
rawdata <- read_csv2("dat/databaseCMR2021.csv", col_names = FALSE)
dataprocessed <- rawdata %>%
  rename(id = 'X1',
         sex = 'X2',
         birth_year = 'X3',
         first_capture = 'X4',
         age_first_capture = 'X5') %>%
  mutate(sum = rowSums(across(starts_with("X"))), .before = id) %>%
  filter(sum > 0) %>%
  select(-sum)
dates <- seq(from = as.Date("2008/5/1"), to = as.Date("2020/9/1"), by = "month") %>%
  enframe(name = NULL) %>% # make it a tibble
  rename(date = value) %>% # rename column
  mutate(month = month(date),
         year = year(date)) %>% # select month and year
  filter(month %in% c(5,6,7,8,9)) # filter May -> September
brownbear <- dataprocessed %>%
  rename_at(vars(starts_with('X')), ~paste0(dates$month, '/', dates$year)) %>%
  mutate(id = as_factor(id),
         sex = as_factor(sex))
n.ind <- nrow(brownbear) # number of individuals
freq <- rep(1, n.ind)
brownbear %>%
  mutate(id = as.character(id)) %>%
  pull(id) -> id
mask <- which(id %in% c("Balou",
                        "Bercero2013",
                        "Cachou",
                        "Gribouille",
                        "Melloux",
                        "New20_01",
                        "S28Slo3",
                        "ourson_Caramellita_2020_2",
                        "ourson_Caramellita_2020_3"))
We compute several quantities that we will need:
n.primary <- 13 # number of primary occasions
n.secondary <- rep(5, n.primary) # number of secondary occasions per primary occasion
index <- list(1:5,
              6:10,
              11:15,
              16:20,
              21:25,
              26:30,
              31:35,
              36:40,
              41:45,
              46:50,
              51:55,
              56:60,
              61:65) # the secondary occasions
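Equivalently, this list of secondary-occasion indices could be built programmatically; a small sketch (index2 is a hypothetical name, kept separate from the list above):
index2 <- split(seq_len(sum(n.secondary)), rep(seq_len(n.primary), times = n.secondary))
all.equal(unname(index2), index) # should be TRUE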
We calculate the number of individuals caught in each primary occasion, which we will need to get an estimate of population size:
encounter <- brownbear %>%
  select(-c(id, sex, birth_year, first_capture, age_first_capture)) %>%
  as.matrix()
caught <- rep(NA, n.primary)
for (i in 1:n.primary){
  tmp <- encounter[,index[[i]]]
  caught[i] <- nrow(tmp[rowSums(tmp)!=0,])
}
caught
## [1] 12 16 15 18 22 20 24 27 38 38 36 46 61
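The same counts can be obtained in one line (a sketch, equivalent to the loop above): the number of rows of the encounter matrix with at least one detection within each primary occasion.
sapply(index, function(cols) sum(rowSums(encounter[, cols]) > 0)) # should match 'caught'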
We format the data as an array whose dimensions are the number of individuals, the number of primary occasions and the number of secondary occasions:
obs <- array(NA, dim = c(n.ind, n.primary, max(n.secondary)))
for (i in 1:n.primary){
  obs[,i,1:n.secondary[i]] <- encounter[,index[[i]]]
}
dim(obs)
## [1] 97 13 5
Now we format the data as required in the Bayesian implementation of the robust design:
ch <- matrix(NA, n.ind, n.primary)
for (i in 1:n.ind){
  for (t in 1:n.primary){
    ifelse(any(obs[i,t,1:n.secondary[t]] == 1), ch[i,t] <- 1, ch[i,t] <- 2)
  }
}
Get first occasion of capture for each individual:
get.first <- function(x) min(which(x != 2))
first <- apply(ch, 1, get.first)
first[first == "Inf"] <- NA
Get last occasion of capture for each individual:
#ch[mask,]
last <- rep(ncol(ch), nrow(ch))
last[mask] <- c(7, 6, 12, 13, 11, 13, 9, 13, 13)
Build an age matrix to be used for the age effect on survival:
age <- matrix(NA, nrow = n.ind, ncol = n.primary - 1)
agefirst <- brownbear$age_first_capture
for (i in 1:n.ind){
  tmp <- agefirst[i]
  for (t in wrapr::seqi(first[i], n.primary - 1)){
    age[i,t] <- tmp
    tmp <- tmp + 1
  } #t
} #i
ageclass <- age
ageclass[age == 0 | age == 1] <- 1
ageclass[age == 2 | age == 3] <- 2
ageclass[age > 3] <- 3
head(ageclass)
## [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12]
## [1,] NA NA NA NA 1 1 2 2 3 3 3 3
## [2,] NA NA NA NA NA NA NA 1 1 2 2 3
## [3,] 3 3 3 3 3 3 3 3 3 3 3 3
## [4,] 3 3 3 3 3 3 3 3 3 3 3 3
## [5,] 1 2 2 3 3 3 3 3 3 3 3 3
## [6,] NA NA NA NA NA 1 1 2 2 3 3 3
total <- matrix(NA, n.ind, n.primary)
for (i in 1:n.ind){
  for (t in first[i]:n.primary){
    total[i,t] <- sum(obs[i,t,1:n.secondary[t]])
  }
}
total[is.na(total)] <- 0
avail <- array(NA, dim = c(n.ind, n.primary, max(n.secondary)))
for (i in 1:n.ind){
  for (t in first[i]:n.primary){
    for (j in 1:n.secondary[t]){
      if(total[i,t] > 1){
        avail[i,t,j] <- 1
      }
      if(total[i,t] == 1){
        avail[i,t,j] <- 1
      }
      if(total[i,t] == 1 & obs[i,t,j] == 1){
        avail[i,t,j] <- 0
        obs[i,t,j] <- 0
      }
    }
  }
}
avail[is.na(avail)] <- 0
Remove individuals first released in the last primary occasion:
cut <- which(first != n.primary)
ch <- ch[c(cut),]
avail <- avail[c(cut),,]
obs <- obs[c(cut),,]
first <- first[c(cut)]
ageclass <- ageclass[c(cut),]
last <- last[c(cut)]
We consider a capture-recapture model with robust design in which temporary emigration is random, survival is age-dependent and there is heterogeneity in the detection process (2-class finite mixture). This is the model best supported by the data, see part 1 of our analyses. We also right-censor individuals for which the date of death is known.
The code:
model <- function() {

  # priors
  for (i in 1:n.ind){
    for (t in first[i]:(n.years - 1)){
      phi[i,t] <- beta[age[i,t]] # survival
    }
  }
  for (u in 1:3){
    beta[u] ~ dunif(0, 1) # Priors for age-specific survival
  }
  gamma ~ dunif(0,1) # gamma
  mu0[1] ~ dunif(0,1)
  mu0[2] ~ dunif(0,1)
  mu[1:2] <- sort(mu0) # to handle label switching issue
  prop ~ ddirich(alpha)

  # secondary occasions p's
  for (i in 1:n.ind){
    eta[i] ~ dcat(prop[]) # indicator of whether you belong to class 1 or 2
    for (t in 1:n.years){
      for (j in 1:max(n.sec[1:n.years])){
        p[i,t,j] <- mu[eta[i]] # detection is either mu1 or mu2
      }
    }
  }

  # primary occasions p's or pooled detection probability
  for (i in 1:n.ind){
    for (t in 1:n.years){
      upstar[i,t] <- 1 - prod(1 - p[i,t,1:n.sec[t]])
    }
  }

  # averaged detection over individuals
  for (t in 1:n.years){
    pstar[t] <- mean(upstar[1:n.ind,t])
  }

  # likelihood
  for (i in 1:n.ind){
    z[i,first[i]] <- ch[i,first[i]]
    for (t in first[i]:last[i]){
      for (j in 1:n.sec[t]){
        mu3[i,t,j] <- avail[i,t,j] * p[i,t,j]
        obs[i,t,j] ~ dbern(mu3[i,t,j])
      }
    }
    for (t in (first[i]+1):last[i]){
      mu1[i,t] <- z[i,t-1] * phi[i,t-1]
      mu2[i,t] <- z[i,t] * (1 - gamma) * upstar[i,t]
      z[i,t] ~ dbern(mu1[i,t])
      ch[i,t] ~ dbern(mu2[i,t])
    }
  }
}
The data:
ch[ch == 2] <- 0 # Bernoulli likelihood
dat <- list(first = first,
            last = last,
            ch = ch,
            n.sec = n.secondary,
            n.years = ncol(ch),
            n.ind = nrow(ch),
            avail = avail,
            obs = obs,
            age = ageclass,
            alpha = c(1,1))
Then initial values, parameters to monitor, MCMC settings:
# initial values for the latent states:
z.init <- matrix(NA, nrow(ch), ncol(ch))
for (i in 1:nrow(ch)){
  if(first[i] < last[i]){
    z.init[i, (first[i] + 1):last[i]] <- 1
  }
}
inits <- function(){list(z = z.init)}
# parameters
#pars <- c('pstar','mean.p','beta','gamma','sdeps')
pars <- c('pstar','mu','beta','gamma','prop')
n.chains <- 3
n.iter <- 20000
n.burnin <- 5000
We are ready to fit the model to the data:
res_random <- jags(data = dat,
                   inits = inits,
                   parameters.to.save = pars,
                   model.file = model,
                   n.chains = n.chains,
                   n.iter = n.iter,
                   n.burnin = n.burnin)
## Compiling model graph
## Resolving undeclared variables
## Allocating nodes
## Graph information:
## Observed stochastic nodes: 3213
## Unobserved stochastic nodes: 556
## Total graph size: 18146
##
## Initializing model
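Before looking at the estimates, a quick convergence check does not hurt; a minimal sketch using the summary stored by R2jags (Rhat values close to 1 and reasonably large effective sample sizes suggest the chains have mixed):
round(res_random$BUGSoutput$summary[, c("Rhat", "n.eff")], 2)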
Posterior density distribution of the parameters:
jagsfit.mcmc <- as.mcmc(res_random)
library(lattice)
densityplot(jagsfit.mcmc)
Display the results:
summary(jagsfit.mcmc)
##
## Iterations = 5001:19986
## Thinning interval = 15
## Number of chains = 3
## Sample size per chain = 1000
##
## 1. Empirical mean and standard deviation for each variable,
## plus standard error of the mean:
##
## Mean SD Naive SE Time-series SE
## beta[1] 8.351e-01 0.03893 0.0007107 0.0007341
## beta[2] 9.425e-01 0.02906 0.0005306 0.0005056
## beta[3] 9.523e-01 0.01617 0.0002953 0.0002927
## deviance 2.399e+03 13.44356 0.2454447 0.2518437
## gamma 6.718e-02 0.02368 0.0004323 0.0004370
## mu[1] 3.654e-01 0.01958 0.0003575 0.0003510
## mu[2] 7.267e-01 0.02426 0.0004429 0.0004573
## prop[1] 7.397e-01 0.06232 0.0011378 0.0011381
## prop[2] 2.603e-01 0.06232 0.0011378 0.0011381
## pstar[1] 9.223e-01 0.01026 0.0001872 0.0002077
## pstar[10] 9.223e-01 0.01026 0.0001872 0.0002077
## pstar[11] 9.223e-01 0.01026 0.0001872 0.0002077
## pstar[12] 9.223e-01 0.01026 0.0001872 0.0002077
## pstar[13] 9.223e-01 0.01026 0.0001872 0.0002077
## pstar[2] 9.223e-01 0.01026 0.0001872 0.0002077
## pstar[3] 9.223e-01 0.01026 0.0001872 0.0002077
## pstar[4] 9.223e-01 0.01026 0.0001872 0.0002077
## pstar[5] 9.223e-01 0.01026 0.0001872 0.0002077
## pstar[6] 9.223e-01 0.01026 0.0001872 0.0002077
## pstar[7] 9.223e-01 0.01026 0.0001872 0.0002077
## pstar[8] 9.223e-01 0.01026 0.0001872 0.0002077
## pstar[9] 9.223e-01 0.01026 0.0001872 0.0002077
##
## 2. Quantiles for each variable:
##
## 2.5% 25% 50% 75% 97.5%
## beta[1] 0.7514 8.097e-01 8.389e-01 8.627e-01 0.9025
## beta[2] 0.8745 9.268e-01 9.469e-01 9.639e-01 0.9864
## beta[3] 0.9168 9.425e-01 9.538e-01 9.639e-01 0.9793
## deviance 2374.7104 2.390e+03 2.399e+03 2.408e+03 2426.5607
## gamma 0.0244 5.019e-02 6.656e-02 8.321e-02 0.1155
## mu[1] 0.3289 3.522e-01 3.646e-01 3.784e-01 0.4056
## mu[2] 0.6794 7.110e-01 7.259e-01 7.423e-01 0.7748
## prop[1] 0.6074 7.006e-01 7.422e-01 7.823e-01 0.8568
## prop[2] 0.1432 2.177e-01 2.578e-01 2.994e-01 0.3926
## pstar[1] 0.9010 9.154e-01 9.226e-01 9.294e-01 0.9416
## pstar[10] 0.9010 9.154e-01 9.226e-01 9.294e-01 0.9416
## pstar[11] 0.9010 9.154e-01 9.226e-01 9.294e-01 0.9416
## pstar[12] 0.9010 9.154e-01 9.226e-01 9.294e-01 0.9416
## pstar[13] 0.9010 9.154e-01 9.226e-01 9.294e-01 0.9416
## pstar[2] 0.9010 9.154e-01 9.226e-01 9.294e-01 0.9416
## pstar[3] 0.9010 9.154e-01 9.226e-01 9.294e-01 0.9416
## pstar[4] 0.9010 9.154e-01 9.226e-01 9.294e-01 0.9416
## pstar[5] 0.9010 9.154e-01 9.226e-01 9.294e-01 0.9416
## pstar[6] 0.9010 9.154e-01 9.226e-01 9.294e-01 0.9416
## pstar[7] 0.9010 9.154e-01 9.226e-01 9.294e-01 0.9416
## pstar[8] 0.9010 9.154e-01 9.226e-01 9.294e-01 0.9416
## pstar[9] 0.9010 9.154e-01 9.226e-01 9.294e-01 0.9416
Provide posterior means for population size, obtained by dividing the number of individuals detected in each primary occasion by the pooled detection probability, i.e. \(\hat{N}_t = n_t / p^*_t\):
Nmcmc <- matrix(NA, nrow(res_random$BUGSoutput$sims.list$pstar), n.primary)
for (i in 1:n.primary){
  Nmcmc[,i] <- caught[i] / res_random$BUGSoutput$sims.list$pstar[,i]
}
Nmean <- apply(Nmcmc, 2, mean)
N25 <- apply(Nmcmc, 2, quantile, probs = 2.5/100)
N975 <- apply(Nmcmc, 2, quantile, probs = 97.5/100)
Now compare the estimates with credible intervals to the counts with no correction for imperfect detection:
res <- data.frame(year = 2008:2020,
                  Nhat = Nmean,
                  lwr = N25,
                  upr = N975,
                  counts = caught)
res
The abundance estimates are higher than the naive counts, which is what we expect since we correct for imperfect detection and for individual heterogeneity in detection (e.g. Cubaynes et al. 2010).
Visually, we obtain:
res %>%
  ggplot(aes(year, Nhat)) +
  geom_point(color = 'firebrick', size = 2) +
  geom_line(color = 'firebrick', size = 0.5) +
  geom_ribbon(aes(ymin = lwr,
                  ymax = upr),
              alpha = 0.3) +
  scale_x_continuous(breaks = 2008:2020,
                     labels = 2008:2020) +
  scale_y_continuous(breaks = seq(0, 100, by = 5),
                     labels = seq(0, 100, by = 5)) +
  ylab("Estimated abundance") +
  xlab("Year") +
  labs(title = 'Estimated abundance of Pyrenean brown bear',
       subtitle = 'w/ a Bayesian robust design capture-recapture model')
sessionInfo()
## R version 4.1.0 (2021-05-18)
## Platform: x86_64-apple-darwin17.0 (64-bit)
## Running under: macOS Catalina 10.15.7
##
## Matrix products: default
## BLAS: /Library/Frameworks/R.framework/Versions/4.1/Resources/lib/libRblas.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/4.1/Resources/lib/libRlapack.dylib
##
## locale:
## [1] fr_FR.UTF-8/fr_FR.UTF-8/fr_FR.UTF-8/C/fr_FR.UTF-8/fr_FR.UTF-8
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] lattice_0.20-44 R2jags_0.7-1 rjags_4-10 coda_0.19-4
## [5] RMark_2.2.7 zoo_1.8-9 lubridate_1.7.10 forcats_0.5.1
## [9] stringr_1.4.0 dplyr_1.0.7 purrr_0.3.4.9000 readr_2.0.0
## [13] tidyr_1.1.3 tibble_3.1.4 ggplot2_3.3.5 tidyverse_1.3.1
##
## loaded via a namespace (and not attached):
## [1] httr_1.4.2 sass_0.4.0 bit64_4.0.5 vroom_1.5.4
## [5] jsonlite_1.7.2 splines_4.1.0 modelr_0.1.8 bslib_0.2.5.1
## [9] assertthat_0.2.1 expm_0.999-6 highr_0.9 wrapr_2.0.8
## [13] cellranger_1.1.0 yaml_2.2.1 R2WinBUGS_2.1-21 pillar_1.6.2
## [17] backports_1.2.1 glue_1.5.0 digest_0.6.28 rvest_1.0.1
## [21] colorspace_2.0-2 htmltools_0.5.2 Matrix_1.3-3 pkgconfig_2.0.3
## [25] broom_0.7.9 haven_2.4.3 mvtnorm_1.1-3 scales_1.1.1
## [29] tzdb_0.1.2 farver_2.1.0 generics_0.1.0 ellipsis_0.3.2
## [33] withr_2.4.2 cli_3.0.1 survival_3.2-11 magrittr_2.0.1
## [37] crayon_1.4.1 readxl_1.3.1 evaluate_0.14 msm_1.6.8
## [41] fs_1.5.0 fansi_0.5.0 xml2_1.3.2 tools_4.1.0
## [45] hms_1.1.0 lifecycle_1.0.0 munsell_0.5.0 reprex_2.0.1
## [49] compiler_4.1.0 jquerylib_0.1.4 rlang_0.4.12 grid_4.1.0
## [53] rstudioapi_0.13 rmarkdown_2.11 boot_1.3-28 codetools_0.2-18
## [57] gtable_0.3.0 abind_1.4-5 DBI_1.1.1 R6_2.5.1
## [61] knitr_1.36 bit_4.0.4 fastmap_1.1.0 utf8_1.2.2
## [65] stringi_1.7.5 matrixcalc_1.0-5 parallel_4.1.0 Rcpp_1.0.7
## [69] vctrs_0.3.8 dbplyr_2.1.1 tidyselect_1.1.1 xfun_0.28