Table 3 Performance (AUC) and query time (seconds) of different BERT models for both vax-skeptic and pro-vaxxer content prediction

From: Network embedding aided vaccine skepticism detection

| Model | Parent model | Original task | Published | Downloads | Vax-skeptic AUC | Pro-vax AUC | Time |
|---|---|---|---|---|---|---|---|
| Vaccinating-covid-tweets (Pak and Paroubek 2010) | Bertweet-base | Classification | 2021 | 19 K | **0.810** | 0.747 | 12.98 |
| Bert-small (Bhargava et al. 2021; Turc et al. 2019) | – | Fill-Mask | 2019 | 161 K | 0.793 | 0.743 | 3.88 |
| Covid-twitter-bert (Müller et al. 2020) | Bert-large | Fill-Mask | 2020 | 8.5 K | 0.787 | **0.753** | 34.72 |
| Bert-medium (Bhargava et al. 2021; Turc et al. 2019) | – | Fill-Mask | 2019 | 72.5 K | 0.779 | 0.732 | 6.17 |
| Bertweet-covid19-base (Nguyen et al. 2020) | Bertweet-base | Fill-Mask | 2020 | 24.9 K | 0.766 | 0.737 | 12.89 |
| Bertweet-base (Nguyen et al. 2020) | Bert-base | Fill-Mask | 2020 | 71.4 K | 0.765 | 0.742 | 13.01 |
| Bert-mini (Bhargava et al. 2021; Turc et al. 2019) | – | Fill-Mask | 2019 | 54.5 K | 0.751 | 0.701 | 2.66 |
| Bert-tiny (Bhargava et al. 2021; Turc et al. 2019) | – | Fill-Mask | 2019 | 129 K | 0.709 | 0.639 | 1.84 |
| Bert-base (Devlin et al. 2018) | – | Fill-Mask | 2018 | 16,107 K | 0.709 | 0.700 | 13.53 |
| Bertweet-large (Nguyen et al. 2020) | Bert-large | Fill-Mask | 2020 | 8 K | 0.575 | 0.605 | 34.05 |
| Bert-large (Devlin et al. 2018) | – | Fill-Mask | 2018 | 532 K | 0.556 | 0.580 | 34.52 |

  1. The best performance for each task is marked in boldface. Models pre-trained on COVID-19-related tweets typically perform better, while large BERT models overfit. The monthly Huggingface download counts in the fifth column were accessed on 11 April 2022
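The AUC columns report the standard ROC-AUC: the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. A minimal stdlib-only sketch of this metric (the labels and scores below are toy values, not the paper's data):

```python
def roc_auc(labels, scores):
    """ROC-AUC via the pairwise-ranking definition: the fraction of
    (positive, negative) pairs where the positive is scored higher
    (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = vax-skeptic tweet, 0 = other; scores are model probabilities.
labels = [1, 0, 1, 0, 1, 0]
scores = [0.9, 0.2, 0.7, 0.4, 0.3, 0.1]
print(round(roc_auc(labels, scores), 3))  # → 0.889
```

A random classifier scores 0.5 on this metric, which puts the 0.556 of Bert-large on the vax-skeptic task barely above chance, consistent with the overfitting noted in the footnote.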