PHYSICAL REVIEW E 102, 062302 (2020)
Dodge and survive: Modeling the predatory nature of dodgeball
Perrin E. Ruth
and Juan G. Restrepo
Department of Applied Mathematics, University of Colorado at Boulder, Boulder, Colorado 80309, USA
(Received 23 July 2020; accepted 20 October 2020; published 7 December 2020)
The analysis of games and sports as complex systems can give insights into the dynamics of human
competition and has been proven useful in soccer, basketball, and other professional sports. In this paper, we
present a model for dodgeball, a popular sport in U.S. schools, and analyze it using an ordinary differential
equation (ODE) compartmental model and stochastic agent-based game simulations. The ODE model reveals a
rich landscape with different game dynamics occurring depending on the strategies used by the teams, which can
in some cases be mapped to scenarios in competitive species models. Stochastic agent-based game simulations
confirm and complement the predictions of the deterministic ODE models. In some scenarios, game victory can
be interpreted as a noise-driven escape from the basin of attraction of a stable fixed point, resulting in extremely
long games when the number of players is large. Using the ODE and agent-based models, we construct a strategy
to increase the probability of winning.
DOI: 10.1103/PhysRevE.102.062302
I. INTRODUCTION
Games and sports are emerging as a rich test bed to study
the dynamics of competition in a controlled environment.
Examples include the analysis of passing networks [1,2] and
entropy [3] in soccer games (see also Ref. [4] for a discussion
on data-driven tactical approaches), scoring dynamics [5–7],
and play-by-play modeling [8,9] in professional sports such
as hockey, basketball, football, and table tennis, penalty kicks
in soccer games [10], and serves in tennis matches [11]. Here
we explore the dynamics of dodgeball, where the number of
players playing different roles changes dynamically and ulti-
mately determines the outcome of the game. While modeling
dodgeball might seem like a very specific task, it is a relatively
clean and well-defined system where the ability of mean-field
techniques [12,13] to describe human competition can be put
to the test. In addition, it complements ongoing efforts to
quantify and model dynamics in sports and games [1–11].
In this paper, we present and analyze a mathematical model
of dodgeball based on both agent-based stochastic game sim-
ulations and an ordinary differential equation (ODE)–based
compartmental model. By analyzing the stability of fixed
points of the ODE system, we find that different game dy-
namics can occur depending on the teams’ strategies: one of
the teams achieves a quick victory, either team can achieve a
victory depending on initial conditions, or the game evolves
into a stalemate. For the simplest strategy choice, these
regimes can be interpreted in the context of a competitive
Lotka-Volterra model. Numerical simulations of games based
on stochastic behavior of individual players reveal that the
stalemate regime corresponds to extremely long games with
large fluctuations. These long games can be interpreted as a
noise-driven escape from the basin of attraction of the stable
stalemate fixed point and are commonly observed in dodge-
ball games (see Fig. 2). Using both the stochastic and ODE
models, we develop a greedy strategy and demonstrate it using
stochastic simulations.
The structure of the paper is as follows. In Sec. II, we
describe the rules of the game we will analyze. In Sec. III,
we present and analyze a compartment-based model of dodge-
ball. In Sec. IV, we present stochastic numerical simulations
of dodgeball games and compare these with the predictions
of the compartmental model. We then discuss the notion of
strategy in the context of this stochastic model. Finally, we
present our conclusions in Sec. V.
II. DESCRIPTION OF DODGEBALL
In this paper, we consider the following variant played
often in elementary schools in the United States (sometimes
called prison dodgeball). Two teams (team 1 and team 2) of N
players each initially occupy two zones adjacent to each other,
which we will refer to as court 1 and court 2 (see Fig. 1).
Players in a court can throw balls at players of the opposite
team in the other court. If a player in a court is hit by such a
ball, they move to their respective team’s jail, an area behind
the opposite team’s court. A player in a court may also throw
a ball to a player of their own team in their jail, and if the
ball is caught, the catching player returns to their team’s court
(illustrated schematically in Fig. 3). We denote the number of players on team $i$ that are in court $i$ and jail $i$ by $X_i$ and $Y_i$, respectively. Team $i$ loses when $X_i = 0$. For simplicity, we assume there are always available balls and neglect the possibility that a player catches a ball thrown at them by an enemy player.
In practice, games often last a long time without any of
the teams managing to send all the enemy players to jail.
Because of this, such games are stopped at a predetermined
2470-0045/2020/102(6)/062302(9) 062302-1 ©2020 American Physical Society
FIG. 1. Setup of the dodgeball court. Players in team $i$ make transitions between court $i$ and jail $i$, and team $i$ loses when there are no players in court $i$.
time and the winner is decided based on other factors (e.g.,
which team has more players on their court). An example of
this is in Fig. 2, which shows the numbers of players in courts 1 and 2, $X_1$ and $X_2$, during two fifth-grade dodgeball games in Eisenhower Elementary in Boulder, Colorado. The values of $X_1$ and $X_2$ seem to fluctuate without any team obtaining a decisive advantage. The games continued after the time interval shown and were eventually stopped. Our subsequent model and analysis suggest that this stalemate behavior is the result of underlying dynamics that has a stable fixed point about which $X_1$ and $X_2$ fluctuate.
FIG. 2. Evolution of two fifth-grade dodgeball games played in Eisenhower Elementary in Boulder, Colorado, USA. The numbers of players in courts 1 and 2, $X_1$ and $X_2$, fluctuate for a long time without any team gaining a decisive advantage. The games were eventually stopped and a winner decided on the spot.
FIG. 3. (Top) A player in a court can be sent to jail when hit by
a ball from a player in the opposing court. (Bottom) A player can be
saved from jail when catching a ball thrown by a player from their
court.
III. RATE EQUATION DESCRIPTION OF
GAME DYNAMICS
We begin our description of the game dynamics by adopt-
ing a continuum formulation where the number of players
in courts 1 and 2 are approximated by continuous variables.
These variables evolve following rate equations obtained from
the rates at which the processes described in the previous
section and illustrated in Fig. 3 occur. Since the number of
players in a dodgeball game is not too large (typically less
than 50), and the game is decided when the number of players
in a court drops to zero, one might question the validity of a
continuum description. However, as we will see in Sec. IV,
stochastic simulations with few players show that the rate
equations give useful insights about the dynamics of simulated
games with a finite number of players.
To construct the rate equations, we define $\lambda$ as the mean throw rate of the players. Consequently, team $i$ throws balls at a rate of $\lambda X_i$. We also define $F_i(X_1, X_2)$ as the fraction of balls that team $i$ throws that are directed at enemy players, $p_e(X)$ as the probability that a ball thrown at $X$ opposing players hits one of them, and $p_j(Y)$ as the probability that a ball thrown at $Y$ players in jail is caught. Combining these processes and using $Y_i = N - X_i$, we get the dodgeball equations:
$$\dot{X}_1 = \lambda X_1 [1 - F_1(X_1, X_2)]\, p_j(N - X_1) - \lambda X_2 F_2(X_1, X_2)\, p_e(X_1), \qquad (1)$$
$$\dot{X}_2 = \lambda X_2 [1 - F_2(X_1, X_2)]\, p_j(N - X_2) - \lambda X_1 F_1(X_1, X_2)\, p_e(X_2). \qquad (2)$$
Note that, given the initial conditions $X_i(0) = N$, $X_i(t) \in [0, N]$ for all $t \geq 0$. For simplicity, we assume the functions $p_j$ and $p_e$ to be linear, $p_j(Y) = k_j Y$ and $p_e(X) = k_e X$. Defining the normalized number of players $x_i = X_i/N \in [0, 1]$ and the dimensionless time $\tau = \lambda N k_j t$, we get the simplified dodgeball equations:
$$\frac{dx_1}{d\tau} = x_1(1 - x_1)[1 - f_1(x_1, x_2)] - c\, x_1 x_2 f_2(x_1, x_2), \qquad (3)$$
$$\frac{dx_2}{d\tau} = x_2(1 - x_2)[1 - f_2(x_1, x_2)] - c\, x_1 x_2 f_1(x_1, x_2), \qquad (4)$$
TABLE I. Notation used in the dodgeball model, Eqs. (5) and (6).

Symbol   Meaning
$a_i$    Probability that a player in team $i$ tries to hit an opponent instead of saving a teammate from jail
$x_i$    Fraction of players in team $i$ in court $i$
$c$      Probability of hitting / probability of saving
where $f_i(x_1, x_2) = F_i(N x_1, N x_2)$ and $c = k_e/k_j > 0$ is the effectiveness of throwing a ball at an enemy relative to throwing a ball at jail.
Equations (3) and (4) can be interpreted as a two-species competition model for the populations of players in courts 1 and 2. Focusing on Eq. (3), the first term on the right-hand side represents logistic growth of the number of players in court 1, which is mediated by rescue of players from jail 1 with rate $[1 - f_1]$. The carrying capacity is determined by the fact that there are no more players available to be rescued when $x_1 = 1$. The second term can be interpreted as interspecific competition, with players in court 1 disappearing via interactions with players in court 2, occurring with rate $c f_2 x_2$.
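To make the continuum description concrete, the sketch below integrates Eqs. (3) and (4) with a classical fourth-order Runge-Kutta step. The strategies are passed as callables; the constant-strategy choice and the parameter values $a_1 = 1/4$, $a_2 = 3/4$, $c = 0.5$ (the stalemate example analyzed later in Fig. 4) are illustrative, and the helper names are ours, not the authors' code.

```python
# Numerical integration of the simplified dodgeball equations (3)-(4)
# with a hand-rolled RK4 step (pure Python). f1 and f2 are callables
# (x1, x2) -> fraction of throws aimed at the enemy court.

def rhs(x1, x2, f1, f2, c):
    """Right-hand side of Eqs. (3)-(4)."""
    dx1 = x1 * (1 - x1) * (1 - f1(x1, x2)) - c * x1 * x2 * f2(x1, x2)
    dx2 = x2 * (1 - x2) * (1 - f2(x1, x2)) - c * x1 * x2 * f1(x1, x2)
    return dx1, dx2

def integrate(x1, x2, f1, f2, c, dt=0.01, steps=5000):
    """Fourth-order Runge-Kutta from (x1, x2); returns the final state."""
    for _ in range(steps):
        k1 = rhs(x1, x2, f1, f2, c)
        k2 = rhs(x1 + dt/2 * k1[0], x2 + dt/2 * k1[1], f1, f2, c)
        k3 = rhs(x1 + dt/2 * k2[0], x2 + dt/2 * k2[1], f1, f2, c)
        k4 = rhs(x1 + dt * k3[0], x2 + dt * k3[1], f1, f2, c)
        x1 += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        x2 += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return x1, x2

# With these constant strategies, both courts settle toward an interior
# fixed point rather than toward the victory states (1,0) or (0,1).
x1, x2 = integrate(1.0, 1.0, lambda u, v: 0.25, lambda u, v: 0.75, c=0.5)
```

Starting from full courts $(1, 1)$, the trajectory converges to the interior fixed point, consistent with the stalemate scenario described below.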
A. Example: Fixed strategy
As an illustrative example, we will focus on the case when the strategy for both teams is fixed over the course of the game, $f_i(x_1, x_2) = a_i \in (0, 1)$. We will consider state-dependent choices for $f_i$ (i.e., strategies) in Sec. IV. Inserting $f_i(x_1, x_2) = a_i$ into Eqs. (3) and (4) gives
$$\frac{dx_1}{d\tau} = x_1(1 - x_1)(1 - a_1) - c\, x_1 x_2 a_2, \qquad (5)$$
$$\frac{dx_2}{d\tau} = x_2(1 - x_2)(1 - a_2) - c\, x_1 x_2 a_1, \qquad (6)$$
which is a two-species competitive Lotka-Volterra system [14]. See Table I for a description of the parameters and variables appearing in Eqs. (5)–(6). In this case, we can use known results about this system to understand the possible game scenarios. Specifically, at $\tau = 0$, the system starts at $(x_1, x_2) = (1, 1)$. For $\tau > 0$, the solution converges toward one of the stable fixed points of (5) and (6) in the invariant square $[0, 1] \times [0, 1]$, which are (0,0), (0,1), and (1,0), and the solutions $(x_1^*, x_2^*)$ of the linear system
$$0 = (1 - x_1^*)(1 - a_1) - c\, x_2^* a_2, \qquad (7)$$
$$0 = (1 - x_2^*)(1 - a_2) - c\, x_1^* a_1. \qquad (8)$$
If $a_1 a_2 c^2 \neq (1 - a_1)(1 - a_2)$, there is a unique solution to these equations, the fixed point
$$x_1^* = \frac{(1 - a_2)[a_2 c - (1 - a_1)]}{a_1 a_2 c^2 - (1 - a_1)(1 - a_2)}, \qquad (9)$$
$$x_2^* = \frac{(1 - a_1)[a_1 c - (1 - a_2)]}{a_1 a_2 c^2 - (1 - a_1)(1 - a_2)}. \qquad (10)$$
The degenerate case, $a_1 a_2 c^2 = (1 - a_1)(1 - a_2)$, gives a continuum of fixed points described by
$$x_1^* + x_2^* = 1, \qquad (11)$$
FIG. 4. Stream plots of Eqs. (5) and (6) with $c = 0.5$ and various values of $a_1$ and $a_2$. (Top left) Stalemate: For $a_1 = 1/4$, $a_2 = 3/4$, both (0,1) and (1,0) are unstable and $(x_1^*, x_2^*)$ is stable. (Top right) Team 1 wins: For $a_1 = 9/16$, $a_2 = 3/4$, (1,0) is a stable fixed point while (0,1) is unstable, giving team 1 the advantage; note that in this case $(x_1^*, x_2^*) \notin [0, 1]^2$. (Bottom left) Competitive: For $a_1 = 7/8$, $a_2 = 3/4$, both (0,1) and (1,0) are stable fixed points, and the winner is determined by the initial conditions. (Bottom right) Degenerate: For the special case $a_1 = a_2 = (1 + c)^{-1}$, every point on the line $x_1 + x_2 = 1$ is a fixed point.
when $a_1 = (1 - a_2)/c$ and $a_2 = (1 - a_1)/c$, and no solution otherwise.
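As a sanity check, the closed forms (9) and (10) can be evaluated and substituted back into the fixed-point conditions (7) and (8). The function name and the example parameters below are ours; the parameters are the stalemate example of Fig. 4.

```python
# Interior fixed point of Eqs. (5)-(6) from the closed forms (9)-(10),
# in the nondegenerate case a1*a2*c**2 != (1-a1)*(1-a2).

def interior_fixed_point(a1, a2, c):
    denom = a1 * a2 * c**2 - (1 - a1) * (1 - a2)
    x1 = (1 - a2) * (a2 * c - (1 - a1)) / denom   # Eq. (9)
    x2 = (1 - a1) * (a1 * c - (1 - a2)) / denom   # Eq. (10)
    return x1, x2

# The result must satisfy the fixed-point conditions (7)-(8).
a1, a2, c = 0.25, 0.75, 0.5                  # "stalemate" example of Fig. 4
xs1, xs2 = interior_fixed_point(a1, a2, c)
res7 = (1 - xs1) * (1 - a1) - c * xs2 * a2   # Eq. (7): should vanish
res8 = (1 - xs2) * (1 - a2) - c * xs1 * a1   # Eq. (8): should vanish
```

For these parameters the fixed point is $(x_1^*, x_2^*) = (2/3, 2/3)$, the value about which the stochastic games of Sec. IV fluctuate.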
The fixed point (0,0) corresponds to both teams running out of players, the fixed points (1,0) and (0,1) correspond to team 1 and team 2 winning, respectively, and the fixed point $(x_1^*, x_2^*)$, when it is stable and in $(0, 1)^2$, corresponds to a stalemate situation where the number of players in each court remains constant in time. By analyzing the linear stability of the fixed points (see, e.g., Ref. [14]), one finds that the game dynamics can be classified in the following cases:
(1) Stalemate. This occurs when (0,1) and (1,0) are both unstable and $(x_1^*, x_2^*)$ is in $[0, 1]^2$ and is stable, which occurs when $a_1 < (1 - a_2)/c$ and $a_2 < (1 - a_1)/c$. In this scenario, the solution settles in the fixed point $(x_1^*, x_2^*)$ and no team wins in the deterministic version of the game. The flow corresponding to this case is shown in Fig. 4 (top left). This scenario is analogous to the "stable coexistence" of species in the Lotka-Volterra model.
(2) Competitive. This occurs when (0,1) and (1,0) are stable and the fixed point $(x_1^*, x_2^*)$ is in $[0, 1]^2$ and is unstable, which occurs when $a_1 > (1 - a_2)/c$ and $a_2 > (1 - a_1)/c$. The stable manifold of $(x_1^*, x_2^*)$ acts as a separatrix for the basins of attraction of the fixed points that correspond to victories for team 1 and team 2. See Fig. 4 (bottom left). This scenario is analogous to the "unstable coexistence" of species in the Lotka-Volterra model.
FIG. 5. Deterministic game outcomes based on different strategies $(a_1, a_2)$ for (a) $c < 1$ and (b) $c > 1$.
(3) Team 1 wins. This occurs when (0,1) is unstable and (1,0) is stable, which occurs when $a_1 > (1 - a_2)/c$ and $a_2 < (1 - a_1)/c$. In this scenario, the solution converges toward a victory by team 1. See Fig. 4 (top right). This scenario is analogous to the "competitive exclusion" of species in the Lotka-Volterra model, in which one species is driven to extinction by the other.
(4) Team 2 wins. This occurs when (0,1) is stable and (1,0)
is unstable, and is analogous to the team 1 wins case. In this
scenario, the solution converges toward a victory by team 2.
(5) Degenerate. This occurs when there is a continuum of fixed points $x_1 + x_2 = 1$. In this scenario, the solution converges toward the line $x_1 + x_2 = 1$, and no winner is produced in the deterministic version of the game. See Fig. 4 (bottom right).
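The five cases above reduce to two inequality tests. As a sketch, they can be collected into a small decision function; the function name and the tolerance handling for the degenerate boundary are our choices.

```python
# Regime classification for the deterministic game with fixed
# strategies (a1, a2) and effectiveness ratio c > 0, using the
# stability conditions quoted in the text.

def classify(a1, a2, c, tol=1e-12):
    t1 = a1 - (1 - a2) / c   # t1 > 0  <=>  (1,0) is stable
    t2 = a2 - (1 - a1) / c   # t2 > 0  <=>  (0,1) is stable
    if abs(t1) < tol and abs(t2) < tol:
        return "degenerate"
    if t1 < 0 and t2 < 0:
        return "stalemate"       # both victory states unstable
    if t1 > 0 and t2 > 0:
        return "competitive"     # both victory states stable
    return "team 1 wins" if t1 > 0 else "team 2 wins"
```

Applied to the four panels of Fig. 4 ($c = 0.5$), this reproduces the labels stalemate, team 1 wins, competitive, and degenerate.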
Figure 4 illustrates these different game dynamics by showing the flow induced by Eqs. (5) and (6) in the region $0 \leq x_1 \leq 1$, $0 \leq x_2 \leq 1$ for various parameter choices. Stable fixed points are shown as red circles, and unstable fixed points as yellow circles.
In Fig. 5, we illustrate how the game outcome depends on the strategies used by both teams. The cases $c < 1$ and $c > 1$ are illustrated in Figs. 5(a) and 5(b), respectively. The strategy phase space $(a_1, a_2)$ is divided into four regions separated by the lines $a_1 = (1 - a_2)/c$ and $a_2 = (1 - a_1)/c$. When both teams preferentially save players of their own team from jail, instead of trying to hit players from the other team (i.e., both $a_1$ and $a_2$ are small), the game results in a stalemate (we reiterate that when stochasticity is included, this scenario corresponds to long games). When both teams preferentially hit players from the other team (i.e., both $a_1$ and $a_2$ are close to 1), a winner emerges quickly. When teams have opposite strategies, one of the teams can quickly win, depending on the value of $c$.
While the rate equation description provides interesting
insights, it relies on the assumption of an infinite number
of players. Because of this, some of its predictions are not
reasonable for games with a finite number of players. For
example, it predicts that the outcome of games is completely
determined by parameters and initial conditions. In reality,
games are determined by the aggregate behavior of a finite
number of individual players, and chance can play an im-
portant role. In the next section we will model dodgeball
FIG. 6. Stochastic dodgeball game. Players make transitions between the indicated compartments with the rates shown next to the arrows. The game ends when either $X_1 = 0$ or $X_2 = 0$.
games by considering the stochastic behavior of individual
players, and we will find that the insights provided by the rate
equations are useful to understand the stochastic dodgeball
games.
IV. STOCHASTIC DODGEBALL SIMULATIONS
In this section, we present numerical simulations of
dodgeball games using a stochastic agent-based model that
corresponds to the simplified model used in Sec. III.
In the stochastic version of the game, each team starts with $N$ players in their respective court, $X_1(0) = X_2(0) = N$, and no players in jail, $Y_1(0) = Y_2(0) = 0$. Players in court 1 make stochastic transitions to jail 1 at rate $\lambda X_2(t) F_2(X_1, X_2) k_e X_1$, and players in jail 1 make transitions to court 1 at rate $\lambda X_1 [1 - F_1(X_1, X_2)] k_j (N - X_1)$, where, as in Sec. III, $F_i(X_1, X_2)$ is the probability that a player in court $i$ will throw a ball toward an enemy player in the opposite court instead of trying to save a teammate from jail, $k_e$ is the probability of hitting a single enemy player, and $k_j$ is the probability that a player in jail catches a ball thrown at them. The rates of transition for players in team 2 are obtained by permuting the indices 1 and 2. By using the dimensionless time $\tau = \lambda k_j t$, the rates of transition per dimensionless time are $c X_1 X_2 F_2(X_1, X_2)$ and $X_1 (N - X_1)[1 - F_1(X_1, X_2)]$ for players to transition from court 1 to jail 1 and from jail 1 to court 1, respectively, where $c = k_e/k_j$. The compartmental model corresponding to this process is shown schematically in Fig. 6. The code used for simulating the agent-based dodgeball model and finding the probability that a team wins can be found on the GitHub repository [15].
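The transition rates above define a continuous-time Markov process that can be simulated with a standard Gillespie (kinetic Monte Carlo) loop. The sketch below is our illustration of that loop for fixed strategies $F_1 = a_1$, $F_2 = a_2$, not the code released in the repository [15].

```python
# Gillespie-style simulation of the stochastic dodgeball game in
# dimensionless time, with the four transition rates given in the text
# and fixed strategies F1 = a1, F2 = a2 (a1 + a2 > 0 assumed).
import random

def play_game(N, a1, a2, c, rng=random):
    """Simulate one game; return (winning team, duration in tau)."""
    X1, X2, tau = N, N, 0.0
    while X1 > 0 and X2 > 0:
        rates = (
            c * X1 * X2 * a2,           # court 1 -> jail 1 (team 1 player hit)
            c * X1 * X2 * a1,           # court 2 -> jail 2 (team 2 player hit)
            X1 * (N - X1) * (1 - a1),   # jail 1 -> court 1 (rescue)
            X2 * (N - X2) * (1 - a2),   # jail 2 -> court 2 (rescue)
        )
        total = sum(rates)
        tau += rng.expovariate(total)   # exponential waiting time
        r = rng.random() * total        # choose an event proportionally to rate
        if r < rates[0]:
            X1 -= 1
        elif r < rates[0] + rates[1]:
            X2 -= 1
        elif r < rates[0] + rates[1] + rates[2]:
            X1 += 1
        else:
            X2 += 1
    return (1 if X1 > 0 else 2), tau
```

For $N = 1$ the rescue rates vanish and the game is decided by the first hit, so team 1 wins with probability $a_1/(a_1 + a_2)$, a limit revisited below.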
FIG. 7. Simulations of games with the same constants as Fig. 4. Trajectories $(X_1, X_2)$ have stochastic fluctuations on top of the deterministic flow of Fig. 4. The "stalemate" regime (top left) results in long, back-and-forth games.
A. Stochastic games
In Fig. 7, we show the evolution of four dodgeball games simulated as described above using the same parameters as in Fig. 4. The plots show the trajectories of $(X_1, X_2)$ starting from initial conditions (50,50). Note that although the trajectories have significant fluctuations, they follow approximately the flow shown in Fig. 4. In particular, for the parameters resulting in the stalemate scenario [i.e., a stable fixed point $(x_1^*, x_2^*) \in (0, 1) \times (0, 1)$] the number of players in courts 1 and 2 fluctuates around $(N x_1^*, N x_2^*)$ (indicated with an arrow). In practice, these parameters result in extremely long games that continue until a random fluctuation is large enough to decrease $X_1$ or $X_2$ to zero. To further illustrate this, Fig. 8 shows $X_1(t)$ (solid blue) and $X_2(t)$ (dotted orange) as a function of $t$ for the parameters in Fig. 4(a). The evolution of this game resembles that of the games seen in Fig. 2, which suggests that those games were in the stalemate regime. In the degenerate case, Fig. 4(d), the game trajectory has large fluctuations around the line $X_1 + X_2 = N$, which corresponds to the line of fixed points $x_1 + x_2 = 1$ of the deterministic system. We interpret this behavior as the trajectory diffusing under the effect of the fluctuations along the marginally stable line $X_1 + X_2 = N$. Note that in the particular trajectory shown, team 1 wins even though at some point in time it had only one player in court 1. In Fig. 7(c), the game eventually results in a victory by team 1, even though the deterministic model predicts a victory by team 2 [see Fig. 4(c)], because stochastic fluctuations of the trajectory $(X_1, X_2)$ allow it to cross over to the basin of attraction of (1,0).
FIG. 8. Fraction of players in courts 1 and 2 (solid and dotted lines) vs dimensionless time $\tau = \lambda N k_j t$ for a stochastic game simulation with the same parameters as Fig. 4 (top left), i.e., $c = 1/2$, $a_1 = 1/4$, $a_2 = 3/4$, and $N = 50$. In the "stalemate" regime, the fraction of players fluctuates stochastically about the fixed point values $x_1^* = x_2^*$ (flat dashed line).
As we see from these examples, the outcome of stochastic dodgeball games is determined both by the underlying deterministic flow and by the stochastic fluctuations of the $(X_1, X_2)$ trajectories. To account for this, we focus on how the probability $P$ of winning a game depends on the parameters. This probability can be calculated directly from the outcomes of a large number of simulated games, but it is much more efficiently calculated by using the properties of the underlying Markov process, as explained in the Appendix.
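The Appendix is not reproduced in this excerpt. One standard way to exploit the Markov structure, sketched here under the fixed-strategy assumption $F_i = a_i$, is first-step analysis on the embedded jump chain over the states $(X_1, X_2)$ (jail counts follow from $Y_i = N - X_i$), solved by repeated sweeps; the method and names below are our illustration, not necessarily the authors' procedure.

```python
# First-step analysis for the probability that team 1 wins. P[x1][x2]
# is the win probability from state (x1, x2); boundary states with
# x2 = 0 are wins, states with x1 = 0 are losses. Assumes a1 + a2 > 0.

def win_probability(N, a1, a2, c, sweeps=5000):
    """Return P(team 1 wins) starting from full courts (N, N)."""
    P = [[0.0] * (N + 1) for _ in range(N + 1)]
    for x1 in range(1, N + 1):
        P[x1][0] = 1.0               # X2 = 0: team 2 has lost
    for _ in range(sweeps):          # Gauss-Seidel sweeps to convergence
        for x1 in range(1, N + 1):
            for x2 in range(1, N + 1):
                r_hit1 = c * x1 * x2 * a2           # court 1 -> jail 1
                r_hit2 = c * x1 * x2 * a1           # court 2 -> jail 2
                r_res1 = x1 * (N - x1) * (1 - a1)   # jail 1 -> court 1
                r_res2 = x2 * (N - x2) * (1 - a2)   # jail 2 -> court 2
                tot = r_hit1 + r_hit2 + r_res1 + r_res2
                P[x1][x2] = (r_hit1 * P[x1 - 1][x2]
                             + r_hit2 * P[x1][x2 - 1]
                             + r_res1 * P[min(x1 + 1, N)][x2]
                             + r_res2 * P[x1][min(x2 + 1, N)]) / tot
    return P[N][N]
```

For $N = 1$ the interior reduces to a single state and the sweep reproduces the closed form $P_1 = a_1/(a_1 + a_2)$ discussed below in connection with Fig. 9(a).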
To illustrate how the probability of winning can be related to the deterministic results, we fix $c = 2/3$ and $a_2 = 3/4$, and calculate $P_1$ as a function of $a_1$. Figure 9(a) shows $P_1$ as a function of $a_1$ for $N = 1$, 5, 10, 20, and 50 (solid blue, dashed orange, dashed dotted yellow, dotted purple, and solid light green lines, respectively). As $a_1$ increases from 0 to 1, different regimes of the deterministic model are traversed. For the parameters given, let $a_s = (1 - a_2)/c = 3/8$ and $a_c = 1 - a_2 c = 1/2$, which are shown as dashed red lines. For $0 \leq a_1 < a_s$, the system is in the "stalemate" case; for $a_s < a_1 < a_c$ the system is in the "team 1 wins" case; and for $a_c < a_1 < 1$, it is in the "competitive" case. Now we interpret how $P_1$ changes as $a_1$ is increased. For $a_1 < 1 - a_2$, the fixed point $(x_1^*, x_2^*)$ is closer to (0,1) than it is to (1,0), and since victory is achieved by escaping the basin of attraction of the fixed point with random fluctuations, it is much more likely that this escape will occur to the nearest fixed point, in this case (0,1). Therefore, $P_1 \approx 0$ in this regime, and it is smaller for larger $N$ since fluctuations are smaller. For $1 - a_2 < a_1 < a_s$, the game is still in the stalemate regime, but now $(x_1^*, x_2^*)$ is closer to (1,0) and therefore $P_1 \approx 1$, and increases with $N$. For $a_s < a_1 < a_c$, the game is in the "team 1 wins" regime, and so $P_1$ approaches 1 rapidly as $N$ increases. For $a_1 > a_c$, the game is in the "competitive" regime, where the initial condition (1,1) is in the basin of attraction of (1,0) for $a_1 < a_2$ and in the basin of attraction of (0,1) for $a_1 > a_2$, which is reflected by the fact that $P_1 > 1/2$ for $a_1 < a_2$ and $P_1 < 1/2$
FIG. 9. (a) Probability $P_1$ that team 1 wins a game as a function of $a_1$ with $c = 2/3$ and $a_2 = 3/4$ for $N = 1$, 5, 10, 20, and 50 (solid blue, dashed orange, dashed dotted yellow, dotted purple, and solid light green lines, respectively). The dashed red lines mark bifurcations in the deterministic dynamics (see text), and the dashed horizontal line indicates $P_1 = 1/2$. The leftmost region corresponds to the "stalemate" regime, which leads to long games. The middle region represents "team 1 wins," which can be noted by the large values of $P_1$ for large values of $N$. The right region is the "competitive" region in the deterministic model, noted by mixed values of $P_1$ and quicker games. (b) Average duration of games (in dimensionless time $\tau = \lambda N k_j t$) with the same parameters as in (a). The duration of games in the "stalemate" regime increases with $N$. The shaded area around the green curve represents three standard deviations.
for $a_2 < a_1$. We note that for very small $N$ (e.g., $N = 1$, 5), the predictions of the deterministic theory break down. This can be understood in the limiting case $N = 1$ (solid blue curve), where the probability of winning can be calculated explicitly as $P_1 = a_1/(a_1 + a_2) = 4a_1/(4a_1 + 3)$.
According to our interpretation, victory in the "stalemate" regime is achieved by escaping the basin of attraction of the underlying stable fixed point $(x_1^*, x_2^*)$ via fluctuations induced by the finite number of players. Since these fluctuations become less important as the number of players increases, one would expect that the average time $\tau$ to achieve victory would (i) be largest in the "stalemate" regime and (ii) increase with $N$. Figure 9(b) shows the average game duration $\tau$ as a function of $a_1$, calculated from direct simulation of 5000 stochastic games when $N < 50$ and 100 games when $N = 50$. Consistent with the interpretation above, $\tau$ is much longer in the "stalemate" regime and increases with $N$ [we have found that $\tau$ scales exponentially with $N$ (not shown), as one would expect for an escape problem driven by finite size fluctuations]. Furthermore, it is maximum approximately when
FIG. 10. Probability $P_1$ that team 1 wins as a function of $a_1$ and $a_2$. The dashed line corresponds to the $N = 20$ curve in Fig. 9(a).
$(x_1^*, x_2^*)$ is equidistant from (0,1) and (1,0), i.e., when $a_1 = 1 - a_2$ [see Fig. 9(a)].
To get a broader picture of how the choice of fixed strategies $a_1$, $a_2$ affects the probability of winning, we show in Fig. 10 the probability $P_1$ that team 1 wins as a function of $a_1$ and $a_2$, obtained numerically as described in the Appendix for $N = 20$ and the same parameters as Fig. 5(a). The curve for $N = 20$ in Fig. 9(a) corresponds to the values shown in the dashed line. There appears to be a saddle point approximately at $(a_1, a_2) \approx (1/2, 1/2)$ corresponding to a Nash equilibrium, i.e., a set of strategies such that neither team would benefit from a change of strategy if the other team maintains their strategy. The issue of the appropriate definition and existence of Nash equilibria in finite-player stochastic games and their behavior as the number of players tends to infinity has been studied in the emerging area of mean-field games [12,13]. We leave a more detailed study of Nash equilibria in dodgeball for future work.
It is interesting to compare the deterministic and agent-
based stochastic simulations. In particular, Figs. 5(a) and 10
show the outcome of the game in the deterministic and
stochastic cases, respectively, for the same parameters. While
some differences are apparent, they can be understood by con-
sidering the interplay between the deterministic flow and finite
size fluctuations. For example, the region labeled “stalemate”
in Fig. 5(a) corresponds to a region with a stable fixed point
toward which the initial condition is attracted, and therefore
no winner is produced. In contrast, in the same region in
Fig. 10, the probabilities of team 1 or team 2 winning are
either extremely close to one or extremely small, meaning one
of the teams has a large probability of winning. The reason
for the apparent discrepancy is that in this region, victory is
indeed achieved by one of the teams in the agent-based model,
but only after a time which grows exponentially with the
number of players. In addition, victory is achieved with overwhelming probability by escaping toward the closer of the fixed points (0,1) and (1,0), which leads to the extreme imbalance in
this region. Therefore, the apparent discrepancy is resolved if
one takes into account the prohibitively long game times in the
stalemate region. Similarly, the outcome of the game in the re-
gion labeled “competitive” in Fig. 5(a) is strongly affected by
stochastic fluctuations in the agent-based model. A treatment
of the effect of finite-size fluctuations using the so-called “lin-
ear noise approximation” [16] could allow one to study quan-
titatively the escape from the basin of attraction of the stale-
mate fixed point, but we do not attempt this approach here.
B. Heuristic strategy
In the example treated in the previous sections, the probability that a player in team $i$ decides to throw a ball to an enemy player instead of rescuing a teammate from jail, $F_i(X_1, X_2)$, is fixed throughout the game at the value $a_i$. In reality, players may adjust this probability in order to optimize the probability of winning. In this section, we will develop a heuristic greedy strategy with the goal of trying to optimize victory. For this purpose, it is useful to define the quantities $H_i$ as
$$H_1 = \frac{X_1}{X_1 + X_2}, \qquad H_2 = \frac{X_2}{X_1 + X_2}. \qquad (12)$$
These quantities have the advantage that they are normalized between 0 and 1, with $H_i = 0$ ($H_i = 1$) corresponding to a loss (victory) by team $i$. In addition, $H_i$ corresponds to the probability that team $i$ will throw a ball next, and therefore it is a good indicator of how much control team $i$ has. Therefore, it is reasonable for team $i$ to apply a strategy to increase $H_i$. To develop such a strategy, we define $H_i^-$ and $H_i^+$ as the values of $H_i$ before and after a ball is thrown. Similarly, we define $X_i^-$ and $X_i^+$ as the values of $X_i$ before and after a ball is thrown. For definiteness, we will present the strategy for team 1, and the strategy for team 2 will be similar. The basis of the strategy is to choose the value of $F_1(X_1, X_2)$ that maximizes the expected value of $H_1^+$, $E[H_1^+]$. Since $F_1$ is the probability that the ball is thrown at enemy players, $p_e$ is the probability that such a ball actually hits an enemy player, $1 - F_1$ is the probability that the ball is thrown at a teammate in jail, and $p_j$ is the probability that such a ball is successful in rescuing a teammate, the expected value of $H_1^+$ is given by
$$E[H_1^+] = F_1\left[\frac{X_1}{X_1 + X_2 - 1}\, p_e + \frac{X_1}{X_1 + X_2}\,(1 - p_e)\right] + (1 - F_1)\left[\frac{X_1 + 1}{X_1 + X_2 + 1}\, p_j + \frac{X_1}{X_1 + X_2}\,(1 - p_j)\right], \qquad (13)$$
which can be rewritten as
$$E[H_1^+] = A + \frac{B}{X_1 + X_2}\, F_1, \qquad (14)$$
where
$$B = \frac{X_1}{X_1 + X_2 - 1}\, p_e - \frac{X_2}{X_1 + X_2 + 1}\, p_j \qquad (15)$$
and $A$ is independent of $F_1$.
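The rearrangement from Eq. (13) to Eqs. (14) and (15) can be checked numerically: $E[H_1^+]$ is affine in $F_1$ with slope $B/(X_1 + X_2)$. The specific values of $X_1$, $X_2$, $p_e$, $p_j$ below are arbitrary test values of ours, not from the paper.

```python
# Numerical check of the rearrangement (13) -> (14)-(15): the expected
# value E[H1+] is affine in F1 with slope B/(X1 + X2).

def expected_H1_plus(F1, X1, X2, pe, pj):
    """Eq. (13): expected value of H1 after one throw by team 1."""
    hit_term = X1 / (X1 + X2 - 1) * pe + X1 / (X1 + X2) * (1 - pe)
    save_term = (X1 + 1) / (X1 + X2 + 1) * pj + X1 / (X1 + X2) * (1 - pj)
    return F1 * hit_term + (1 - F1) * save_term

X1, X2, pe, pj = 6, 4, 1/3, 1/2               # illustrative state and odds
B = X1 / (X1 + X2 - 1) * pe - X2 / (X1 + X2 + 1) * pj        # Eq. (15)
slope = (expected_H1_plus(1, X1, X2, pe, pj)
         - expected_H1_plus(0, X1, X2, pe, pj))              # change over F1 in [0, 1]
```

The computed slope coincides with $B/(X_1 + X_2)$, confirming that only the sign of $B$ matters for the optimal choice of $F_1$.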
Since Eq. (14) is linear in $F_1$, it is maximized by choosing $F_1 = 1$ when $B > 0$ and $F_1 = 0$ when $B < 0$. Therefore, the choice of $F_1$ that maximizes the expected value of $H_1^+$, $F_1^*$, is
$$F_1^* = \begin{cases} 1, & \dfrac{X_1}{X_1 + X_2 - 1}\, p_e(X_2) \geq \dfrac{X_2}{X_1 + X_2 + 1}\, p_j(N - X_1), \\[1ex] 0, & \text{otherwise.} \end{cases} \qquad (16)$$
FIG. 11. Probability of team 1 winning with the heuristic strategy $F_1^*$ against a fixed strategy $a_2$. The number of players in each game is set to $N = 20$.
When $X_1, X_2 \gg 1$, the strategy simplifies to
$$F_1^* \approx \begin{cases} 1, & X_1\, p_e(X_2) \geq X_2\, p_j(N - X_1), \\ 0, & \text{otherwise.} \end{cases} \qquad (17)$$
We note that this can also be derived by maximizing $dH_1/dt$ by using Eqs. (3) and (4). Furthermore, for the case considered in Secs. III and IV, where $p_e(X_i) = k_e X_i$ and $p_j(Y_i) = k_j Y_i$, the strategy reduces to
$$F_1^* = \begin{cases} 1, & k_e X_1 \geq k_j (N - X_1), \\ 0, & \text{otherwise.} \end{cases} \qquad (18)$$
For example, when $k_e = k_j$ (i.e., the probability of success in hitting an enemy player is the same as the probability of succeeding in rescuing a teammate from jail), the strategy for team 1 consists in always trying to rescue teammates from jail 1 when the majority of team 1's players are in jail 1, and in trying to hit players from team 2 when the majority of team 1's players are in court 1. Interestingly, in the limit $X_1, X_2 \gg 1$ the strategy for team 1 is independent of $X_2$.
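The greedy rules (16) and (18) are simple threshold tests and translate directly into code. In the sketch below, $p_e$ and $p_j$ are passed as callables, and the numeric values used for illustration are ours.

```python
# Threshold form of the greedy strategy: Eq. (16) in general and the
# linear special case Eq. (18).

def greedy_F1(X1, X2, N, pe, pj):
    """Eq. (16): return 1 (throw at the enemy court) iff the expected
    gain in H1 from attacking is at least the gain from rescuing."""
    attack = X1 / (X1 + X2 - 1) * pe(X2)
    rescue = X2 / (X1 + X2 + 1) * pj(N - X1)
    return 1 if attack >= rescue else 0

def greedy_F1_linear(X1, N, ke, kj):
    """Eq. (18): with pe(X) = ke*X and pj(Y) = kj*Y (and X1, X2 >> 1),
    the rule no longer depends on X2."""
    return 1 if ke * X1 >= kj * (N - X1) else 0
```

For $k_e = k_j$ the linear rule reduces to "attack when at least half of your team is on the court," matching the interpretation given above.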
To validate the effectiveness of this strategy, we simulate dodgeball games in which team 1 adopts the strategy $F_1(X_1, X_2) = F_1^*$ given by Eq. (16) and team 2 uses the fixed strategy $F_2(X_1, X_2) = a_2$. In Fig. 11, we plot the probability $P_1$ that team 1 wins as a function of $a_2$ for $c = 2/3$, 1, 3/2, and $\infty$ (solid blue, dashed orange, dashed dotted yellow, and dotted purple lines, respectively). As the figure shows, using the strategy $F_1^*$ consistently results in a probability of winning higher than 1/2. In general, the strategy $F_1^*$ does best when $c$ is small and $N$ is large. Note the probability of team 1 winning is 1/2 only when $c = \infty$, i.e., when the chance of saving a player in jail is 0. In this case, the strategy $a_2 = 1$ is clearly optimal.
V. CONCLUSIONS
In this paper, we presented a mathematical model of dodge-
ball, which we analyzed via an ODE-based compartmental
model and numerical simulations of a stochastic agent-based
model. These two complementary methods of analysis revealed a rich dynamical landscape. Depending on the teams' strategies, the dynamics and outcome of the game are determined by a combination of the stability of the fixed points of the underlying dynamical system and the stochastic fluctuations caused by the random behavior of individual players. Additionally, we derived a greedy strategy in the context of the stochastic model of dodgeball. While our strategy was shown to be effective against fixed strategies (i.e., $F_2 = a_2$), it is not necessarily optimal. This suggests, as future work, finding an optimal strategy as well as studying Nash equilibria in the context of dodgeball.
More data are needed to verify some of the predictions of the dodgeball model. While the time series from real games shown in Fig. 2 appear to be consistent with the stalemate regime, a quantitative comparison would need estimation of the quantities k_e, k_j, a_1, and a_2. In principle, these probabilities could be estimated from recorded dodgeball games. Nevertheless, the continuous model of dodgeball is able to offer reasonable insights into the behavior of stochastic agent-based games with a realistic number of players.
Our model and analysis relied on various assumptions and
simplifications, and relaxing some of these assumptions could
be a useful topic for future work as well. One significant
assumption used is that a ball thrown at an enemy player
will not be caught. However, it is possible for balls to be
caught, and this causes the thrower to be sent to jail. The
dodgeball model could be extended to include this situation.
Whom a player decides to target currently only depends on
the number of remaining enemies in play and the number
of people in jail, but this could be generalized to account
for heterogeneous targeting probabilities. A final simplification is that the model assumes uniform behavior of the players; individual ability could be modeled by including each player's ability to catch balls, hit an enemy target, and hit shots on jail. Finally, we assumed that players
behave independently (which is a reasonable approximation
in elementary school games). Coordinated strategies such as
those used in professional games are not considered here.
ACKNOWLEDGMENTS
We thank James Meiss, Nicholas Landry, Daniel Lar-
remore, and Max Ruth for their useful comments. We also
thank Eisenhower Elementary for allowing us to use the data.
APPENDIX
In this Appendix, we provide details about the numerical simulation of the stochastic dodgeball games and the numerical computation of the winning probabilities P_i.
1. Agent-based stochastic simulations
Here we describe the simulation of a single stochastic agent-based dodgeball game. At t = 0, the game starts with N players on each team, X_1 = X_2 = N. Since λ is the rate at which players throw balls, and we assume that players throw balls independently of each other, the waiting time until the next ball throw in the game is exponentially distributed with rate

r = (X_1 + X_2) λ.  (A1)

The probability that team i throws a ball next (before the other team), which we denote p_i, is given by

p_1 = \frac{X_1}{X_1 + X_2},  (A2)

p_2 = \frac{X_2}{X_1 + X_2}.  (A3)
The pseudocode for simulating a game is below. Recall that F_i(X_1, X_2) is the probability that team i throws a ball toward the enemy instead of toward their jail, p_e is the probability that a ball thrown toward the enemy hits a target, and p_j is the probability that a ball thrown toward jail is successfully caught. In addition, we stop the simulation if the number of throws k exceeds K_max = 50N^2.
Algorithm 1 Simulate dodgeball game.
At t = 0, set X_1 = X_2 = N and k = 0.
while (X_1 > 0 and X_2 > 0) and k ≤ K_max do
    k ← k + 1
    t ← t + Exponential random variable with mean 1/r.
    Choose throwing team, 1 or 2, with probabilities p_1, p_2. Let the throwing team be i and the other team be j.
    Choose to throw ball at enemy or rescue from jail with probabilities F_i(X_1, X_2), 1 − F_i(X_1, X_2).
    if Throw to enemy then
        X_j ← X_j − 1 with probability p_e(X_j)
    end if
    if Throw to jail then
        X_i ← X_i + 1 with probability p_j(N − X_i)
    end if
end while
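Algorithm 1 translates almost line by line into code. The following Python sketch is one possible implementation, assuming the linear forms p_e(X) = k_e X and p_j(Y) = k_j Y of Secs. III and IV; the function and parameter names are ours, and k_e, k_j must be small enough that these expressions are valid probabilities:

```python
import random

def simulate_game(N, F, lam=1.0, k_e=0.05, k_j=0.05, K_max=None):
    """One stochastic dodgeball game following Algorithm 1.
    F = (F_1, F_2): strategy functions F_i(X1, X2) giving the probability
    that team i throws at the enemy.  Returns (winner, t, k), where
    winner = 0 means the K_max throw cap was reached first."""
    if K_max is None:
        K_max = 50 * N**2
    X = [N, N]                        # X[0] = X_1, X[1] = X_2
    t, k = 0.0, 0
    while X[0] > 0 and X[1] > 0 and k <= K_max:
        k += 1
        r = (X[0] + X[1]) * lam       # total throwing rate, Eq. (A1)
        t += random.expovariate(r)    # waiting time with mean 1/r
        # Throwing team i with probability X_i/(X_1 + X_2), Eqs. (A2)-(A3)
        i = 0 if random.random() < X[0] / (X[0] + X[1]) else 1
        j = 1 - i
        if random.random() < F[i](X[0], X[1]):      # throw at the enemy
            if random.random() < k_e * X[j]:        # hit: p_e(X_j) = k_e X_j
                X[j] -= 1
        else:                                       # throw toward the jail
            if random.random() < k_j * (N - X[i]):  # rescue: p_j = k_j (N - X_i)
                X[i] += 1
    winner = 1 if X[1] == 0 else (2 if X[0] == 0 else 0)
    return winner, t, k
```

The winning probability P_1 can then be estimated by averaging the outcome over many such games, e.g., with team 2 playing a fixed strategy F_2 = a_2.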
2. Calculation of winning probabilities P_i

Here we explain how the probability that team i wins, P_i, is calculated for a given set of parameters.

First, we define v_k as the column vector whose entries are the probabilities that the game is in each of the (N + 1)^2 possible states (X_1, X_2) after the kth ball is thrown. Accordingly, v_0 is the vector that represents the initial condition (N, N). Then, we define M as the (N + 1)^2 × (N + 1)^2 matrix of transition probabilities between these states. Because the game is a Markov process, we have

v_{k+1} = M v_k.  (A4)

Now we let u_i be a vector that is 1 in each state in which team i wins and 0 otherwise. Then,

P_i = \lim_{k \to \infty} v_k^T u_i.  (A5)
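For concreteness, the construction of M and the iteration of Eq. (A4) can be sketched as follows. This Python/NumPy sketch uses dense storage and the linear forms p_e(X) = k_e X and p_j(Y) = k_j Y of Secs. III and IV; all names are our simplifying choices:

```python
import numpy as np

def winning_probabilities(N, F1, F2, k_e, k_j, tol=1e-4, K_max=None):
    """Iterate v_{k+1} = M v_k [Eq. (A4)] until |P_1 + P_2 - 1| < tol
    [Eq. (A6)].  States (X1, X2) are flattened to index X1*(N+1) + X2;
    states with X1 = 0 or X2 = 0 are absorbing."""
    if K_max is None:
        K_max = 50 * N**2
    S = (N + 1) ** 2
    idx = lambda x1, x2: x1 * (N + 1) + x2
    M = np.zeros((S, S))              # column-stochastic: M[s2, s] = P(s -> s2)
    for x1 in range(N + 1):
        for x2 in range(N + 1):
            s = idx(x1, x2)
            if x1 == 0 or x2 == 0:    # one court is empty: game over, absorb
                M[s, s] = 1.0
                continue
            p1 = x1 / (x1 + x2)       # team 1 throws next, Eqs. (A2)-(A3)
            f1, f2 = F1(x1, x2), F2(x1, x2)
            moves = [
                (idx(x1, x2 - 1), p1 * f1 * k_e * x2),                    # team 1 hit
                (idx(x1 + 1, x2), p1 * (1 - f1) * k_j * (N - x1)),        # team 1 rescue
                (idx(x1 - 1, x2), (1 - p1) * f2 * k_e * x1),              # team 2 hit
                (idx(x1, x2 + 1), (1 - p1) * (1 - f2) * k_j * (N - x2)),  # team 2 rescue
            ]
            stay = 1.0                # unsuccessful throws leave the state unchanged
            for sp, p in moves:
                if p > 0.0:
                    M[sp, s] += p
                    stay -= p
            M[s, s] += stay
    v = np.zeros(S)
    v[idx(N, N)] = 1.0                # initial condition (N, N)
    u1 = np.array([float(x2 == 0 and x1 > 0)
                   for x1 in range(N + 1) for x2 in range(N + 1)])
    u2 = np.array([float(x1 == 0 and x2 > 0)
                   for x1 in range(N + 1) for x2 in range(N + 1)])
    for _ in range(K_max):
        v = M @ v                     # Eq. (A4)
        P1, P2 = v @ u1, v @ u2
        if abs(P1 + P2 - 1.0) < tol:  # stopping criterion, Eq. (A6)
            break
    return P1, P2
```

With symmetric strategies F_1 = F_2, the construction is invariant under swapping (X_1, X_2), so the computed P_1 and P_2 agree, as expected.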
In practice, we stop the iteration when

|P_1 + P_2 − 1| = |v_k · (u_1 + u_2) − 1| < 10^{−4},  (A6)

or k > K_max = 50N^2. When the game is in stalemate, the expected length of games grows exponentially with N, and the calculation above becomes impractical for moderate values of N. In this case, we instead evolve the vector v_k in steps that are powers of 2, as

v_{2^j} = M^{2^j} v_0 = (M^{2^{j-1}})^2 v_0.  (A7)
In practice, we stop this iteration when j > J_max = 256.
The iteration described by Eq. (A7) uses repeated nonsparse
matrix multiplications, while Eq. (A4) uses faster sparse
matrix-vector products. However, since games can be ex-
tremely long in the stalemate regime, the method described
by Eq. (A7) is still faster in that regime. We choose the values
J_max and K_max such that in practice Eqs. (A4) and (A7) take similar amounts of time in the stalemate regime.
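The powers-of-two scheme of Eq. (A7) amounts to repeated matrix squaring. A minimal Python/NumPy sketch, assuming M is the column-stochastic transition matrix and v0, u1, u2 the vectors defined above (function name ours):

```python
import numpy as np

def stalemate_probabilities(M, v0, u1, u2, J_max=256, tol=1e-4):
    """Evolve v in powers-of-two steps [Eq. (A7)]: M^(2^j) is obtained by
    squaring the previous power, so reaching v_(2^j) costs j dense matrix
    products instead of 2^j matrix-vector products with Eq. (A4)."""
    Mp = M                            # holds M^(2^j)
    for _ in range(J_max):
        v = Mp @ v0                   # v_{2^j} = M^{2^j} v_0
        P1, P2 = v @ u1, v @ u2
        if abs(P1 + P2 - 1.0) < tol:  # stopping criterion of Eq. (A6)
            break
        Mp = Mp @ Mp                  # (M^{2^j})^2 = M^{2^(j+1)}
    return P1, P2
```

Each pass doubles the number of elapsed throws, which is what makes exponentially long stalemate games tractable.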
[1] J. M. Buldú, J. Busquets, J. H. Martínez, J. L. Herrera-Diestra,
I. Echegoyen, J. Galeano, and J. Luque, Front. Psychol. 9, 1900
(2018).
[2] I. G. McHale and S. D. Relton, Eur. J. Oper. Res. 268, 339
(2018).
[3] J. H. Martínez, D. Garrido, J. L. Herrera-Diestra, J. Busquets,
R. Sevilla-Escoboza, and J. M. Buldú, Entropy 22, 172 (2020).
[4] R. Rein and D. Memmert, SpringerPlus 5, 1 (2016).
[5] S. Merritt and A. Clauset, EPJ Data Sci. 3, 4 (2014).
[6] A. Clauset, M. Kogan, and S. Redner, Phys. Rev. E 91, 062815
(2015).
[7] D. P. Kiley, A. J. Reagan, L. Mitchell, C. M. Danforth, and P. S.
Dodds, Phys. Rev. E 93, 052314 (2016).
[8] P. Vračar, E. Štrumbelj, and I. Kononenko, Expert Syst. Appl. 44, 58 (2016).
[9] J. Wang, K. Zhao, D. Deng, A. Cao, X. Xie, Z. Zhou, H. Zhang,
and Y. Wu, IEEE Trans. Visualization Comput. Graph. 26, 407
(2019).
[10] I. Palacios-Huerta, Rev. Econ. Stud. 70, 395 (2003).
[11] M. Walker and J. Wooders, Am. Econ. Rev. 91, 1521
(2001).
[12] J.-M. Lasry and P.-L. Lions, Jpn. J. Math 2, 229 (2007).
[13] A. Bensoussan, J. Frehse, and P. Yam, Mean Field Games and Mean Field Type Control Theory (Springer, 2013).
[14] N. Gotelli, A Primer of Ecology (Oxford University Press, Ox-
ford, UK, 2001).
[15] https://github.com/Dodgeball-code/Dodgeball
[16] N. G. Van Kampen, Stochastic Processes in Physics and Chem-
istry (Elsevier, 2011).