In this practical, matplotlib.animation.FuncAnimation will be used to animate the model in a separate window, and stopping conditions will be added to halt the model and exit.
In the 'src' directory of your local code repository, duplicate your 'abm6' directory and name the duplicate 'abm7'.
Open the new 'model.py' file from the 'abm7' directory in Spyder.
Add the following import statement:
import matplotlib.animation as anim
After initialising agents, add the following code block:
# Animate
# Initialise fig and carry_on
fig = plt.figure(figsize=(7, 7))
ax = fig.add_axes([0, 0, 1, 1])
carry_on = True
data_written = False
animation = anim.FuncAnimation(fig, update, init_func=plot, frames=gen_function, repeat=False)
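Put together, the FuncAnimation pattern looks like the following self-contained sketch, in which a hypothetical single-point random walk stands in for the model (the names and the 'sketch.gif' output file are illustrative, not part of the practical's code):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this sketch runs anywhere
import matplotlib.pyplot as plt
import matplotlib.animation as anim
import random

# Hypothetical stand-in for the model: one randomly walking point.
x, y = 50, 50
carry_on = True
frame_log = []  # records which frame numbers update() received

fig = plt.figure(figsize=(7, 7))

def plot():
    fig.clear()
    plt.xlim(0, 100)
    plt.ylim(0, 100)
    plt.scatter(x, y, color='black')

def update(frame):
    global x, y, carry_on
    x = x + random.randint(-1, 1)
    y = y + random.randint(-1, 1)
    frame_log.append(frame)
    if random.random() < 0.1:  # random stopping condition
        carry_on = False
    plot()

def gen_function():
    ite = 0
    while (ite < 10) and carry_on:
        yield ite
        ite = ite + 1

# Keep a reference to the FuncAnimation object; if it is garbage
# collected the animation stops.
animation = anim.FuncAnimation(
    fig, update, init_func=plot, frames=gen_function, repeat=False)
# Saving (or showing) the figure is what actually drives the frames.
animation.save('sketch.gif', writer='pillow', fps=3)
```

Passing 'gen_function' itself (rather than calling it) lets FuncAnimation request a fresh generator whenever it needs the frame sequence.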
Define a new function called 'plot' to contain the 'Plot agents' code. At the start of the function clear fig, and declare 'ite' as a global variable before it is used:
def plot():
    fig.clear()
    plt.ylim(y_min, y_max)
    plt.xlim(x_min, x_max)
    plt.imshow(environment)
    for i in range(n_agents):
        plt.scatter(agents[i].x, agents[i].y, color='black')
    # Plot the coordinate with the largest x red
    lx = max(agents, key=operator.attrgetter('x'))
    plt.scatter(lx.x, lx.y, color='red')
    # Plot the coordinate with the smallest x blue
    sx = min(agents, key=operator.attrgetter('x'))
    plt.scatter(sx.x, sx.y, color='blue')
    # Plot the coordinate with the largest y yellow
    ly = max(agents, key=operator.attrgetter('y'))
    plt.scatter(ly.x, ly.y, color='yellow')
    # Plot the coordinate with the smallest y green
    sy = min(agents, key=operator.attrgetter('y'))
    plt.scatter(sy.x, sy.y, color='green')
    global ite
    filename = '../../data/output/images/image' + str(ite) + '.png'
    plt.savefig(filename)
    images.append(imageio.imread(filename))
    plt.show()
Change the 'main simulation loop' code block into a function called 'update' with a parameter called 'frames'. At the end of the function, call the 'plot' function. Declare 'carry_on' as a global variable and add a random stopping condition as follows:
def update(frames):
    # Model loop
    #for ite in range(1, n_iterations + 1):
    print("Iteration", frames)
    # Move agents
    print("Move and eat")
    for i in range(n_agents):
        agents[i].move(x_min, y_min, x_max, y_max)
        agents[i].eat()
        #print(agents[i])
    # Share store
    print("Share")
    # Distribute shares
    for i in range(n_agents):
        agents[i].share(neighbourhood)
    # Add store_shares to store and set store_shares back to zero
    for i in range(n_agents):
        #print(agents[i])
        agents[i].store = agents[i].store_shares
        agents[i].store_shares = 0
    #print(agents)
    # Print the maximum distance between all the agents
    print("Maximum distance between all the agents", get_max_distance())
    # Print the total amount of resource
    sum_as = sum_agent_stores()
    print("sum_agent_stores", sum_as)
    sum_e = sum_environment()
    print("sum_environment", sum_e)
    print("total resource", (sum_as + sum_e))
    # Stopping condition
    global carry_on
    # Random
    if random.random() < 0.1:
        #if sum_as / n_agents > 80:
        carry_on = False
        print("stopping condition")
    # Plot
    plot()
Define a function called 'gen_function' as follows:
def gen_function():
    global ite
    global carry_on
    while (ite <= n_iterations) and carry_on:
        yield ite  # Returns control and waits for the next call.
        ite = ite + 1
    global data_written
    if data_written == False:
        # Write data
        print("write data")
        io.write_data('../../data/output/out.txt', environment)
        imageio.mimsave('../../data/output/out.gif', images, fps=3)
        data_written = True
Before running the code, issue the following magic command in the Spyder console so that, rather than being directed to the plots pane (where animation does not work), the plot is directed to a pop-up window:
%matplotlib qt
If you want to revert this change so that plots are directed to the plots pane again, issue the following magic command:
%matplotlib inline
The keyword 'yield' passes the value of the variable 'ite' back from 'gen_function' while preserving the state of the while loop, so execution resumes just after the 'yield' on the next call. The '# Write data' code block is included in 'gen_function' and runs only once, after the model has stopped.
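The effect of 'yield' can be seen in isolation with a small self-contained generator (the values and limit here are illustrative, not the model's):

```python
def gen_function():
    ite = 1
    while ite <= 3:
        yield ite        # return ite and pause here
        ite = ite + 1    # execution resumes from this line on the next call
    print("write data")  # reached once, when the loop has finished

frames = list(gen_function())  # drives the generator to exhaustion
print(frames)  # → [1, 2, 3]
```

Each call for the next value resumes the function where it left off; code after the loop runs exactly once, when the generator is exhausted.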
Add and commit to your local git repository and, assuming you are using GitHub, push your changes to GitHub.
Most of your code should now be in functions and organised into modules.
The model simulation runs in a loop until a stopping condition is met, or until the predefined number of iterations 'n_iterations' is reached.
As yet, the model cannot be restarted. Some data is written to file that could be used to restart the model, but this is incomplete/insufficient. The ability to stop and restart a model can be useful; this is known as 'checkpointing'. It is often good to be able to run a simulation model for 'n' iterations and then for a further 'm' iterations, and for this to produce the same results as a single run of 'n + m' iterations. The 'random.getstate()' and 'random.setstate(state)' functions can be used to store and restore the state of 'random' to get this to work.
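A minimal sketch of that idea, with lists of random draws standing in for model iterations (the state object returned by 'random.getstate()' could also be pickled to a file as part of a checkpoint):

```python
import random

random.seed(0)
n, m = 5, 5

# Run for n iterations ...
first = [random.random() for _ in range(n)]
state = random.getstate()  # checkpoint the random number generator

# ... stop, then restart from the checkpoint and run m more iterations.
random.setstate(state)
second = [random.random() for _ in range(m)]

# A single uninterrupted run of n + m iterations gives the same draws.
random.seed(0)
uninterrupted = [random.random() for _ in range(n + m)]
print(first + second == uninterrupted)  # → True
```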
The simple agents in the model are not learning or adapting their behaviour based on interaction or the state of the environment. The model is mostly random, so complex, adaptive/emergent behaviour should not be expected from it.
Whilst the model has been framed as an ecological model, the agents could represent other things: they don't necessarily have to communicate by sharing resources, they could share something else, and they don't have to 'eat' the environment.
Some ideas for a more realistic ecological model are:
Alter the stopping condition so that the model stops when the average agent store is greater than 80.
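A minimal sketch of that change ('should_stop' is a hypothetical helper; 'sum_as' and 'n_agents' are the model's existing names):

```python
def should_stop(sum_as, n_agents, threshold=80):
    """True when the average agent store exceeds the threshold."""
    return sum_as / n_agents > threshold

# In 'update', in place of the random test:
#     if should_stop(sum_as, n_agents):
#         carry_on = False
print(should_stop(810, 10))  # → True
print(should_stop(790, 10))  # → False
```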
Add and commit to your local git repository and, assuming you are using GitHub, push your changes to GitHub.