For agents to communicate, they need a way to refer to each other. In this part, code is developed so that agents within a given distance of each other share some of their store.
In your local code repository 'src' directory create a new directory called 'abm6'. Open Spyder and use 'save as' to save your 'model.py' into this directory. Create a new directory called 'my_modules' in 'abm6' and use 'save as' to save your 'agentframework.py' and 'io.py' files there.
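Assuming the file names from the previous parts, the layout within the repository should then look something like this:
src
    abm6
        model.py
        my_modules
            agentframework.py
            io.py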
Each agent is going to share their store equally amongst all agents within a given distance. The algorithm is as follows:
# Calculate which other agents are within a given distance.
# Calculate shares.
# Distribute shares.
To share resources so that the order in which agents are processed does not matter, there is a need to distinguish the resources that are still to be shared from those that have already been shared.
Change the Agent constructor to include 'agents' in the parameters, store this as an attribute, and add an attribute 'store_shares' for storing the shares, so it is as follows:
def __init__(self, agents, i, environment, n_rows, n_cols):
    """
    The constructor method.

    Parameters
    ----------
    agents : List
        A list of Agent instances.
    i : Integer
        To be unique to each instance.
    environment : List
        A reference to a shared environment.
    n_rows : Integer
        The number of rows in environment.
    n_cols : Integer
        The number of columns in environment.

    Returns
    -------
    None.

    """
    self.agents = agents
    self.i = i
    self.environment = environment
    tnc = int(n_cols / 3)
    self.x = random.randint(tnc - 1, (2 * tnc) - 1)
    tnr = int(n_rows / 3)
    self.y = random.randint(tnr - 1, (2 * tnr) - 1)
    self.store = 0
    self.store_shares = 0
Change 'model.py' so that 'agents' is passed as a parameter in the code that constructs each individual Agent class instance.
Test that your code works and that from one agent you can access another agent by printing one agent from another. For example, after all the agents are initialised, try printing the agent with 'i' equal to 1 from the agent with 'i' equal to 0:
print(agents[0].agents[1])
A way to use the 'get_distance' function in 'agentframework.py' and avoid cyclic imports is to move the 'get_distance' function to a new module. Create a new file called 'geometry.py' in the 'my_modules' directory, and move the 'get_distance' function from 'model.py' to it. Add an import statement for the new geometry module in 'model.py' and change the function call to look for the function in the new geometry module by using the dot operator. (In other words, change 'get_distance' to 'geometry.get_distance'.)
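As a guide, 'geometry.py' might then look something like the following minimal sketch. This assumes 'get_distance' is the Euclidean distance function developed earlier; your docstring and exact implementation may differ:
import math


def get_distance(x0, y0, x1, y1):
    """
    Return the Euclidean distance between the points (x0, y0) and (x1, y1).
    """
    return math.sqrt((x0 - x1) ** 2 + (y0 - y1) ** 2)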
Import the geometry module into 'agentframework.py' and add the following method:
def share(self, neighbourhood):
    # Create a list of the indexes of the agents within the neighbourhood
    neighbours = []
    #print(self.agents[self.i])
    for a in self.agents:
        distance = geometry.get_distance(a.x, a.y, self.x, self.y)
        if distance < neighbourhood:
            neighbours.append(a.i)
    # Calculate the amount to share
    n_neighbours = len(neighbours)
    #print("n_neighbours", n_neighbours)
    shares = self.store / n_neighbours
    #print("shares", shares)
    # Add shares to the store_shares of each neighbour
    for i in neighbours:
        self.agents[i].store_shares += shares
This code relies on the fact that 'self.i' is the same as the index of the agent in the 'agents' list. In the first for loop of the 'share' function, the distance between 'self' and each agent in the agents list is calculated, and if this is less than 'neighbourhood' (a parameter that is passed in), then the index of that agent in the agents list is appended to the 'neighbours' list. Note that 'self' is included in 'neighbours', since the distance from an agent to itself is zero. The attribute 'self.store' is then divided into 'shares' and added to the 'store_shares' attribute of all the agents with indexes in 'neighbours'.
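To see why shares are accumulated in 'store_shares' rather than added straight to 'store', consider the following standalone sketch. It uses a simplified stand-in class (not the Agent class itself) with two agents that are within each other's neighbourhood; whichever one shares first, both end up with the same amount:
class Sharer:
    """A simplified stand-in for Agent, used only to illustrate sharing."""

    def __init__(self, store):
        self.store = store
        self.store_shares = 0


a = Sharer(10)
b = Sharer(20)
sharers = [a, b]
# Each sharer splits its store equally between both (itself and the other).
for s in sharers:
    for other in sharers:
        other.store_shares += s.store / len(sharers)
# Only once all shares are allocated are the stores updated.
for s in sharers:
    s.store = s.store_shares
    s.store_shares = 0
print(a.store, b.store)  # 15.0 15.0 whichever sharer is processed first
If shares were added straight to 'store', the second sharer would share out some of what it had just received, and the result would depend on the processing order.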
Replace the 'main simulation loop' in 'model.py' file with:
# Main simulation loop
for ite in range(1, n_iterations + 1):
    print("Iteration", ite)
    # Move agents
    print("Move")
    for i in range(n_agents):
        agents[i].move(x_min, y_min, x_max, y_max)
        agents[i].eat()
        #print(agents[i])
    # Share store
    # Distribute shares
    for i in range(n_agents):
        agents[i].share(neighbourhood)
    # Replace store with store_shares and set store_shares back to zero
    for i in range(n_agents):
        print(agents[i])
        agents[i].store = agents[i].store_shares
        agents[i].store_shares = 0
    print(agents)
    # Print the maximum distance between all the agents
    print("Maximum distance between all the agents", get_max_distance())
    # Print the total amount of resource
    sum_as = sum_agent_stores()
    print("sum_agent_stores", sum_as)
    sum_e = sum_environment()
    print("sum_environment", sum_e)
    print("total resource", (sum_as + sum_e))
Run 'model.py' and interpret the output. Add more print statements to gain a clear understanding of how the code works.
Move all code in each 'my_modules' module that is not in functions so that it is within an if statement like the following at the end of the file:
if __name__ == '__main__':
Recall that this isolates this code so it is only run if that file is the one run and not when the module is imported.
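For example, if 'geometry.py' has some test code that is not in a function (the call below is purely illustrative), it could be wrapped as follows:
if __name__ == '__main__':
    # Only runs when geometry.py is run directly, not when it is imported.
    print(get_distance(0, 0, 3, 4))  # Expected output: 5.0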
Make sure to test that your code still produces the same results.
Add the following import statements to 'model.py', placing them with the other import statements as the first executable statements in the code:
import imageio
import os
Before the main simulation loop add the following code:
# Create directory to write images to.
try:
    os.makedirs('../../data/output/images/')
except FileExistsError:
    print("path exists")
# For storing images
global ite
ite = 1
images = []
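As an aside, the try-except can be avoided by passing 'exist_ok=True' to 'os.makedirs', which suppresses the error when the directory already exists:
# Create the directory if it does not already exist.
os.makedirs('../../data/output/images/', exist_ok=True)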
Indent the plotting code so that it occurs within the main simulation loop and replace the following line:
plt.show()
With:
filename = '../../data/output/images/image' + str(ite) + '.png'
#filename = '../../data/output/images/image' + str(ite) + '.gif'
plt.savefig(filename)
plt.show()
plt.close()
images.append(imageio.imread(filename))
This code should: create plots; save them as PNG format image files; show and close them quickly; then reload each PNG file and append the image to the 'images' list.
After the end of the main simulation loop the images can be turned into an animated GIF format file using:
imageio.mimsave('../../data/output/out.gif', images, fps=3)
The parameter 'fps' is the number of frames that are shown per second.
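For example, with 'fps=3' and, say, 100 iterations (and so 100 images), the animation would last about 33 seconds.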
Add and commit to your local Git repository and, assuming you are using GitHub, push your changes to GitHub.
Create some more variable results by randomly setting the 'store' of each agent at initialisation to be a value in the range [0, 99].
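A minimal way to do this, assuming the constructor shown above, is to change the line in the constructor that initialises the store so that it uses 'random.randint' (which is inclusive of both end points):
self.store = random.randint(0, 99)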
Change the 'eat' function so that if an agent's 'store' goes above 99, then half of the store is added to the 'environment' where the agent is located.
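The exact change depends on how your 'eat' method was written in earlier parts. The following is a sketch only, assuming an 'eat' method that transfers up to 10 units from the environment at the agent's location; it also assumes the store is reduced by the half that is added to the environment, so that the total amount of resource is conserved:
def eat(self):
    # Eat from the environment at the agent's location (assumed from earlier parts).
    if self.environment[self.y][self.x] >= 10:
        self.environment[self.y][self.x] -= 10
        self.store += 10
    # If the store goes above 99, add half of it to the environment where the
    # agent is located (and remove that half from the store).
    if self.store > 99:
        half = self.store / 2
        self.environment[self.y][self.x] += half
        self.store -= half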
Add and commit to your local Git repository and, assuming you are using GitHub, push your changes to GitHub.