Toy Networks
- Emma Garrison
- Nov 26, 2025
- 5 min read
Updated: Dec 26, 2025
There is a joke I tell sometimes that being a physicist means my job is to go to work and think about things. The truth is that 90% of the job is not just thinking about things. There are meetings, classes, deadlines, and the ever-present time sink of opening a Google tab when you forget how to initialize an array in a programming language you use every day. There are days, though - when the schedule clears, when the clouds part - when I get to spend a little time just thinking about something. Just me and the whiteboard, like I am back in undergrad recreating proofs in the library. No distractions, just insights. A standstill kind of day.
In my most recent project, I had the opportunity to spend a few afternoons working through the network factors that were a part of my model. These sessions always left me with insights I simply had not expected.
In the Stochastic Actor-Oriented Model, you have a set of factors that describe the local network structures a node positively or negatively prioritizes, based upon the weight each factor is given. For example, I could have a factor like Degree, which is described mathematically as,
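In standard SAOM notation (a reconstruction consistent with the description that follows, assuming an undirected network), the Degree factor for node i is:

$$ s_i^{\text{deg}}(x) = \sum_{j} x_{ij} $$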
where x is our network and x_ij is 1 if there is a connection between nodes i and j. So, this factor counts the number of connections that node i has and increases the more connections node i has. Meaning that, in the networks below, node i in Network 1 would have a higher value for Degree (4) than node i in Network 2 (2).
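As a quick sanity check, that count is just a row sum of the adjacency matrix. The two matrices below are hypothetical stand-ins for the networks in the figure; only node i's connections (index 0) matter here:

```python
def degree(x, i):
    """Degree factor: s_i(x) = sum_j x_ij, the row sum for node i."""
    return sum(x[i])

# Network 1: node i (index 0) connected to 4 others (illustrative layout)
net1 = [
    [0, 1, 1, 1, 1],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
]

# Network 2: node i (index 0) connected to only 2 others
net2 = [
    [0, 1, 1, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]

print(degree(net1, 0))  # 4
print(degree(net2, 0))  # 2
```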

However, this is not actually what we are comparing in the model. The model calculates these factors for every possible change node i could make - building or dissolving a connection with each of the other nodes - to see how individual node priorities create emergent patterns of network change. I picked the factor Degree as a starting point for a reason: if we consider removing one of node i's connections, it is clear that the value of the Degree factor would decrease.
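The "evaluate every possible change" idea can be sketched directly: toggle one tie at a time and recompute the factor. This is illustrative code, not the RSiena implementation, and the example network is hypothetical:

```python
import copy

def degree(x, i):
    """Degree factor for node i: row sum of the adjacency matrix."""
    return sum(x[i])

def degree_after_toggle(x, i, j):
    """Degree of node i after toggling the tie between i and j (undirected)."""
    y = copy.deepcopy(x)
    y[i][j] = 1 - y[i][j]
    y[j][i] = 1 - y[j][i]
    return degree(y, i)

# Hypothetical 5-node network where node i (index 0) has 4 connections.
x = [
    [0, 1, 1, 1, 1],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
]

print(degree(x, 0))                  # 4
print(degree_after_toggle(x, 0, 1))  # 3: dissolving any tie lowers Degree by 1
```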

This is a pretty easy example, but I hope you may start to see how I was spending those quiet afternoons. As the sun sank lower in the sky and the shadows lengthened, I would draw toy network after toy network with varying levels of structural complexity until I understood what each factor was doing. Playing, some might call it. We rarely talk about play in science...
Let's take one more interesting example with a bit more complexity. The Betweenness factor is a measure of the number of nodes in unconnected pairs to which node i is connected, and that wording is chosen very carefully based upon what is actually happening in the equation.
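Written out, the Betweenness factor takes a form along these lines (a reconstruction consistent with the three criteria below, with j and h running over all nodes other than i):

$$ s_i^{\text{betw}}(x) = \sum_{j \neq h} x_{ij}\, x_{ih}\, (1 - x_{jh}) $$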
Now, this equation tells us a few things. We add to our sum if
node i is connected to node j
node i is connected to node h
node j is NOT connected to node h
Understanding this, we can construct simple toy networks to understand the shapes that would be included in our sum. Let's do some calculations for the two simple networks below.

Since we have two sums, let's pick one node to be our first node j; let's pick the bottom node of each network. Now for all other nodes (of which there is only one), we do our calculation.

In Network 1,
node i is connected to node j ✓
node i is connected to node h ✓
node j is NOT connected to node h ✓
In Network 2,
node i is connected to node j ✓
node i is connected to node h ✓
node j is NOT connected to node h ✗
Okay, so for Network 1, our count when the bottom node is node j is 1 because we met all of our criteria, but for Network 2, our count is 0 because our last criterion failed. We then have to repeat this with every other possible node as node j. This means our top node is now node j, and we again add the values over all possible nodes h. This brings our count to 2 for Network 1 and 0 for Network 2.
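Those hand counts are easy to script. Here is a minimal sketch of the Betweenness count for the two three-node networks, with node i at index 0 (the function name is my own):

```python
def betweenness_factor(x, i):
    """Count ordered pairs (j, h): i--j and i--h exist but j--h does not."""
    n = len(x)
    total = 0
    for j in range(n):
        for h in range(n):
            if j == h or j == i or h == i:
                continue
            if x[i][j] and x[i][h] and not x[j][h]:
                total += 1
    return total

# Network 1: open triangle with node i in the middle (i--j, i--h, no j--h)
open_tri = [
    [0, 1, 1],
    [1, 0, 0],
    [1, 0, 0],
]

# Network 2: closed triangle (all three ties present)
closed_tri = [
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
]

print(betweenness_factor(open_tri, 0))    # 2: the open triangle is counted twice
print(betweenness_factor(closed_tri, 0))  # 0
```

The count of 2 for the open triangle reflects the double counting over ordered pairs discussed next.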
This exercise told us a couple of important things about this factor. First, it counts open triangles (Network 1) and does not count closed triangles (Network 2). Second, it has not been adjusted for the lack of directionality in the network (if there is a connection between node i and node j, then there is a connection between node j and node i). We can see this in the fact that we double counted our open triangle in Network 1. This helps us conclude that we are counting the number of nodes in unconnected pairs to which node i is connected, not the number of unconnected pairs.
I have one more thought experiment with this factor that we will explore. It involves comparing possible changes again. If you want to try it on your own, here is the problem: in the network below, which change would alter the Betweenness more, (1) node i dissolving its connection with node A, or (2) node i dissolving its connection with node B?
![A diagram of the network represented by this adjacency matrix [,A,B,C,D,E,F,i;A,0,0,0,0,0,0,1;B,0,0,1,1,1,0,1;C,0,1,0,1,0,0,1;D,0,1,1,0,0,0,0;E,0,1,0,0,0,1,1;F,0,0,0,0,1,0,1;i,1,1,1,0,1,1,0]. node i is connected to all nodes except node D, and node A is only connected to node i.](https://static.wixstatic.com/media/67e037_e38c896bd2f4447f9743fddadc4d3159~mv2.png/v1/fill/w_896,h_597,al_c,q_90,enc_avif,quality_auto/67e037_e38c896bd2f4447f9743fddadc4d3159~mv2.png)
If we calculate the Betweenness for each change, we can observe the differences they make to this factor and find the answer to our quandary.

We can see from these calculations that dissolving the connection with node A decreases the value of the Betweenness factor by more. This kind of toy network example can help us draw some conclusions about the factor. In this case, we might recall that many types of networks (and, specific to my work, brain networks) have a modular structure, where certain groups of nodes are more connected to each other than to the rest of the network. Thus, in a highly modular network, a negative priority on the Betweenness factor (we want the value to decrease) might lead to dissolving connections with nodes that are not part of node i's modular community, just as node i decreased its Betweenness more when dissolving with node A (not part of node i's community) than when dissolving with node B (part of node i's community).
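The comparison can be checked in code using the adjacency matrix from the figure above (a sketch; the helper names are my own):

```python
def betweenness_factor(x, i):
    """Count ordered pairs (j, h): i--j and i--h exist but j--h does not."""
    n = len(x)
    total = 0
    for j in range(n):
        for h in range(n):
            if j == h or j == i or h == i:
                continue
            if x[i][j] and x[i][h] and not x[j][h]:
                total += 1
    return total

def dissolve(x, i, j):
    """Return a copy of x with the (undirected) tie between i and j removed."""
    y = [row[:] for row in x]
    y[i][j] = y[j][i] = 0
    return y

# Nodes A, B, C, D, E, F, i (indices 0-6), from the figure's adjacency matrix.
x = [
    [0, 0, 0, 0, 0, 0, 1],  # A
    [0, 0, 1, 1, 1, 0, 1],  # B
    [0, 1, 0, 1, 0, 0, 1],  # C
    [0, 1, 1, 0, 0, 0, 0],  # D
    [0, 1, 0, 0, 0, 1, 1],  # E
    [0, 0, 0, 0, 1, 0, 1],  # F
    [1, 1, 1, 0, 1, 1, 0],  # i
]
i, A, B = 6, 0, 1

base = betweenness_factor(x, i)
drop_A = base - betweenness_factor(dissolve(x, i, A), i)
drop_B = base - betweenness_factor(dissolve(x, i, B), i)
print(drop_A, drop_B)  # dissolving i--A decreases Betweenness by more
```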
Network segmentation! I can see the pattern! I can see the way the pattern is drawn from the mathematics. I don't necessarily expect you to feel the same irrational joy as I do staring at toy networks and doing some simple addition, but these are my favorite moments of research. As the sun begins to set behind the windows, I know it is probably time to pack up and go home, but I usually linger at the whiteboard just a little longer. It is a standstill kind of day, after all, when the distractions finally cease. It is a rare moment when I actually feel like a physicist going to work to think about things.
Citations
Snijders, T. A. B., Ripley, R., Bóda, Z., Vörös, A., & Preciado, P. (2024). Manual for RSiena. University of Groningen. https://www.stats.ox.ac.uk/~snijders/siena/