


APPENDIX B: ERDŐS-RÉNYI RANDOM GRAPHS

The foundation of modern network theory is usually attributed to the work of Erdős and Rényi in the 1959 paper "On Random Graphs". Though Gilbert proposed a similar model for creating a graph with tunable density slightly earlier, the better-known method of Erdős and Rényi involves creating a graph with a fixed number of nodes and an initial set of links, and then selecting two nodes at random from the graph and pairing them. The pseudocode looks like this:

  • Generate n nodes on an unconnected graph
  • Set a target number of links, m
  • Repeat until the graph contains m links:
      • Select two nodes at random and form a link between them with probability p
      • Conditions:
          • If the two nodes are already connected, choose different nodes
          • If the two nodes are actually the same node, choose different nodes

For example, let's assume that the probability is 0.5, or 50%. We can simulate this numerically with a coin flip. Thus our algorithm is:

  • Select two nodes at random
  • Flip a coin: if it is heads, form the link; if it is tails, do not

In this algorithm, we create an empty graph, set a target number of links, and form links between randomly chosen nodes with a given probability. For those interested in programming, be sure to add a conditional for the possibility of a duplicate, as in the sketch below.
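For those who want to try it, here is a minimal Python sketch of the recipe above. The function name er_random_graph, the parameter defaults, and the degree tally at the end are our own illustrative choices, not part of the original text:

    import random
    from collections import Counter

    def er_random_graph(n, m, p=0.5, seed=None):
        """Build a graph on n nodes by repeatedly picking two nodes at
        random and linking them with probability p, until m links exist.
        Note: m must be at most n*(n-1)/2, or the loop never finishes."""
        rng = random.Random(seed)
        edges = set()                       # links stored as sorted pairs
        while len(edges) < m:
            u = rng.randrange(n)
            v = rng.randrange(n)
            if u == v:                      # same node picked twice: choose again
                continue
            pair = (min(u, v), max(u, v))
            if pair in edges:               # already connected: choose again
                continue
            if rng.random() < p:            # the "coin flip"
                edges.add(pair)
        return edges

    edges = er_random_graph(n=100, m=150, seed=42)

    # Tally each node's degree, anticipating the degree table discussed next
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    for k, count in sorted(Counter(degree.values()).items()):
        print(f"{count} nodes have {k} links")

Printing the tally for a reasonably large n and m produces the kind of degree table described next.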

So we do our coin flips n times and obtain our list of total degrees. We do this by counting up how many links each node has, obtaining a table of degree counts. This looks very familiar to us: it is a Poisson distribution, and the peak of the curve is its average. Thus, in random graphs, the average node has the average number of connections, and therefore the average depends on the initial probability, p.

Let's take a slightly modified version of the canonical Erdős-Rényi Random Graph (ERRG). In this case we will use the following algorithm:

  • Begin with a network of N nodes
  • At each time-step, add a new node and connect it at random to one of the nodes

This is because the original ERRG model is actually not a growing network, but rather an empty network whose nodes are connected to each other with probability p. Instead of taking a fixed set of nodes and adding links, we just add a new node and a link at each time-step. In our modified case, the network is growing, so we can analyze its dynamics. We get the same result, and we no longer need to worry about our conditions, so for our purposes it is basically the same thing. At each time-step the probability of any given node being chosen is 1/N, and it therefore changes with the size of the network. Let's consider our initial graph, and see where the new node, F, will join:

We have said that each node will have an equal probability of being selected, and that this probability is equal to 1/N. Thus each node has a 1/5 chance of being selected. An incoming node would choose indiscriminately between the five nodes. So, an incoming node makes a connection decision with these odds: each of the five existing nodes is chosen with probability 1/5, or 20%.

Let's say that node F connects to one of the existing nodes. What happens in the next step? Well, now N = 6, so each probability is thus 1/6. If we were to add a new node, it would follow these probabilities: each of the six nodes is chosen with probability 1/6.
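This growth process is straightforward to simulate. The sketch below is our own illustration (the function name grow_uniform_attachment and the choice of starting from n0 unconnected nodes are assumptions, not the text's): it adds one node per time-step and attaches it to an existing node chosen uniformly, so at the moment of attachment each existing node is selected with probability 1/N.

    import random

    def grow_uniform_attachment(n0, steps, seed=None):
        """Grow a network by one node per time-step, attaching each newcomer
        to an existing node chosen uniformly at random."""
        rng = random.Random(seed)
        nodes = list(range(n0))              # the initial nodes
        edges = []
        for t in range(steps):
            target = rng.choice(nodes)       # each existing node: chance 1/N
            new_node = n0 + t
            nodes.append(new_node)
            edges.append((new_node, target))
        return nodes, edges

    # Five existing nodes, one arrival: the newcomer picks each with odds 1/5
    nodes, edges = grow_uniform_attachment(n0=5, steps=1, seed=3)
    print(edges)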


More ERRG
What is considered classical network theory is centered around the Erdős-Rényi Random Graph (ERRG) model. For our purposes, we will say that a slightly modified ERRG model adds a new agent to the network with equal probability of attaching to any existing agent. Thus, in a 2-agent network, nodes A and B can be chosen with equal probability; it is 50% likely that either will be selected. We can simply model this by flipping a coin: if it's heads we add a link to a.A; if it's tails, we add it to a.B.

Coin flip outcome: tails.

Okay, so we flipped a coin and it's tails, so let's add it to a.B and create our second edge, e.2. Now our network looks like this:
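That coin flip is easy to reproduce in code; a tiny sketch (reusing the text's labels a.A and a.B) might look like this:

    import random

    # A fair coin flip decides which existing agent receives the new link
    target = random.choice(["a.A", "a.B"])
    print("attach the new agent to", target)   # tails in the text's example gave a.B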

So we've added a.C, and now we want to add another node. The question is, which one do we pick now? The outcome of this decision is ultimately the divergence between classical and complex network theory. In classical network theory, we would follow the same process, except this time we have three nodes and therefore a 1/3, or 33%, probability of selecting each node:

If we do this process over and over again, we obtain an Erdős-Rényi Random Graph, where the average node has the average number of links. No node has too many links, none too few. The degree distribution of a random graph looks like this:
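A distribution of this shape can be generated numerically. The sketch below reuses the uniform-attachment growth rule from earlier (redefined here so the snippet runs on its own); the exact percentages quoted next come from the original figure, while this code produces its own run under an arbitrary seed:

    import random
    from collections import Counter

    def grow_uniform_attachment(n0, steps, seed=None):
        """Same growth rule as the earlier sketch: each newcomer attaches to
        an existing node chosen uniformly at random."""
        rng = random.Random(seed)
        nodes = list(range(n0))
        edges = []
        for t in range(steps):
            target = rng.choice(nodes)
            nodes.append(n0 + t)
            edges.append((n0 + t, target))
        return nodes, edges

    # Grow a 20,000-node network, then tabulate how often each degree occurs
    # (expected: a roughly exponential decay, with a small maximum degree).
    nodes, edges = grow_uniform_attachment(n0=2, steps=20_000, seed=7)
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1

    hist = Counter(degree[n] for n in nodes)
    total = len(nodes)
    for k in sorted(hist):
        print(f"{k:2d} links: {hist[k] / total:6.2%} of nodes")
    print("maximum degree:", max(degree.values()))
    print(f"share with more than 7 links: "
          f"{sum(c for k, c in hist.items() if k > 7) / total:.2%}")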


What does this say? Well, the x-axis is the number of degrees, and the y-axis is the frequency with which that degree occurs. So to interpret this, we would say that 50% of the nodes have one link, about 25% have two links, and 12.5% have three links. This decays exponentially, relatively quickly. Then at the right side of the x-axis we have the more connected nodes, which have really low probabilities. For example, only 1 node has 17 links, the maximum; this degree occurs 0.002% of the time. In fact, less than 1% of the nodes have more than 7 links. So even after adding 20,000 nodes at random (twenty thousand nodes!), the most connected only had 17 connections. This is not a graph that screams efficient, nontrivial topological self-organization to anyone.

We have in effect created a randomly generated network, one without any discernible macrostructure, based on a probability p. But random networks have a significant downfall: they poorly represent 'real-world' networks. Because network generation is a cumulative process, things don't just happen randomly. In fact, in this highly entropic scenario, non-trivial self-organization does not occur. Nothing forms in random networks. This is precisely because the network growth is dependent upon a distribution of randomness. We have also seen that small rules at the micro-level can result in macro-level patterns, and that rewiring or generating a network can result in a greater emergent network property. So far we have discussed structured and random networks, but the remainder of this Project will be concerned with those of non-trivial structure.

It turns out that this method of modeling random networks, one which had been generally accepted for about 50 years, could not describe the networks that were being discovered in the late 90s. The degree distributions of real-world scale-free networks just didn't look like that. And further, this mechanism of straight-up random attachment didn't work; it didn't fit any data, not static, not dynamic. This problem became more obvious when the ability to obtain and analyze humongous data sets began to proliferate and the alleged ubiquity of scale-free networks came to light. To really explain how networks form in the real world, we need a less random mechanism.