117. Genetic Algorithms 2 – a multiple objective genetic algorithm (NSGA-II)

Note: As there is quite a substantial amount of code in this post, you may also copy the code as a single block from here.

If you have not looked at our description of a simpler genetic algorithm, with a single objective, then we advise you to look at that first (here). The example here assumes some basic understanding of genetic algorithms.

In our previous example of a genetic algorithm, we looked at a genetic algorithm that optimised a single parameter. But what if we have two or more objectives to optimise?

An example in healthcare modelling is in deciding which hospitals should provide a specialist service in order to 1) minimise travel time for patients who need an emergency hospital admission, 2) ensure the hospital has sufficient admissions to maintain expertise and a 24/7 service, and 3) ensure no hospital is overloaded with too many admissions.

One option when considering multiple objectives is to combine different objectives into a single number. This requires standardising the values in some way and giving them weights for their importance. This can simplify the optimisation problem but may be sensitive to how individual objectives are weighted (if following this approach it will probably be sensible to re-run the algorithm using alternative weighting schemes).
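As a minimal sketch of this weighted-sum approach (the scores and weights below are invented purely for illustration), each solution's normalised objective scores are combined into a single number:

```python
import numpy as np

# Hypothetical scores for three solutions on two objectives,
# each already normalised to a 0-1 scale (higher is better)
scores = np.array([[0.9, 0.2],
                   [0.5, 0.6],
                   [0.1, 0.9]])

# Weights reflecting the assumed relative importance of each objective
weights = np.array([0.7, 0.3])

# Combine into a single score per solution
# (e.g. the first solution scores 0.9*0.7 + 0.2*0.3 = 0.69)
combined = scores @ weights
print(combined)
```

The optimiser would then maximise `combined`; re-running with different `weights` shows how sensitive the chosen solution is to the weighting scheme.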

Pareto fronts

The approach we will describe here takes an alternative approach and seeks to explicitly investigate the trade-off between different objectives. When one objective cannot be improved without the worsening of another objective we are on what is known as the ‘Pareto front’. That is easy to visualise with two objectives, but Pareto fronts exist across any number of objectives – we are on the Pareto front when we cannot improve one objective without necessarily worsening at least one other objective (as shown by the red line in the figure below).

More detail on Pareto fronts here: https://pythonhealthcare.org/2018/09/27/93-exploring-the-best-possible-trade-off-between-competing-objectives-identifying-the-pareto-front/
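As a minimal illustration of the dominance test behind a Pareto front (assuming we are maximising all objectives), one solution dominates another if it is at least as good on every objective and strictly better on at least one:

```python
import numpy as np

def dominates(x, y):
    # x dominates y if x is at least as good in all objectives
    # and strictly better in at least one (maximisation assumed)
    return all(x >= y) and any(x > y)

a = np.array([5, 3])
b = np.array([4, 3])

print(dominates(a, b))  # True: b cannot be on the Pareto front while a exists
print(dominates(b, a))  # False
```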

NSGA-II

The algorithm implemented here is based on an algorithm called ‘NSGA-II’. This was published by Deb et al.

Deb et al. (2002) A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation. 6, 182-197.

With NSGA-II we maintain a population of a given size (in the original paper that is fixed; in our implementation we define a range – the population must be larger than a minimum size, but not exceed a given maximum size).

The key process steps of NSGA-II are:

1) Start with a random population of solutions (P), encoded in binary form (‘chromosomes’).
2) Create a child population (Q) through crossover and mutation.
3) Combine P & Q and score for all objectives.
4) Identify the first Pareto front (F1); that is, all solutions where there are no other solutions that are at least equally good in all objectives and better in at least one objective.
5) If F1 is larger than the maximum permitted population size, reduce the size of F1 by ‘crowding selection’ (see below).
6) If F1 is smaller than the required population size, repeat Pareto selection (after removal of the already selected solutions). This new set of solutions is F2. If the total number of selected solutions is greater than the permitted maximum population size, reduce the size of just the latest selection (in this case F2) so that the number of all selected solutions is equal to the maximum permitted population size.
7) Repeat Pareto selection until the required population size is reached (and then reduce the last selected Pareto front by ‘crowding selection’ as required to avoid exceeding the maximum permitted population size).
8) The selected Pareto fronts form the new population, P.
9) Repeat from (2) for the required number of generations or until some other ‘stop’ criterion is reached.
10) Perform a final Pareto selection so that the final reported population is just those solutions on the first Pareto front.

A minimum population size is required to maintain diversity of solutions. If we only selected the first Pareto front we may have a very small population which would lack the diversity required to reach good final solutions.

This process is shown diagrammatically below (from Deb et al., 2002):

Crowding distances/selection

When we are using a Pareto front to select solutions (as described here), all solutions are on the optimal front: for each solution there is no other solution that is at least as good in all scores and better in at least one score. We therefore cannot rank solutions by performance. If we need to control the number of solutions we are generating, we can instead select using ‘crowding distances’. Crowding distances give a measure of closeness in performance to other solutions; the crowding distance of a solution is the average distance to its two neighbouring solutions. We will bias the selection of solutions to those with greater distances to neighbouring solutions.

We will use a ‘tournament’ method to select between two individual solutions. The crowding distances are calculated for all solutions. Two solutions are picked at random and the one with the greater crowding distance will be selected (with the unselected solution being returned back). This process is repeated until the required number of solutions has been selected.

An example is shown below – by using crowding distances we have reduced the number of points in densely packed regions by more than in the sparsely populated areas.

The Code

We will break the code down into sections and functions.

To enable easier visualisation we will look to identify a Pareto population based on two competing objectives. But all the code will work on as many objectives as you wish (though it is not recommended to try to optimise more than about five objectives).

Import required libraries

import random as rn
import numpy as np
import matplotlib.pyplot as plt
# For use in Jupyter notebooks only:
%matplotlib inline

Create reference solutions

In this example we will pick the easiest example possible. We will create two reference chromosomes as ‘ideal solutions’. This function may be used to create more references to mimic optimising against more than two objectives.

We will pick a chromosome length of 25 (this mimics solutions that would be coded as 25 binary ‘genes’).

In real life this binary representation would lead to better or worse solutions. We might, for example, use the binary notation to denote open/closed hospitals in a location problem. Or any number of binary digits might be used to encode the value of a variable.

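As an illustration of encoding a variable's value in binary genes (the slice positions and the four-gene variable here are invented for the example, not part of the reference-solution code below), a run of genes can be decoded into an integer:

```python
import numpy as np

# A hypothetical chromosome of eight binary genes
chromosome = np.array([1, 0, 1, 1, 0, 1, 0, 0])

# Suppose (for illustration only) the first four genes encode one
# variable as an unsigned binary integer
bits = chromosome[0:4]
value = int(''.join(str(int(b)) for b in bits), 2)
print(value)  # 0b1011 = 11
```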

def create_reference_solutions(chromosome_length, solutions):
    """
    Function to create reference chromosomes that will mimic an ideal solution 
    """
    references = np.zeros((solutions, chromosome_length))
    number_of_ones = int(chromosome_length / 2)
    
    for solution in range(solutions):
        # Build an array with an equal mix of zero and ones
        reference = np.zeros(chromosome_length)
        reference[0: number_of_ones] = 1
    
        # Shuffle the array to mix the zeros and ones
        np.random.shuffle(reference)
        references[solution,:] = reference
    
    return references

Show an example set of reference solutions:

print (create_reference_solutions(25, 2))

Out:

[[0. 1. 0. 1. 0. 0. 1. 1. 0. 1. 1. 1. 0. 0. 0. 1. 0. 0. 0. 0. 1. 1. 1. 1.
  0.]
 [0. 1. 0. 1. 0. 0. 0. 1. 0. 0. 1. 0. 1. 1. 0. 1. 1. 1. 0. 0. 1. 0. 1. 1.
  0.]]

Evaluating solutions

Normally this section of the code would be more complex – it decodes the chromosome and applies it to a problem, and assesses the performance. For example we might use the chromosome to store whether hospitals are open/closed and then a piece of code will test travel distances for all patients to the open hospitals, and will also calculate the number of admissions to each hospital.

In our ‘toy’ example here we will simply calculate how many binary digits in each solution are the same as our two reference solutions (so we will have two scores for each solution).

We have two functions. The first will calculate the score (‘fitness’) against a single reference. The second will loop through all references (equivalent to all objectives) and call for the score/fitness to be calculated.

def calculate_fitness(reference, population):
    """
    Calculate how many binary digits in each solution are the same as our 
    reference solution.
    """
    # Create an array of True/False compared to reference
    identical_to_reference = population == reference
    # Sum number of genes that are identical to the reference
    fitness_scores = identical_to_reference.sum(axis=1)
    
    return fitness_scores
def score_population(population, references):
    """
    Loop through all reference solutions and request score/fitness of
    population against that reference solution.
    """
    scores = np.zeros((population.shape[0], references.shape[0]))
    for i, reference in enumerate(references):
        scores[:,i] = calculate_fitness(reference, population)
        
    return scores 

Calculate crowding and select a population based on crowding scores

We have two functions here. The first to calculate the crowding of a population, based on similarity of scores. The second is a Tournament selection method that uses those crowding scores to pick a given number of solutions from a population.

def calculate_crowding(scores):
    """
    Crowding is based on a vector for each individual
    All scores are normalised between low and high. For any one score, all
    solutions are sorted in order low to high. Crowding for chromosome x
    for that score is the difference between the next highest and next
    lowest score. Total crowding value sums all crowding for all scores
    """
    
    population_size = len(scores[:, 0])
    number_of_scores = len(scores[0, :])

    # create crowding matrix of population (row) and score (column)
    crowding_matrix = np.zeros((population_size, number_of_scores))

    # normalise scores (np.ptp is max - min)
    normed_scores = (scores - scores.min(0)) / np.ptp(scores, axis=0)

    # calculate crowding distance for each score in turn
    for col in range(number_of_scores):
        crowding = np.zeros(population_size)

        # end points have maximum crowding
        crowding[0] = 1
        crowding[population_size - 1] = 1

        # Sort each score (to calculate crowding between adjacent scores)
        sorted_scores = np.sort(normed_scores[:, col])

        sorted_scores_index = np.argsort(
            normed_scores[:, col])

        # Calculate crowding distance for each individual
        crowding[1:population_size - 1] = \
            (sorted_scores[2:population_size] -
             sorted_scores[0:population_size - 2])

        # resort to original order (two steps)
        re_sort_order = np.argsort(sorted_scores_index)
        sorted_crowding = crowding[re_sort_order]

        # Record crowding distances
        crowding_matrix[:, col] = sorted_crowding

    # Sum crowding distances of each score
    crowding_distances = np.sum(crowding_matrix, axis=1)

    return crowding_distances
def reduce_by_crowding(scores, number_to_select):
    """
    This function selects a number of solutions based on tournament of
    crowding distances. Two members of the population are picked at
    random. The one with the higher crowding distance is always picked.
    """    
    population_ids = np.arange(scores.shape[0])

    crowding_distances = calculate_crowding(scores)

    picked_population_ids = np.zeros((number_to_select))

    picked_scores = np.zeros((number_to_select, len(scores[0, :])))

    for i in range(number_to_select):

        population_size = population_ids.shape[0]

        fighter1ID = rn.randint(0, population_size - 1)

        fighter2ID = rn.randint(0, population_size - 1)

        # If fighter # 1 is better
        if crowding_distances[fighter1ID] >= crowding_distances[
            fighter2ID]:

            # add solution to picked solutions array
            picked_population_ids[i] = population_ids[
                fighter1ID]

            # Add score to picked scores array
            picked_scores[i, :] = scores[fighter1ID, :]

            # remove selected solution from available solutions
            population_ids = np.delete(population_ids, (fighter1ID),
                                       axis=0)

            scores = np.delete(scores, (fighter1ID), axis=0)

            crowding_distances = np.delete(crowding_distances, (fighter1ID),
                                           axis=0)
        else:
            picked_population_ids[i] = population_ids[fighter2ID]

            picked_scores[i, :] = scores[fighter2ID, :]

            population_ids = np.delete(population_ids, (fighter2ID), axis=0)

            scores = np.delete(scores, (fighter2ID), axis=0)

            crowding_distances = np.delete(
                crowding_distances, (fighter2ID), axis=0)

    # Convert to integer 
    picked_population_ids = np.asarray(picked_population_ids, dtype=int)
    
    return (picked_population_ids)

Pareto selection

Pareto selection involves two functions. The first will select a single Pareto front. The second will, as necessary, repeat Pareto front selection to build a population within defined size limits, and, as necessary, will reduce a Pareto front by applying crowding selection.

def identify_pareto(scores, population_ids):
    """
    Identifies a single Pareto front, and returns the population IDs of
    the selected solutions.
    """
    population_size = scores.shape[0]
    # Create a starting list of items on the Pareto front
    # All items start off as being labelled as on the Pareto front
    pareto_front = np.ones(population_size, dtype=bool)
    # Loop through each item. This will then be compared with all other items
    for i in range(population_size):
        # Loop through all other items
        for j in range(population_size):
            # Check if our 'i' point is dominated by our 'j' point
            if all(scores[j] >= scores[i]) and any(scores[j] > scores[i]):
                # j dominates i. Label 'i' point as not on Pareto front
                pareto_front[i] = 0
                # Stop further comparisons with 'i' (no more comparisons needed)
                break
    # Return ids of scenarios on pareto front
    return population_ids[pareto_front]
def build_pareto_population(
        population, scores, minimum_population_size, maximum_population_size):
    """
    As necessary repeats Pareto front selection to build a population within
    defined size limits. Will reduce a Pareto front by applying crowding 
    selection as necessary.    
    """
    unselected_population_ids = np.arange(population.shape[0])
    all_population_ids = np.arange(population.shape[0])
    pareto_front = []
    while len(pareto_front) < minimum_population_size:
        temp_pareto_front = identify_pareto(
                scores[unselected_population_ids, :], unselected_population_ids)
        
        # Check size of total Pareto front.
        # If larger than maximum size reduce new Pareto front by crowding
        combined_pareto_size = len(pareto_front) + len(temp_pareto_front)
        if combined_pareto_size > maximum_population_size:
            # Reduce the new Pareto front so that the combined total of
            # selected solutions equals the maximum population size
            number_to_select = maximum_population_size - len(pareto_front)
            selected_individuals = (reduce_by_crowding(
                    scores[temp_pareto_front], number_to_select))
            temp_pareto_front = temp_pareto_front[selected_individuals]
        
        # Add latest pareto front to full Pareto front
        pareto_front = np.hstack((pareto_front, temp_pareto_front))
        
    
        # Update unselected population ID by using sets to find IDs in all
        # ids that are not in the selected front
        unselected_set = set(all_population_ids) - set(pareto_front)
        unselected_population_ids = np.array(list(unselected_set))

    population = population[pareto_front.astype(int)]
    return population

Population functions

There are four functions concerning the population:

1) Create a random population

2) Breed by crossover – two children produced from two parents

3) Randomly mutate population (small probability that any given gene in population will switch between 1/0 – applied to the child population)

4) Breed population – create a child population by repeatedly calling the breeding function (two parents producing two children), applying genetic mutation to the child population, combining parent and child populations, and removing duplicate chromosomes.

def create_population(individuals, chromosome_length):
    """
    Create random population with given number of individuals and chromosome
    length.
    """    
    # Set up an initial array of all zeros
    population = np.zeros((individuals, chromosome_length))
    # Loop through each row (individual)
    for i in range(individuals):
        # Choose a random number of ones to create
        ones = rn.randint(0, chromosome_length)
        # Change the required number of zeros to ones
        population[i, 0:ones] = 1
        # Shuffle row
        np.random.shuffle(population[i])
    
    return population
def breed_by_crossover(parent_1, parent_2):
    """
    Combine two parent chromsomes by crossover to produce two children.
    """
    # Get length of chromosome
    chromosome_length = len(parent_1)
    
    # Pick crossover point, avoiding ends of chromosome
    crossover_point = rn.randint(1,chromosome_length-1)
    
    # Create children. np.hstack joins two arrays
    child_1 = np.hstack((parent_1[0:crossover_point],
                        parent_2[crossover_point:]))
    
    child_2 = np.hstack((parent_2[0:crossover_point],
                        parent_1[crossover_point:]))
    
    # Return children
    return child_1, child_2
def randomly_mutate_population(population, mutation_probability):
    """
    Randomly mutate population with a given individual gene mutation
    probability. Individual gene may switch between 0/1.
    """
    # Apply random mutation
    random_mutation_array = np.random.random(size=(population.shape))
    
    random_mutation_boolean = \
        random_mutation_array <= mutation_probability

    population[random_mutation_boolean] = \
    np.logical_not(population[random_mutation_boolean])
    
    # Return mutated population
    return population
def breed_population(population):
    """
    Create child population by repeatedly calling breeding function (two parents
    producing two children), applying genetic mutation to the child population,
    combining parent and child population, and removing duplicate chromosomes.
    """
    # Create an empty list for new population
    new_population = []
    population_size = population.shape[0]
    # Create new population, generating two children at a time
    for i in range(int(population_size/2)):
        parent_1 = population[rn.randint(0, population_size-1)]
        parent_2 = population[rn.randint(0, population_size-1)]
        child_1, child_2 = breed_by_crossover(parent_1, parent_2)
        new_population.append(child_1)
        new_population.append(child_2)
    
    # Add the child population to the parent population
    # In this method we allow parents and children to compete to be kept
    population = np.vstack((population, np.array(new_population)))
    population = np.unique(population, axis=0)
    
    return population

Main algorithm code

Now let’s put it all together!

# Set general parameters
chromosome_length = 50
starting_population_size = 5000
maximum_generation = 250
minimum_population_size = 500
maximum_population_size = 1000

# Create two reference solutions 
# (this is used just to illustrate GAs)
references = create_reference_solutions(chromosome_length, 2)

# Create starting population
population = create_population(
        starting_population_size, chromosome_length)

# Loop through the generations of genetic algorithm

for generation in range(maximum_generation):
    if generation % 10 == 0:
        print ('Generation (out of %i): %i '%(maximum_generation, generation))
    
    # Breed
    population = breed_population(population)
    
    # Score population
    scores = score_population(population, references)
    
    # Build pareto front
    population = build_pareto_population(
            population, scores, minimum_population_size, maximum_population_size)


# Get final pareto front
scores = score_population(population, references)
population_ids = np.arange(population.shape[0]).astype(int)
pareto_front = identify_pareto(scores, population_ids)
population = population[pareto_front, :]
scores = scores[pareto_front]

Plot final Pareto front

This is the only part of the code that works only for two objectives.

You will see in this example that we have a well-described trade-off between the two objectives. With the population sizes and number of generations used, we achieved 96% of the maximum score on each objective. A larger population and/or more generations would get us to 100% on this quite simple example, though Pareto algorithms should not be expected to find the perfect solution for all objectives; rather, they should identify good solutions.

# Plot Pareto front (for two scores only)  
x = scores[:, 0]/chromosome_length*100
y = scores[:, 1]/chromosome_length*100
plt.xlabel('Objective A - % maximum obtainable')
plt.ylabel('Objective B - % maximum obtainable')

plt.scatter(x,y)
plt.savefig('pareto.png')
plt.show()


94: Genetic algorithms 1. A simple genetic algorithm

Note: For core code only, without explanation or test code sections see this link: code_only

 


For more discussion on the general concepts of genetic algorithms, which are only presented briefly here (as we will focus on how to code a simple example in Python), see the Wikipedia article.

In this example we will look at a basic genetic algorithm (GA). We will set up the GA to try to match a pre-defined ‘optimal’ solution. Often with GAs we are using them to find solutions to problems which 1) cannot be solved with ‘exact’ methods (methods that are guaranteed to find the best solution), and 2) where we cannot recognise when we have found the optimal solution. GAs therefore fall into a collection of algorithms called heuristic (from the Greek for ‘find’ or ‘discover’) algorithms that hunt down good solutions, without us knowing how far off the theoretical optimal solution they are.

In this case however we will test the algorithms against a solution we know we are looking for.

Principles of genetic algorithms (GAs)

GAs are iterative algorithms; that is, they repeatedly loop through a process until a target is reached or a maximum number of iterations (called ‘generations’ in GAs) is reached.

The basic steps of a genetic algorithm are:

1) Create a population of randomly generated solutions, coded as binary arrays, and score population for performance (or ‘fitness’) of each individual.

2) Loop (until target performance is reached or a maximum number of generations is reached):

  • Select two parents to ‘breed’. Selection is based on performance (or ‘fitness’) – better performing parents are more likely to be selected.
  • Generate two child solutions from two parents by mixing the genes from each parent and by applying a chance of random mutation.
  • Repeat child generation until a required new population size is reached.
  • Score new population

The solution to ‘hunt down’

We code most GAs to work with binary strings (or a binary NumPy array, as we will use here), so that the solution may be represented as a series of 0 and 1.

A real-life example in healthcare is that the array of 0s and 1s may represent the choice of closed or open hospital units providing a given service. We then evaluate each solution against predetermined criteria.

Here we will define a known solution based on a string of 70 0s or 1s. The number of possible combinations for this is 2^70, or 1.2 x 10^21 – that is roughly 1 followed by twenty-one zeros. Or, to put it another way (as these large numbers are difficult to imagine) …… the universe is about 15 billion (15 x 10^9) years old, or 5 x 10^17 seconds old. If we could evaluate 1,000 solutions per second, then a computer would need to run for more than twice the current age of the universe in order to evaluate all possible combinations. Let’s see how close to the perfect solution we can get in reasonable time!
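We can check this arithmetic directly:

```python
# Number of possible 70-gene binary chromosomes
combinations = 2 ** 70
print(f'{combinations:.1e}')  # about 1.2e+21

# Time to evaluate them all at 1,000 solutions per second,
# expressed as multiples of the age of the universe
seconds_needed = combinations / 1000
age_of_universe = 5e17  # seconds, approximately
print(seconds_needed / age_of_universe)  # roughly 2.4
```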

In GAs we will call each potential solution to be evaluated a ‘chromosome’. Each element (a 0 or 1) in that chromosome is a ‘gene’.

import random
import numpy as np

def create_reference_solution(chromosome_length):

    number_of_ones = int(chromosome_length / 2)

    # Build an array with an equal mix of zero and ones
    reference = np.zeros(chromosome_length)
    reference[0: number_of_ones] = 1

    # Shuffle the array to mix the zeros and ones
    np.random.shuffle(reference)
    
    return reference

Let’s test the function and show an example 70-gene reference chromosome.

# Print an example target array
print (create_reference_solution(70))
OUT:

[1. 0. 0. 0. 1. 0. 0. 1. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 1. 0. 1. 1. 1. 0.
 1. 0. 1. 1. 1. 1. 0. 0. 0. 1. 1. 0. 1. 0. 0. 1. 1. 1. 1. 0. 1. 0. 1. 1.
 0. 1. 1. 1. 1. 0. 1. 0. 1. 1. 1. 0. 1. 0. 0. 0. 0. 0. 1. 0. 1. 1.]

Create a starting random population

We will use NumPy to store our population. Each row will represent one solution, which will contain a random sequence of zeros and ones. The number of rows will represent the number of ‘individuals’ in our population.

def create_starting_population(individuals, chromosome_length):
    # Set up an initial array of all zeros
    population = np.zeros((individuals, chromosome_length))
    # Loop through each row (individual)
    for i in range(individuals):
        # Choose a random number of ones to create
        ones = random.randint(0, chromosome_length)
        # Change the required number of zeros to ones
        population[i, 0:ones] = 1
        # Shuffle row
        np.random.shuffle(population[i])
    
    return population
    

Let’s test by showing a random population of 4 individuals with a gene length of 10.

print (create_starting_population(4, 10))
OUT:

[[0. 1. 1. 1. 1. 0. 0. 1. 1. 0.]
 [1. 1. 1. 1. 0. 1. 1. 0. 1. 1.]
 [1. 0. 0. 0. 0. 0. 0. 1. 1. 0.]
 [1. 0. 0. 0. 0. 0. 1. 0. 0. 0.]]

Calculate fitness of population

In GAs we refer to how good each individual in the population is as ‘fitness’. The calculate_fitness function will be the evaluation procedure you wish to apply in your algorithm. In this example we are going to return the number of genes (elements) in a potential solution (chromosome) that match our reference standard.

def calculate_fitness(reference, population):
    # Create an array of True/False compared to reference
    identical_to_reference = population == reference
    # Sum number of genes that are identical to the reference
    fitness_scores = identical_to_reference.sum(axis=1)
    
    return fitness_scores

Let’s test what we have so far:

reference = create_reference_solution(10)
print ('Reference solution: \n', reference)
population = create_starting_population(6, 10)
print ('\nStarting population: \n', population)
scores = calculate_fitness(reference, population)
print('\nScores: \n', scores)
OUT: 

Reference solution: 
 [1. 1. 0. 0. 0. 0. 1. 0. 1. 1.]

Starting population: 
 [[0. 0. 1. 1. 1. 0. 0. 0. 0. 1.]
 [1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 0. 1. 1. 1. 1. 1.]
 [0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
 [1. 1. 1. 1. 0. 0. 1. 1. 0. 1.]
 [1. 1. 0. 0. 0. 0. 0. 1. 1. 1.]]

Scores: 
 [3 5 6 6 6 8]

Choosing individuals to breed with tournament selection

Genetic algorithms mimic biology in that the individuals with the best fitness scores are most likely to breed and pass on their genes. But we do not simply take all the best individuals from our population to breed, as this might risk ‘in-breeding’. Rather, we use a method that means better individuals are more likely to breed, but low fitness individuals at times may be chosen to breed.

In tournament selection we first choose two individuals at random from our population (it is possible that two low fitness individuals may be chosen). We then pass those individuals to a ‘tournament’ where the individual with the highest fitness will be chosen.

It is possible to further modify this so that the highest fitness individual will win with a given probability, but we will keep it simple here and have the highest fitness individual always winning.
It is also possible to have more than two individuals in a tournament. The more individuals in a tournament the more the picked population will be biased towards the highest fitness individuals.
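A sketch of how the tournament could be generalised to an arbitrary number of contenders (this helper and its `tournament_size` parameter are our own illustration; the main example below uses the two-fighter version):

```python
import random
import numpy as np

def select_by_n_way_tournament(population, scores, tournament_size=3):
    """Pick tournament_size individuals at random; the fittest wins."""
    contender_ids = [random.randint(0, len(scores) - 1)
                     for _ in range(tournament_size)]
    # The contender with the highest fitness score wins the tournament
    winner = max(contender_ids, key=lambda i: scores[i])
    return population[winner, :]
```

The larger `tournament_size` is, the more strongly selection is biased towards the fittest individuals.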

def select_individual_by_tournament(population, scores):
    # Get population size
    population_size = len(scores)
    
    # Pick individuals for tournament
    fighter_1 = random.randint(0, population_size-1)
    fighter_2 = random.randint(0, population_size-1)
    
    # Get fitness score for each
    fighter_1_fitness = scores[fighter_1]
    fighter_2_fitness = scores[fighter_2]
    
    # Identify individual with highest fitness
    # Fighter 1 will win if scores are equal
    if fighter_1_fitness >= fighter_2_fitness:
        winner = fighter_1
    else:
        winner = fighter_2
    
    # Return the chromsome of the winner
    return population[winner, :]

Let’s test selection of parents:

# Set up and score population
reference = create_reference_solution(10)
population = create_starting_population(6, 10)
scores = calculate_fitness(reference, population)

# Pick two parents and display
parent_1 = select_individual_by_tournament(population, scores)
parent_2 = select_individual_by_tournament(population, scores)
print (parent_1)
print (parent_2)
OUT:

[1. 0. 1. 1. 0. 1. 0. 1. 1. 0.]
[1. 0. 1. 1. 0. 1. 0. 1. 1. 0.]

Producing children from parents – crossover

When two individuals are chosen, the next step is to produce ‘children’ from them. We produce these children by a ‘crossover’ mix of their two chromosomes. We choose a random point within the chromosome, and then one ‘child’ will take the left portion (up to, but not including, the crossover point) from parent 1 and the corresponding right portion from parent 2. The result is a mix of genes from each parent. The second ‘child’ will be the opposite of this: the left portion (up to, but not including, the crossover point) from parent 2 and the corresponding right portion from parent 1.

It is possible to have more than one crossover point, but we will keep it simple and have a single crossover point.
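A sketch of a two-point variant (this helper is our own addition for illustration; the main example below uses the single-point version):

```python
import random
import numpy as np

def breed_by_two_point_crossover(parent_1, parent_2):
    """Swap the section between two random crossover points."""
    chromosome_length = len(parent_1)
    # Pick two distinct crossover points, avoiding the chromosome ends
    point_1, point_2 = sorted(random.sample(range(1, chromosome_length), 2))
    # Child 1 takes the middle section from parent 2, and vice versa
    child_1 = np.hstack((parent_1[:point_1],
                         parent_2[point_1:point_2],
                         parent_1[point_2:]))
    child_2 = np.hstack((parent_2[:point_1],
                         parent_1[point_1:point_2],
                         parent_2[point_2:]))
    return child_1, child_2
```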

def breed_by_crossover(parent_1, parent_2):
    # Get length of chromosome
    chromosome_length = len(parent_1)
    
    # Pick crossover point, avoiding ends of chromosome
    crossover_point = random.randint(1,chromosome_length-1)
    
    # Create children. np.hstack joins two arrays
    child_1 = np.hstack((parent_1[0:crossover_point],
                        parent_2[crossover_point:]))
    
    child_2 = np.hstack((parent_2[0:crossover_point],
                        parent_1[crossover_point:]))
    
    # Return children
    return child_1, child_2
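As mentioned, more than one crossover point is possible. A two-point variant might look like the sketch below, in which the middle section is swapped between the parents (the name `breed_by_two_point_crossover` is hypothetical; the rest of this post uses the single-point version above):

```python
import random
import numpy as np

def breed_by_two_point_crossover(parent_1, parent_2):
    # Pick two distinct crossover points (avoiding the ends) and sort them
    chromosome_length = len(parent_1)
    point_1, point_2 = sorted(random.sample(range(1, chromosome_length), 2))
    
    # Child 1: outer sections from parent 1, middle section from parent 2
    child_1 = np.hstack((parent_1[:point_1],
                         parent_2[point_1:point_2],
                         parent_1[point_2:]))
    
    # Child 2 is the mirror image
    child_2 = np.hstack((parent_2[:point_1],
                         parent_1[point_1:point_2],
                         parent_2[point_2:]))
    
    return child_1, child_2
```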

And let’s test it so far, creating a population, scoring it, picking two ‘parents’ and producing two ‘children’.

# Set up and score population
reference = create_reference_solution(15)
population = create_starting_population(100, 15)
scores = calculate_fitness(reference, population)

# Pick two parents and display
parent_1 = select_individual_by_tournament(population, scores)
parent_2 = select_individual_by_tournament(population, scores)

# Get children
child_1, child_2 = breed_by_crossover(parent_1, parent_2)

# Show output
print ('Parents')
print (parent_1)
print (parent_2)
print ('Children')
print (child_1)
print (child_2)
OUT:

Parents
[1. 0. 0. 0. 0. 0. 0. 1. 1. 0. 0. 0. 0. 0. 0.]
[0. 0. 1. 0. 0. 0. 0. 0. 0. 1. 0. 1. 1. 1. 0.]
Children
[1. 0. 0. 0. 0. 0. 0. 1. 1. 0. 0. 1. 1. 1. 0.]
[0. 0. 1. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]

Random mutation of genes

In evolution, genes are sometimes copied incorrectly. This change may be harmful or beneficial. We mimic this by giving each gene (which is either a 0 or a 1) a certain probability of being switched.

Typically this probability is low (e.g. 0.005), though it can be made flexible (e.g. increasing the mutation rate if progress has stalled).

def randomly_mutate_population(population, mutation_probability):
    # Apply random mutation
    random_mutation_array = np.random.random(
        size=(population.shape))
    
    random_mutation_boolean = \
        random_mutation_array <= mutation_probability
    
    population[random_mutation_boolean] = \
        np.logical_not(population[random_mutation_boolean])
    
    # Return mutated population
    return population
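The flexible mutation rate mentioned above could be implemented as a simple stall-based schedule. This is a hypothetical helper (the name `adjust_mutation_rate` and all its parameters are illustrative, and it is not used in the algorithm below), which doubles the rate each time progress has stalled for a set number of generations, up to a ceiling:

```python
def adjust_mutation_rate(base_rate, generations_without_improvement,
                         boost_factor=2.0, stall_threshold=10, max_rate=0.05):
    """Double the mutation rate for every 'stall_threshold' generations
    without improvement, capped at 'max_rate'."""
    boosts = generations_without_improvement // stall_threshold
    return min(base_rate * (boost_factor ** boosts), max_rate)

print(adjust_mutation_rate(0.005, 0))   # 0.005 (no stall)
print(adjust_mutation_rate(0.005, 10))  # 0.01 (one boost)
```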

Let’s test our function with a high mutation rate (0.25) to see the effects. You can change the mutation rate and see what happens (a mutation rate of 1.0 will invert all genes).

# Set up and score population
reference = create_reference_solution(15)
population = create_starting_population(100, 15)
scores = calculate_fitness(reference, population)

# Pick two parents and display
parent_1 = select_individual_by_tournament(population, scores)
parent_2 = select_individual_by_tournament(population, scores)

# Get children and make new population 
child_1, child_2 = breed_by_crossover(parent_1, parent_2)
population = np.stack((child_1, child_2))

# Mutate population
mutation_probability = 0.25
print ("Population before mutation")
print (population)
population = randomly_mutate_population(population, mutation_probability)
print ("Population after mutation")
print (population)
OUT:

Population before mutation
[[1. 0. 0. 0. 0. 1. 0. 1. 1. 0. 0. 0. 0. 1. 1.]
 [0. 0. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0. 1. 1. 1.]]

Population after mutation
[[1. 1. 0. 1. 0. 1. 0. 1. 1. 1. 1. 1. 0. 1. 1.]
 [0. 0. 1. 1. 0. 0. 0. 1. 1. 0. 0. 0. 1. 0. 1.]]

Putting it all together

We’ve defined all the functions we need. Now let’s put it all together.

# Set general parameters
chromosome_length = 75
population_size = 500
maximum_generation = 200
best_score_progress = [] # Tracks progress

# Create reference solution 
# (this is used just to illustrate GAs)
reference = create_reference_solution(chromosome_length)

# Create starting population
population = create_starting_population(population_size, chromosome_length)

# Display best score in starting population
scores = calculate_fitness(reference, population)
best_score = np.max(scores)/chromosome_length * 100
print ('Starting best score, percent target: %.1f' %best_score)

# Add starting best score to progress tracker
best_score_progress.append(best_score)

# Now we'll go through the generations of genetic algorithm
for generation in range(maximum_generation):
    # Create an empty list for new population
    new_population = []
    
    # Create new population, generating two children at a time
    for i in range(int(population_size/2)):
        parent_1 = select_individual_by_tournament(population, scores)
        parent_2 = select_individual_by_tournament(population, scores)
        child_1, child_2 = breed_by_crossover(parent_1, parent_2)
        new_population.append(child_1)
        new_population.append(child_2)
    
    # Replace the old population with the new one
    population = np.array(new_population)
    
    # Apply random mutation to the new population
    mutation_probability = 0.005
    population = randomly_mutate_population(population, mutation_probability)
    
    # Score best solution, and add to tracker
    scores = calculate_fitness(reference, population)
    best_score = np.max(scores)/chromosome_length * 100
    best_score_progress.append(best_score)

# GA has completed required generation
print ('End best score, percent target: %.1f' %best_score)

# Plot progress
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(best_score_progress)
plt.xlabel('Generation')
plt.ylabel('Best score (% target)')
plt.show()
OUT:

Starting best score, percent target: 64.0
End best score, percent target: 100.0
[Figure: best score (% of target) by generation]
94. Genetic Algorithms 1. A simple genetic algorithm (code only)

For explanation see: 94: Genetic algorithms 1. A simple genetic algorithm

import random
import numpy as np

def create_reference_solution(chromosome_length):

    number_of_ones = int(chromosome_length / 2)

    # Build an array with an equal mix of zero and ones
    reference = np.zeros(chromosome_length)
    reference[0: number_of_ones] = 1

    # Shuffle the array to mix the zeros and ones
    np.random.shuffle(reference)
    
    return reference


def create_starting_population(individuals, chromosome_length):
    # Set up an initial array of all zeros
    population = np.zeros((individuals, chromosome_length))
    # Loop through each row (individual)
    for i in range(individuals):
        # Choose a random number of ones to create
        ones = random.randint(0, chromosome_length)
        # Change the required number of zeros to ones
        population[i, 0:ones] = 1
        # Shuffle row
        np.random.shuffle(population[i])
    
    return population


def calculate_fitness(reference, population):
    # Create an array of True/False compared to reference
    identical_to_reference = population == reference
    # Sum number of genes that are identical to the reference
    fitness_scores = identical_to_reference.sum(axis=1)
    
    return fitness_scores


def select_individual_by_tournament(population, scores):
    # Get population size
    population_size = len(scores)
    
    # Pick individuals for tournament
    fighter_1 = random.randint(0, population_size-1)
    fighter_2 = random.randint(0, population_size-1)
    
    # Get fitness score for each
    fighter_1_fitness = scores[fighter_1]
    fighter_2_fitness = scores[fighter_2]
    
    # Identify individual with highest fitness
    # Fighter 1 will win if scores are equal
    if fighter_1_fitness >= fighter_2_fitness:
        winner = fighter_1
    else:
        winner = fighter_2
    
    # Return the chromosome of the winner
    return population[winner, :]


def breed_by_crossover(parent_1, parent_2):
    # Get length of chromosome
    chromosome_length = len(parent_1)
    
    # Pick crossover point, avoiding ends of chromosome
    crossover_point = random.randint(1,chromosome_length-1)
    
    # Create children. np.hstack joins two arrays
    child_1 = np.hstack((parent_1[0:crossover_point],
                        parent_2[crossover_point:]))
    
    child_2 = np.hstack((parent_2[0:crossover_point],
                        parent_1[crossover_point:]))
    
    # Return children
    return child_1, child_2
    

def randomly_mutate_population(population, mutation_probability):
    # Apply random mutation
    random_mutation_array = np.random.random(
        size=(population.shape))
    
    random_mutation_boolean = \
        random_mutation_array <= mutation_probability
    
    population[random_mutation_boolean] = \
        np.logical_not(population[random_mutation_boolean])
    
    # Return mutated population
    return population
    

# *************************************
# ******** MAIN ALGORITHM CODE ********
# *************************************

# Set general parameters
chromosome_length = 75
population_size = 500
maximum_generation = 200
best_score_progress = [] # Tracks progress

# Create reference solution 
# (this is used just to illustrate GAs)
reference = create_reference_solution(chromosome_length)

# Create starting population
population = create_starting_population(population_size, chromosome_length)

# Display best score in starting population
scores = calculate_fitness(reference, population)
best_score = np.max(scores)/chromosome_length * 100
print ('Starting best score, % target: ',best_score)

# Add starting best score to progress tracker
best_score_progress.append(best_score)

# Now we'll go through the generations of genetic algorithm
for generation in range(maximum_generation):
    # Create an empty list for new population
    new_population = []
    
    # Create new population, generating two children at a time
    for i in range(int(population_size/2)):
        parent_1 = select_individual_by_tournament(population, scores)
        parent_2 = select_individual_by_tournament(population, scores)
        child_1, child_2 = breed_by_crossover(parent_1, parent_2)
        new_population.append(child_1)
        new_population.append(child_2)
    
    # Replace the old population with the new one
    population = np.array(new_population)
    
    # Apply random mutation to the new population
    mutation_probability = 0.005
    population = randomly_mutate_population(population, mutation_probability)
    
    # Score best solution, and add to tracker
    scores = calculate_fitness(reference, population)
    best_score = np.max(scores)/chromosome_length * 100
    best_score_progress.append(best_score)

# GA has completed required generation
print ('End best score, % target: ', best_score)

# Plot progress
import matplotlib.pyplot as plt
plt.plot(best_score_progress)
plt.xlabel('Generation')
plt.ylabel('Best score (% target)')
plt.show()

93. Exploring the best possible trade-off between competing objectives: identifying the Pareto Front

[Figure: scatter of points with the Pareto front highlighted in red]

When considering optimisation of multiple objectives, the Pareto front is the collection of points where one objective cannot be improved without detriment to another objective*. These points are also called ‘non-dominated’. In contrast, points not on the Pareto front, or ‘dominated’ points, represent points where it is possible to improve one or more objectives without loss of performance in another objective.

*An example of this type of problem is the planning of emergency department (ED) services. Ideally we would all like to be close to an ED, but we also want that ED to be large enough to sustain 24/7 consultant physician presence. So we might, for example, have two objectives: the proportion of patients living within 30 minutes of an ED, and the proportion of patients who attend an ED with 24/7 consultant presence. The more EDs we have in England, the more patients will be within 30 minutes of one, but as we plan for more EDs those EDs get smaller and fewer will be able to sustain 24/7 consultant presence. We may be interested in seeing the nature of the trade-off. We therefore explore lots of potential solutions (i.e. change the number and location of ED departments) and identify the Pareto front.
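The dominance test between two candidate solutions can be written directly in NumPy. This is a minimal sketch of the definition above (the `dominates` helper is illustrative and not part of the code below); higher scores are assumed to be better:

```python
import numpy as np

def dominates(a, b):
    """True if solution 'a' dominates solution 'b': at least as good on
    every objective, and strictly better on at least one (higher = better)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a >= b) and np.any(a > b))

print(dominates([80, 60], [60, 50]))  # True: better on both objectives
print(dominates([97, 23], [55, 77]))  # False: a trade-off, neither dominates
```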

Here we present code to identify points on the Pareto front. We will use an example with just two objectives (as that is easy to visualise) but the Pareto front principle works for any number of simultaneous objectives.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Some dummy data: Each item has two scores

scores = np.array([[97, 23],
                  [55, 77],
                  [34, 76],
                  [80, 60],
                  [99,  4],
                  [81,  5],
                  [ 5, 81],
                  [30, 79],
                  [15, 80],
                  [70, 65],
                  [90, 40],
                  [40, 30],
                  [30, 40],
                  [20, 60],
                  [60, 50],
                  [20, 20],
                  [30,  1],
                  [60, 40],
                  [70, 25],
                  [44, 62],
                  [55, 55],
                  [55, 10],
                  [15, 45],
                  [83, 22],
                  [76, 46],
                  [56, 32],
                  [45, 55],
                  [10, 70],
                  [10, 30],
                  [79, 50]])

And now let’s plot that data.

x = scores[:, 0]
y = scores[:, 1]

plt.scatter(x, y)
plt.xlabel('Objective A')
plt.ylabel('Objective B')
plt.show()

[Figure: scatter plot of Objective A vs Objective B]

Now we will define our function to identify those points that are on the Pareto front. We start off by assuming all points are on the Pareto front and then change the status of those that are not. We use two loops. The outer loop (‘i’) goes through all points in order to compare them to all other points (the comparison is made using the inner loop, ‘j’). For any given point ‘i’, if any other point is at least as good in all objectives and better in at least one, then point ‘i’ is ‘dominated’ and is not on the Pareto front. As soon as such a dominating point is found, point ‘i’ is marked as not on the Pareto front and the inner loop can stop.

The function returns the index numbers of points in the original array that are on the Pareto front.

def identify_pareto(scores):
    # Count number of items
    population_size = scores.shape[0]
    # Create a NumPy index for scores on the pareto front (zero indexed)
    population_ids = np.arange(population_size)
    # Create a starting list of items on the Pareto front
    # All items start off as being labelled as on the Pareto front
    pareto_front = np.ones(population_size, dtype=bool)
    # Loop through each item. This will then be compared with all other items
    for i in range(population_size):
        # Loop through all other items
        for j in range(population_size):
            # Check if our 'i' point is dominated by our 'j' point
            if all(scores[j] >= scores[i]) and any(scores[j] > scores[i]):
                # j dominates i. Label 'i' point as not on Pareto front
                pareto_front[i] = 0
                # Stop further comparisons with 'i' (no more comparisons needed)
                break
    # Return ids of scenarios on pareto front
    return population_ids[pareto_front]

We’ll now apply the function and print out our Pareto front index numbers and scores.

pareto = identify_pareto(scores)
print ('Pareto front index values')
print ('Points on Pareto front: \n', pareto)

pareto_front = scores[pareto]
print ('\nPareto front scores')
print (pareto_front)

OUT:
Pareto front index values
Points on Pareto front: 
 [ 0  1  3  4  6  7  8  9 10]

Pareto front scores
[[97 23]
 [55 77]
 [80 60]
 [99  4]
 [ 5 81]
 [30 79]
 [15 80]
 [70 65]
 [90 40]]

To aid plotting, we’ll sort our Pareto front scores in ascending order of the first item. We’ll use a Pandas DataFrame to make sorting easy.

pareto_front_df = pd.DataFrame(pareto_front)
pareto_front_df.sort_values(0, inplace=True)
pareto_front = pareto_front_df.values

And now we can plot our data again, showing the Pareto front.

x_all = scores[:, 0]
y_all = scores[:, 1]
x_pareto = pareto_front[:, 0]
y_pareto = pareto_front[:, 1]

plt.scatter(x_all, y_all)
plt.plot(x_pareto, y_pareto, color='r')
plt.xlabel('Objective A')
plt.ylabel('Objective B')
plt.show()

 

[Figure: scatter of points with the Pareto front plotted as a red line]

82. The travelling community nurse problem (aka the Travelling Salesman Problem)

[Figure: example of a best route found between points]

The travelling salesman problem is a classic geographic problem that may be framed as “Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the origin city?”.

The same problem may be applied to community nurses: given a list of patients to see in a day, in what order should the nurse visit the patients to minimise the time spent travelling (freeing up the most time for nursing)?

This problem belongs to a class of problems known as ‘NP-hard’: the number of possible solutions grows factorially with the number of points to visit, and it soon becomes too computationally expensive to evaluate all possibilities. There is also no known algorithm that efficiently guarantees an optimal solution. For example, with 5 points to visit there are 120 possible routes, but with 10 points there are 3,628,800 possible routes, and with 20 points to visit there are 2,432,902,008,176,640,000 routes!
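The route counts quoted above are factorials of the number of points to visit, which is easy to check:

```python
import math

# Number of possible orderings (routes) for n points to visit
for n in (5, 10, 20):
    print(n, math.factorial(n))
```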

NP-hard problems are ‘solved’ using ‘heuristic algorithms’. Heuristic algorithms follow a method designed to efficiently give a sufficiently good solution, at the expense of not guaranteeing that an optimal solution is found.

In the case of the travelling salesman problem (or, in our case, a community nurse with a set of patients to visit), there are many heuristics described. It is a favourite problem of algorithm writers!

In the code below we will use a ‘hill-climbing’ method based on reversing portions of the route (a ‘pairwise exchange’ approach). Hill-climbing methods perform a search step and pick the best improvement found in that step; the search is then repeated until no further improvement is found. Hill-climbing methods may become trapped in local optima in complicated problems – the solution found depends on the starting point. To reduce the impact of this, the algorithm is repeated multiple times with different starting points.

Our algorithm will follow these steps:

1) Pick a route at random (randomly order the points to visit).

2) Examine all possible pairwise exchanges in the route, where each exchange reverses the section between two points (e.g. if we have a route A-B-C-D-E-F-G-H and exchange C and F, the section between them is reversed, giving a new route A-B-F-E-D-C-G-H). Choose the exchange that gives the greatest reduction in route distance.

3) Repeat step 2 until no further improvement is found.

4) Repeat from step 1 for a set number of repeats (or until a maximum run time is reached).
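Step 2 can be illustrated on a toy route. The sketch below mirrors the logic of the `reverse_section` helper in the full code, using plain list slicing (the name `reverse_route_section` is used here to keep the sketch self-contained):

```python
def reverse_route_section(route, i, j):
    # Reverse the section of the route between positions i and j (inclusive)
    return route[:i] + route[i:j + 1][::-1] + route[j + 1:]

route = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H']
# Exchanging C (position 2) and F (position 5) reverses the section between them
print(reverse_route_section(route, 2, 5))
# ['A', 'B', 'F', 'E', 'D', 'C', 'G', 'H']
```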

In order to run an algorithm like this we need to know the travel times or distances between points. In the example below we calculate straight-line (or ‘Euclidean’) distances based on x and y locations. x/y locations are readily available for addresses (such as Eastings and Northings in the UK). Straight-line distances or travel times, while only approximate, may be good enough to get reasonable solutions.
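As a minimal example of the distance calculation (the `euclidean_distance` name is illustrative; the full code below computes the same quantity inline):

```python
import math

def euclidean_distance(x1, y1, x2, y2):
    # Straight-line distance from Pythagoras' theorem
    return math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)

print(euclidean_distance(0, 0, 3, 4))  # 5.0
```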
Here is a diagram of the effect of one exchange (from https://en.wikipedia.org/wiki/Travelling_salesman_problem)

[Figure: effect of a single pairwise exchange (section reversal) on a route]

Let’s see the code (don’t worry if you don’t understand the plot function, though hopefully you can understand enough to know how to copy and hack it to use yourself).

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.path import Path
import matplotlib.patches as patches
import math


def main_code():
    """Main program code called"""
    
    # Load points to visit with co-ordinates
    points_to_visit, xCo, yCo, number_of_points_to_visit = load_points()

    # Create a lookup dictionary for distances between any two points_to_visit
    distances_lookup = create_dictionary_of_distance(
        number_of_points_to_visit, xCo, yCo)

    # Run route finding algorithm multiple times
    number_of_runs = 50
    (final_best_distance_so_far, final_best_route_so_far, 
     progress_in_best_distance) = (find_shortest_route(
             number_of_runs, points_to_visit, distances_lookup))

    # Print results     
    print_results(final_best_route_so_far, xCo, yCo, points_to_visit, 
                   final_best_distance_so_far, progress_in_best_distance)


def calculate_total_route_distance(route, distances_lookup):
    """Calculates route distances. Accesses 'distances_lookup' dictionary which
    gives distance between any two points. Total distance includes return to 
    starting point at end of route"""

    total = 0
    for i in range(len(route) - 1):
        total += distances_lookup[route[i], route[i + 1]]
    total += distances_lookup[route[-1], route[0]]
    return total


def create_dictionary_of_distance(number_of_points_to_visit, xCo, yCo):
    """Creates a dictions of distances_lookup between all combinations of 
    points_to_visit"""
    
    distances_lookup = {}

    for i in range(number_of_points_to_visit):
        for j in range(i, number_of_points_to_visit):
            x1 = xCo[i]
            y1 = yCo[i]
            x2 = xCo[j]
            y2 = yCo[j]
            distance = math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)
            distances_lookup[i, j] = distance
            distances_lookup[j, i] = distance

    return distances_lookup


def load_points():
    """Usually this function will load point ids and co-ordinates from, for
    example, a CSV file. Here we define a function that will generate random
    points, as an example"""
    
    number_of_points = 25
    points_to_visit = list(range(0, number_of_points))
    x = np.random.uniform(0, 200, number_of_points)
    y = np.random.uniform(0, 200, number_of_points)
    number_of_points_to_visit = len(points_to_visit)
    
    return points_to_visit, x, y, number_of_points_to_visit


def find_shortest_route(runs, points_to_visit, distances_lookup):
    """Main algorithm code"""
    
    final_best_route_so_far = []
    final_best_distance_so_far = 1e99
    progress_in_best_distance = []

    for run in range(runs):
        exchange_point_1 = 0  
        exchange_point_2 = 0  
        improvement_found = True
        best_route_so_far = points_to_visit.copy()  
        np.random.shuffle(best_route_so_far)
        best_distance_so_far = calculate_total_route_distance(
                best_route_so_far, distances_lookup)

        while improvement_found:  # continue until no further improvement
            improvement_found = False

            # Loop through all pairwise route section reversals
            for i in range(0, len(best_route_so_far)-1):
                for j in range(i, len(best_route_so_far)):
                    test_route = best_route_so_far.copy()
                    test_route = reverse_section(test_route, i, j)
                    test_route_distance = (calculate_total_route_distance
                                (test_route, distances_lookup))
                    if test_route_distance < best_distance_so_far:
                        exchange_point_1 = i
                        exchange_point_2 = j
                        improvement_found = True
                        best_distance_so_far = test_route_distance

            if improvement_found:
                best_route_so_far = reverse_section(
                        best_route_so_far, exchange_point_1, exchange_point_2)
   
        if best_distance_so_far < final_best_distance_so_far:
            final_best_distance_so_far = best_distance_so_far
            final_best_route_so_far = best_route_so_far.copy()

        progress_in_best_distance.append(final_best_distance_so_far)

    return (final_best_distance_so_far, final_best_route_so_far, 
            progress_in_best_distance)


def plot_improvement(progress_in_best_distance):
    """Plot improvement over runs"""
    
    plt.plot(progress_in_best_distance)
    plt.xlabel('Run')
    plt.ylabel('Best distance')
    plt.show()


def plot_route(route, xCo, yCo, label):
    """Plot points and best route found between points"""
    
    # Create figure
    fig = plt.figure(figsize=(8, 5))

    # Plot points to visit
    ax1 = fig.add_subplot(121)
    ax1.scatter(xCo, yCo)
    texts = []
    for i, txt in enumerate(label):
        texts.append(ax1.text(xCo[i] + 1, yCo[i] + 1, txt))

    # Plot best route found between points
    verts = [None] * int(len(route) + 1)
    codes = [None] * int(len(route) + 1)
    for i in range(0, len(route)):
        verts[i] = xCo[route[i]], yCo[route[i]]
        if i == 0:
            codes[i] = Path.MOVETO
        else:
            codes[i] = Path.LINETO
    verts[len(route)] = xCo[route[0]], yCo[route[0]]
    codes[len(route)] = Path.CLOSEPOLY

    path = Path(verts, codes)

    ax2 = fig.add_subplot(122)
    patch = patches.PathPatch(path, facecolor='none', lw=0)
    ax2.add_patch(patch)
    ax2.set_xlim(min(xCo) - 10, max(xCo) + 10)
    ax2.set_ylim(min(yCo) - 10, max(yCo) + 10)

    # give the points a label
    xs, ys = zip(*verts)
    ax2.plot(xs, ys, 'x--', lw=2, color='black', ms=10)

    texts = []
    for i, txt in enumerate(label):
        texts.append(ax2.text(xCo[i] + 1, yCo[i] + 1, txt))

    # Display plot    
    plt.tight_layout(pad=4)
    plt.show()
    return


def print_results(final_best_route_so_far, xCo, yCo, points_to_visit, 
                   final_best_distance_so_far, progress_in_best_distance):
    
    """Prints results to screeen"""
    
    print('Best route found:')
    plot_route(final_best_route_so_far, xCo, yCo, points_to_visit)
    print('\nDistance = %.0f' % final_best_distance_so_far)
    print('\nImprovement in distance over run:')
    plot_improvement(progress_in_best_distance)


def reverse_section(orig_list, point_a, point_b):
    """Reverses section of route between points a and b (inclusive)"""

    low_switch_point = min(point_a, point_b)
    high_switch_point = max(point_a, point_b)
    high_switch_point += 1  # Include high switch point in reversed section
    section_1 = orig_list[0:low_switch_point]
    section_2 = list(reversed(orig_list[low_switch_point:high_switch_point]))
    section_3 = orig_list[high_switch_point:]
    new_list = section_1 + section_2 + section_3
    return (new_list)


# RUN MAIN CODE
# -------------
main_code()
Best route found:
[Figure: best route found]
Distance = 854

Improvement in distance over run:
[Figure: improvement in best distance over runs]