# https://en.wikipedia.org/wiki/Electrical_reactance#Inductive_reactance
from __future__ import annotations
from math import pi
def ind_reactance(
inductance: float, frequency: float, reactance: float
) -> dict[str, float]:
"""
    Calculate the inductive reactance, frequency, or inductance from two given
    electrical properties, then return the name/value pair of the zero value
    in a Python dict.
Parameters
----------
inductance : float with units in Henries
frequency : float with units in Hertz
reactance : float with units in Ohms
>>> ind_reactance(-35e-6, 1e3, 0)
Traceback (most recent call last):
...
ValueError: Inductance cannot be negative
>>> ind_reactance(35e-6, -1e3, 0)
Traceback (most recent call last):
...
ValueError: Frequency cannot be negative
>>> ind_reactance(35e-6, 0, -1)
Traceback (most recent call last):
...
ValueError: Inductive reactance cannot be negative
>>> ind_reactance(0, 10e3, 50)
{'inductance': 0.0007957747154594767}
>>> ind_reactance(35e-3, 0, 50)
{'frequency': 227.36420441699332}
>>> ind_reactance(35e-6, 1e3, 0)
{'reactance': 0.2199114857512855}
"""
if (inductance, frequency, reactance).count(0) != 1:
raise ValueError("One and only one argument must be 0")
if inductance < 0:
raise ValueError("Inductance cannot be negative")
if frequency < 0:
raise ValueError("Frequency cannot be negative")
if reactance < 0:
raise ValueError("Inductive reactance cannot be negative")
if inductance == 0:
return {"inductance": reactance / (2 * pi * frequency)}
elif frequency == 0:
return {"frequency": reactance / (2 * pi * inductance)}
elif reactance == 0:
return {"reactance": 2 * pi * frequency * inductance}
else:
raise ValueError("Exactly one argument must be 0")
if __name__ == "__main__":
import doctest
doctest.testmod()
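    # Illustrative usage: pass 0 for the unknown quantity and the function
    # returns it as a name/value pair (values taken from the doctests above).
    print(ind_reactance(0, 10e3, 50))  # solve for inductance
    print(ind_reactance(35e-3, 0, 50))  # solve for frequency
    print(ind_reactance(35e-6, 1e3, 0))  # solve for reactance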
# https://en.wikipedia.org/wiki/Ohm%27s_law
from __future__ import annotations
def ohms_law(voltage: float, current: float, resistance: float) -> dict[str, float]:
"""
    Apply Ohm's Law on any two given electrical values (voltage, current, or
    resistance) and return the name/value pair of the zero value in a Python dict.
>>> ohms_law(voltage=10, resistance=5, current=0)
{'current': 2.0}
>>> ohms_law(voltage=0, current=0, resistance=10)
Traceback (most recent call last):
...
ValueError: One and only one argument must be 0
>>> ohms_law(voltage=0, current=1, resistance=-2)
Traceback (most recent call last):
...
ValueError: Resistance cannot be negative
>>> ohms_law(resistance=0, voltage=-10, current=1)
{'resistance': -10.0}
>>> ohms_law(voltage=0, current=-1.5, resistance=2)
{'voltage': -3.0}
"""
if (voltage, current, resistance).count(0) != 1:
raise ValueError("One and only one argument must be 0")
if resistance < 0:
raise ValueError("Resistance cannot be negative")
if voltage == 0:
return {"voltage": float(current * resistance)}
elif current == 0:
return {"current": voltage / resistance}
elif resistance == 0:
return {"resistance": voltage / current}
else:
raise ValueError("Exactly one argument must be 0")
if __name__ == "__main__":
import doctest
doctest.testmod()
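    # Illustrative usage: the argument passed as 0 is the one solved for
    # (values taken from the doctests above).
    print(ohms_law(voltage=10, resistance=5, current=0))  # {'current': 2.0}
    print(ohms_law(voltage=0, current=-1.5, resistance=2))  # {'voltage': -3.0}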
import math
def real_power(apparent_power: float, power_factor: float) -> float:
"""
Calculate real power from apparent power and power factor.
Examples:
>>> real_power(100, 0.9)
90.0
>>> real_power(0, 0.8)
0.0
>>> real_power(100, -0.9)
-90.0
"""
if (
not isinstance(power_factor, (int, float))
or power_factor < -1
or power_factor > 1
):
raise ValueError("power_factor must be a valid float value between -1 and 1.")
return apparent_power * power_factor
def reactive_power(apparent_power: float, power_factor: float) -> float:
"""
Calculate reactive power from apparent power and power factor.
Examples:
>>> reactive_power(100, 0.9)
43.58898943540673
>>> reactive_power(0, 0.8)
0.0
>>> reactive_power(100, -0.9)
43.58898943540673
"""
if (
not isinstance(power_factor, (int, float))
or power_factor < -1
or power_factor > 1
):
raise ValueError("power_factor must be a valid float value between -1 and 1.")
return apparent_power * math.sqrt(1 - power_factor**2)
if __name__ == "__main__":
import doctest
doctest.testmod()
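    # Illustrative check: for apparent power S and a given power factor, the
    # real power P and reactive power Q satisfy P**2 + Q**2 == S**2
    # (up to floating point error).
    apparent_power = 100.0
    power_factor = 0.9
    p = real_power(apparent_power, power_factor)
    q = reactive_power(apparent_power, power_factor)
    print(f"{p = }, {q = }, {(p**2 + q**2) ** 0.5 = }")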
"""
Title : Calculating the resistance of an n band resistor using the color codes

Description :
    Resistors resist the flow of electrical current. Each one has a value that
    tells how strongly it resists current flow. This value's unit is the ohm,
    often noted with the Greek letter omega: Ω.

    The colored bands on a resistor can tell you everything you need to know
    about its value and tolerance, as long as you understand how to read them.
    The order in which the colors are arranged is very important, and each
    value of resistor has its own unique combination.

    The color coding for resistors is an international standard that is defined
    in IEC 60062. The number of bands present in a resistor varies from three
    to six. These represent significant figures, multiplier, tolerance,
    reliability, and temperature coefficient. Each color used for a type of
    band has a value assigned to it. It is read from left to right.
    All resistors will have significant figure and multiplier bands. In a three
    band resistor, the first two bands from the left represent the significant
    figures and the third represents the multiplier band.

    Significant figures : The number of significant figure bands in a resistor
    can vary from two to three. Colors and values associated with significant
    figure bands - (Black = 0, Brown = 1, Red = 2, Orange = 3, Yellow = 4,
    Green = 5, Blue = 6, Violet = 7, Grey = 8, White = 9)

    Multiplier : There will be one multiplier band in a resistor. It is
    multiplied with the significant figures obtained from the previous bands.
    Colors and values associated with the multiplier band - (Black = 10^0,
    Brown = 10^1, Red = 10^2, Orange = 10^3, Yellow = 10^4, Green = 10^5,
    Blue = 10^6, Violet = 10^7, Grey = 10^8, White = 10^9, Gold = 10^-1,
    Silver = 10^-2). Note that the multiplier band uses Gold and Silver, which
    are not used for significant figure bands.

    Tolerance : The tolerance band is not always present. It can be seen in
    four band resistors and above. This is a percentage by which the resistor
    value can vary. Colors and values associated with the tolerance band -
    (Brown = 1%, Red = 2%, Orange = 0.05%, Yellow = 0.02%, Green = 0.5%,
    Blue = 0.25%, Violet = 0.1%, Grey = 0.01%, Gold = 5%, Silver = 10%).
    If no color is mentioned then by default the tolerance is 20%.
    Note that the tolerance band does not use Black and White colors.

    Temperature Coefficient : Indicates the change in resistance of the
    component as a function of ambient temperature in terms of ppm/K.
    It is present in six band resistors. Colors and values associated with the
    temperature coefficient band - (Black = 250 ppm/K, Brown = 100 ppm/K,
    Red = 50 ppm/K, Orange = 15 ppm/K, Yellow = 25 ppm/K, Green = 20 ppm/K,
    Blue = 10 ppm/K, Violet = 5 ppm/K, Grey = 1 ppm/K).
    Note that the temperature coefficient band does not use White, Gold or
    Silver colors.

Sources :
    https://www.calculator.net/resistor-calculator.html
    https://learn.parallax.com/support/reference/resistor-color-codes
    https://byjus.com/physics/resistor-colour-codes/
"""
valid_colors: list = [
"Black",
"Brown",
"Red",
"Orange",
"Yellow",
"Green",
"Blue",
"Violet",
"Grey",
"White",
"Gold",
"Silver",
]
significant_figures_color_values: dict[str, int] = {
"Black": 0,
"Brown": 1,
"Red": 2,
"Orange": 3,
"Yellow": 4,
"Green": 5,
"Blue": 6,
"Violet": 7,
"Grey": 8,
"White": 9,
}
multiplier_color_values: dict[str, float] = {
"Black": 10**0,
"Brown": 10**1,
"Red": 10**2,
"Orange": 10**3,
"Yellow": 10**4,
"Green": 10**5,
"Blue": 10**6,
"Violet": 10**7,
"Grey": 10**8,
"White": 10**9,
"Gold": 10**-1,
"Silver": 10**-2,
}
tolerance_color_values: dict[str, float] = {
"Brown": 1,
"Red": 2,
"Orange": 0.05,
"Yellow": 0.02,
"Green": 0.5,
"Blue": 0.25,
"Violet": 0.1,
"Grey": 0.01,
"Gold": 5,
"Silver": 10,
}
temperature_coeffecient_color_values: dict[str, int] = {
"Black": 250,
"Brown": 100,
"Red": 50,
"Orange": 15,
"Yellow": 25,
"Green": 20,
"Blue": 10,
"Violet": 5,
"Grey": 1,
}
band_types: dict[int, dict[str, int]] = {
3: {"significant": 2, "multiplier": 1},
4: {"significant": 2, "multiplier": 1, "tolerance": 1},
5: {"significant": 3, "multiplier": 1, "tolerance": 1},
6: {"significant": 3, "multiplier": 1, "tolerance": 1, "temp_coeffecient": 1},
}
def get_significant_digits(colors: list) -> str:
"""
Function returns the digit associated with the color. Function takes a
list containing colors as input and returns digits as string
>>> get_significant_digits(['Black','Blue'])
'06'
>>> get_significant_digits(['Aqua','Blue'])
Traceback (most recent call last):
...
ValueError: Aqua is not a valid color for significant figure bands
"""
digit = ""
for color in colors:
if color not in significant_figures_color_values:
msg = f"{color} is not a valid color for significant figure bands"
raise ValueError(msg)
digit = digit + str(significant_figures_color_values[color])
return str(digit)
def get_multiplier(color: str) -> float:
"""
Function returns the multiplier value associated with the color.
Function takes color as input and returns multiplier value
>>> get_multiplier('Gold')
0.1
>>> get_multiplier('Ivory')
Traceback (most recent call last):
...
ValueError: Ivory is not a valid color for multiplier band
"""
if color not in multiplier_color_values:
msg = f"{color} is not a valid color for multiplier band"
raise ValueError(msg)
return multiplier_color_values[color]
def get_tolerance(color: str) -> float:
"""
Function returns the tolerance value associated with the color.
Function takes color as input and returns tolerance value.
>>> get_tolerance('Green')
0.5
>>> get_tolerance('Indigo')
Traceback (most recent call last):
...
ValueError: Indigo is not a valid color for tolerance band
"""
if color not in tolerance_color_values:
msg = f"{color} is not a valid color for tolerance band"
raise ValueError(msg)
return tolerance_color_values[color]
def get_temperature_coeffecient(color: str) -> int:
"""
Function returns the temperature coeffecient value associated with the color.
Function takes color as input and returns temperature coeffecient value.
>>> get_temperature_coeffecient('Yellow')
25
>>> get_temperature_coeffecient('Cyan')
Traceback (most recent call last):
...
ValueError: Cyan is not a valid color for temperature coeffecient band
"""
if color not in temperature_coeffecient_color_values:
msg = f"{color} is not a valid color for temperature coeffecient band"
raise ValueError(msg)
return temperature_coeffecient_color_values[color]
def get_band_type_count(total_number_of_bands: int, type_of_band: str) -> int:
"""
Function returns the number of bands of a given type in a resistor with n bands
Function takes total_number_of_bands and type_of_band as input and returns
number of bands belonging to that type in the given resistor
>>> get_band_type_count(3,'significant')
2
>>> get_band_type_count(2,'significant')
Traceback (most recent call last):
...
ValueError: 2 is not a valid number of bands
>>> get_band_type_count(3,'sign')
Traceback (most recent call last):
...
ValueError: sign is not valid for a 3 band resistor
>>> get_band_type_count(3,'tolerance')
Traceback (most recent call last):
...
ValueError: tolerance is not valid for a 3 band resistor
>>> get_band_type_count(5,'temp_coeffecient')
Traceback (most recent call last):
...
ValueError: temp_coeffecient is not valid for a 5 band resistor
"""
if total_number_of_bands not in band_types:
msg = f"{total_number_of_bands} is not a valid number of bands"
raise ValueError(msg)
if type_of_band not in band_types[total_number_of_bands]:
msg = f"{type_of_band} is not valid for a {total_number_of_bands} band resistor"
raise ValueError(msg)
return band_types[total_number_of_bands][type_of_band]
def check_validity(number_of_bands: int, colors: list) -> bool:
"""
Function checks if the input provided is valid or not.
Function takes number_of_bands and colors as input and returns
True if it is valid
>>> check_validity(3, ["Black","Blue","Orange"])
True
>>> check_validity(4, ["Black","Blue","Orange"])
Traceback (most recent call last):
...
ValueError: Expecting 4 colors, provided 3 colors
>>> check_validity(3, ["Cyan","Red","Yellow"])
Traceback (most recent call last):
...
ValueError: Cyan is not a valid color
"""
    if 3 <= number_of_bands <= 6:
if number_of_bands == len(colors):
for color in colors:
if color not in valid_colors:
msg = f"{color} is not a valid color"
raise ValueError(msg)
return True
else:
msg = f"Expecting {number_of_bands} colors, provided {len(colors)} colors"
raise ValueError(msg)
else:
msg = "Invalid number of bands. Resistor bands must be 3 to 6"
raise ValueError(msg)
def calculate_resistance(number_of_bands: int, color_code_list: list) -> dict:
"""
Function calculates the total resistance of the resistor using the color codes.
Function takes number_of_bands, color_code_list as input and returns
resistance
>>> calculate_resistance(3, ["Black","Blue","Orange"])
{'resistance': '6000Ω ±20% '}
>>> calculate_resistance(4, ["Orange","Green","Blue","Gold"])
{'resistance': '35000000Ω ±5% '}
>>> calculate_resistance(5, ["Violet","Brown","Grey","Silver","Green"])
{'resistance': '7.18Ω ±0.5% '}
>>> calculate_resistance(6, ["Red","Green","Blue","Yellow","Orange","Grey"])
{'resistance': '2560000Ω ±0.05% 1 ppm/K'}
>>> calculate_resistance(0, ["Violet","Brown","Grey","Silver","Green"])
Traceback (most recent call last):
...
ValueError: Invalid number of bands. Resistor bands must be 3 to 6
>>> calculate_resistance(4, ["Violet","Brown","Grey","Silver","Green"])
Traceback (most recent call last):
...
ValueError: Expecting 4 colors, provided 5 colors
>>> calculate_resistance(4, ["Violet","Silver","Brown","Grey"])
Traceback (most recent call last):
...
ValueError: Silver is not a valid color for significant figure bands
>>> calculate_resistance(4, ["Violet","Blue","Lime","Grey"])
Traceback (most recent call last):
...
ValueError: Lime is not a valid color
"""
is_valid = check_validity(number_of_bands, color_code_list)
if is_valid:
number_of_significant_bands = get_band_type_count(
number_of_bands, "significant"
)
significant_colors = color_code_list[:number_of_significant_bands]
significant_digits = int(get_significant_digits(significant_colors))
multiplier_color = color_code_list[number_of_significant_bands]
multiplier = get_multiplier(multiplier_color)
if number_of_bands == 3:
tolerance_color = None
else:
tolerance_color = color_code_list[number_of_significant_bands + 1]
tolerance = (
20 if tolerance_color is None else get_tolerance(str(tolerance_color))
)
if number_of_bands != 6:
temperature_coeffecient_color = None
else:
temperature_coeffecient_color = color_code_list[
number_of_significant_bands + 2
]
temperature_coeffecient = (
0
if temperature_coeffecient_color is None
else get_temperature_coeffecient(str(temperature_coeffecient_color))
)
        resistance = significant_digits * multiplier
        if temperature_coeffecient == 0:
            answer = f"{resistance}Ω ±{tolerance}% "
        else:
            answer = f"{resistance}Ω ±{tolerance}% {temperature_coeffecient} ppm/K"
return {"resistance": answer}
else:
raise ValueError("Input is invalid")
if __name__ == "__main__":
import doctest
doctest.testmod()
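    # Illustrative usage: decode a few resistors (values taken from the
    # doctests above).
    print(calculate_resistance(3, ["Black", "Blue", "Orange"]))
    print(calculate_resistance(4, ["Orange", "Green", "Blue", "Gold"]))
    print(calculate_resistance(5, ["Violet", "Brown", "Grey", "Silver", "Green"]))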
# https://byjus.com/equivalent-resistance-formula/
from __future__ import annotations
def resistor_parallel(resistors: list[float]) -> float:
"""
Req = 1/ (1/R1 + 1/R2 + ... + 1/Rn)
>>> resistor_parallel([3.21389, 2, 3])
0.8737571620498019
>>> resistor_parallel([3.21389, 2, -3])
Traceback (most recent call last):
...
ValueError: Resistor at index 2 has a negative or zero value!
>>> resistor_parallel([3.21389, 2, 0.000])
Traceback (most recent call last):
...
ValueError: Resistor at index 2 has a negative or zero value!
"""
    first_sum = 0.00
    for index, resistor in enumerate(resistors):
        if resistor <= 0:
            msg = f"Resistor at index {index} has a negative or zero value!"
            raise ValueError(msg)
        first_sum += 1 / float(resistor)
    return 1 / first_sum
def resistor_series(resistors: list[float]) -> float:
"""
Req = R1 + R2 + ... + Rn
    Calculate the equivalent resistance for any number of resistors in series.
>>> resistor_series([3.21389, 2, 3])
8.21389
>>> resistor_series([3.21389, 2, -3])
Traceback (most recent call last):
...
ValueError: Resistor at index 2 has a negative value!
"""
    sum_r = 0.00
    for index, resistor in enumerate(resistors):
        if resistor < 0:
            msg = f"Resistor at index {index} has a negative value!"
            raise ValueError(msg)
        sum_r += resistor
    return sum_r
if __name__ == "__main__":
import doctest
doctest.testmod()
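    # Illustrative usage: the same three resistors combined in parallel and in
    # series (values taken from the doctests above).
    resistors = [3.21389, 2, 3]
    print(f"{resistor_parallel(resistors) = }")
    print(f"{resistor_series(resistors) = }")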
# https://en.wikipedia.org/wiki/LC_circuit
"""An LC circuit, also called a resonant circuit, tank circuit, or tuned circuit,
is an electric circuit consisting of an inductor, represented by the letter L,
and a capacitor, represented by the letter C, connected together.
The circuit can act as an electrical resonator, an electrical analogue of a
tuning fork, storing energy oscillating at the circuit's resonant frequency.
Source: https://en.wikipedia.org/wiki/LC_circuit
"""
from __future__ import annotations
from math import pi, sqrt
def resonant_frequency(inductance: float, capacitance: float) -> tuple:
"""
    This function calculates the resonant frequency of an LC circuit
    for the given values of inductance and capacitance.
Examples are given below:
>>> resonant_frequency(inductance=10, capacitance=5)
('Resonant frequency', 0.022507907903927652)
>>> resonant_frequency(inductance=0, capacitance=5)
Traceback (most recent call last):
...
ValueError: Inductance cannot be 0 or negative
>>> resonant_frequency(inductance=10, capacitance=0)
Traceback (most recent call last):
...
ValueError: Capacitance cannot be 0 or negative
"""
if inductance <= 0:
raise ValueError("Inductance cannot be 0 or negative")
elif capacitance <= 0:
raise ValueError("Capacitance cannot be 0 or negative")
else:
return (
"Resonant frequency",
float(1 / (2 * pi * (sqrt(inductance * capacitance)))),
)
if __name__ == "__main__":
import doctest
doctest.testmod()
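    # Illustrative usage: f = 1 / (2 * pi * sqrt(L * C)); with L = 10 H and
    # C = 5 F (the doctest values) this is roughly 0.0225 Hz.
    print(resonant_frequency(inductance=10, capacitance=5))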
# https://en.wikipedia.org/wiki/Wheatstone_bridge
from __future__ import annotations
def wheatstone_solver(
resistance_1: float, resistance_2: float, resistance_3: float
) -> float:
"""
    This function calculates the unknown resistance in a Wheatstone bridge network,
given that the three other resistances in the network are known.
The formula to calculate the same is:
---------------
|Rx=(R2/R1)*R3|
---------------
Usage examples:
>>> wheatstone_solver(resistance_1=2, resistance_2=4, resistance_3=5)
10.0
>>> wheatstone_solver(resistance_1=356, resistance_2=234, resistance_3=976)
641.5280898876405
>>> wheatstone_solver(resistance_1=2, resistance_2=-1, resistance_3=2)
Traceback (most recent call last):
...
ValueError: All resistance values must be positive
>>> wheatstone_solver(resistance_1=0, resistance_2=0, resistance_3=2)
Traceback (most recent call last):
...
ValueError: All resistance values must be positive
"""
if resistance_1 <= 0 or resistance_2 <= 0 or resistance_3 <= 0:
raise ValueError("All resistance values must be positive")
else:
return float((resistance_2 / resistance_1) * resistance_3)
if __name__ == "__main__":
import doctest
doctest.testmod()
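    # Illustrative usage: Rx = (R2 / R1) * R3, using the doctest values.
    print(wheatstone_solver(resistance_1=2, resistance_2=4, resistance_3=5))  # 10.0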
from unittest.mock import Mock, patch
from file_transfer.send_file import send_file
@patch("socket.socket")
@patch("builtins.open")
def test_send_file_running_as_expected(file, sock):
# ===== initialization =====
conn = Mock()
sock.return_value.accept.return_value = conn, Mock()
f = iter([1, None])
file.return_value.__enter__.return_value.read.side_effect = lambda _: next(f)
# ===== invoke =====
send_file(filename="mytext.txt", testing=True)
# ===== ensurance =====
sock.assert_called_once()
sock.return_value.bind.assert_called_once()
sock.return_value.listen.assert_called_once()
sock.return_value.accept.assert_called_once()
conn.recv.assert_called_once()
file.return_value.__enter__.assert_called_once()
file.return_value.__enter__.return_value.read.assert_called()
conn.send.assert_called_once()
conn.close.assert_called_once()
sock.return_value.shutdown.assert_called_once()
sock.return_value.close.assert_called_once()
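if __name__ == "__main__":
    # Illustrative direct run (a sketch): this test is normally collected by
    # pytest, and running it here assumes the file_transfer package from the
    # same repository is importable.
    test_send_file_running_as_expected()
    print("test_send_file_running_as_expected passed")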
"""
Program to calculate the amortization amount per month, given
- Principal borrowed
- Rate of interest per annum
- Years to repay the loan

Wikipedia Reference: https://en.wikipedia.org/wiki/Equated_monthly_installment
"""
def equated_monthly_installments(
principal: float, rate_per_annum: float, years_to_repay: int
) -> float:
"""
Formula for amortization amount per month:
A = p * r * (1 + r)^n / ((1 + r)^n - 1)
where p is the principal, r is the rate of interest per month
and n is the number of payments
>>> equated_monthly_installments(25000, 0.12, 3)
830.3577453212793
>>> equated_monthly_installments(25000, 0.12, 10)
358.67737100646826
>>> equated_monthly_installments(0, 0.12, 3)
Traceback (most recent call last):
...
Exception: Principal borrowed must be > 0
>>> equated_monthly_installments(25000, -1, 3)
Traceback (most recent call last):
...
Exception: Rate of interest must be >= 0
>>> equated_monthly_installments(25000, 0.12, 0)
Traceback (most recent call last):
...
Exception: Years to repay must be an integer > 0
"""
if principal <= 0:
raise Exception("Principal borrowed must be > 0")
if rate_per_annum < 0:
raise Exception("Rate of interest must be >= 0")
if years_to_repay <= 0 or not isinstance(years_to_repay, int):
raise Exception("Years to repay must be an integer > 0")
# Yearly rate is divided by 12 to get monthly rate
rate_per_month = rate_per_annum / 12
# Years to repay is multiplied by 12 to get number of payments as payment is monthly
number_of_payments = years_to_repay * 12
return (
principal
* rate_per_month
* (1 + rate_per_month) ** number_of_payments
/ ((1 + rate_per_month) ** number_of_payments - 1)
)
if __name__ == "__main__":
import doctest
doctest.testmod()
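    # Illustrative usage: monthly installment on a 25000 principal at 12% per
    # annum, repaid over 3 years (matches the first doctest above).
    print(f"{equated_monthly_installments(25000, 0.12, 3) = :.2f}")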
"""
Calculate the exponential moving average (EMA) on a series of stock prices.
Wikipedia Reference: https://en.wikipedia.org/wiki/Exponential_smoothing
https://www.investopedia.com/terms/e/ema.asp#toc-what-is-an-exponential-moving-average-ema

The exponential moving average is used in finance to analyze changes in stock
prices. EMA is used in conjunction with the simple moving average (SMA); EMA
reacts to changes in value more quickly than SMA, which is one of the
advantages of using EMA.
"""
from collections.abc import Iterator
def exponential_moving_average(
stock_prices: Iterator[float], window_size: int
) -> Iterator[float]:
"""
Yields exponential moving averages of the given stock prices.
>>> tuple(exponential_moving_average(iter([2, 5, 3, 8.2, 6, 9, 10]), 3))
(2, 3.5, 3.25, 5.725, 5.8625, 7.43125, 8.715625)
:param stock_prices: A stream of stock prices
:param window_size: The number of stock prices that will trigger a new calculation
of the exponential average (window_size > 0)
:return: Yields a sequence of exponential moving averages
Formula:
st = alpha * xt + (1 - alpha) * st_prev
Where,
st : Exponential moving average at timestamp t
    xt : stock price from the stock prices at timestamp t
st_prev : Exponential moving average at timestamp t-1
alpha : 2/(1 + window_size) - smoothing factor
Exponential moving average (EMA) is a rule of thumb technique for
smoothing time series data using an exponential window function.
"""
if window_size <= 0:
raise ValueError("window_size must be > 0")
# Calculating smoothing factor
alpha = 2 / (1 + window_size)
# Exponential average at timestamp t
moving_average = 0.0
for i, stock_price in enumerate(stock_prices):
if i <= window_size:
# Assigning simple moving average till the window_size for the first time
# is reached
moving_average = (moving_average + stock_price) * 0.5 if i else stock_price
else:
# Calculating exponential moving average based on current timestamp data
# point and previous exponential average value
moving_average = (alpha * stock_price) + ((1 - alpha) * moving_average)
yield moving_average
if __name__ == "__main__":
import doctest
doctest.testmod()
stock_prices = [2.0, 5, 3, 8.2, 6, 9, 10]
window_size = 3
result = tuple(exponential_moving_average(iter(stock_prices), window_size))
print(f"{stock_prices = }")
print(f"{window_size = }")
print(f"{result = }")
# https://www.investopedia.com
from __future__ import annotations
def simple_interest(
principal: float, daily_interest_rate: float, days_between_payments: float
) -> float:
"""
>>> simple_interest(18000.0, 0.06, 3)
3240.0
>>> simple_interest(0.5, 0.06, 3)
0.09
>>> simple_interest(18000.0, 0.01, 10)
1800.0
>>> simple_interest(18000.0, 0.0, 3)
0.0
>>> simple_interest(5500.0, 0.01, 100)
5500.0
>>> simple_interest(10000.0, -0.06, 3)
Traceback (most recent call last):
...
ValueError: daily_interest_rate must be >= 0
>>> simple_interest(-10000.0, 0.06, 3)
Traceback (most recent call last):
...
ValueError: principal must be > 0
>>> simple_interest(5500.0, 0.01, -5)
Traceback (most recent call last):
...
ValueError: days_between_payments must be > 0
"""
if days_between_payments <= 0:
raise ValueError("days_between_payments must be > 0")
if daily_interest_rate < 0:
raise ValueError("daily_interest_rate must be >= 0")
if principal <= 0:
raise ValueError("principal must be > 0")
return principal * daily_interest_rate * days_between_payments
def compound_interest(
principal: float,
nominal_annual_interest_rate_percentage: float,
number_of_compounding_periods: float,
) -> float:
"""
>>> compound_interest(10000.0, 0.05, 3)
1576.2500000000014
>>> compound_interest(10000.0, 0.05, 1)
500.00000000000045
>>> compound_interest(0.5, 0.05, 3)
0.07881250000000006
>>> compound_interest(10000.0, 0.06, -4)
Traceback (most recent call last):
...
ValueError: number_of_compounding_periods must be > 0
>>> compound_interest(10000.0, -3.5, 3.0)
Traceback (most recent call last):
...
ValueError: nominal_annual_interest_rate_percentage must be >= 0
>>> compound_interest(-5500.0, 0.01, 5)
Traceback (most recent call last):
...
ValueError: principal must be > 0
"""
if number_of_compounding_periods <= 0:
raise ValueError("number_of_compounding_periods must be > 0")
if nominal_annual_interest_rate_percentage < 0:
raise ValueError("nominal_annual_interest_rate_percentage must be >= 0")
if principal <= 0:
raise ValueError("principal must be > 0")
return principal * (
(1 + nominal_annual_interest_rate_percentage) ** number_of_compounding_periods
- 1
)
def apr_interest(
principal: float,
nominal_annual_percentage_rate: float,
number_of_years: float,
) -> float:
"""
>>> apr_interest(10000.0, 0.05, 3)
1618.223072263547
>>> apr_interest(10000.0, 0.05, 1)
512.6749646744732
>>> apr_interest(0.5, 0.05, 3)
0.08091115361317736
>>> apr_interest(10000.0, 0.06, -4)
Traceback (most recent call last):
...
ValueError: number_of_years must be > 0
>>> apr_interest(10000.0, -3.5, 3.0)
Traceback (most recent call last):
...
ValueError: nominal_annual_percentage_rate must be >= 0
>>> apr_interest(-5500.0, 0.01, 5)
Traceback (most recent call last):
...
ValueError: principal must be > 0
"""
if number_of_years <= 0:
raise ValueError("number_of_years must be > 0")
if nominal_annual_percentage_rate < 0:
raise ValueError("nominal_annual_percentage_rate must be >= 0")
if principal <= 0:
raise ValueError("principal must be > 0")
return compound_interest(
principal, nominal_annual_percentage_rate / 365, number_of_years * 365
)
if __name__ == "__main__":
import doctest
doctest.testmod()
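    # Illustrative comparison on a 10000.0 principal at 5% over 3 years
    # (the compound and APR values match the doctests above).
    print(f"{simple_interest(10000.0, 0.05 / 365, 365 * 3) = :.2f}")
    print(f"{compound_interest(10000.0, 0.05, 3) = :.2f}")
    print(f"{apr_interest(10000.0, 0.05, 3) = :.2f}")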
"""
Reference: https://www.investopedia.com/terms/p/presentvalue.asp

An algorithm that calculates the present value of a stream of yearly cash flows
given...
1. The discount rate (as a decimal, not a percent)
2. An array of cash flows, with the index of the cash flow being the associated year

Note: This algorithm assumes that cash flows are paid at the end of the
specified year
"""
def present_value(discount_rate: float, cash_flows: list[float]) -> float:
"""
>>> present_value(0.13, [10, 20.70, -293, 297])
4.69
>>> present_value(0.07, [-109129.39, 30923.23, 15098.93, 29734,39])
-42739.63
>>> present_value(0.07, [109129.39, 30923.23, 15098.93, 29734,39])
175519.15
>>> present_value(-1, [109129.39, 30923.23, 15098.93, 29734,39])
Traceback (most recent call last):
...
ValueError: Discount rate cannot be negative
>>> present_value(0.03, [])
Traceback (most recent call last):
...
ValueError: Cash flows list cannot be empty
"""
if discount_rate < 0:
raise ValueError("Discount rate cannot be negative")
if not cash_flows:
raise ValueError("Cash flows list cannot be empty")
present_value = sum(
cash_flow / ((1 + discount_rate) ** i) for i, cash_flow in enumerate(cash_flows)
)
return round(present_value, ndigits=2)
if __name__ == "__main__":
import doctest
doctest.testmod()
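    # Illustrative usage: discount a short cash-flow stream at 13% per year
    # (the first doctest above).
    print(present_value(0.13, [10, 20.70, -293, 297]))  # 4.69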
"""
Calculate price plus tax of a good or service given its price and a tax rate.
"""
def price_plus_tax(price: float, tax_rate: float) -> float:
"""
>>> price_plus_tax(100, 0.25)
125.0
>>> price_plus_tax(125.50, 0.05)
131.775
"""
return price * (1 + tax_rate)
if __name__ == "__main__":
print(f"{price_plus_tax(100, 0.25) = }")
print(f"{price_plus_tax(125.50, 0.05) = }")
"""
The Simple Moving Average (SMA) is a statistical calculation used to analyze
data points by creating a constantly updated average price over a specific
time period. In finance, SMA is often used in time series analysis to smooth
out price data and identify trends.

Reference: https://en.wikipedia.org/wiki/Moving_average
"""
from collections.abc import Sequence
def simple_moving_average(
data: Sequence[float], window_size: int
) -> list[float | None]:
"""
Calculate the simple moving average (SMA) for some given time series data.
:param data: A list of numerical data points.
:param window_size: An integer representing the size of the SMA window.
:return: A list of SMA values with the same length as the input data.
Examples:
>>> sma = simple_moving_average([10, 12, 15, 13, 14, 16, 18, 17, 19, 21], 3)
>>> [round(value, 2) if value is not None else None for value in sma]
[None, None, 12.33, 13.33, 14.0, 14.33, 16.0, 17.0, 18.0, 19.0]
>>> simple_moving_average([10, 12, 15], 5)
[None, None, None]
>>> simple_moving_average([10, 12, 15, 13, 14, 16, 18, 17, 19, 21], 0)
Traceback (most recent call last):
...
ValueError: Window size must be a positive integer
"""
if window_size < 1:
raise ValueError("Window size must be a positive integer")
sma: list[float | None] = []
for i in range(len(data)):
if i < window_size - 1:
sma.append(None) # SMA not available for early data points
else:
window = data[i - window_size + 1 : i + 1]
sma_value = sum(window) / window_size
sma.append(sma_value)
return sma
if __name__ == "__main__":
import doctest
doctest.testmod()
# Example data (replace with your own time series data)
data = [10, 12, 15, 13, 14, 16, 18, 17, 19, 21]
# Specify the window size for the SMA
window_size = 3
# Calculate the Simple Moving Average
sma_values = simple_moving_average(data, window_size)
# Print the SMA values
print("Simple Moving Average (SMA) Values:")
for i, value in enumerate(sma_values):
if value is not None:
print(f"Day {i + 1}: {value:.2f}")
else:
print(f"Day {i + 1}: Not enough data for SMA")
"""
Author Alexandre De Zotti

Draws Julia sets of quadratic polynomials and exponential maps.
More specifically, this iterates the function a fixed number of times and then
plots whether the absolute value of the last iterate is greater than a fixed
threshold (named "escape radius"). For the exponential map this is not really
an escape radius but rather a convenient way to approximate the Julia set with
bounded orbits.

The examples presented here are:
- The Cauliflower Julia set, see e.g.
  https://en.wikipedia.org/wiki/File:Julia_z2%2B0,25.png
- Other examples from https://en.wikipedia.org/wiki/Julia_set
- An exponential map Julia set, ambiently homeomorphic to the examples in
  https://www.math.univ-toulouse.fr/~cheritat/GalII/galery.html
  and
  https://ddd.uab.cat/pub/pubmat/02141493v43n1/02141493v43n1p27.pdf

Remark: Some overflow runtime warnings are suppressed. This is because of the
way the iteration loop is implemented, using numpy's efficient computations.
Overflows and infinites are replaced after each step by a large number.
"""
import warnings
from collections.abc import Callable
from typing import Any
import numpy
from matplotlib import pyplot
c_cauliflower = 0.25 + 0.0j
c_polynomial_1 = -0.4 + 0.6j
c_polynomial_2 = -0.1 + 0.651j
c_exponential = -2.0
nb_iterations = 56
window_size = 2.0
nb_pixels = 666
def eval_exponential(c_parameter: complex, z_values: numpy.ndarray) -> numpy.ndarray:
"""
Evaluate $e^z + c$.
>>> eval_exponential(0, 0)
1.0
>>> abs(eval_exponential(1, numpy.pi*1.j)) < 1e-15
True
>>> abs(eval_exponential(1.j, 0)-1-1.j) < 1e-15
True
"""
return numpy.exp(z_values) + c_parameter
def eval_quadratic_polynomial(
c_parameter: complex, z_values: numpy.ndarray
) -> numpy.ndarray:
"""
>>> eval_quadratic_polynomial(0, 2)
4
>>> eval_quadratic_polynomial(-1, 1)
0
>>> round(eval_quadratic_polynomial(1.j, 0).imag)
1
>>> round(eval_quadratic_polynomial(1.j, 0).real)
0
"""
return z_values * z_values + c_parameter
def prepare_grid(window_size: float, nb_pixels: int) -> numpy.ndarray:
"""
Create a grid of complex values of size nb_pixels*nb_pixels with real and
imaginary parts ranging from -window_size to window_size (inclusive).
Returns a numpy array.
>>> prepare_grid(1,3)
array([[-1.-1.j, -1.+0.j, -1.+1.j],
[ 0.-1.j, 0.+0.j, 0.+1.j],
[ 1.-1.j, 1.+0.j, 1.+1.j]])
"""
x = numpy.linspace(-window_size, window_size, nb_pixels)
x = x.reshape((nb_pixels, 1))
y = numpy.linspace(-window_size, window_size, nb_pixels)
y = y.reshape((1, nb_pixels))
return x + 1.0j * y
def iterate_function(
eval_function: Callable[[Any, numpy.ndarray], numpy.ndarray],
function_params: Any,
nb_iterations: int,
z_0: numpy.ndarray,
infinity: float | None = None,
) -> numpy.ndarray:
"""
Iterate the function "eval_function" exactly nb_iterations times.
The first argument of the function is a parameter which is contained in
function_params. The variable z_0 is an array that contains the initial
values to iterate from.
This function returns the final iterates.
>>> iterate_function(eval_quadratic_polynomial, 0, 3, numpy.array([0,1,2])).shape
(3,)
>>> numpy.round(iterate_function(eval_quadratic_polynomial,
... 0,
... 3,
... numpy.array([0,1,2]))[0])
0j
>>> numpy.round(iterate_function(eval_quadratic_polynomial,
... 0,
... 3,
... numpy.array([0,1,2]))[1])
(1+0j)
>>> numpy.round(iterate_function(eval_quadratic_polynomial,
... 0,
... 3,
... numpy.array([0,1,2]))[2])
(256+0j)
"""
z_n = z_0.astype("complex64")
for _ in range(nb_iterations):
z_n = eval_function(function_params, z_n)
if infinity is not None:
numpy.nan_to_num(z_n, copy=False, nan=infinity)
z_n[abs(z_n) == numpy.inf] = infinity
return z_n
def show_results(
function_label: str,
function_params: Any,
escape_radius: float,
z_final: numpy.ndarray,
) -> None:
"""
Plots of whether the absolute value of z_final is greater than
the value of escape_radius. Adds the function_label and function_params to
the title.
>>> show_results('80', 0, 1, numpy.array([[0,1,.5],[.4,2,1.1],[.2,1,1.3]]))
"""
abs_z_final = (abs(z_final)).transpose()
abs_z_final[:, :] = abs_z_final[::-1, :]
pyplot.matshow(abs_z_final < escape_radius)
pyplot.title(f"Julia set of ${function_label}$, $c={function_params}$")
pyplot.show()
def ignore_overflow_warnings() -> None:
"""
Ignore some overflow and invalid value warnings.
>>> ignore_overflow_warnings()
"""
warnings.filterwarnings(
"ignore", category=RuntimeWarning, message="overflow encountered in multiply"
)
warnings.filterwarnings(
"ignore",
category=RuntimeWarning,
message="invalid value encountered in multiply",
)
warnings.filterwarnings(
"ignore", category=RuntimeWarning, message="overflow encountered in absolute"
)
warnings.filterwarnings(
"ignore", category=RuntimeWarning, message="overflow encountered in exp"
)
if __name__ == "__main__":
z_0 = prepare_grid(window_size, nb_pixels)
ignore_overflow_warnings() # See file header for explanations
nb_iterations = 24
escape_radius = 2 * abs(c_cauliflower) + 1
z_final = iterate_function(
eval_quadratic_polynomial,
c_cauliflower,
nb_iterations,
z_0,
infinity=1.1 * escape_radius,
)
show_results("z^2+c", c_cauliflower, escape_radius, z_final)
nb_iterations = 64
escape_radius = 2 * abs(c_polynomial_1) + 1
z_final = iterate_function(
eval_quadratic_polynomial,
c_polynomial_1,
nb_iterations,
z_0,
infinity=1.1 * escape_radius,
)
show_results("z^2+c", c_polynomial_1, escape_radius, z_final)
nb_iterations = 161
escape_radius = 2 * abs(c_polynomial_2) + 1
z_final = iterate_function(
eval_quadratic_polynomial,
c_polynomial_2,
nb_iterations,
z_0,
infinity=1.1 * escape_radius,
)
show_results("z^2+c", c_polynomial_2, escape_radius, z_final)
nb_iterations = 12
escape_radius = 10000.0
z_final = iterate_function(
eval_exponential,
c_exponential,
nb_iterations,
z_0 + 2,
infinity=1.0e10,
)
show_results("e^z+c", c_exponential, escape_radius, z_final)
"""
Description
    The Koch snowflake is a fractal curve and one of the earliest fractals to
    have been described. The Koch snowflake can be built up iteratively, in a
    sequence of stages. The first stage is an equilateral triangle, and each
    successive stage is formed by adding outward bends to each side of the
    previous stage, making smaller equilateral triangles.
    This can be achieved through the following steps for each line:
        1. divide the line segment into three segments of equal length.
        2. draw an equilateral triangle that has the middle segment from step 1
           as its base and points outward.
        3. remove the line segment that is the base of the triangle from step 2.
    (description adapted from https://en.wikipedia.org/wiki/Koch_snowflake)
    (for a more detailed explanation and an implementation in the Processing
    language, see https://natureofcode.com/book/chapter-8-fractals/
    #84-the-koch-curve-and-the-arraylist-technique)

Requirements (pip): matplotlib, numpy
"""
from __future__ import annotations
import matplotlib.pyplot as plt # type: ignore
import numpy
# initial triangle of Koch snowflake
VECTOR_1 = numpy.array([0, 0])
VECTOR_2 = numpy.array([0.5, 0.8660254])
VECTOR_3 = numpy.array([1, 0])
INITIAL_VECTORS = [VECTOR_1, VECTOR_2, VECTOR_3, VECTOR_1]
# uncomment for simple Koch curve instead of Koch snowflake
# INITIAL_VECTORS = [VECTOR_1, VECTOR_3]
def iterate(initial_vectors: list[numpy.ndarray], steps: int) -> list[numpy.ndarray]:
"""
Go through the number of iterations determined by the argument "steps".
Be careful with high values (above 5) since the time to calculate increases
exponentially.
>>> iterate([numpy.array([0, 0]), numpy.array([1, 0])], 1)
[array([0, 0]), array([0.33333333, 0. ]), array([0.5 , \
0.28867513]), array([0.66666667, 0. ]), array([1, 0])]
"""
vectors = initial_vectors
for _ in range(steps):
vectors = iteration_step(vectors)
return vectors
def iteration_step(vectors: list[numpy.ndarray]) -> list[numpy.ndarray]:
"""
Loops through each pair of adjacent vectors. Each line between two adjacent
vectors is divided into 4 segments by adding 3 additional vectors in-between
the original two vectors. The vector in the middle is constructed through a
60 degree rotation so it is bent outwards.
>>> iteration_step([numpy.array([0, 0]), numpy.array([1, 0])])
[array([0, 0]), array([0.33333333, 0. ]), array([0.5 , \
0.28867513]), array([0.66666667, 0. ]), array([1, 0])]
"""
new_vectors = []
for i, start_vector in enumerate(vectors[:-1]):
end_vector = vectors[i + 1]
new_vectors.append(start_vector)
difference_vector = end_vector - start_vector
new_vectors.append(start_vector + difference_vector / 3)
new_vectors.append(
start_vector + difference_vector / 3 + rotate(difference_vector / 3, 60)
)
new_vectors.append(start_vector + difference_vector * 2 / 3)
new_vectors.append(vectors[-1])
return new_vectors
def rotate(vector: numpy.ndarray, angle_in_degrees: float) -> numpy.ndarray:
"""
Standard rotation of a 2D vector with a rotation matrix
(see https://en.wikipedia.org/wiki/Rotation_matrix )
>>> rotate(numpy.array([1, 0]), 60)
array([0.5 , 0.8660254])
>>> rotate(numpy.array([1, 0]), 90)
array([6.123234e-17, 1.000000e+00])
"""
theta = numpy.radians(angle_in_degrees)
c, s = numpy.cos(theta), numpy.sin(theta)
rotation_matrix = numpy.array(((c, -s), (s, c)))
return numpy.dot(rotation_matrix, vector)
def plot(vectors: list[numpy.ndarray]) -> None:
"""
Utility function to plot the vectors using matplotlib.pyplot
No doctest was implemented since this function does not have a return value
"""
# avoid stretched display of graph
axes = plt.gca()
axes.set_aspect("equal")
# matplotlib.pyplot.plot takes a list of all x-coordinates and a list of all
# y-coordinates as inputs, which are constructed from the vector-list using
# zip()
x_coordinates, y_coordinates = zip(*vectors)
plt.plot(x_coordinates, y_coordinates)
plt.show()
if __name__ == "__main__":
import doctest
doctest.testmod()
processed_vectors = iterate(INITIAL_VECTORS, 5)
plot(processed_vectors)
"""
The Mandelbrot set is the set of complex numbers "c" for which the series
"z_(n+1) = z_n * z_n + c" does not diverge, i.e. remains bounded. Thus, a
complex number "c" is a member of the Mandelbrot set if, when starting with
"z_0 = 0" and applying the iteration repeatedly, the absolute value of "z_n"
remains bounded for all "n > 0". Complex numbers can be written as "a + b*i":
"a" is the real component, usually drawn on the x-axis, and "b*i" is the
imaginary component, usually drawn on the y-axis. Most visualizations of the
Mandelbrot set use a color-coding to indicate after how many steps in the
series the numbers outside the set diverge. Images of the Mandelbrot set
exhibit an elaborate and infinitely complicated boundary that reveals
progressively ever-finer recursive detail at increasing magnifications, making
the boundary of the Mandelbrot set a fractal curve.
(description adapted from https://en.wikipedia.org/wiki/Mandelbrot_set)
(see also https://en.wikipedia.org/wiki/Plotting_algorithms_for_the_Mandelbrot_set)
"""
import colorsys
import colorsys
from PIL import Image  # type: ignore
def get_distance(x: float, y: float, max_step: int) -> float:
"""
Return the relative distance (= step/max_step) after which the complex number
constituted by this x-y-pair diverges. Members of the Mandelbrot set do not
diverge so their distance is 1.
>>> get_distance(0, 0, 50)
1.0
>>> get_distance(0.5, 0.5, 50)
0.061224489795918366
>>> get_distance(2, 0, 50)
0.0
"""
a = x
b = y
for step in range(max_step): # noqa: B007
a_new = a * a - b * b + x
b = 2 * a * b + y
a = a_new
# divergence happens for all complex number with an absolute value
# greater than 4
if a * a + b * b > 4:
break
return step / (max_step - 1)
def get_black_and_white_rgb(distance: float) -> tuple:
"""
Black&white color-coding that ignores the relative distance. The Mandelbrot
set is black, everything else is white.
>>> get_black_and_white_rgb(0)
(255, 255, 255)
>>> get_black_and_white_rgb(0.5)
(255, 255, 255)
>>> get_black_and_white_rgb(1)
(0, 0, 0)
"""
if distance == 1:
return (0, 0, 0)
else:
return (255, 255, 255)
def get_color_coded_rgb(distance: float) -> tuple:
"""
Color-coding taking the relative distance into account. The Mandelbrot set
is black.
>>> get_color_coded_rgb(0)
(255, 0, 0)
>>> get_color_coded_rgb(0.5)
(0, 255, 255)
>>> get_color_coded_rgb(1)
(0, 0, 0)
"""
if distance == 1:
return (0, 0, 0)
else:
return tuple(round(i * 255) for i in colorsys.hsv_to_rgb(distance, 1, 1))
def get_image(
image_width: int = 800,
image_height: int = 600,
figure_center_x: float = -0.6,
figure_center_y: float = 0,
figure_width: float = 3.2,
max_step: int = 50,
use_distance_color_coding: bool = True,
) -> Image.Image:
"""
Function to generate the image of the Mandelbrot set. Two types of coordinates
are used: image-coordinates that refer to the pixels and figure-coordinates
that refer to the complex numbers inside and outside the Mandelbrot set. The
figure-coordinates in the arguments of this function determine which section
of the Mandelbrot set is viewed. The main area of the Mandelbrot set is
roughly between "-1.5 < x < 0.5" and "-1 < y < 1" in the figure-coordinates.
Commenting out tests that slow down pytest...
# 13.35s call fractals/mandelbrot.py::mandelbrot.get_image
# >>> get_image().load()[0,0]
(255, 0, 0)
# >>> get_image(use_distance_color_coding = False).load()[0,0]
(255, 255, 255)
"""
img = Image.new("RGB", (image_width, image_height))
pixels = img.load()
# loop through the image-coordinates
for image_x in range(image_width):
for image_y in range(image_height):
# determine the figure-coordinates based on the image-coordinates
figure_height = figure_width / image_width * image_height
figure_x = figure_center_x + (image_x / image_width - 0.5) * figure_width
figure_y = figure_center_y + (image_y / image_height - 0.5) * figure_height
distance = get_distance(figure_x, figure_y, max_step)
# color the corresponding pixel based on the selected coloring-function
if use_distance_color_coding:
pixels[image_x, image_y] = get_color_coded_rgb(distance)
else:
pixels[image_x, image_y] = get_black_and_white_rgb(distance)
return img
if __name__ == "__main__":
import doctest
doctest.testmod()
# colored version, full figure
img = get_image()
# uncomment for colored version, different section, zoomed in
# img = get_image(figure_center_x = -0.6, figure_center_y = -0.4,
# figure_width = 0.8)
# uncomment for black and white version, full figure
# img = get_image(use_distance_color_coding = False)
# uncomment to save the image
# img.save("mandelbrot.png")
img.show()
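# Illustrative sketch (an addition, not part of the original module): the
# escape-time iteration z -> z*z + c written with Python complex numbers. A
# bail-out radius of 2 is equivalent to the |z|**2 > 4 test in get_distance above.
# Points inside the Mandelbrot set (e.g. c = -1) never escape; points outside
# (e.g. c = 1) escape after a few steps.
def escape_steps(c: complex, max_step: int = 50) -> int:
    z = 0j
    for step in range(max_step):
        z = z * z + c
        if abs(z) > 2:
            return step
    return max_step
print(escape_steps(-1 + 0j))  # 50: bounded, so c = -1 is in the set
print(escape_steps(1 + 0j))  # 2: diverges quickly, so c = 1 is outside the set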
|
Author Anurag Kumar anuragkumarak95gmail.com gitanuragkumarak95 Simple example of fractal generation using recursion. What is the Sierpiski Triangle? The Sierpiski triangle sometimes spelled Sierpinski, also called the Sierpiski gasket or Sierpiski sieve, is a fractal attractive fixed set with the overall shape of an equilateral triangle, subdivided recursively into smaller equilateral triangles. Originally constructed as a curve, this is one of the basic examples of selfsimilar setsthat is, it is a mathematically generated pattern that is reproducible at any magnification or reduction. It is named after the Polish mathematician Wacaw Sierpiski, but appeared as a decorative pattern many centuries before the work of Sierpiski. Usage: python sierpinskitriangle.py int:depthforfractal Credits: The above description is taken from https:en.wikipedia.orgwikiSierpiC584skitriangle This code was written by editing the code from https:www.riannetrujillo.comblogpythonfractal Find the midpoint of two points getmid0, 0, 2, 2 1.0, 1.0 getmid3, 3, 3, 3 0.0, 0.0 getmid1, 0, 3, 2 2.0, 1.0 getmid0, 0, 1, 1 0.5, 0.5 getmid0, 0, 0, 0 0.0, 0.0 Recursively draw the Sierpinski triangle given the vertices of the triangle and the recursion depth | import sys
import sys
import turtle
def get_mid(p1: tuple[float, float], p2: tuple[float, float]) -> tuple[float, float]:
"""
Find the midpoint of two points
>>> get_mid((0, 0), (2, 2))
(1.0, 1.0)
>>> get_mid((-3, -3), (3, 3))
(0.0, 0.0)
>>> get_mid((1, 0), (3, 2))
(2.0, 1.0)
>>> get_mid((0, 0), (1, 1))
(0.5, 0.5)
>>> get_mid((0, 0), (0, 0))
(0.0, 0.0)
"""
return (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2
def triangle(
vertex1: tuple[float, float],
vertex2: tuple[float, float],
vertex3: tuple[float, float],
depth: int,
) -> None:
"""
Recursively draw the Sierpinski triangle given the vertices of the triangle
and the recursion depth
"""
my_pen.up()
my_pen.goto(vertex1[0], vertex1[1])
my_pen.down()
my_pen.goto(vertex2[0], vertex2[1])
my_pen.goto(vertex3[0], vertex3[1])
my_pen.goto(vertex1[0], vertex1[1])
if depth == 0:
return
triangle(vertex1, get_mid(vertex1, vertex2), get_mid(vertex1, vertex3), depth - 1)
triangle(vertex2, get_mid(vertex1, vertex2), get_mid(vertex2, vertex3), depth - 1)
triangle(vertex3, get_mid(vertex3, vertex2), get_mid(vertex1, vertex3), depth - 1)
if __name__ == "__main__":
if len(sys.argv) != 2:
raise ValueError(
"Correct format for using this script: "
"python fractals.py <int:depth_for_fractal>"
)
my_pen = turtle.Turtle()
my_pen.ht()
my_pen.speed(5)
my_pen.pencolor("red")
vertices = [(-175, -125), (0, 175), (175, -125)] # vertices of triangle
triangle(vertices[0], vertices[1], vertices[2], int(sys.argv[1]))
turtle.Screen().exitonclick()
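# Illustrative sketch (an addition, not in the original file): each call to
# triangle() draws one outline and, unless depth == 0, makes three recursive calls,
# so the number of outlines satisfies T(d) = 1 + 3 * T(d - 1) with T(0) = 1,
# which closes to (3**(d + 1) - 1) // 2.
def outlines_drawn(depth: int) -> int:
    return (3 ** (depth + 1) - 1) // 2
# outlines_drawn(0) == 1, outlines_drawn(1) == 4, outlines_drawn(5) == 364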
|
By Shreya123714 https:en.wikipedia.orgwikiFuzzyset A class for representing and manipulating triangular fuzzy sets. Attributes: name: The name or label of the fuzzy set. leftboundary: The left boundary of the fuzzy set. peak: The peak central value of the fuzzy set. rightboundary: The right boundary of the fuzzy set. Methods: membershipx: Calculate the membership value of an input 'x' in the fuzzy set. unionother: Calculate the union of this fuzzy set with another fuzzy set. intersectionother: Calculate the intersection of this fuzzy set with another. complement: Calculate the complement negation of this fuzzy set. plot: Plot the membership function of the fuzzy set. sheru FuzzySetSheru, 0.4, 1, 0.6 sheru FuzzySetname'Sheru', leftboundary0.4, peak1, rightboundary0.6 strsheru 'Sheru: 0.4, 1, 0.6' siya FuzzySetSiya, 0.5, 1, 0.7 siya FuzzySetname'Siya', leftboundary0.5, peak1, rightboundary0.7 Complement Operation sheru.complement FuzzySetname'Sheru', leftboundary0.4, peak0.6, rightboundary0 siya.complement doctest: NORMALIZEWHITESPACE FuzzySetname'Siya', leftboundary0.30000000000000004, peak0.5, rightboundary0 Intersection Operation siya.intersectionsheru FuzzySetname'Siya Sheru', leftboundary0.5, peak0.6, rightboundary1.0 Membership Operation sheru.membership0.5 0.16666666666666663 sheru.membership0.6 0.0 Union Operations siya.unionsheru FuzzySetname'Siya Sheru', leftboundary0.4, peak0.7, rightboundary1.0 FuzzySetfuzzyset, 0.1, 0.2, 0.3 FuzzySetname'fuzzyset', leftboundary0.1, peak0.2, rightboundary0.3 Calculate the complement negation of this fuzzy set. Returns: FuzzySet: A new fuzzy set representing the complement. FuzzySetfuzzyset, 0.1, 0.2, 0.3.complement FuzzySetname'fuzzyset', leftboundary0.7, peak0.9, rightboundary0.8 Calculate the intersection of this fuzzy set with another fuzzy set. Args: other: Another fuzzy set to intersect with. Returns: A new fuzzy set representing the intersection. FuzzySeta, 0.1, 0.2, 0.3.intersectionFuzzySetb, 0.4, 0.5, 0.6 FuzzySetname'a b', leftboundary0.4, peak0.3, rightboundary0.35 Calculate the membership value of an input 'x' in the fuzzy set. Returns: The membership value of 'x' in the fuzzy set. a FuzzySeta, 0.1, 0.2, 0.3 a.membership0.09 0.0 a.membership0.1 0.0 a.membership0.11 0.09999999999999995 a.membership0.4 0.0 FuzzySetA, 0, 0.5, 1.membership0.1 0.2 FuzzySetB, 0.2, 0.7, 1.membership0.6 0.8 Calculate the union of this fuzzy set with another fuzzy set. Args: other FuzzySet: Another fuzzy set to union with. Returns: FuzzySet: A new fuzzy set representing the union. FuzzySeta, 0.1, 0.2, 0.3.unionFuzzySetb, 0.4, 0.5, 0.6 FuzzySetname'a b', leftboundary0.1, peak0.6, rightboundary0.35 Plot the membership function of the fuzzy set. | from __future__ import annotations
from __future__ import annotations
from dataclasses import dataclass
import matplotlib.pyplot as plt
import numpy as np
@dataclass
class FuzzySet:
"""
A class for representing and manipulating triangular fuzzy sets.
Attributes:
name: The name or label of the fuzzy set.
left_boundary: The left boundary of the fuzzy set.
peak: The peak (central) value of the fuzzy set.
right_boundary: The right boundary of the fuzzy set.
Methods:
membership(x): Calculate the membership value of an input 'x' in the fuzzy set.
union(other): Calculate the union of this fuzzy set with another fuzzy set.
intersection(other): Calculate the intersection of this fuzzy set with another.
complement(): Calculate the complement (negation) of this fuzzy set.
plot(): Plot the membership function of the fuzzy set.
>>> sheru = FuzzySet("Sheru", 0.4, 1, 0.6)
>>> sheru
FuzzySet(name='Sheru', left_boundary=0.4, peak=1, right_boundary=0.6)
>>> str(sheru)
'Sheru: [0.4, 1, 0.6]'
>>> siya = FuzzySet("Siya", 0.5, 1, 0.7)
>>> siya
FuzzySet(name='Siya', left_boundary=0.5, peak=1, right_boundary=0.7)
# Complement Operation
>>> sheru.complement()
FuzzySet(name='¬Sheru', left_boundary=0.4, peak=0.6, right_boundary=0)
>>> siya.complement() # doctest: +NORMALIZE_WHITESPACE
FuzzySet(name='¬Siya', left_boundary=0.30000000000000004, peak=0.5,
right_boundary=0)
# Intersection Operation
>>> siya.intersection(sheru)
FuzzySet(name='Siya ∩ Sheru', left_boundary=0.5, peak=0.6, right_boundary=1.0)
# Membership Operation
>>> sheru.membership(0.5)
0.16666666666666663
>>> sheru.membership(0.6)
0.0
# Union Operations
>>> siya.union(sheru)
FuzzySet(name='Siya ∪ Sheru', left_boundary=0.4, peak=0.7, right_boundary=1.0)
"""
name: str
left_boundary: float
peak: float
right_boundary: float
def __str__(self) -> str:
"""
>>> FuzzySet("fuzzy_set", 0.1, 0.2, 0.3)
FuzzySet(name='fuzzy_set', left_boundary=0.1, peak=0.2, right_boundary=0.3)
"""
return (
f"{self.name}: [{self.left_boundary}, {self.peak}, {self.right_boundary}]"
)
def complement(self) -> FuzzySet:
"""
Calculate the complement (negation) of this fuzzy set.
Returns:
FuzzySet: A new fuzzy set representing the complement.
>>> FuzzySet("fuzzy_set", 0.1, 0.2, 0.3).complement()
FuzzySet(name='¬fuzzy_set', left_boundary=0.7, peak=0.9, right_boundary=0.8)
"""
return FuzzySet(
f"¬{self.name}",
1 - self.right_boundary,
1 - self.left_boundary,
1 - self.peak,
)
def intersection(self, other) -> FuzzySet:
"""
Calculate the intersection of this fuzzy set
with another fuzzy set.
Args:
other: Another fuzzy set to intersect with.
Returns:
A new fuzzy set representing the intersection.
>>> FuzzySet("a", 0.1, 0.2, 0.3).intersection(FuzzySet("b", 0.4, 0.5, 0.6))
FuzzySet(name='a ∩ b', left_boundary=0.4, peak=0.3, right_boundary=0.35)
"""
return FuzzySet(
f"{self.name} ∩ {other.name}",
max(self.left_boundary, other.left_boundary),
min(self.right_boundary, other.right_boundary),
(self.peak + other.peak) / 2,
)
def membership(self, x: float) -> float:
"""
Calculate the membership value of an input 'x' in the fuzzy set.
Returns:
The membership value of 'x' in the fuzzy set.
>>> a = FuzzySet("a", 0.1, 0.2, 0.3)
>>> a.membership(0.09)
0.0
>>> a.membership(0.1)
0.0
>>> a.membership(0.11)
0.09999999999999995
>>> a.membership(0.4)
0.0
>>> FuzzySet("A", 0, 0.5, 1).membership(0.1)
0.2
>>> FuzzySet("B", 0.2, 0.7, 1).membership(0.6)
0.8
"""
if x <= self.left_boundary or x >= self.right_boundary:
return 0.0
elif self.left_boundary < x <= self.peak:
return (x - self.left_boundary) / (self.peak - self.left_boundary)
elif self.peak < x < self.right_boundary:
return (self.right_boundary - x) / (self.right_boundary - self.peak)
msg = f"Invalid value {x} for fuzzy set {self}"
raise ValueError(msg)
def union(self, other) -> FuzzySet:
"""
Calculate the union of this fuzzy set with another fuzzy set.
Args:
other (FuzzySet): Another fuzzy set to union with.
Returns:
FuzzySet: A new fuzzy set representing the union.
>>> FuzzySet("a", 0.1, 0.2, 0.3).union(FuzzySet("b", 0.4, 0.5, 0.6))
FuzzySet(name='a ∪ b', left_boundary=0.1, peak=0.6, right_boundary=0.35)
"""
return FuzzySet(
f"{self.name} ∪ {other.name}",
min(self.left_boundary, other.left_boundary),
max(self.right_boundary, other.right_boundary),
(self.peak + other.peak) / 2,
)
def plot(self):
"""
Plot the membership function of the fuzzy set.
"""
x = np.linspace(0, 1, 1000)
y = [self.membership(xi) for xi in x]
plt.plot(x, y, label=self.name)
if __name__ == "__main__":
from doctest import testmod
testmod()
a = FuzzySet("A", 0, 0.5, 1)
b = FuzzySet("B", 0.2, 0.7, 1)
a.plot()
b.plot()
plt.xlabel("x")
plt.ylabel("Membership")
plt.legend()
plt.show()
union_ab = a.union(b)
intersection_ab = a.intersection(b)
complement_a = a.complement()
union_ab.plot()
intersection_ab.plot()
complement_a.plot()
plt.xlabel("x")
plt.ylabel("Membership")
plt.legend()
plt.show()
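# Illustrative sketch (an assumption, not part of the class above): the classical
# Zadeh fuzzy-set operators act pointwise on membership values, union = max and
# intersection = min, rather than constructing a new triangular set the way
# union() and intersection() above do.
def pointwise_union(set_a: "FuzzySet", set_b: "FuzzySet", x: float) -> float:
    return max(set_a.membership(x), set_b.membership(x))
def pointwise_intersection(set_a: "FuzzySet", set_b: "FuzzySet", x: float) -> float:
    return min(set_a.membership(x), set_b.membership(x))
# Example: with A = FuzzySet("A", 0, 0.5, 1) and B = FuzzySet("B", 0.2, 0.7, 1),
# pointwise_union(A, B, 0.6) == max(0.8, 0.8) == 0.8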
|
Simple multithreaded algorithm to show how the 4 phases of a genetic algorithm works Evaluation, Selection, Crossover and Mutation https:en.wikipedia.orgwikiGeneticalgorithm Author: D4rkia Maximum size of the population. Bigger could be faster but is more memory expensive. Number of elements selected in every generation of evolution. The selection takes place from best to worst of that generation and must be smaller than NPOPULATION. Probability that an element of a generation can mutate, changing one of its genes. This will guarantee that all genes will be used during evolution. Just a seed to improve randomness required by the algorithm. Evaluate how similar the item is with the target by just counting each char in the right position evaluateHelxo Worlx, Hello World 'Helxo Worlx', 9.0 Slice and combine two string at a random point. randomslice random.randint0, lenparent1 1 child1 parent1:randomslice parent2randomslice: child2 parent2:randomslice parent1randomslice: return child1, child2 def mutatechild: str, genes: liststr str: Select, crossover and mutate a new population. Select the second parent and generate new population pop Generate more children proportionally to the fitness score. childn intparent11 100 1 childn 10 if childn 10 else childn for in rangechildn: parent2 populationscorerandom.randint0, NSELECTED0 child1, child2 crossoverparent10, parent2 Append new string to the population list. pop.appendmutatechild1, genes pop.appendmutatechild2, genes return pop def basictarget: str, genes: liststr, debug: bool True tupleint, int, str: Verify if NPOPULATION is bigger than NSELECTED if NPOPULATION NSELECTED: msg fNPOPULATION must be bigger than NSELECTED raise ValueErrormsg Verify that the target contains no genes besides the ones inside genes variable. notingeneslist sortedc for c in target if c not in genes if notingeneslist: msg fnotingeneslist is not in genes list, evolution cannot converge raise ValueErrormsg Generate random starting population. population for in rangeNPOPULATION: population.append.joinrandom.choicegenes for i in rangelentarget Just some logs to know what the algorithms is doing. generation, totalpopulation 0, 0 This loop will end when we find a perfect match for our target. while True: generation 1 totalpopulation lenpopulation Random population created. Now it's time to evaluate. Adding a bit of concurrency can make everything faster, import concurrent.futures populationscore: listtuplestr, float with concurrent.futures.ThreadPoolExecutor maxworkersNUMWORKERS as executor: futures executor.submitevaluate, item for item in population concurrent.futures.waitfutures populationscore item.result for item in futures but with a simple algorithm like this, it will probably be slower. We just need to call evaluate for every item inside the population. populationscore evaluateitem, target for item in population Check if there is a matching evolution. populationscore sortedpopulationscore, keylambda x: x1, reverseTrue if populationscore00 target: return generation, totalpopulation, populationscore00 Print the best result every 10 generation. Just to know that the algorithm is working. if debug and generation 10 0: print fnGeneration: generation fnTotal Population:totalpopulation fnBest score: populationscore01 fnBest string: populationscore00 Flush the old population, keeping some of the best evolutions. Keeping this avoid regression of evolution. 
populationbest population: intNPOPULATION 3 population.clear population.extendpopulationbest Normalize population score to be between 0 and 1. populationscore item, score lentarget for item, score in populationscore This is selection for i in rangeNSELECTED: population.extendselectpopulationscoreinti, populationscore, genes Check if the population has already reached the maximum value and if so, break the cycle. If this check is disabled, the algorithm will take forever to compute large strings, but will also calculate small strings in a far fewer generations. if lenpopulation NPOPULATION: break if name main: targetstr This is a genetic algorithm to evaluate, combine, evolve, and mutate a string! geneslist list ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklm nopqrstuvwxyz.,;!?' generation, population, target basictargetstr, geneslist print fnGeneration: generationnTotal Population: populationnTarget: target | from __future__ import annotations
import random
# Maximum size of the population. Bigger could be faster but is more memory expensive.
N_POPULATION = 200
# Number of elements selected in every generation of evolution. The selection takes
# place from best to worst of that generation and must be smaller than N_POPULATION.
N_SELECTED = 50
# Probability that an element of a generation can mutate, changing one of its genes.
# This will guarantee that all genes will be used during evolution.
MUTATION_PROBABILITY = 0.4
# Just a seed to improve randomness required by the algorithm.
random.seed(random.randint(0, 1000))
def evaluate(item: str, main_target: str) -> tuple[str, float]:
"""
Evaluate how similar the item is with the target by just
counting each char in the right position
>>> evaluate("Helxo Worlx", "Hello World")
('Helxo Worlx', 9.0)
"""
score = len([g for position, g in enumerate(item) if g == main_target[position]])
return (item, float(score))
def crossover(parent_1: str, parent_2: str) -> tuple[str, str]:
    """Slice and combine two strings at a random point."""
random_slice = random.randint(0, len(parent_1) - 1)
child_1 = parent_1[:random_slice] + parent_2[random_slice:]
child_2 = parent_2[:random_slice] + parent_1[random_slice:]
return (child_1, child_2)
def mutate(child: str, genes: list[str]) -> str:
"""Mutate a random gene of a child with another one from the list."""
child_list = list(child)
if random.uniform(0, 1) < MUTATION_PROBABILITY:
child_list[random.randint(0, len(child)) - 1] = random.choice(genes)
return "".join(child_list)
# Select, crossover and mutate a new population.
def select(
parent_1: tuple[str, float],
population_score: list[tuple[str, float]],
genes: list[str],
) -> list[str]:
"""Select the second parent and generate new population"""
pop = []
# Generate more children proportionally to the fitness score.
child_n = int(parent_1[1] * 100) + 1
child_n = 10 if child_n >= 10 else child_n
for _ in range(child_n):
parent_2 = population_score[random.randint(0, N_SELECTED)][0]
child_1, child_2 = crossover(parent_1[0], parent_2)
# Append new string to the population list.
pop.append(mutate(child_1, genes))
pop.append(mutate(child_2, genes))
return pop
def basic(target: str, genes: list[str], debug: bool = True) -> tuple[int, int, str]:
"""
Verify that the target contains no genes besides the ones inside genes variable.
>>> from string import ascii_lowercase
>>> basic("doctest", ascii_lowercase, debug=False)[2]
'doctest'
>>> genes = list(ascii_lowercase)
>>> genes.remove("e")
>>> basic("test", genes)
Traceback (most recent call last):
...
ValueError: ['e'] is not in genes list, evolution cannot converge
>>> genes.remove("s")
>>> basic("test", genes)
Traceback (most recent call last):
...
ValueError: ['e', 's'] is not in genes list, evolution cannot converge
>>> genes.remove("t")
>>> basic("test", genes)
Traceback (most recent call last):
...
ValueError: ['e', 's', 't'] is not in genes list, evolution cannot converge
"""
# Verify if N_POPULATION is bigger than N_SELECTED
if N_POPULATION < N_SELECTED:
msg = f"{N_POPULATION} must be bigger than {N_SELECTED}"
raise ValueError(msg)
# Verify that the target contains no genes besides the ones inside genes variable.
not_in_genes_list = sorted({c for c in target if c not in genes})
if not_in_genes_list:
msg = f"{not_in_genes_list} is not in genes list, evolution cannot converge"
raise ValueError(msg)
# Generate random starting population.
population = []
for _ in range(N_POPULATION):
population.append("".join([random.choice(genes) for i in range(len(target))]))
    # Just some logs to know what the algorithm is doing.
generation, total_population = 0, 0
# This loop will end when we find a perfect match for our target.
while True:
generation += 1
total_population += len(population)
# Random population created. Now it's time to evaluate.
# Adding a bit of concurrency can make everything faster,
#
# import concurrent.futures
# population_score: list[tuple[str, float]] = []
# with concurrent.futures.ThreadPoolExecutor(
# max_workers=NUM_WORKERS) as executor:
# futures = {executor.submit(evaluate, item) for item in population}
# concurrent.futures.wait(futures)
# population_score = [item.result() for item in futures]
#
# but with a simple algorithm like this, it will probably be slower.
# We just need to call evaluate for every item inside the population.
population_score = [evaluate(item, target) for item in population]
# Check if there is a matching evolution.
population_score = sorted(population_score, key=lambda x: x[1], reverse=True)
if population_score[0][0] == target:
return (generation, total_population, population_score[0][0])
# Print the best result every 10 generation.
# Just to know that the algorithm is working.
if debug and generation % 10 == 0:
print(
f"\nGeneration: {generation}"
f"\nTotal Population:{total_population}"
f"\nBest score: {population_score[0][1]}"
f"\nBest string: {population_score[0][0]}"
)
# Flush the old population, keeping some of the best evolutions.
        # Keeping these avoids regression of the evolution.
population_best = population[: int(N_POPULATION / 3)]
population.clear()
population.extend(population_best)
# Normalize population score to be between 0 and 1.
population_score = [
(item, score / len(target)) for item, score in population_score
]
# This is selection
for i in range(N_SELECTED):
population.extend(select(population_score[int(i)], population_score, genes))
# Check if the population has already reached the maximum value and if so,
# break the cycle. If this check is disabled, the algorithm will take
        # forever to compute large strings, but it will also compute small strings
        # in far fewer generations.
if len(population) > N_POPULATION:
break
if __name__ == "__main__":
target_str = (
"This is a genetic algorithm to evaluate, combine, evolve, and mutate a string!"
)
genes_list = list(
" ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklm"
"nopqrstuvwxyz.,;!?+-*#@^'èéòà€ù=)(&%$£/\\"
)
generation, population, target = basic(target_str, genes_list)
print(
f"\nGeneration: {generation}\nTotal Population: {population}\nTarget: {target}"
)
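# Illustrative sketch (an addition, not in the original file): what a single
# crossover does, with the slice point fixed instead of random so the result is
# deterministic.
def crossover_at(parent_1: str, parent_2: str, slice_point: int) -> tuple[str, str]:
    return (
        parent_1[:slice_point] + parent_2[slice_point:],
        parent_2[:slice_point] + parent_1[slice_point:],
    )
print(crossover_at("AAAAAA", "BBBBBB", 2))  # ('AABBBB', 'BBAAAA')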
|
Calculate great circle distance between two points in a sphere, given longitudes and latitudes https:en.wikipedia.orgwikiHaversineformula We know that the globe is sort of spherical, so a path between two points isn't exactly a straight line. We need to account for the Earth's curvature when calculating distance from point A to B. This effect is negligible for small distances but adds up as distance increases. The Haversine method treats the earth as a sphere which allows us to project the two points A and B onto the surface of that sphere and approximate the spherical distance between them. Since the Earth is not a perfect sphere, other methods which model the Earth's ellipsoidal nature are more accurate but a quick and modifiable computation like Haversine can be handy for shorter range distances. Args: lat1, lon1: latitude and longitude of coordinate 1 lat2, lon2: latitude and longitude of coordinate 2 Returns: geographical distance between two points in metres from collections import namedtuple point2d namedtuplepoint2d, lat lon SANFRANCISCO point2d37.774856, 122.424227 YOSEMITE point2d37.864742, 119.537521 fhaversinedistanceSANFRANCISCO, YOSEMITE:0,.0f meters '254,352 meters' CONSTANTS per WGS84 https:en.wikipedia.orgwikiWorldGeodeticSystem Distance in metresm Equation parameters Equation https:en.wikipedia.orgwikiHaversineformulaFormulation Equation Square both values | from math import asin, atan, cos, radians, sin, sqrt, tan
from math import asin, atan, cos, radians, sin, sqrt, tan
AXIS_A = 6378137.0
AXIS_B = 6356752.314245
RADIUS = 6378137
def haversine_distance(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
"""
Calculate great circle distance between two points in a sphere,
given longitudes and latitudes https://en.wikipedia.org/wiki/Haversine_formula
We know that the globe is "sort of" spherical, so a path between two points
isn't exactly a straight line. We need to account for the Earth's curvature
when calculating distance from point A to B. This effect is negligible for
small distances but adds up as distance increases. The Haversine method treats
the earth as a sphere which allows us to "project" the two points A and B
onto the surface of that sphere and approximate the spherical distance between
them. Since the Earth is not a perfect sphere, other methods which model the
Earth's ellipsoidal nature are more accurate but a quick and modifiable
computation like Haversine can be handy for shorter range distances.
Args:
lat1, lon1: latitude and longitude of coordinate 1
lat2, lon2: latitude and longitude of coordinate 2
Returns:
geographical distance between two points in metres
>>> from collections import namedtuple
>>> point_2d = namedtuple("point_2d", "lat lon")
>>> SAN_FRANCISCO = point_2d(37.774856, -122.424227)
>>> YOSEMITE = point_2d(37.864742, -119.537521)
>>> f"{haversine_distance(*SAN_FRANCISCO, *YOSEMITE):0,.0f} meters"
'254,352 meters'
"""
# CONSTANTS per WGS84 https://en.wikipedia.org/wiki/World_Geodetic_System
# Distance in metres(m)
# Equation parameters
# Equation https://en.wikipedia.org/wiki/Haversine_formula#Formulation
flattening = (AXIS_A - AXIS_B) / AXIS_A
phi_1 = atan((1 - flattening) * tan(radians(lat1)))
phi_2 = atan((1 - flattening) * tan(radians(lat2)))
lambda_1 = radians(lon1)
lambda_2 = radians(lon2)
# Equation
sin_sq_phi = sin((phi_2 - phi_1) / 2)
sin_sq_lambda = sin((lambda_2 - lambda_1) / 2)
# Square both values
sin_sq_phi *= sin_sq_phi
sin_sq_lambda *= sin_sq_lambda
h_value = sqrt(sin_sq_phi + (cos(phi_1) * cos(phi_2) * sin_sq_lambda))
return 2 * RADIUS * asin(h_value)
if __name__ == "__main__":
import doctest
doctest.testmod()
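# Minimal sketch (an assumption, not the WGS84-flavoured version above) of the
# textbook spherical haversine: d = 2*R*asin(sqrt(sin^2(dphi/2) +
# cos(phi1)*cos(phi2)*sin^2(dlambda/2))), here using an assumed mean radius of
# 6371 km.
def spherical_haversine(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    mean_radius = 6_371_000  # metres, assumed mean Earth radius
    phi_1, phi_2 = radians(lat1), radians(lat2)
    d_phi = radians(lat2 - lat1)
    d_lambda = radians(lon2 - lon1)
    h = sin(d_phi / 2) ** 2 + cos(phi_1) * cos(phi_2) * sin(d_lambda / 2) ** 2
    return 2 * mean_radius * asin(sqrt(h))
# spherical_haversine(37.774856, -122.424227, 37.864742, -119.537521) is roughly
# 254 km, close to the value returned by haversine_distance above.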
|
Calculate the shortest distance along the surface of an ellipsoid between two points on the surface of earth given longitudes and latitudes https:en.wikipedia.orgwikiGeographicaldistanceLambert'sformulaforlonglines NOTE: This algorithm uses geodesyhaversinedistance.py to compute central angle, sigma Representing the earth as an ellipsoid allows us to approximate distances between points on the surface much better than a sphere. Ellipsoidal formulas treat the Earth as an oblate ellipsoid which means accounting for the flattening that happens at the North and South poles. Lambert's formulae provide accuracy on the order of 10 meteres over thousands of kilometeres. Other methods can provide millimeterlevel accuracy but this is a simpler method to calculate long range distances without increasing computational intensity. Args: lat1, lon1: latitude and longitude of coordinate 1 lat2, lon2: latitude and longitude of coordinate 2 Returns: geographical distance between two points in metres from collections import namedtuple point2d namedtuplepoint2d, lat lon SANFRANCISCO point2d37.774856, 122.424227 YOSEMITE point2d37.864742, 119.537521 NEWYORK point2d40.713019, 74.012647 VENICE point2d45.443012, 12.313071 flambertsellipsoidaldistanceSANFRANCISCO, YOSEMITE:0,.0f meters '254,351 meters' flambertsellipsoidaldistanceSANFRANCISCO, NEWYORK:0,.0f meters '4,138,992 meters' flambertsellipsoidaldistanceSANFRANCISCO, VENICE:0,.0f meters '9,737,326 meters' CONSTANTS per WGS84 https:en.wikipedia.orgwikiWorldGeodeticSystem Distance in metresm Equation Parameters https:en.wikipedia.orgwikiGeographicaldistanceLambert'sformulaforlonglines Parametric latitudes https:en.wikipedia.orgwikiLatitudeParametricorreducedlatitude Compute central angle between two points using haversine theta. sigma haversinedistance equatorial radius Intermediate P and Q values Intermediate X value X sigma sinsigma sin2Pcos2Q cos2sigma2 Intermediate Y value Y sigma sinsigma cos2Psin2Q sin2sigma2 | from math import atan, cos, radians, sin, tan
from math import atan, cos, radians, sin, tan
from .haversine_distance import haversine_distance
AXIS_A = 6378137.0
AXIS_B = 6356752.314245
EQUATORIAL_RADIUS = 6378137
def lamberts_ellipsoidal_distance(
lat1: float, lon1: float, lat2: float, lon2: float
) -> float:
"""
Calculate the shortest distance along the surface of an ellipsoid between
two points on the surface of earth given longitudes and latitudes
https://en.wikipedia.org/wiki/Geographical_distance#Lambert's_formula_for_long_lines
NOTE: This algorithm uses geodesy/haversine_distance.py to compute central angle,
sigma
Representing the earth as an ellipsoid allows us to approximate distances between
points on the surface much better than a sphere. Ellipsoidal formulas treat the
Earth as an oblate ellipsoid which means accounting for the flattening that happens
at the North and South poles. Lambert's formulae provide accuracy on the order of
    10 metres over thousands of kilometres. Other methods can provide
millimeter-level accuracy but this is a simpler method to calculate long range
distances without increasing computational intensity.
Args:
lat1, lon1: latitude and longitude of coordinate 1
lat2, lon2: latitude and longitude of coordinate 2
Returns:
geographical distance between two points in metres
>>> from collections import namedtuple
>>> point_2d = namedtuple("point_2d", "lat lon")
>>> SAN_FRANCISCO = point_2d(37.774856, -122.424227)
>>> YOSEMITE = point_2d(37.864742, -119.537521)
>>> NEW_YORK = point_2d(40.713019, -74.012647)
>>> VENICE = point_2d(45.443012, 12.313071)
>>> f"{lamberts_ellipsoidal_distance(*SAN_FRANCISCO, *YOSEMITE):0,.0f} meters"
'254,351 meters'
>>> f"{lamberts_ellipsoidal_distance(*SAN_FRANCISCO, *NEW_YORK):0,.0f} meters"
'4,138,992 meters'
>>> f"{lamberts_ellipsoidal_distance(*SAN_FRANCISCO, *VENICE):0,.0f} meters"
'9,737,326 meters'
"""
# CONSTANTS per WGS84 https://en.wikipedia.org/wiki/World_Geodetic_System
# Distance in metres(m)
# Equation Parameters
# https://en.wikipedia.org/wiki/Geographical_distance#Lambert's_formula_for_long_lines
flattening = (AXIS_A - AXIS_B) / AXIS_A
# Parametric latitudes
# https://en.wikipedia.org/wiki/Latitude#Parametric_(or_reduced)_latitude
b_lat1 = atan((1 - flattening) * tan(radians(lat1)))
b_lat2 = atan((1 - flattening) * tan(radians(lat2)))
# Compute central angle between two points
# using haversine theta. sigma = haversine_distance / equatorial radius
sigma = haversine_distance(lat1, lon1, lat2, lon2) / EQUATORIAL_RADIUS
# Intermediate P and Q values
p_value = (b_lat1 + b_lat2) / 2
q_value = (b_lat2 - b_lat1) / 2
# Intermediate X value
# X = (sigma - sin(sigma)) * sin^2Pcos^2Q / cos^2(sigma/2)
x_numerator = (sin(p_value) ** 2) * (cos(q_value) ** 2)
    x_denominator = cos(sigma / 2) ** 2
    x_value = (sigma - sin(sigma)) * (x_numerator / x_denominator)
# Intermediate Y value
# Y = (sigma + sin(sigma)) * cos^2Psin^2Q / sin^2(sigma/2)
y_numerator = (cos(p_value) ** 2) * (sin(q_value) ** 2)
y_denominator = sin(sigma / 2) ** 2
y_value = (sigma + sin(sigma)) * (y_numerator / y_denominator)
return EQUATORIAL_RADIUS * (sigma - ((flattening / 2) * (x_value + y_value)))
if __name__ == "__main__":
import doctest
doctest.testmod()
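# Worked check (illustrative, not part of the original file): the flattening used
# above reproduces the familiar WGS84 value of roughly 1/298.257.
wgs84_flattening = (AXIS_A - AXIS_B) / AXIS_A
assert round(1 / wgs84_flattening, 3) == 298.257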
|
Building block classes An Angle in degrees unit of measurement Angle Angledegrees90 Angle45.5 Angledegrees45.5 Angle1 Traceback most recent call last: ... TypeError: degrees must be a numeric value between 0 and 360. Angle361 Traceback most recent call last: ... TypeError: degrees must be a numeric value between 0 and 360. A side of a two dimensional Shape such as Polygon, etc. adjacentsides: a list of sides which are adjacent to the current side angle: the angle in degrees between each adjacent side length: the length of the current side in meters Side5 Sidelength5, angleAngledegrees90, nextsideNone Side5, Angle45.6 Sidelength5, angleAngledegrees45.6, nextsideNone Side5, Angle45.6, Side1, Angle2 doctest: ELLIPSIS Sidelength5, angleAngledegrees45.6, nextsideSidelength1, angleAngled... A geometric Ellipse on a 2D surface Ellipse5, 10 Ellipsemajorradius5, minorradius10 Ellipse5, 10 is Ellipse5, 10 False Ellipse5, 10 Ellipse5, 10 True Ellipse5, 10.area 157.07963267948966 Ellipse5, 10.perimeter 47.12388980384689 A geometric Circle on a 2D surface Circle5 Circleradius5 Circle5 is Circle5 False Circle5 Circle5 True Circle5.area 78.53981633974483 Circle5.perimeter 31.41592653589793 Circle5.diameter 10 Return the maximum number of parts that circle can be divided into if cut 'numcuts' times. circle Circle5 circle.maxparts0 1.0 circle.maxparts7 29.0 circle.maxparts54 1486.0 circle.maxparts22.5 265.375 circle.maxparts222 Traceback most recent call last: ... TypeError: numcuts must be a positive numeric value. circle.maxparts222 Traceback most recent call last: ... TypeError: numcuts must be a positive numeric value. An abstract class which represents Polygon on a 2D surface. Polygon Polygonsides Polygon.addsideSide5 PolygonsidesSidelength5, angleAngledegrees90, nextsideNone Polygon.getside0 Traceback most recent call last: ... IndexError: list index out of range Polygon.addsideSide5.getside1 Sidelength5, angleAngledegrees90, nextsideNone Polygon.setside0, Side5 Traceback most recent call last: ... IndexError: list assignment index out of range Polygon.addsideSide5.setside0, Side10 PolygonsidesSidelength10, angleAngledegrees90, nextsideNone A geometric rectangle on a 2D surface. rectangleone Rectangle5, 10 rectangleone.perimeter 30 rectangleone.area 50 Rectangle5, 10 doctest: NORMALIZEWHITESPACE RectanglesidesSidelength5, angleAngledegrees90, nextsideNone, Sidelength10, angleAngledegrees90, nextsideNone a structure which represents a geometrical square on a 2D surface squareone Square5 squareone.perimeter 20 squareone.area 25 | from __future__ import annotations
from __future__ import annotations
import math
from dataclasses import dataclass, field
from types import NoneType
from typing import Self
# Building block classes
@dataclass
class Angle:
"""
An Angle in degrees (unit of measurement)
>>> Angle()
Angle(degrees=90)
>>> Angle(45.5)
Angle(degrees=45.5)
>>> Angle(-1)
Traceback (most recent call last):
...
TypeError: degrees must be a numeric value between 0 and 360.
>>> Angle(361)
Traceback (most recent call last):
...
TypeError: degrees must be a numeric value between 0 and 360.
"""
degrees: float = 90
def __post_init__(self) -> None:
if not isinstance(self.degrees, (int, float)) or not 0 <= self.degrees <= 360:
raise TypeError("degrees must be a numeric value between 0 and 360.")
@dataclass
class Side:
"""
A side of a two dimensional Shape such as Polygon, etc.
adjacent_sides: a list of sides which are adjacent to the current side
angle: the angle in degrees between each adjacent side
length: the length of the current side in meters
>>> Side(5)
Side(length=5, angle=Angle(degrees=90), next_side=None)
>>> Side(5, Angle(45.6))
Side(length=5, angle=Angle(degrees=45.6), next_side=None)
>>> Side(5, Angle(45.6), Side(1, Angle(2))) # doctest: +ELLIPSIS
Side(length=5, angle=Angle(degrees=45.6), next_side=Side(length=1, angle=Angle(d...
"""
length: float
angle: Angle = field(default_factory=Angle)
next_side: Side | None = None
def __post_init__(self) -> None:
if not isinstance(self.length, (int, float)) or self.length <= 0:
raise TypeError("length must be a positive numeric value.")
if not isinstance(self.angle, Angle):
raise TypeError("angle must be an Angle object.")
if not isinstance(self.next_side, (Side, NoneType)):
raise TypeError("next_side must be a Side or None.")
@dataclass
class Ellipse:
"""
A geometric Ellipse on a 2D surface
>>> Ellipse(5, 10)
Ellipse(major_radius=5, minor_radius=10)
>>> Ellipse(5, 10) is Ellipse(5, 10)
False
>>> Ellipse(5, 10) == Ellipse(5, 10)
True
"""
major_radius: float
minor_radius: float
@property
def area(self) -> float:
"""
>>> Ellipse(5, 10).area
157.07963267948966
"""
return math.pi * self.major_radius * self.minor_radius
@property
def perimeter(self) -> float:
"""
>>> Ellipse(5, 10).perimeter
47.12388980384689
"""
        # Note: pi * (a + b) is only a rough approximation of an ellipse's perimeter
        return math.pi * (self.major_radius + self.minor_radius)
class Circle(Ellipse):
"""
A geometric Circle on a 2D surface
>>> Circle(5)
Circle(radius=5)
>>> Circle(5) is Circle(5)
False
>>> Circle(5) == Circle(5)
True
>>> Circle(5).area
78.53981633974483
>>> Circle(5).perimeter
31.41592653589793
"""
def __init__(self, radius: float) -> None:
super().__init__(radius, radius)
self.radius = radius
def __repr__(self) -> str:
return f"Circle(radius={self.radius})"
@property
def diameter(self) -> float:
"""
>>> Circle(5).diameter
10
"""
return self.radius * 2
def max_parts(self, num_cuts: float) -> float:
"""
Return the maximum number of parts that circle can be divided into if cut
'num_cuts' times.
>>> circle = Circle(5)
>>> circle.max_parts(0)
1.0
>>> circle.max_parts(7)
29.0
>>> circle.max_parts(54)
1486.0
>>> circle.max_parts(22.5)
265.375
>>> circle.max_parts(-222)
Traceback (most recent call last):
...
TypeError: num_cuts must be a positive numeric value.
>>> circle.max_parts("-222")
Traceback (most recent call last):
...
TypeError: num_cuts must be a positive numeric value.
"""
if not isinstance(num_cuts, (int, float)) or num_cuts < 0:
raise TypeError("num_cuts must be a positive numeric value.")
return (num_cuts + 2 + num_cuts**2) * 0.5
@dataclass
class Polygon:
"""
An abstract class which represents Polygon on a 2D surface.
>>> Polygon()
Polygon(sides=[])
"""
sides: list[Side] = field(default_factory=list)
def add_side(self, side: Side) -> Self:
"""
>>> Polygon().add_side(Side(5))
Polygon(sides=[Side(length=5, angle=Angle(degrees=90), next_side=None)])
"""
self.sides.append(side)
return self
def get_side(self, index: int) -> Side:
"""
>>> Polygon().get_side(0)
Traceback (most recent call last):
...
IndexError: list index out of range
>>> Polygon().add_side(Side(5)).get_side(-1)
Side(length=5, angle=Angle(degrees=90), next_side=None)
"""
return self.sides[index]
def set_side(self, index: int, side: Side) -> Self:
"""
>>> Polygon().set_side(0, Side(5))
Traceback (most recent call last):
...
IndexError: list assignment index out of range
>>> Polygon().add_side(Side(5)).set_side(0, Side(10))
Polygon(sides=[Side(length=10, angle=Angle(degrees=90), next_side=None)])
"""
self.sides[index] = side
return self
class Rectangle(Polygon):
"""
A geometric rectangle on a 2D surface.
>>> rectangle_one = Rectangle(5, 10)
>>> rectangle_one.perimeter()
30
>>> rectangle_one.area()
50
"""
def __init__(self, short_side_length: float, long_side_length: float) -> None:
super().__init__()
self.short_side_length = short_side_length
self.long_side_length = long_side_length
self.post_init()
def post_init(self) -> None:
"""
>>> Rectangle(5, 10) # doctest: +NORMALIZE_WHITESPACE
Rectangle(sides=[Side(length=5, angle=Angle(degrees=90), next_side=None),
Side(length=10, angle=Angle(degrees=90), next_side=None)])
"""
self.short_side = Side(self.short_side_length)
self.long_side = Side(self.long_side_length)
super().add_side(self.short_side)
super().add_side(self.long_side)
def perimeter(self) -> float:
return (self.short_side.length + self.long_side.length) * 2
def area(self) -> float:
return self.short_side.length * self.long_side.length
@dataclass
class Square(Rectangle):
"""
a structure which represents a
geometrical square on a 2D surface
>>> square_one = Square(5)
>>> square_one.perimeter()
20
>>> square_one.area()
25
"""
def __init__(self, side_length: float) -> None:
super().__init__(side_length, side_length)
def perimeter(self) -> float:
return super().perimeter()
def area(self) -> float:
return super().area()
if __name__ == "__main__":
__import__("doctest").testmod()
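# Illustrative usage sketch (an addition, not in the original file) composing the
# classes above; Circle.max_parts follows the lazy-caterer formula (n**2 + n + 2) / 2.
demo_square = Square(4)
print(demo_square.perimeter())  # 16
print(demo_square.area())  # 16
print(Circle(1).max_parts(3))  # 7.0 pieces from three straight cuts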
|
https:en.wikipedia.orgwikiBC3A9ziercurve https:www.tutorialspoint.comcomputergraphicscomputergraphicscurves.htm Bezier curve is a weighted sum of a set of control points. Generate Bezier curves from a given set of control points. This implementation works only for 2d coordinates in the xy plane. listofpoints: Control points in the xy plane on which to interpolate. These points control the behavior shape of the Bezier curve. Degree determines the flexibility of the curve. Degree 1 will produce a straight line. The basis function determines the weight of each control point at time t. t: time value between 0 and 1 inclusive at which to evaluate the basis of the curve. returns the x, y values of basis function at time t curve BezierCurve1,1, 1,2 curve.basisfunction0 1.0, 0.0 curve.basisfunction1 0.0, 1.0 basis function for each i the basis must sum up to 1 for it to produce a valid Bezier curve. The function to produce the values of the Bezier curve at time t. t: the value of time t at which to evaluate the Bezier function Returns the x, y coordinates of the Bezier curve at time t. The first point in the curve is when t 0. The last point in the curve is when t 1. curve BezierCurve1,1, 1,2 curve.beziercurvefunction0 1.0, 1.0 curve.beziercurvefunction1 1.0, 2.0 For all points, sum up the product of ith basis function and ith point. Plots the Bezier curve using matplotlib plotting capabilities. stepsize: defines the steps at which to evaluate the Bezier curve. The smaller the step size, the finer the curve produced. | # https://en.wikipedia.org/wiki/B%C3%A9zier_curve
# https://www.tutorialspoint.com/computer_graphics/computer_graphics_curves.htm
from __future__ import annotations
from scipy.special import comb # type: ignore
class BezierCurve:
"""
Bezier curve is a weighted sum of a set of control points.
Generate Bezier curves from a given set of control points.
This implementation works only for 2d coordinates in the xy plane.
"""
def __init__(self, list_of_points: list[tuple[float, float]]):
"""
list_of_points: Control points in the xy plane on which to interpolate. These
points control the behavior (shape) of the Bezier curve.
"""
self.list_of_points = list_of_points
# Degree determines the flexibility of the curve.
# Degree = 1 will produce a straight line.
self.degree = len(list_of_points) - 1
def basis_function(self, t: float) -> list[float]:
"""
The basis function determines the weight of each control point at time t.
t: time value between 0 and 1 inclusive at which to evaluate the basis of
the curve.
        returns the Bernstein basis (weight) values for each control point at time t
>>> curve = BezierCurve([(1,1), (1,2)])
>>> curve.basis_function(0)
[1.0, 0.0]
>>> curve.basis_function(1)
[0.0, 1.0]
"""
assert 0 <= t <= 1, "Time t must be between 0 and 1."
output_values: list[float] = []
for i in range(len(self.list_of_points)):
# basis function for each i
output_values.append(
comb(self.degree, i) * ((1 - t) ** (self.degree - i)) * (t**i)
)
# the basis must sum up to 1 for it to produce a valid Bezier curve.
assert round(sum(output_values), 5) == 1
return output_values
def bezier_curve_function(self, t: float) -> tuple[float, float]:
"""
The function to produce the values of the Bezier curve at time t.
t: the value of time t at which to evaluate the Bezier function
Returns the x, y coordinates of the Bezier curve at time t.
The first point in the curve is when t = 0.
The last point in the curve is when t = 1.
>>> curve = BezierCurve([(1,1), (1,2)])
>>> curve.bezier_curve_function(0)
(1.0, 1.0)
>>> curve.bezier_curve_function(1)
(1.0, 2.0)
"""
assert 0 <= t <= 1, "Time t must be between 0 and 1."
basis_function = self.basis_function(t)
x = 0.0
y = 0.0
for i in range(len(self.list_of_points)):
# For all points, sum up the product of i-th basis function and i-th point.
x += basis_function[i] * self.list_of_points[i][0]
y += basis_function[i] * self.list_of_points[i][1]
return (x, y)
def plot_curve(self, step_size: float = 0.01):
"""
Plots the Bezier curve using matplotlib plotting capabilities.
step_size: defines the step(s) at which to evaluate the Bezier curve.
The smaller the step size, the finer the curve produced.
"""
from matplotlib import pyplot as plt # type: ignore
to_plot_x: list[float] = [] # x coordinates of points to plot
to_plot_y: list[float] = [] # y coordinates of points to plot
t = 0.0
while t <= 1:
value = self.bezier_curve_function(t)
to_plot_x.append(value[0])
to_plot_y.append(value[1])
t += step_size
x = [i[0] for i in self.list_of_points]
y = [i[1] for i in self.list_of_points]
plt.plot(
to_plot_x,
to_plot_y,
color="blue",
label="Curve of Degree " + str(self.degree),
)
plt.scatter(x, y, color="red", label="Control Points")
plt.legend()
plt.show()
if __name__ == "__main__":
import doctest
doctest.testmod()
BezierCurve([(1, 2), (3, 5)]).plot_curve() # degree 1
BezierCurve([(0, 0), (5, 5), (5, 0)]).plot_curve() # degree 2
BezierCurve([(0, 0), (5, 5), (5, 0), (2.5, -2.5)]).plot_curve() # degree 3
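# Worked example (an addition, not in the original file): for the degree-2 curve
# with control points (0, 0), (5, 5), (5, 0), the Bernstein weights at t = 0.5 are
# [0.25, 0.5, 0.25], so B(0.5) = 0.25*(0, 0) + 0.5*(5, 5) + 0.25*(5, 0) = (3.75, 2.5).
quadratic = BezierCurve([(0, 0), (5, 5), (5, 0)])
assert quadratic.basis_function(0.5) == [0.25, 0.5, 0.25]
assert quadratic.bezier_curve_function(0.5) == (3.75, 2.5)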
|
render 3d points for 2d surfaces. Converts 3d point to a 2d drawable point convertto2d1.0, 2.0, 3.0, 10.0, 10.0 7.6923076923076925, 15.384615384615385 convertto2d1, 2, 3, 10, 10 7.6923076923076925, 15.384615384615385 convertto2d1, 2, 3, 10, 10 '1' is str Traceback most recent call last: ... TypeError: Input values must either be float or int: '1', 2, 3, 10, 10 rotate a point around a certain axis with a certain angle angle can be any integer between 1, 360 and axis can be any one of 'x', 'y', 'z' rotate1.0, 2.0, 3.0, 'y', 90.0 3.130524675073759, 2.0, 0.4470070007889556 rotate1, 2, 3, z, 180 0.999736015495891, 2.0001319704760485, 3 rotate'1', 2, 3, z, 90.0 '1' is str Traceback most recent call last: ... TypeError: Input values except axis must either be float or int: '1', 2, 3, 90.0 rotate1, 2, 3, n, 90 'n' is not a valid axis Traceback most recent call last: ... ValueError: not a valid axis, choose one of 'x', 'y', 'z' rotate1, 2, 3, x, 90 1, 2.5049096187183877, 2.5933429780983657 rotate1, 2, 3, x, 450 450 wrap around to 90 1, 3.5776792428178217, 0.44744970165427644 | from __future__ import annotations
import math
__version__ = "2020.9.26"
__author__ = "xcodz-dot, cclaus, dhruvmanila"
def convert_to_2d(
x: float, y: float, z: float, scale: float, distance: float
) -> tuple[float, float]:
"""
Converts 3d point to a 2d drawable point
>>> convert_to_2d(1.0, 2.0, 3.0, 10.0, 10.0)
(7.6923076923076925, 15.384615384615385)
>>> convert_to_2d(1, 2, 3, 10, 10)
(7.6923076923076925, 15.384615384615385)
>>> convert_to_2d("1", 2, 3, 10, 10) # '1' is str
Traceback (most recent call last):
...
TypeError: Input values must either be float or int: ['1', 2, 3, 10, 10]
"""
if not all(isinstance(val, (float, int)) for val in locals().values()):
msg = f"Input values must either be float or int: {list(locals().values())}"
raise TypeError(msg)
projected_x = ((x * distance) / (z + distance)) * scale
projected_y = ((y * distance) / (z + distance)) * scale
return projected_x, projected_y
def rotate(
x: float, y: float, z: float, axis: str, angle: float
) -> tuple[float, float, float]:
"""
rotate a point around a certain axis with a certain angle
    angle is wrapped modulo 360; axis can be any one of
    'x', 'y', 'z'
>>> rotate(1.0, 2.0, 3.0, 'y', 90.0)
(3.130524675073759, 2.0, 0.4470070007889556)
>>> rotate(1, 2, 3, "z", 180)
(0.999736015495891, -2.0001319704760485, 3)
>>> rotate('1', 2, 3, "z", 90.0) # '1' is str
Traceback (most recent call last):
...
TypeError: Input values except axis must either be float or int: ['1', 2, 3, 90.0]
>>> rotate(1, 2, 3, "n", 90) # 'n' is not a valid axis
Traceback (most recent call last):
...
ValueError: not a valid axis, choose one of 'x', 'y', 'z'
>>> rotate(1, 2, 3, "x", -90)
(1, -2.5049096187183877, -2.5933429780983657)
>>> rotate(1, 2, 3, "x", 450) # 450 wrap around to 90
(1, 3.5776792428178217, -0.44744970165427644)
"""
if not isinstance(axis, str):
raise TypeError("Axis must be a str")
input_variables = locals()
del input_variables["axis"]
if not all(isinstance(val, (float, int)) for val in input_variables.values()):
msg = (
"Input values except axis must either be float or int: "
f"{list(input_variables.values())}"
)
raise TypeError(msg)
    angle = (angle % 360) / 450 * 180 / math.pi  # wrap into [0, 360), then rescale
if axis == "z":
new_x = x * math.cos(angle) - y * math.sin(angle)
new_y = y * math.cos(angle) + x * math.sin(angle)
new_z = z
elif axis == "x":
new_y = y * math.cos(angle) - z * math.sin(angle)
new_z = z * math.cos(angle) + y * math.sin(angle)
new_x = x
elif axis == "y":
new_x = x * math.cos(angle) - z * math.sin(angle)
new_z = z * math.cos(angle) + x * math.sin(angle)
new_y = y
else:
raise ValueError("not a valid axis, choose one of 'x', 'y', 'z'")
return new_x, new_y, new_z
if __name__ == "__main__":
import doctest
doctest.testmod()
print(f"{convert_to_2d(1.0, 2.0, 3.0, 10.0, 10.0) = }")
print(f"{rotate(1.0, 2.0, 3.0, 'y', 90.0) = }")
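# Illustrative sketch (an addition, not part of the original module): project the
# corners of a unit cube by rotating about the y axis and then applying the
# perspective projection above. The scale and distance values here are arbitrary
# assumptions.
cube = [(float(x), float(y), float(z)) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
projected = [
    convert_to_2d(*rotate(x, y, z, "y", 30.0), scale=10.0, distance=5.0)
    for x, y, z in cube
]
print(projected)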
|
function to search the path Search for a path on a grid avoiding obstacles. grid 0, 1, 0, 0, 0, 0, ... 0, 1, 0, 0, 0, 0, ... 0, 1, 0, 0, 0, 0, ... 0, 1, 0, 0, 1, 0, ... 0, 0, 0, 0, 1, 0 init 0, 0 goal lengrid 1, lengrid0 1 cost 1 heuristic 0 lengrid0 for in rangelengrid heuristic 0 for row in rangelengrid0 for col in rangelengrid for i in rangelengrid: ... for j in rangelengrid0: ... heuristicij absi goal0 absj goal1 ... if gridij 1: ... heuristicij 99 path, action searchgrid, init, goal, cost, heuristic path doctest: NORMALIZEWHITESPACE 0, 0, 1, 0, 2, 0, 3, 0, 4, 0, 4, 1, 4, 2, 4, 3, 3, 3, 2, 3, 2, 4, 2, 5, 3, 5, 4, 5 action doctest: NORMALIZEWHITESPACE 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 2, 0, 0, 0, 3, 3, 2, 0, 0, 0, 0, 2, 2, 3, 3, 3, 0, 2 all coordinates are given in format y,x the cost map which pushes the path closer to the goal added extra penalty in the heuristic map | from __future__ import annotations
DIRECTIONS = [
[-1, 0], # left
[0, -1], # down
[1, 0], # right
[0, 1], # up
]
# function to search the path
def search(
grid: list[list[int]],
init: list[int],
goal: list[int],
cost: int,
heuristic: list[list[int]],
) -> tuple[list[list[int]], list[list[int]]]:
"""
Search for a path on a grid avoiding obstacles.
>>> grid = [[0, 1, 0, 0, 0, 0],
... [0, 1, 0, 0, 0, 0],
... [0, 1, 0, 0, 0, 0],
... [0, 1, 0, 0, 1, 0],
... [0, 0, 0, 0, 1, 0]]
>>> init = [0, 0]
>>> goal = [len(grid) - 1, len(grid[0]) - 1]
>>> cost = 1
    >>> heuristic = [[0 for row in range(len(grid[0]))] for col in range(len(grid))]
>>> for i in range(len(grid)):
... for j in range(len(grid[0])):
... heuristic[i][j] = abs(i - goal[0]) + abs(j - goal[1])
... if grid[i][j] == 1:
... heuristic[i][j] = 99
>>> path, action = search(grid, init, goal, cost, heuristic)
>>> path # doctest: +NORMALIZE_WHITESPACE
[[0, 0], [1, 0], [2, 0], [3, 0], [4, 0], [4, 1], [4, 2], [4, 3], [3, 3],
[2, 3], [2, 4], [2, 5], [3, 5], [4, 5]]
>>> action # doctest: +NORMALIZE_WHITESPACE
[[0, 0, 0, 0, 0, 0], [2, 0, 0, 0, 0, 0], [2, 0, 0, 0, 3, 3],
[2, 0, 0, 0, 0, 2], [2, 3, 3, 3, 0, 2]]
"""
closed = [
[0 for col in range(len(grid[0]))] for row in range(len(grid))
] # the reference grid
closed[init[0]][init[1]] = 1
action = [
[0 for col in range(len(grid[0]))] for row in range(len(grid))
] # the action grid
x = init[0]
y = init[1]
g = 0
    f = g + heuristic[x][y]  # cost so far plus estimated cost to the goal
cell = [[f, g, x, y]]
found = False # flag that is set when search is complete
resign = False # flag set if we can't find expand
while not found and not resign:
if len(cell) == 0:
raise ValueError("Algorithm is unable to find solution")
        else:  # choose the lowest-cost cell so as to move closer to the goal
cell.sort()
cell.reverse()
next_cell = cell.pop()
x = next_cell[2]
y = next_cell[3]
g = next_cell[1]
if x == goal[0] and y == goal[1]:
found = True
else:
for i in range(len(DIRECTIONS)): # to try out different valid actions
x2 = x + DIRECTIONS[i][0]
y2 = y + DIRECTIONS[i][1]
if x2 >= 0 and x2 < len(grid) and y2 >= 0 and y2 < len(grid[0]):
if closed[x2][y2] == 0 and grid[x2][y2] == 0:
g2 = g + cost
f2 = g2 + heuristic[x2][y2]
cell.append([f2, g2, x2, y2])
closed[x2][y2] = 1
action[x2][y2] = i
invpath = []
x = goal[0]
y = goal[1]
invpath.append([x, y]) # we get the reverse path from here
while x != init[0] or y != init[1]:
x2 = x - DIRECTIONS[action[x][y]][0]
y2 = y - DIRECTIONS[action[x][y]][1]
x = x2
y = y2
invpath.append([x, y])
path = []
for i in range(len(invpath)):
path.append(invpath[len(invpath) - 1 - i])
return path, action
if __name__ == "__main__":
grid = [
[0, 1, 0, 0, 0, 0],
        [0, 1, 0, 0, 0, 0],  # 0's are free cells whereas 1's are obstacles
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 1, 0],
[0, 0, 0, 0, 1, 0],
]
init = [0, 0]
# all coordinates are given in format [y,x]
goal = [len(grid) - 1, len(grid[0]) - 1]
cost = 1
# the cost map which pushes the path closer to the goal
heuristic = [[0 for row in range(len(grid[0]))] for col in range(len(grid))]
for i in range(len(grid)):
for j in range(len(grid[0])):
heuristic[i][j] = abs(i - goal[0]) + abs(j - goal[1])
if grid[i][j] == 1:
# added extra penalty in the heuristic map
heuristic[i][j] = 99
path, action = search(grid, init, goal, cost, heuristic)
print("ACTION MAP")
for i in range(len(action)):
print(action[i])
for i in range(len(path)):
print(path[i])
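# Illustrative sketch (an addition, not in the original file): the heuristic built
# above is the Manhattan distance to the goal, with blocked cells replaced by a
# large penalty value.
def manhattan_heuristic(
    grid: list[list[int]], goal: list[int], obstacle_penalty: int = 99
) -> list[list[int]]:
    return [
        [
            obstacle_penalty if cell == 1 else abs(i - goal[0]) + abs(j - goal[1])
            for j, cell in enumerate(row)
        ]
        for i, row in enumerate(grid)
    ]
# manhattan_heuristic(grid, goal) reproduces the heuristic table built above.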
|
Use an ant colony optimization algorithm to solve the travelling salesman problem TSP which asks the following question: Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city? https:en.wikipedia.orgwikiAntcolonyoptimizationalgorithms https:en.wikipedia.orgwikiTravellingsalesmanproblem Author: Clark Ant colony algorithm main function maincitiescities, antsnum10, iterationsnum20, ... pheromoneevaporation0.7, alpha1.0, beta5.0, q10 0, 1, 2, 3, 4, 5, 6, 7, 0, 37.909778143828696 maincities0: 0, 0, 1: 2, 2, antsnum5, iterationsnum5, ... pheromoneevaporation0.7, alpha1.0, beta5.0, q10 0, 1, 0, 5.656854249492381 maincities0: 0, 0, 1: 2, 2, 4: 4, 4, antsnum5, iterationsnum5, ... pheromoneevaporation0.7, alpha1.0, beta5.0, q10 Traceback most recent call last: ... IndexError: list index out of range maincities, antsnum5, iterationsnum5, ... pheromoneevaporation0.7, alpha1.0, beta5.0, q10 Traceback most recent call last: ... StopIteration maincities0: 0, 0, 1: 2, 2, antsnum0, iterationsnum5, ... pheromoneevaporation0.7, alpha1.0, beta5.0, q10 , inf maincities0: 0, 0, 1: 2, 2, antsnum5, iterationsnum0, ... pheromoneevaporation0.7, alpha1.0, beta5.0, q10 , inf maincities0: 0, 0, 1: 2, 2, antsnum5, iterationsnum5, ... pheromoneevaporation1, alpha1.0, beta5.0, q10 0, 1, 0, 5.656854249492381 maincities0: 0, 0, 1: 2, 2, antsnum5, iterationsnum5, ... pheromoneevaporation0, alpha1.0, beta5.0, q10 0, 1, 0, 5.656854249492381 Initialize the pheromone matrix Calculate the distance between two coordinate points distance0, 0, 3, 4 5.0 distance0, 0, 3, 4 5.0 distance0, 0, 3, 4 5.0 Update pheromones on the route and update the best route pheromoneupdatepheromone1.0, 1.0, 1.0, 1.0, ... cities0: 0,0, 1: 2,2, pheromoneevaporation0.7, ... antsroute0, 1, 0, q10, bestpath, ... bestdistancefloatinf 0.7, 4.235533905932737, 4.235533905932737, 0.7, 0, 1, 0, 5.656854249492381 pheromoneupdatepheromone, ... cities0: 0,0, 1: 2,2, pheromoneevaporation0.7, ... antsroute0, 1, 0, q10, bestpath, ... bestdistancefloatinf Traceback most recent call last: ... IndexError: list index out of range pheromoneupdatepheromone1.0, 1.0, 1.0, 1.0, ... cities, pheromoneevaporation0.7, ... antsroute0, 1, 0, q10, bestpath, ... bestdistancefloatinf Traceback most recent call last: ... KeyError: 0 Choose the next city for ants cityselectpheromone1.0, 1.0, 1.0, 1.0, currentcity0: 0, 0, ... unvisitedcities1: 2, 2, alpha1.0, beta5.0 1: 2, 2, cityselectpheromone, currentcity0: 0,0, ... unvisitedcities1: 2, 2, alpha1.0, beta5.0 Traceback most recent call last: ... IndexError: list index out of range cityselectpheromone1.0, 1.0, 1.0, 1.0, currentcity, ... unvisitedcities1: 2, 2, alpha1.0, beta5.0 Traceback most recent call last: ... StopIteration cityselectpheromone1.0, 1.0, 1.0, 1.0, currentcity0: 0, 0, ... unvisitedcities, alpha1.0, beta5.0 Traceback most recent call last: ... IndexError: list index out of range | import copy
import random
cities = {
0: [0, 0],
1: [0, 5],
2: [3, 8],
3: [8, 10],
4: [12, 8],
5: [12, 4],
6: [8, 0],
7: [6, 2],
}
def main(
cities: dict[int, list[int]],
ants_num: int,
iterations_num: int,
pheromone_evaporation: float,
alpha: float,
beta: float,
q: float, # Pheromone system parameters Q,which is a constant
) -> tuple[list[int], float]:
"""
Ant colony algorithm main function
>>> main(cities=cities, ants_num=10, iterations_num=20,
... pheromone_evaporation=0.7, alpha=1.0, beta=5.0, q=10)
([0, 1, 2, 3, 4, 5, 6, 7, 0], 37.909778143828696)
>>> main(cities={0: [0, 0], 1: [2, 2]}, ants_num=5, iterations_num=5,
... pheromone_evaporation=0.7, alpha=1.0, beta=5.0, q=10)
([0, 1, 0], 5.656854249492381)
>>> main(cities={0: [0, 0], 1: [2, 2], 4: [4, 4]}, ants_num=5, iterations_num=5,
... pheromone_evaporation=0.7, alpha=1.0, beta=5.0, q=10)
Traceback (most recent call last):
...
IndexError: list index out of range
>>> main(cities={}, ants_num=5, iterations_num=5,
... pheromone_evaporation=0.7, alpha=1.0, beta=5.0, q=10)
Traceback (most recent call last):
...
StopIteration
>>> main(cities={0: [0, 0], 1: [2, 2]}, ants_num=0, iterations_num=5,
... pheromone_evaporation=0.7, alpha=1.0, beta=5.0, q=10)
([], inf)
>>> main(cities={0: [0, 0], 1: [2, 2]}, ants_num=5, iterations_num=0,
... pheromone_evaporation=0.7, alpha=1.0, beta=5.0, q=10)
([], inf)
>>> main(cities={0: [0, 0], 1: [2, 2]}, ants_num=5, iterations_num=5,
... pheromone_evaporation=1, alpha=1.0, beta=5.0, q=10)
([0, 1, 0], 5.656854249492381)
>>> main(cities={0: [0, 0], 1: [2, 2]}, ants_num=5, iterations_num=5,
... pheromone_evaporation=0, alpha=1.0, beta=5.0, q=10)
([0, 1, 0], 5.656854249492381)
"""
# Initialize the pheromone matrix
cities_num = len(cities)
    # Build independent rows; [[1.0] * n] * n would alias every row to one list.
    pheromone = [[1.0] * cities_num for _ in range(cities_num)]
best_path: list[int] = []
best_distance = float("inf")
for _ in range(iterations_num):
ants_route = []
for _ in range(ants_num):
unvisited_cities = copy.deepcopy(cities)
current_city = {next(iter(cities.keys())): next(iter(cities.values()))}
del unvisited_cities[next(iter(current_city.keys()))]
ant_route = [next(iter(current_city.keys()))]
while unvisited_cities:
current_city, unvisited_cities = city_select(
pheromone, current_city, unvisited_cities, alpha, beta
)
ant_route.append(next(iter(current_city.keys())))
ant_route.append(0)
ants_route.append(ant_route)
pheromone, best_path, best_distance = pheromone_update(
pheromone,
cities,
pheromone_evaporation,
ants_route,
q,
best_path,
best_distance,
)
return best_path, best_distance
def distance(city1: list[int], city2: list[int]) -> float:
"""
Calculate the distance between two coordinate points
>>> distance([0, 0], [3, 4] )
5.0
>>> distance([0, 0], [-3, 4] )
5.0
>>> distance([0, 0], [-3, -4] )
5.0
"""
return (((city1[0] - city2[0]) ** 2) + ((city1[1] - city2[1]) ** 2)) ** 0.5
def pheromone_update(
pheromone: list[list[float]],
cities: dict[int, list[int]],
pheromone_evaporation: float,
ants_route: list[list[int]],
q: float, # Pheromone system parameters Q,which is a constant
best_path: list[int],
best_distance: float,
) -> tuple[list[list[float]], list[int], float]:
"""
Update pheromones on the route and update the best route
>>> pheromone_update(pheromone=[[1.0, 1.0], [1.0, 1.0]],
... cities={0: [0,0], 1: [2,2]}, pheromone_evaporation=0.7,
... ants_route=[[0, 1, 0]], q=10, best_path=[],
... best_distance=float("inf"))
([[0.7, 4.235533905932737], [4.235533905932737, 0.7]], [0, 1, 0], 5.656854249492381)
>>> pheromone_update(pheromone=[],
... cities={0: [0,0], 1: [2,2]}, pheromone_evaporation=0.7,
... ants_route=[[0, 1, 0]], q=10, best_path=[],
... best_distance=float("inf"))
Traceback (most recent call last):
...
IndexError: list index out of range
>>> pheromone_update(pheromone=[[1.0, 1.0], [1.0, 1.0]],
... cities={}, pheromone_evaporation=0.7,
... ants_route=[[0, 1, 0]], q=10, best_path=[],
... best_distance=float("inf"))
Traceback (most recent call last):
...
KeyError: 0
"""
for a in range(len(cities)): # Update the volatilization of pheromone on all routes
for b in range(len(cities)):
pheromone[a][b] *= pheromone_evaporation
for ant_route in ants_route:
total_distance = 0.0
for i in range(len(ant_route) - 1): # Calculate total distance
total_distance += distance(cities[ant_route[i]], cities[ant_route[i + 1]])
delta_pheromone = q / total_distance
for i in range(len(ant_route) - 1): # Update pheromones
pheromone[ant_route[i]][ant_route[i + 1]] += delta_pheromone
pheromone[ant_route[i + 1]][ant_route[i]] = pheromone[ant_route[i]][
ant_route[i + 1]
]
if total_distance < best_distance:
best_path = ant_route
best_distance = total_distance
return pheromone, best_path, best_distance
def city_select(
pheromone: list[list[float]],
current_city: dict[int, list[int]],
unvisited_cities: dict[int, list[int]],
alpha: float,
beta: float,
) -> tuple[dict[int, list[int]], dict[int, list[int]]]:
"""
Choose the next city for ants
>>> city_select(pheromone=[[1.0, 1.0], [1.0, 1.0]], current_city={0: [0, 0]},
... unvisited_cities={1: [2, 2]}, alpha=1.0, beta=5.0)
({1: [2, 2]}, {})
>>> city_select(pheromone=[], current_city={0: [0,0]},
... unvisited_cities={1: [2, 2]}, alpha=1.0, beta=5.0)
Traceback (most recent call last):
...
IndexError: list index out of range
>>> city_select(pheromone=[[1.0, 1.0], [1.0, 1.0]], current_city={},
... unvisited_cities={1: [2, 2]}, alpha=1.0, beta=5.0)
Traceback (most recent call last):
...
StopIteration
>>> city_select(pheromone=[[1.0, 1.0], [1.0, 1.0]], current_city={0: [0, 0]},
... unvisited_cities={}, alpha=1.0, beta=5.0)
Traceback (most recent call last):
...
IndexError: list index out of range
"""
probabilities = []
for city in unvisited_cities:
city_distance = distance(
unvisited_cities[city], next(iter(current_city.values()))
)
probability = (pheromone[city][next(iter(current_city.keys()))] ** alpha) * (
(1 / city_distance) ** beta
)
probabilities.append(probability)
chosen_city_i = random.choices(
list(unvisited_cities.keys()), weights=probabilities
)[0]
chosen_city = {chosen_city_i: unvisited_cities[chosen_city_i]}
del unvisited_cities[next(iter(chosen_city.keys()))]
return chosen_city, unvisited_cities
if __name__ == "__main__":
best_path, best_distance = main(
cities=cities,
ants_num=10,
iterations_num=20,
pheromone_evaporation=0.7,
alpha=1.0,
beta=5.0,
q=10,
)
print(f"{best_path = }")
print(f"{best_distance = }")
|
Finding Articulation Points in Undirected Graph AP found via bridge AP found via cycle Adjacency list of graph | # Finding Articulation Points in Undirected Graph
def compute_ap(l): # noqa: E741
n = len(l)
out_edge_count = 0
low = [0] * n
visited = [False] * n
is_art = [False] * n
def dfs(root, at, parent, out_edge_count):
if parent == root:
out_edge_count += 1
visited[at] = True
low[at] = at
for to in l[at]:
if to == parent:
pass
elif not visited[to]:
out_edge_count = dfs(root, to, at, out_edge_count)
low[at] = min(low[at], low[to])
# AP found via bridge
if at < low[to]:
is_art[at] = True
# AP found via cycle
if at == low[to]:
is_art[at] = True
else:
low[at] = min(low[at], to)
return out_edge_count
for i in range(n):
if not visited[i]:
out_edge_count = 0
out_edge_count = dfs(i, i, -1, out_edge_count)
is_art[i] = out_edge_count > 1
for x in range(len(is_art)):
if is_art[x] is True:
print(x)
# Adjacency list of graph
data = {
0: [1, 2],
1: [0, 2],
2: [0, 1, 3, 5],
3: [2, 4],
4: [3],
5: [2, 6, 8],
6: [5, 7],
7: [6, 8],
8: [5, 7],
}
compute_ap(data)
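# For this sample graph the script prints the articulation points 2, 3 and 5:
# removing vertex 2 separates {0, 1} from the rest of the graph, removing 3
# isolates vertex 4, and removing 5 disconnects {6, 7, 8}.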
|
Depth First Search. Args : G Dictionary of edges s Starting Node Vars : vis Set of visited nodes S Traversal Stack Breadth First Search. Args : G Dictionary of edges s Starting Node Vars : vis Set of visited nodes Q Traversal Stack Dijkstra's shortest path Algorithm Args : G Dictionary of edges s Starting Node Vars : dist Dictionary storing shortest distance from s to every other node known Set of knows nodes path Preceding node in path Topological Sort Reading an Adjacency matrix n intinput.strip a for in rangen: a.appendtuplemapint, input.strip.split return a, n def floyaandn: a, n aandn dist lista path 0 n for i in rangen for k in rangen: for i in rangen: for j in rangen: if distij distik distkj: distij distik distkj pathik k printdist def primg, s: dist, known, path s: 0, set, s: 0 while True: if lenknown leng 1: break mini 100000 for i in dist: if i not in known and disti mini: mini disti u i known.addu for v in gu: if v0 not in known and v1 dist.getv0, 100000: distv0 v1 pathv0 u return dist def edglist: r Get the edges and number of edges from the user Parameters: None Returns: tuple: A tuple containing a list of edges and number of edges Example: Simulate user input for 3 edges and 4 vertices: 1, 2, 2, 3, 3, 4 inputdata 4 3n1 2n2 3n3 4n import sys,io originalinput sys.stdin sys.stdin io.StringIOinputdata Redirect stdin for testing edglist 1, 2, 2, 3, 3, 4, 4 sys.stdin originalinput Restore original stdin Kruskal's MST Algorithm Args : E Edge list n Number of Nodes Vars : s Set of all nodes as unique disjoint sets initially Sort edges on the basis of distance Find the isolated node in the graph Parameters: graph dict: A dictionary representing a graph. Returns: list: A list of isolated nodes. Examples: graph1 1: 2, 3, 2: 1, 3, 3: 1, 2, 4: findisolatednodesgraph1 4 graph2 'A': 'B', 'C', 'B': 'A', 'C': 'A', 'D': findisolatednodesgraph2 'D' graph3 'X': , 'Y': , 'Z': findisolatednodesgraph3 'X', 'Y', 'Z' graph4 1: 2, 3, 2: 1, 3, 3: 1, 2 findisolatednodesgraph4 graph5 findisolatednodesgraph5 | from collections import deque
def _input(message):
return input(message).strip().split(" ")
def initialize_unweighted_directed_graph(
node_count: int, edge_count: int
) -> dict[int, list[int]]:
graph: dict[int, list[int]] = {}
for i in range(node_count):
graph[i + 1] = []
for e in range(edge_count):
x, y = (int(i) for i in _input(f"Edge {e + 1}: <node1> <node2> "))
graph[x].append(y)
return graph
def initialize_unweighted_undirected_graph(
node_count: int, edge_count: int
) -> dict[int, list[int]]:
graph: dict[int, list[int]] = {}
for i in range(node_count):
graph[i + 1] = []
for e in range(edge_count):
x, y = (int(i) for i in _input(f"Edge {e + 1}: <node1> <node2> "))
graph[x].append(y)
graph[y].append(x)
return graph
def initialize_weighted_undirected_graph(
node_count: int, edge_count: int
) -> dict[int, list[tuple[int, int]]]:
graph: dict[int, list[tuple[int, int]]] = {}
for i in range(node_count):
graph[i + 1] = []
for e in range(edge_count):
x, y, w = (int(i) for i in _input(f"Edge {e + 1}: <node1> <node2> <weight> "))
graph[x].append((y, w))
graph[y].append((x, w))
return graph
if __name__ == "__main__":
n, m = (int(i) for i in _input("Number of nodes and edges: "))
graph_choice = int(
_input(
"Press 1 or 2 or 3 \n"
"1. Unweighted directed \n"
"2. Unweighted undirected \n"
"3. Weighted undirected \n"
)[0]
)
g = {
1: initialize_unweighted_directed_graph,
2: initialize_unweighted_undirected_graph,
3: initialize_weighted_undirected_graph,
}[graph_choice](n, m)
"""
--------------------------------------------------------------------------------
Depth First Search.
Args : G - Dictionary of edges
s - Starting Node
Vars : vis - Set of visited nodes
S - Traversal Stack
--------------------------------------------------------------------------------
"""
def dfs(g, s):
vis, _s = {s}, [s]
print(s)
while _s:
flag = 0
for i in g[_s[-1]]:
if i not in vis:
_s.append(i)
vis.add(i)
flag = 1
print(i)
break
if not flag:
_s.pop()
"""
--------------------------------------------------------------------------------
Breadth First Search.
Args : G - Dictionary of edges
s - Starting Node
Vars : vis - Set of visited nodes
Q - Traversal Stack
--------------------------------------------------------------------------------
"""
def bfs(g, s):
vis, q = {s}, deque([s])
print(s)
while q:
u = q.popleft()
for v in g[u]:
if v not in vis:
vis.add(v)
q.append(v)
print(v)
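# Hedged usage sketch (the example graph below is an assumption, not part of the
# original file); both traversals print one vertex per line:
#   >>> example_graph = {1: [2, 3], 2: [4], 3: [], 4: []}
#   >>> dfs(example_graph, 1)   # prints 1, 2, 4, 3
#   >>> bfs(example_graph, 1)   # prints 1, 2, 3, 4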
"""
--------------------------------------------------------------------------------
Dijkstra's shortest path Algorithm
Args : G - Dictionary of edges
s - Starting Node
Vars : dist - Dictionary storing shortest distance from s to every other node
            known - Set of known nodes
path - Preceding node in path
--------------------------------------------------------------------------------
"""
def dijk(g, s):
dist, known, path = {s: 0}, set(), {s: 0}
while True:
if len(known) == len(g) - 1:
break
mini = 100000
for i in dist:
if i not in known and dist[i] < mini:
mini = dist[i]
u = i
known.add(u)
for v in g[u]:
if v[0] not in known and dist[u] + v[1] < dist.get(v[0], 100000):
dist[v[0]] = dist[u] + v[1]
path[v[0]] = u
for i in dist:
if i != s:
print(dist[i])
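# Hedged usage sketch (the weighted graph below is an assumption): adjacency
# lists hold (neighbour, weight) pairs and dijk prints the shortest distance to
# every vertex other than the source:
#   >>> weighted = {1: [(2, 1), (3, 4)], 2: [(1, 1), (3, 2)], 3: [(1, 4), (2, 2)]}
#   >>> dijk(weighted, 1)   # prints 1 (vertex 2) and 3 (vertex 3)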
"""
--------------------------------------------------------------------------------
Topological Sort
--------------------------------------------------------------------------------
"""
def topo(g, ind=None, q=None):
if q is None:
q = [1]
if ind is None:
        ind = [0] * (len(g) + 1)  # since the 0th index is ignored (nodes are 1-based)
for u in g:
for v in g[u]:
ind[v] += 1
q = deque()
for i in g:
if ind[i] == 0:
q.append(i)
if len(q) == 0:
return
v = q.popleft()
print(v)
for w in g[v]:
ind[w] -= 1
if ind[w] == 0:
q.append(w)
topo(g, ind, q)
"""
--------------------------------------------------------------------------------
Reading an Adjacency matrix
--------------------------------------------------------------------------------
"""
def adjm():
r"""
Reading an Adjacency matrix
Parameters:
None
Returns:
tuple: A tuple containing a list of edges and number of edges
Example:
    >>> # Simulate user input for 4 nodes
>>> input_data = "4\n0 1 0 1\n1 0 1 0\n0 1 0 1\n1 0 1 0\n"
>>> import sys,io
>>> original_input = sys.stdin
>>> sys.stdin = io.StringIO(input_data) # Redirect stdin for testing
>>> adjm()
([(0, 1, 0, 1), (1, 0, 1, 0), (0, 1, 0, 1), (1, 0, 1, 0)], 4)
>>> sys.stdin = original_input # Restore original stdin
"""
n = int(input().strip())
a = []
for _ in range(n):
a.append(tuple(map(int, input().strip().split())))
return a, n
"""
--------------------------------------------------------------------------------
Floyd Warshall's algorithm
    Args : a_and_n - Tuple of (adjacency matrix, number of nodes)
    Vars : dist - Matrix of shortest distances between every pair of nodes
           path - Matrix of intermediate nodes for path reconstruction
--------------------------------------------------------------------------------
"""
def floy(a_and_n):
(a, n) = a_and_n
    dist = [list(row) for row in a]  # copy each row so the input matrix is not mutated
path = [[0] * n for i in range(n)]
for k in range(n):
for i in range(n):
for j in range(n):
if dist[i][j] > dist[i][k] + dist[k][j]:
dist[i][j] = dist[i][k] + dist[k][j]
                    path[i][j] = k  # k is an intermediate node on the improved i -> j path
print(dist)
"""
--------------------------------------------------------------------------------
Prim's MST Algorithm
Args : G - Dictionary of edges
s - Starting Node
Vars : dist - Dictionary storing shortest distance from s to nearest node
            known - Set of known nodes
path - Preceding node in path
--------------------------------------------------------------------------------
"""
def prim(g, s):
dist, known, path = {s: 0}, set(), {s: 0}
while True:
if len(known) == len(g) - 1:
break
mini = 100000
for i in dist:
if i not in known and dist[i] < mini:
mini = dist[i]
u = i
known.add(u)
for v in g[u]:
if v[0] not in known and v[1] < dist.get(v[0], 100000):
dist[v[0]] = v[1]
path[v[0]] = u
return dist
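# Hedged usage sketch (same assumed weighted graph as for dijk): prim returns,
# for every vertex, the weight of the edge that attaches it to the minimum
# spanning tree (0 for the start vertex):
#   >>> weighted = {1: [(2, 1), (3, 4)], 2: [(1, 1), (3, 2)], 3: [(1, 4), (2, 2)]}
#   >>> prim(weighted, 1)
#   {1: 0, 2: 1, 3: 2}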
"""
--------------------------------------------------------------------------------
Accepting Edge list
Vars : n - Number of nodes
m - Number of edges
Returns : l - Edge list
n - Number of Nodes
--------------------------------------------------------------------------------
"""
def edglist():
r"""
Get the edges and number of edges from the user
Parameters:
None
Returns:
tuple: A tuple containing a list of edges and number of edges
Example:
>>> # Simulate user input for 3 edges and 4 vertices: (1, 2), (2, 3), (3, 4)
>>> input_data = "4 3\n1 2\n2 3\n3 4\n"
>>> import sys,io
>>> original_input = sys.stdin
>>> sys.stdin = io.StringIO(input_data) # Redirect stdin for testing
>>> edglist()
([(1, 2), (2, 3), (3, 4)], 4)
>>> sys.stdin = original_input # Restore original stdin
"""
n, m = tuple(map(int, input().split(" ")))
edges = []
for _ in range(m):
edges.append(tuple(map(int, input().split(" "))))
return edges, n
"""
--------------------------------------------------------------------------------
Kruskal's MST Algorithm
Args : E - Edge list
n - Number of Nodes
Vars : s - Set of all nodes as unique disjoint sets (initially)
--------------------------------------------------------------------------------
"""
def krusk(e_and_n):
"""
Sort edges on the basis of distance
"""
(e, n) = e_and_n
e.sort(reverse=True, key=lambda x: x[2])
s = [{i} for i in range(1, n + 1)]
while True:
if len(s) == 1:
break
print(s)
x = e.pop()
for i in range(len(s)):
if x[0] in s[i]:
break
for j in range(len(s)):
if x[1] in s[j]:
if i == j:
break
s[j].update(s[i])
s.pop(i)
break
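# Hedged usage sketch (the weighted edge list is an assumption): krusk expects
# (node, node, weight) triples and prints the evolving list of disjoint sets
# while merging them:
#   >>> krusk(([(1, 2, 1), (2, 3, 2), (1, 3, 3)], 3))
#   [{1}, {2}, {3}]
#   [{1, 2}, {3}]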
def find_isolated_nodes(graph):
"""
Find the isolated node in the graph
Parameters:
graph (dict): A dictionary representing a graph.
Returns:
list: A list of isolated nodes.
Examples:
>>> graph1 = {1: [2, 3], 2: [1, 3], 3: [1, 2], 4: []}
>>> find_isolated_nodes(graph1)
[4]
>>> graph2 = {'A': ['B', 'C'], 'B': ['A'], 'C': ['A'], 'D': []}
>>> find_isolated_nodes(graph2)
['D']
>>> graph3 = {'X': [], 'Y': [], 'Z': []}
>>> find_isolated_nodes(graph3)
['X', 'Y', 'Z']
>>> graph4 = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
>>> find_isolated_nodes(graph4)
[]
>>> graph5 = {}
>>> find_isolated_nodes(graph5)
[]
"""
isolated = []
for node in graph:
if not graph[node]:
isolated.append(node)
return isolated
|
Returns shortest paths from a vertex src to all other vertices. edges 2, 1, 10, 3, 2, 3, 0, 3, 5, 0, 1, 4 g src: s, dst: d, weight: w for s, d, w in edges bellmanfordg, 4, 4, 0 0.0, 2.0, 8.0, 5.0 g src: s, dst: d, weight: w for s, d, w in edges 1, 3, 5 bellmanfordg, 4, 5, 0 Traceback most recent call last: ... Exception: Negative cycle found | from __future__ import annotations
def print_distance(distance: list[float], src):
print(f"Vertex\tShortest Distance from vertex {src}")
for i, d in enumerate(distance):
print(f"{i}\t\t{d}")
def check_negative_cycle(
graph: list[dict[str, int]], distance: list[float], edge_count: int
):
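    # One extra relaxation pass: if any edge can still be relaxed after the
    # |V| - 1 rounds of bellman_ford, a negative-weight cycle is reachable.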
for j in range(edge_count):
u, v, w = (graph[j][k] for k in ["src", "dst", "weight"])
if distance[u] != float("inf") and distance[u] + w < distance[v]:
return True
return False
def bellman_ford(
graph: list[dict[str, int]], vertex_count: int, edge_count: int, src: int
) -> list[float]:
"""
Returns shortest paths from a vertex src to all
other vertices.
>>> edges = [(2, 1, -10), (3, 2, 3), (0, 3, 5), (0, 1, 4)]
>>> g = [{"src": s, "dst": d, "weight": w} for s, d, w in edges]
>>> bellman_ford(g, 4, 4, 0)
[0.0, -2.0, 8.0, 5.0]
>>> g = [{"src": s, "dst": d, "weight": w} for s, d, w in edges + [(1, 3, 5)]]
>>> bellman_ford(g, 4, 5, 0)
Traceback (most recent call last):
...
Exception: Negative cycle found
"""
distance = [float("inf")] * vertex_count
distance[src] = 0.0
for _ in range(vertex_count - 1):
for j in range(edge_count):
u, v, w = (graph[j][k] for k in ["src", "dst", "weight"])
if distance[u] != float("inf") and distance[u] + w < distance[v]:
distance[v] = distance[u] + w
negative_cycle_exists = check_negative_cycle(graph, distance, edge_count)
if negative_cycle_exists:
raise Exception("Negative cycle found")
return distance
if __name__ == "__main__":
import doctest
doctest.testmod()
V = int(input("Enter number of vertices: ").strip())
E = int(input("Enter number of edges: ").strip())
graph: list[dict[str, int]] = [{} for _ in range(E)]
for i in range(E):
print("Edge ", i + 1)
src, dest, weight = (
int(x)
for x in input("Enter source, destination, weight: ").strip().split(" ")
)
graph[i] = {"src": src, "dst": dest, "weight": weight}
source = int(input("\nEnter shortest path source:").strip())
shortest_distance = bellman_ford(graph, V, E, source)
    print_distance(shortest_distance, source)
|
Bidirectional Dijkstra's algorithm. A bidirectional approach is an efficient and less time consuming optimization for Dijkstra's searching algorithm Reference: shorturl.atexHM7 Author: Swayam Singh https:github.compractice404 Bidirectional Dijkstra's algorithm. Returns: shortestpathdistance int: length of the shortest path. Warnings: If the destination is not reachable, function returns 1 bidirectionaldijE, F, graphfwd, graphbwd 3 | # Author: Swayam Singh (https://github.com/practice404)
from queue import PriorityQueue
from typing import Any
import numpy as np
def pass_and_relaxation(
graph: dict,
v: str,
visited_forward: set,
visited_backward: set,
cst_fwd: dict,
cst_bwd: dict,
queue: PriorityQueue,
parent: dict,
shortest_distance: float,
) -> float:
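    # Relax every edge leaving v for one search direction; whenever a vertex
    # already settled by the opposite search is reached, try to tighten the
    # best known combined forward + backward distance.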
for nxt, d in graph[v]:
if nxt in visited_forward:
continue
old_cost_f = cst_fwd.get(nxt, np.inf)
new_cost_f = cst_fwd[v] + d
if new_cost_f < old_cost_f:
queue.put((new_cost_f, nxt))
cst_fwd[nxt] = new_cost_f
parent[nxt] = v
if nxt in visited_backward:
if cst_fwd[v] + d + cst_bwd[nxt] < shortest_distance:
shortest_distance = cst_fwd[v] + d + cst_bwd[nxt]
return shortest_distance
def bidirectional_dij(
source: str, destination: str, graph_forward: dict, graph_backward: dict
) -> int:
"""
Bi-directional Dijkstra's algorithm.
Returns:
shortest_path_distance (int): length of the shortest path.
Warnings:
If the destination is not reachable, function returns -1
>>> bidirectional_dij("E", "F", graph_fwd, graph_bwd)
3
"""
shortest_path_distance = -1
visited_forward = set()
visited_backward = set()
cst_fwd = {source: 0}
cst_bwd = {destination: 0}
parent_forward = {source: None}
parent_backward = {destination: None}
queue_forward: PriorityQueue[Any] = PriorityQueue()
queue_backward: PriorityQueue[Any] = PriorityQueue()
shortest_distance = np.inf
queue_forward.put((0, source))
queue_backward.put((0, destination))
if source == destination:
return 0
while not queue_forward.empty() and not queue_backward.empty():
_, v_fwd = queue_forward.get()
visited_forward.add(v_fwd)
_, v_bwd = queue_backward.get()
visited_backward.add(v_bwd)
shortest_distance = pass_and_relaxation(
graph_forward,
v_fwd,
visited_forward,
visited_backward,
cst_fwd,
cst_bwd,
queue_forward,
parent_forward,
shortest_distance,
)
shortest_distance = pass_and_relaxation(
graph_backward,
v_bwd,
visited_backward,
visited_forward,
cst_bwd,
cst_fwd,
queue_backward,
parent_backward,
shortest_distance,
)
if cst_fwd[v_fwd] + cst_bwd[v_bwd] >= shortest_distance:
break
if shortest_distance != np.inf:
shortest_path_distance = shortest_distance
return shortest_path_distance
graph_fwd = {
"B": [["C", 1]],
"C": [["D", 1]],
"D": [["F", 1]],
"E": [["B", 1], ["G", 2]],
"F": [],
"G": [["F", 1]],
}
graph_bwd = {
"B": [["E", 1]],
"C": [["B", 1]],
"D": [["C", 1]],
"F": [["D", 1], ["G", 1]],
"E": [[None, np.inf]],
"G": [["E", 2]],
}
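# Hedged note (not part of the original doctests): when the destination cannot
# be reached the function returns -1, e.g. bidirectional_dij("F", "E",
# graph_fwd, graph_bwd) returns -1 because "F" has no outgoing edges in
# graph_fwd.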
if __name__ == "__main__":
import doctest
doctest.testmod()
|
https:en.wikipedia.orgwikiBidirectionalsearch 1 for manhattan, 0 for euclidean k Node0, 0, 4, 3, 0, None k.calculateheuristic 5.0 n Node1, 4, 3, 4, 2, None n.calculateheuristic 2.0 l k, n n l0 False l.sort n l0 True Heuristic for the A astar AStar0, 0, lengrid 1, lengrid0 1 astar.start.posy delta30, astar.start.posx delta31 0, 1 x.pos for x in astar.getsuccessorsastar.start 1, 0, 0, 1 astar.start.posy delta20, astar.start.posx delta21 1, 0 astar.retracepathastar.start 0, 0 astar.search doctest: NORMALIZEWHITESPACE 0, 0, 1, 0, 2, 0, 2, 1, 2, 2, 2, 3, 3, 3, 4, 3, 4, 4, 5, 4, 5, 5, 6, 5, 6, 6 Open Nodes are sorted using lt retrieve the best current path Returns a list of successors both in the grid and free spaces Retrace the path from parents to parents until start node bdastar BidirectionalAStar0, 0, lengrid 1, lengrid0 1 bdastar.fwdastar.start.pos bdastar.bwdastar.target.pos True bdastar.retracebidirectionalpathbdastar.fwdastar.start, ... bdastar.bwdastar.start 0, 0 bdastar.search doctest: NORMALIZEWHITESPACE 0, 0, 0, 1, 0, 2, 1, 2, 1, 3, 2, 3, 2, 4, 2, 5, 3, 5, 4, 5, 5, 5, 5, 6, 6, 6 retrieve the best current path all coordinates are given in format y,x | from __future__ import annotations
import time
from math import sqrt
# 1 for manhattan, 0 for euclidean
HEURISTIC = 0
grid = [
[0, 0, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0, 0],  # 0's are free paths whereas 1's are obstacles
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0],
[1, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0],
]
delta = [[-1, 0], [0, -1], [1, 0], [0, 1]] # up, left, down, right
TPosition = tuple[int, int]
class Node:
"""
>>> k = Node(0, 0, 4, 3, 0, None)
>>> k.calculate_heuristic()
5.0
>>> n = Node(1, 4, 3, 4, 2, None)
>>> n.calculate_heuristic()
2.0
>>> l = [k, n]
>>> n == l[0]
False
>>> l.sort()
>>> n == l[0]
True
"""
def __init__(
self,
pos_x: int,
pos_y: int,
goal_x: int,
goal_y: int,
g_cost: int,
parent: Node | None,
) -> None:
self.pos_x = pos_x
self.pos_y = pos_y
self.pos = (pos_y, pos_x)
self.goal_x = goal_x
self.goal_y = goal_y
self.g_cost = g_cost
self.parent = parent
self.h_cost = self.calculate_heuristic()
self.f_cost = self.g_cost + self.h_cost
def calculate_heuristic(self) -> float:
"""
Heuristic for the A*
"""
        dx = self.pos_x - self.goal_x
        dy = self.pos_y - self.goal_y
        if HEURISTIC == 1:  # Manhattan distance
            return abs(dx) + abs(dy)
        else:  # Euclidean distance
            return sqrt(dx**2 + dy**2)
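        # Illustrative note (not an original doctest): with HEURISTIC == 1 the
        # node from the class doctest, Node(0, 0, 4, 3, 0, None), would score
        # abs(0 - 4) + abs(0 - 3) == 7 instead of the Euclidean 5.0.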
def __lt__(self, other: Node) -> bool:
return self.f_cost < other.f_cost
class AStar:
"""
>>> astar = AStar((0, 0), (len(grid) - 1, len(grid[0]) - 1))
>>> (astar.start.pos_y + delta[3][0], astar.start.pos_x + delta[3][1])
(0, 1)
>>> [x.pos for x in astar.get_successors(astar.start)]
[(1, 0), (0, 1)]
>>> (astar.start.pos_y + delta[2][0], astar.start.pos_x + delta[2][1])
(1, 0)
>>> astar.retrace_path(astar.start)
[(0, 0)]
>>> astar.search() # doctest: +NORMALIZE_WHITESPACE
[(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (2, 3), (3, 3),
(4, 3), (4, 4), (5, 4), (5, 5), (6, 5), (6, 6)]
"""
def __init__(self, start: TPosition, goal: TPosition):
self.start = Node(start[1], start[0], goal[1], goal[0], 0, None)
self.target = Node(goal[1], goal[0], goal[1], goal[0], 99999, None)
self.open_nodes = [self.start]
self.closed_nodes: list[Node] = []
self.reached = False
def search(self) -> list[TPosition]:
while self.open_nodes:
# Open Nodes are sorted using __lt__
self.open_nodes.sort()
current_node = self.open_nodes.pop(0)
if current_node.pos == self.target.pos:
return self.retrace_path(current_node)
self.closed_nodes.append(current_node)
successors = self.get_successors(current_node)
for child_node in successors:
if child_node in self.closed_nodes:
continue
if child_node not in self.open_nodes:
self.open_nodes.append(child_node)
else:
# retrieve the best current path
better_node = self.open_nodes.pop(self.open_nodes.index(child_node))
if child_node.g_cost < better_node.g_cost:
self.open_nodes.append(child_node)
else:
self.open_nodes.append(better_node)
return [self.start.pos]
def get_successors(self, parent: Node) -> list[Node]:
"""
Returns a list of successors (both in the grid and free spaces)
"""
successors = []
for action in delta:
pos_x = parent.pos_x + action[1]
pos_y = parent.pos_y + action[0]
if not (0 <= pos_x <= len(grid[0]) - 1 and 0 <= pos_y <= len(grid) - 1):
continue
if grid[pos_y][pos_x] != 0:
continue
successors.append(
Node(
pos_x,
pos_y,
self.target.pos_y,
self.target.pos_x,
parent.g_cost + 1,
parent,
)
)
return successors
def retrace_path(self, node: Node | None) -> list[TPosition]:
"""
Retrace the path from parents to parents until start node
"""
current_node = node
path = []
while current_node is not None:
path.append((current_node.pos_y, current_node.pos_x))
current_node = current_node.parent
path.reverse()
return path
class BidirectionalAStar:
"""
>>> bd_astar = BidirectionalAStar((0, 0), (len(grid) - 1, len(grid[0]) - 1))
>>> bd_astar.fwd_astar.start.pos == bd_astar.bwd_astar.target.pos
True
>>> bd_astar.retrace_bidirectional_path(bd_astar.fwd_astar.start,
... bd_astar.bwd_astar.start)
[(0, 0)]
>>> bd_astar.search() # doctest: +NORMALIZE_WHITESPACE
[(0, 0), (0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 4),
(2, 5), (3, 5), (4, 5), (5, 5), (5, 6), (6, 6)]
"""
def __init__(self, start: TPosition, goal: TPosition) -> None:
self.fwd_astar = AStar(start, goal)
self.bwd_astar = AStar(goal, start)
self.reached = False
def search(self) -> list[TPosition]:
while self.fwd_astar.open_nodes or self.bwd_astar.open_nodes:
self.fwd_astar.open_nodes.sort()
self.bwd_astar.open_nodes.sort()
current_fwd_node = self.fwd_astar.open_nodes.pop(0)
current_bwd_node = self.bwd_astar.open_nodes.pop(0)
if current_bwd_node.pos == current_fwd_node.pos:
return self.retrace_bidirectional_path(
current_fwd_node, current_bwd_node
)
self.fwd_astar.closed_nodes.append(current_fwd_node)
self.bwd_astar.closed_nodes.append(current_bwd_node)
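            # Re-aim each search at the node most recently expanded by the
            # opposite search so the two frontiers move towards each other.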
self.fwd_astar.target = current_bwd_node
self.bwd_astar.target = current_fwd_node
successors = {
self.fwd_astar: self.fwd_astar.get_successors(current_fwd_node),
self.bwd_astar: self.bwd_astar.get_successors(current_bwd_node),
}
for astar in [self.fwd_astar, self.bwd_astar]:
for child_node in successors[astar]:
if child_node in astar.closed_nodes:
continue
if child_node not in astar.open_nodes:
astar.open_nodes.append(child_node)
else:
# retrieve the best current path
better_node = astar.open_nodes.pop(
astar.open_nodes.index(child_node)
)
if child_node.g_cost < better_node.g_cost:
astar.open_nodes.append(child_node)
else:
astar.open_nodes.append(better_node)
return [self.fwd_astar.start.pos]
def retrace_bidirectional_path(
self, fwd_node: Node, bwd_node: Node
) -> list[TPosition]:
fwd_path = self.fwd_astar.retrace_path(fwd_node)
bwd_path = self.bwd_astar.retrace_path(bwd_node)
bwd_path.pop()
bwd_path.reverse()
path = fwd_path + bwd_path
return path
if __name__ == "__main__":
# all coordinates are given in format [y,x]
init = (0, 0)
goal = (len(grid) - 1, len(grid[0]) - 1)
for elem in grid:
print(elem)
start_time = time.time()
a_star = AStar(init, goal)
path = a_star.search()
end_time = time.time() - start_time
print(f"AStar execution time = {end_time:f} seconds")
    bd_start_time = time.time()
    bidir_astar = BidirectionalAStar(init, goal)
    bd_path = bidir_astar.search()  # run the search that is being timed
    bd_end_time = time.time() - bd_start_time
    print(f"BidirectionalAStar execution time = {bd_end_time:f} seconds")
|
https:en.wikipedia.orgwikiBidirectionalsearch Comment out slow pytests... 9.15s call graphsbidirectionalbreadthfirstsearch.py:: graphs.bidirectionalbreadthfirstsearch.BreadthFirstSearch bfs BreadthFirstSearch0, 0, lengrid 1, lengrid0 1 bfs.start.posy delta30, bfs.start.posx delta31 0, 1 x.pos for x in bfs.getsuccessorsbfs.start 1, 0, 0, 1 bfs.start.posy delta20, bfs.start.posx delta21 1, 0 bfs.retracepathbfs.start 0, 0 bfs.search doctest: NORMALIZEWHITESPACE 0, 0, 1, 0, 2, 0, 3, 0, 3, 1, 4, 1, 5, 1, 5, 2, 5, 3, 5, 4, 5, 5, 6, 5, 6, 6 Returns a list of successors both in the grid and free spaces Retrace the path from parents to parents until start node bdbfs BidirectionalBreadthFirstSearch0, 0, lengrid 1, ... lengrid0 1 bdbfs.fwdbfs.start.pos bdbfs.bwdbfs.target.pos True bdbfs.retracebidirectionalpathbdbfs.fwdbfs.start, ... bdbfs.bwdbfs.start 0, 0 bdbfs.search doctest: NORMALIZEWHITESPACE 0, 0, 0, 1, 0, 2, 1, 2, 2, 2, 2, 3, 2, 4, 3, 4, 3, 5, 3, 6, 4, 6, 5, 6, 6, 6 all coordinates are given in format y,x | from __future__ import annotations
import time
Path = list[tuple[int, int]]
grid = [
[0, 0, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0, 0],  # 0's are free paths whereas 1's are obstacles
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0],
[1, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0],
]
delta = [[-1, 0], [0, -1], [1, 0], [0, 1]] # up, left, down, right
class Node:
def __init__(
self, pos_x: int, pos_y: int, goal_x: int, goal_y: int, parent: Node | None
):
self.pos_x = pos_x
self.pos_y = pos_y
self.pos = (pos_y, pos_x)
self.goal_x = goal_x
self.goal_y = goal_y
self.parent = parent
class BreadthFirstSearch:
"""
# Comment out slow pytests...
# 9.15s call graphs/bidirectional_breadth_first_search.py:: \
# graphs.bidirectional_breadth_first_search.BreadthFirstSearch
    # >>> bfs = BreadthFirstSearch((0, 0), (len(grid) - 1, len(grid[0]) - 1))
    # >>> (bfs.start.pos_y + delta[3][0], bfs.start.pos_x + delta[3][1])
    # (0, 1)
    # >>> [x.pos for x in bfs.get_successors(bfs.start)]
    # [(1, 0), (0, 1)]
    # >>> (bfs.start.pos_y + delta[2][0], bfs.start.pos_x + delta[2][1])
    # (1, 0)
    # >>> bfs.retrace_path(bfs.start)
    # [(0, 0)]
    # >>> bfs.search()  # doctest: +NORMALIZE_WHITESPACE
    # [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (4, 1),
    #  (5, 1), (5, 2), (5, 3), (5, 4), (5, 5), (6, 5), (6, 6)]
"""
def __init__(self, start: tuple[int, int], goal: tuple[int, int]):
self.start = Node(start[1], start[0], goal[1], goal[0], None)
self.target = Node(goal[1], goal[0], goal[1], goal[0], None)
self.node_queue = [self.start]
self.reached = False
def search(self) -> Path | None:
while self.node_queue:
current_node = self.node_queue.pop(0)
if current_node.pos == self.target.pos:
self.reached = True
return self.retrace_path(current_node)
successors = self.get_successors(current_node)
for node in successors:
self.node_queue.append(node)
if not self.reached:
return [self.start.pos]
return None
def get_successors(self, parent: Node) -> list[Node]:
"""
Returns a list of successors (both in the grid and free spaces)
"""
successors = []
for action in delta:
pos_x = parent.pos_x + action[1]
pos_y = parent.pos_y + action[0]
if not (0 <= pos_x <= len(grid[0]) - 1 and 0 <= pos_y <= len(grid) - 1):
continue
if grid[pos_y][pos_x] != 0:
continue
successors.append(
Node(pos_x, pos_y, self.target.pos_y, self.target.pos_x, parent)
)
return successors
def retrace_path(self, node: Node | None) -> Path:
"""
Retrace the path from parents to parents until start node
"""
current_node = node
path = []
while current_node is not None:
path.append((current_node.pos_y, current_node.pos_x))
current_node = current_node.parent
path.reverse()
return path
class BidirectionalBreadthFirstSearch:
"""
>>> bd_bfs = BidirectionalBreadthFirstSearch((0, 0), (len(grid) - 1,
... len(grid[0]) - 1))
>>> bd_bfs.fwd_bfs.start.pos == bd_bfs.bwd_bfs.target.pos
True
>>> bd_bfs.retrace_bidirectional_path(bd_bfs.fwd_bfs.start,
... bd_bfs.bwd_bfs.start)
[(0, 0)]
>>> bd_bfs.search() # doctest: +NORMALIZE_WHITESPACE
[(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 3),
(2, 4), (3, 4), (3, 5), (3, 6), (4, 6), (5, 6), (6, 6)]
"""
def __init__(self, start, goal):
self.fwd_bfs = BreadthFirstSearch(start, goal)
self.bwd_bfs = BreadthFirstSearch(goal, start)
self.reached = False
def search(self) -> Path | None:
while self.fwd_bfs.node_queue or self.bwd_bfs.node_queue:
current_fwd_node = self.fwd_bfs.node_queue.pop(0)
current_bwd_node = self.bwd_bfs.node_queue.pop(0)
if current_bwd_node.pos == current_fwd_node.pos:
self.reached = True
return self.retrace_bidirectional_path(
current_fwd_node, current_bwd_node
)
self.fwd_bfs.target = current_bwd_node
self.bwd_bfs.target = current_fwd_node
successors = {
self.fwd_bfs: self.fwd_bfs.get_successors(current_fwd_node),
self.bwd_bfs: self.bwd_bfs.get_successors(current_bwd_node),
}
for bfs in [self.fwd_bfs, self.bwd_bfs]:
for node in successors[bfs]:
bfs.node_queue.append(node)
if not self.reached:
return [self.fwd_bfs.start.pos]
return None
def retrace_bidirectional_path(self, fwd_node: Node, bwd_node: Node) -> Path:
fwd_path = self.fwd_bfs.retrace_path(fwd_node)
bwd_path = self.bwd_bfs.retrace_path(bwd_node)
bwd_path.pop()
bwd_path.reverse()
path = fwd_path + bwd_path
return path
if __name__ == "__main__":
# all coordinates are given in format [y,x]
import doctest
doctest.testmod()
init = (0, 0)
goal = (len(grid) - 1, len(grid[0]) - 1)
for elem in grid:
print(elem)
start_bfs_time = time.time()
bfs = BreadthFirstSearch(init, goal)
path = bfs.search()
bfs_time = time.time() - start_bfs_time
print("Unidirectional BFS computation time : ", bfs_time)
start_bd_bfs_time = time.time()
bd_bfs = BidirectionalBreadthFirstSearch(init, goal)
bd_path = bd_bfs.search()
bd_bfs_time = time.time() - start_bd_bfs_time
print("Bidirectional BFS computation time : ", bd_bfs_time)
|
Borvka's algorithm. Determines the minimum spanning tree MST of a graph using the Borvka's algorithm. Borvka's algorithm is a greedy algorithm for finding a minimum spanning tree in a connected graph, or a minimum spanning forest if a graph that is not connected. The time complexity of this algorithm is OELogV, where E represents the number of edges, while V represents the number of nodes. Onumberofedges Log numberofnodes The space complexity of this algorithm is OV E, since we have to keep a couple of lists whose sizes are equal to the number of nodes, as well as keep all the edges of a graph inside of the data structure itself. Borvka's algorithm gives us pretty much the same result as other MST Algorithms they all find the minimum spanning tree, and the time complexity is approximately the same. One advantage that Borvka's algorithm has compared to the alternatives is that it doesn't need to presort the edges or maintain a priority queue in order to find the minimum spanning tree. Even though that doesn't help its complexity, since it still passes the edges logE times, it is a bit simpler to code. Details: https:en.wikipedia.orgwikiBorC5AFvka27salgorithm Arguments: numofnodes the number of nodes in the graph Attributes: mnumofnodes the number of nodes in the graph. medges the list of edges. mcomponent the dictionary which stores the index of the component which a node belongs to. Adds an edge in the format first, second, edge weight to graph. self.medges.appendunode, vnode, weight def findcomponentself, unode: int int: Finds the component index of a given node if self.mcomponentunode ! unode: for k in self.mcomponent: self.mcomponentk self.findcomponentk def unionself, componentsize: listint, unode: int, vnode: int None: Performs Borvka's algorithm to find MST. Initialize additional lists required to algorithm. componentsize mstweight 0 minimumweightedge: listAny 1 self.mnumofnodes A list of components initialized to all of the nodes for node in rangeself.mnumofnodes: self.mcomponent.updatenode: node componentsize.append1 numofcomponents self.mnumofnodes while numofcomponents 1: for edge in self.medges: u, v, w edge ucomponent self.mcomponentu vcomponent self.mcomponentv if ucomponent ! vcomponent: g Graph8 for uvw in 0, 1, 10, 0, 2, 6, 0, 3, 5, 1, 3, 15, 2, 3, 4, ... 3, 4, 8, 4, 5, 10, 4, 6, 6, 4, 7, 5, 5, 7, 15, 6, 7, 4: ... g.addedgeuvw g.boruvka Added edge 0 3 Added weight: 5 BLANKLINE Added edge 0 1 Added weight: 10 BLANKLINE Added edge 2 3 Added weight: 4 BLANKLINE Added edge 4 7 Added weight: 5 BLANKLINE Added edge 4 5 Added weight: 10 BLANKLINE Added edge 6 7 Added weight: 4 BLANKLINE Added edge 3 4 Added weight: 8 BLANKLINE The total weight of the minimal spanning tree is: 46 | from __future__ import annotations
from typing import Any
class Graph:
def __init__(self, num_of_nodes: int) -> None:
"""
Arguments:
num_of_nodes - the number of nodes in the graph
Attributes:
m_num_of_nodes - the number of nodes in the graph.
m_edges - the list of edges.
m_component - the dictionary which stores the index of the component which
a node belongs to.
"""
self.m_num_of_nodes = num_of_nodes
self.m_edges: list[list[int]] = []
self.m_component: dict[int, int] = {}
def add_edge(self, u_node: int, v_node: int, weight: int) -> None:
"""Adds an edge in the format [first, second, edge weight] to graph."""
self.m_edges.append([u_node, v_node, weight])
def find_component(self, u_node: int) -> int:
"""Propagates a new component throughout a given component."""
if self.m_component[u_node] == u_node:
return u_node
return self.find_component(self.m_component[u_node])
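    # Illustrative example (an assumption, not from the original docstring):
    # with m_component == {0: 0, 1: 0, 2: 1}, find_component(2) follows
    # 2 -> 1 -> 0 and returns 0.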
def set_component(self, u_node: int) -> None:
"""Finds the component index of a given node"""
if self.m_component[u_node] != u_node:
for k in self.m_component:
self.m_component[k] = self.find_component(k)
def union(self, component_size: list[int], u_node: int, v_node: int) -> None:
"""Union finds the roots of components for two nodes, compares the components
in terms of size, and attaches the smaller one to the larger one to form
single component"""
if component_size[u_node] <= component_size[v_node]:
self.m_component[u_node] = v_node
component_size[v_node] += component_size[u_node]
self.set_component(u_node)
elif component_size[u_node] >= component_size[v_node]:
self.m_component[v_node] = self.find_component(u_node)
component_size[u_node] += component_size[v_node]
self.set_component(v_node)
def boruvka(self) -> None:
"""Performs Borůvka's algorithm to find MST."""
# Initialize additional lists required to algorithm.
component_size = []
mst_weight = 0
minimum_weight_edge: list[Any] = [-1] * self.m_num_of_nodes
# A list of components (initialized to all of the nodes)
for node in range(self.m_num_of_nodes):
self.m_component.update({node: node})
component_size.append(1)
num_of_components = self.m_num_of_nodes
while num_of_components > 1:
for edge in self.m_edges:
u, v, w = edge
u_component = self.m_component[u]
v_component = self.m_component[v]
if u_component != v_component:
"""If the current minimum weight edge of component u doesn't
exist (is -1), or if it's greater than the edge we're
observing right now, we will assign the value of the edge
we're observing to it.
If the current minimum weight edge of component v doesn't
exist (is -1), or if it's greater than the edge we're
observing right now, we will assign the value of the edge
we're observing to it"""
for component in (u_component, v_component):
if (
minimum_weight_edge[component] == -1
or minimum_weight_edge[component][2] > w
):
minimum_weight_edge[component] = [u, v, w]
for edge in minimum_weight_edge:
if isinstance(edge, list):
u, v, w = edge
u_component = self.m_component[u]
v_component = self.m_component[v]
if u_component != v_component:
mst_weight += w
self.union(component_size, u_component, v_component)
print(f"Added edge [{u} - {v}]\nAdded weight: {w}\n")
num_of_components -= 1
minimum_weight_edge = [-1] * self.m_num_of_nodes
print(f"The total weight of the minimal spanning tree is: {mst_weight}")
def test_vector() -> None:
"""
>>> g = Graph(8)
>>> for u_v_w in ((0, 1, 10), (0, 2, 6), (0, 3, 5), (1, 3, 15), (2, 3, 4),
... (3, 4, 8), (4, 5, 10), (4, 6, 6), (4, 7, 5), (5, 7, 15), (6, 7, 4)):
... g.add_edge(*u_v_w)
>>> g.boruvka()
Added edge [0 - 3]
Added weight: 5
<BLANKLINE>
Added edge [0 - 1]
Added weight: 10
<BLANKLINE>
Added edge [2 - 3]
Added weight: 4
<BLANKLINE>
Added edge [4 - 7]
Added weight: 5
<BLANKLINE>
Added edge [4 - 5]
Added weight: 10
<BLANKLINE>
Added edge [6 - 7]
Added weight: 4
<BLANKLINE>
Added edge [3 - 4]
Added weight: 8
<BLANKLINE>
The total weight of the minimal spanning tree is: 46
"""
if __name__ == "__main__":
import doctest
doctest.testmod()
|
!usrbinpython Author: OMKAR PATHAK from future import annotations from queue import Queue class Graph: def initself None: self.vertices: dictint, listint def printgraphself None: for i in self.vertices: printi, : , .joinstrj for j in self.verticesi def addedgeself, fromvertex: int, tovertex: int None: if fromvertex in self.vertices: self.verticesfromvertex.appendtovertex else: self.verticesfromvertex tovertex def bfsself, startvertex: int setint: initialize set for storing already visited vertices visited set create a first in first out queue to store all the vertices for BFS queue: Queue Queue mark the source node as visited and enqueue it visited.addstartvertex queue.putstartvertex while not queue.empty: vertex queue.get loop through all adjacent vertex and enqueue it if not yet visited for adjacentvertex in self.verticesvertex: if adjacentvertex not in visited: queue.putadjacentvertex visited.addadjacentvertex return visited if name main: from doctest import testmod testmodverboseTrue g Graph g.addedge0, 1 g.addedge0, 2 g.addedge1, 2 g.addedge2, 0 g.addedge2, 3 g.addedge3, 3 g.printgraph 0 : 1 2 1 : 2 2 : 0 3 3 : 3 assert sortedg.bfs2 0, 1, 2, 3 | #!/usr/bin/python
""" Author: OMKAR PATHAK """
from __future__ import annotations
from queue import Queue
class Graph:
def __init__(self) -> None:
self.vertices: dict[int, list[int]] = {}
def print_graph(self) -> None:
"""
        Prints the adjacency list representation of the graph.
>>> g = Graph()
>>> g.print_graph()
>>> g.add_edge(0, 1)
>>> g.print_graph()
0 : 1
"""
for i in self.vertices:
print(i, " : ", " -> ".join([str(j) for j in self.vertices[i]]))
def add_edge(self, from_vertex: int, to_vertex: int) -> None:
"""
adding the edge between two vertices
>>> g = Graph()
>>> g.print_graph()
>>> g.add_edge(0, 1)
>>> g.print_graph()
0 : 1
"""
if from_vertex in self.vertices:
self.vertices[from_vertex].append(to_vertex)
else:
self.vertices[from_vertex] = [to_vertex]
def bfs(self, start_vertex: int) -> set[int]:
"""
>>> g = Graph()
>>> g.add_edge(0, 1)
>>> g.add_edge(0, 1)
>>> g.add_edge(0, 2)
>>> g.add_edge(1, 2)
>>> g.add_edge(2, 0)
>>> g.add_edge(2, 3)
>>> g.add_edge(3, 3)
>>> sorted(g.bfs(2))
[0, 1, 2, 3]
"""
# initialize set for storing already visited vertices
visited = set()
# create a first in first out queue to store all the vertices for BFS
queue: Queue = Queue()
# mark the source node as visited and enqueue it
visited.add(start_vertex)
queue.put(start_vertex)
while not queue.empty():
vertex = queue.get()
# loop through all adjacent vertex and enqueue it if not yet visited
for adjacent_vertex in self.vertices[vertex]:
if adjacent_vertex not in visited:
queue.put(adjacent_vertex)
visited.add(adjacent_vertex)
return visited
if __name__ == "__main__":
from doctest import testmod
testmod(verbose=True)
g = Graph()
g.add_edge(0, 1)
g.add_edge(0, 2)
g.add_edge(1, 2)
g.add_edge(2, 0)
g.add_edge(2, 3)
g.add_edge(3, 3)
g.print_graph()
# 0 : 1 -> 2
# 1 : 2
# 2 : 0 -> 3
# 3 : 3
assert sorted(g.bfs(2)) == [0, 1, 2, 3]
|
https:en.wikipedia.orgwikiBreadthfirstsearch pseudocode: breadthfirstsearchgraph G, start vertex s: all nodes initially unexplored mark s as explored let Q queue data structure, initialized with s while Q is nonempty: remove the first node of Q, call it v for each edgev, w: for w in graphv if w unexplored: mark w as explored add w to Q at the end Implementation of breadth first search using queue.Queue. ''.joinbreadthfirstsearchG, 'A' 'ABCDEF' Implementation of breadth first search using collection.queue. ''.joinbreadthfirstsearchwithdequeG, 'A' 'ABCDEF' breadthfirstsearch finished 10000 runs in 0.20999 seconds breadthfirstsearchwithdeque finished 10000 runs in 0.01421 seconds | from __future__ import annotations
from collections import deque
from queue import Queue
from timeit import timeit
G = {
"A": ["B", "C"],
"B": ["A", "D", "E"],
"C": ["A", "F"],
"D": ["B"],
"E": ["B", "F"],
"F": ["C", "E"],
}
def breadth_first_search(graph: dict, start: str) -> list[str]:
"""
Implementation of breadth first search using queue.Queue.
>>> ''.join(breadth_first_search(G, 'A'))
'ABCDEF'
"""
explored = {start}
result = [start]
queue: Queue = Queue()
queue.put(start)
while not queue.empty():
v = queue.get()
for w in graph[v]:
if w not in explored:
explored.add(w)
result.append(w)
queue.put(w)
return result
def breadth_first_search_with_deque(graph: dict, start: str) -> list[str]:
"""
    Implementation of breadth first search using collections.deque.
>>> ''.join(breadth_first_search_with_deque(G, 'A'))
'ABCDEF'
"""
visited = {start}
result = [start]
queue = deque([start])
while queue:
v = queue.popleft()
for child in graph[v]:
if child not in visited:
visited.add(child)
result.append(child)
queue.append(child)
return result
def benchmark_function(name: str) -> None:
setup = f"from __main__ import G, {name}"
number = 10000
res = timeit(f"{name}(G, 'A')", setup=setup, number=number)
print(f"{name:<35} finished {number} runs in {res:.5f} seconds")
if __name__ == "__main__":
import doctest
doctest.testmod()
benchmark_function("breadth_first_search")
benchmark_function("breadth_first_search_with_deque")
# breadth_first_search finished 10000 runs in 0.20999 seconds
# breadth_first_search_with_deque finished 10000 runs in 0.01421 seconds
|
Breath First Search BFS can be used when finding the shortest path from a given source node to a target node in an unweighted graph. Graph is implemented as dictionary of adjacency lists. Also, Source vertex have to be defined upon initialization. mapping node to its parent in resulting breadth first tree This function is a helper for running breath first search on this graph. g Graphgraph, G g.breathfirstsearch g.parent 'G': None, 'C': 'G', 'A': 'C', 'F': 'C', 'B': 'A', 'E': 'A', 'D': 'B' This shortest path function returns a string, describing the result: 1. No path is found. The string is a human readable message to indicate this. 2. The shortest path is found. The string is in the form v1v2v3...vn, where v1 is the source vertex and vn is the target vertex, if it exists separately. g Graphgraph, G g.breathfirstsearch Case 1 No path is found. g.shortestpathFoo Traceback most recent call last: ... ValueError: No path from vertex: G to vertex: Foo Case 2 The path is found. g.shortestpathD 'GCABD' g.shortestpathG 'G' | from __future__ import annotations
graph = {
"A": ["B", "C", "E"],
"B": ["A", "D", "E"],
"C": ["A", "F", "G"],
"D": ["B"],
"E": ["A", "B", "D"],
"F": ["C"],
"G": ["C"],
}
class Graph:
def __init__(self, graph: dict[str, list[str]], source_vertex: str) -> None:
"""
        The graph is implemented as a dictionary of adjacency lists. The
        source vertex has to be provided upon initialization.
"""
self.graph = graph
# mapping node to its parent in resulting breadth first tree
self.parent: dict[str, str | None] = {}
self.source_vertex = source_vertex
def breath_first_search(self) -> None:
"""
        Run breadth-first search on this graph from the source vertex and
        record each vertex's parent in the resulting breadth-first tree.
>>> g = Graph(graph, "G")
>>> g.breath_first_search()
>>> g.parent
{'G': None, 'C': 'G', 'A': 'C', 'F': 'C', 'B': 'A', 'E': 'A', 'D': 'B'}
"""
visited = {self.source_vertex}
self.parent[self.source_vertex] = None
queue = [self.source_vertex] # first in first out queue
while queue:
vertex = queue.pop(0)
for adjacent_vertex in self.graph[vertex]:
if adjacent_vertex not in visited:
visited.add(adjacent_vertex)
self.parent[adjacent_vertex] = vertex
queue.append(adjacent_vertex)
def shortest_path(self, target_vertex: str) -> str:
"""
        This shortest path function returns a string describing the result:
        1.) No path is found: a ValueError is raised naming the source and the
            unreachable target vertex.
        2.) The shortest path is found: the string has the form
            `v1->v2->v3->...->vn`, where v1 is the source vertex and vn is the
            target vertex.
>>> g = Graph(graph, "G")
>>> g.breath_first_search()
Case 1 - No path is found.
>>> g.shortest_path("Foo")
Traceback (most recent call last):
...
ValueError: No path from vertex: G to vertex: Foo
Case 2 - The path is found.
>>> g.shortest_path("D")
'G->C->A->B->D'
>>> g.shortest_path("G")
'G'
"""
if target_vertex == self.source_vertex:
return self.source_vertex
target_vertex_parent = self.parent.get(target_vertex)
if target_vertex_parent is None:
msg = (
f"No path from vertex: {self.source_vertex} to vertex: {target_vertex}"
)
raise ValueError(msg)
return self.shortest_path(target_vertex_parent) + f"->{target_vertex}"
if __name__ == "__main__":
g = Graph(graph, "G")
g.breath_first_search()
print(g.shortest_path("D"))
print(g.shortest_path("G"))
print(g.shortest_path("Foo"))
|
Breadthfirst search shortest path implementations. doctest: python m doctest v bfsshortestpath.py Manual test: python bfsshortestpath.py Find shortest path between start and goal nodes. Args: graph dict: nodelist of neighboring nodes keyvalue pairs. start: start node. goal: target node. Returns: Shortest path between start and goal nodes as a string of nodes. 'Not found' string if no path found. Example: bfsshortestpathdemograph, G, D 'G', 'C', 'A', 'B', 'D' bfsshortestpathdemograph, G, G 'G' bfsshortestpathdemograph, G, Unknown keep track of explored nodes keep track of all the paths to be checked return path if start is goal keeps looping until all possible paths have been checked pop the first path from the queue get the last node from the path go through all neighbour nodes, construct a new path and push it into the queue return path if neighbour is goal mark node as explored in case there's no path between the 2 nodes Find shortest path distance between start and target nodes. Args: graph: nodelist of neighboring nodes keyvalue pairs. start: node to start search from. target: node to search for. Returns: Number of edges in shortest path between start and target nodes. 1 if no path exists. Example: bfsshortestpathdistancedemograph, G, D 4 bfsshortestpathdistancedemograph, A, A 0 bfsshortestpathdistancedemograph, A, Unknown 1 Keep tab on distances from start node. | demo_graph = {
"A": ["B", "C", "E"],
"B": ["A", "D", "E"],
"C": ["A", "F", "G"],
"D": ["B"],
"E": ["A", "B", "D"],
"F": ["C"],
"G": ["C"],
}
def bfs_shortest_path(graph: dict, start, goal) -> list[str]:
"""Find shortest path between `start` and `goal` nodes.
Args:
graph (dict): node/list of neighboring nodes key/value pairs.
start: start node.
goal: target node.
Returns:
        Shortest path between `start` and `goal` nodes as a list of nodes.
        Empty list if no path is found.
Example:
>>> bfs_shortest_path(demo_graph, "G", "D")
['G', 'C', 'A', 'B', 'D']
>>> bfs_shortest_path(demo_graph, "G", "G")
['G']
>>> bfs_shortest_path(demo_graph, "G", "Unknown")
[]
"""
# keep track of explored nodes
explored = set()
# keep track of all the paths to be checked
queue = [[start]]
# return path if start is goal
if start == goal:
return [start]
# keeps looping until all possible paths have been checked
while queue:
# pop the first path from the queue
path = queue.pop(0)
# get the last node from the path
node = path[-1]
if node not in explored:
neighbours = graph[node]
# go through all neighbour nodes, construct a new path and
# push it into the queue
for neighbour in neighbours:
new_path = list(path)
new_path.append(neighbour)
queue.append(new_path)
# return path if neighbour is goal
if neighbour == goal:
return new_path
# mark node as explored
explored.add(node)
# in case there's no path between the 2 nodes
return []
def bfs_shortest_path_distance(graph: dict, start, target) -> int:
"""Find shortest path distance between `start` and `target` nodes.
Args:
graph: node/list of neighboring nodes key/value pairs.
start: node to start search from.
target: node to search for.
Returns:
Number of edges in shortest path between `start` and `target` nodes.
-1 if no path exists.
Example:
>>> bfs_shortest_path_distance(demo_graph, "G", "D")
4
>>> bfs_shortest_path_distance(demo_graph, "A", "A")
0
>>> bfs_shortest_path_distance(demo_graph, "A", "Unknown")
-1
"""
if not graph or start not in graph or target not in graph:
return -1
if start == target:
return 0
queue = [start]
    visited = {start}  # a set containing the start node itself, not its characters
# Keep tab on distances from `start` node.
dist = {start: 0, target: -1}
while queue:
node = queue.pop(0)
if node == target:
dist[target] = (
dist[node] if dist[target] == -1 else min(dist[target], dist[node])
)
for adjacent in graph[node]:
if adjacent not in visited:
visited.add(adjacent)
queue.append(adjacent)
dist[adjacent] = dist[node] + 1
return dist[target]
if __name__ == "__main__":
print(bfs_shortest_path(demo_graph, "G", "D")) # returns ['G', 'C', 'A', 'B', 'D']
print(bfs_shortest_path_distance(demo_graph, "G", "D")) # returns 4
|
Finding the shortest path in 01graph in OE V which is faster than dijkstra. 01graph is the weighted graph with the weights equal to 0 or 1. Link: https:codeforces.comblogentry22276 Weighted directed graph edge. destinationvertex: int weight: int class AdjacencyList: Get all the vertices adjacent to the given one. return iterself.graphvertex property def sizeself: return self.size def addedgeself, fromvertex: int, tovertex: int, weight: int: if weight not in 0, 1: raise ValueErrorEdge weight must be either 0 or 1. if tovertex 0 or tovertex self.size: raise ValueErrorVertex indexes must be in 0; size. self.graphfromvertex.appendEdgetovertex, weight def getshortestpathself, startvertex: int, finishvertex: int int None: queue dequestartvertex distances: listint None None self.size distancesstartvertex 0 while queue: currentvertex queue.popleft currentdistance distancescurrentvertex if currentdistance is None: continue for edge in selfcurrentvertex: newdistance currentdistance edge.weight destvertexdistance distancesedge.destinationvertex if isinstancedestvertexdistance, int and newdistance destvertexdistance : continue distancesedge.destinationvertex newdistance if edge.weight 0: queue.appendleftedge.destinationvertex else: queue.appendedge.destinationvertex if distancesfinishvertex is None: raise ValueErrorNo path from startvertex to finishvertex. return distancesfinishvertex if name main: import doctest doctest.testmod | from __future__ import annotations
from collections import deque
from collections.abc import Iterator
from dataclasses import dataclass
@dataclass
class Edge:
"""Weighted directed graph edge."""
destination_vertex: int
weight: int
class AdjacencyList:
"""Graph adjacency list."""
def __init__(self, size: int):
self._graph: list[list[Edge]] = [[] for _ in range(size)]
self._size = size
def __getitem__(self, vertex: int) -> Iterator[Edge]:
"""Get all the vertices adjacent to the given one."""
return iter(self._graph[vertex])
@property
def size(self):
return self._size
def add_edge(self, from_vertex: int, to_vertex: int, weight: int):
"""
>>> g = AdjacencyList(2)
>>> g.add_edge(0, 1, 0)
>>> g.add_edge(1, 0, 1)
>>> list(g[0])
[Edge(destination_vertex=1, weight=0)]
>>> list(g[1])
[Edge(destination_vertex=0, weight=1)]
>>> g.add_edge(0, 1, 2)
Traceback (most recent call last):
...
ValueError: Edge weight must be either 0 or 1.
>>> g.add_edge(0, 2, 1)
Traceback (most recent call last):
...
ValueError: Vertex indexes must be in [0; size).
"""
if weight not in (0, 1):
raise ValueError("Edge weight must be either 0 or 1.")
if to_vertex < 0 or to_vertex >= self.size:
raise ValueError("Vertex indexes must be in [0; size).")
self._graph[from_vertex].append(Edge(to_vertex, weight))
def get_shortest_path(self, start_vertex: int, finish_vertex: int) -> int | None:
"""
Return the shortest distance from start_vertex to finish_vertex in 0-1-graph.
1 1 1
0--------->3 6--------7>------->8
| ^ ^ ^ |1
| | | |0 v
0| |0 1| 9-------->10
| | | ^ 1
v | | |0
1--------->2<-------4------->5
0 1 1
>>> g = AdjacencyList(11)
>>> g.add_edge(0, 1, 0)
>>> g.add_edge(0, 3, 1)
>>> g.add_edge(1, 2, 0)
>>> g.add_edge(2, 3, 0)
>>> g.add_edge(4, 2, 1)
>>> g.add_edge(4, 5, 1)
>>> g.add_edge(4, 6, 1)
>>> g.add_edge(5, 9, 0)
>>> g.add_edge(6, 7, 1)
>>> g.add_edge(7, 8, 1)
>>> g.add_edge(8, 10, 1)
>>> g.add_edge(9, 7, 0)
>>> g.add_edge(9, 10, 1)
>>> g.add_edge(1, 2, 2)
Traceback (most recent call last):
...
ValueError: Edge weight must be either 0 or 1.
>>> g.get_shortest_path(0, 3)
0
>>> g.get_shortest_path(0, 4)
Traceback (most recent call last):
...
ValueError: No path from start_vertex to finish_vertex.
>>> g.get_shortest_path(4, 10)
2
>>> g.get_shortest_path(4, 8)
2
>>> g.get_shortest_path(0, 1)
0
>>> g.get_shortest_path(1, 0)
Traceback (most recent call last):
...
ValueError: No path from start_vertex to finish_vertex.
"""
queue = deque([start_vertex])
distances: list[int | None] = [None] * self.size
distances[start_vertex] = 0
while queue:
current_vertex = queue.popleft()
current_distance = distances[current_vertex]
if current_distance is None:
continue
for edge in self[current_vertex]:
new_distance = current_distance + edge.weight
dest_vertex_distance = distances[edge.destination_vertex]
if (
isinstance(dest_vertex_distance, int)
and new_distance >= dest_vertex_distance
):
continue
distances[edge.destination_vertex] = new_distance
if edge.weight == 0:
queue.appendleft(edge.destination_vertex)
else:
queue.append(edge.destination_vertex)
if distances[finish_vertex] is None:
raise ValueError("No path from start_vertex to finish_vertex.")
return distances[finish_vertex]
if __name__ == "__main__":
import doctest
doctest.testmod()
|
Check if a graph is bipartite using depthfirst search DFS. Args: graph: Adjacency list representing the graph. Returns: True if bipartite, False otherwise. Checks if the graph can be divided into two sets of vertices, such that no two vertices within the same set are connected by an edge. Examples: FIXME: This test should pass. isbipartitedfsdefaultdictlist, 0: 1, 2, 1: 0, 3, 2: 0, 4 Traceback most recent call last: ... RuntimeError: dictionary changed size during iteration isbipartitedfsdefaultdictlist, 0: 1, 2, 1: 0, 3, 2: 0, 1 False isbipartitedfs True isbipartitedfs0: 1, 3, 1: 0, 2, 2: 1, 3, 3: 0, 2 True isbipartitedfs0: 1, 2, 3, 1: 0, 2, 2: 0, 1, 3, 3: 0, 2 False isbipartitedfs0: 4, 1: , 2: 4, 3: 4, 4: 0, 2, 3 True isbipartitedfs0: 1, 3, 1: 0, 2, 2: 1, 3, 3: 0, 2, 4: 0 False isbipartitedfs7: 1, 3, 1: 0, 2, 2: 1, 3, 3: 0, 2, 4: 0 Traceback most recent call last: ... KeyError: 0 FIXME: This test should fails with KeyError: 4. isbipartitedfs0: 1, 3, 1: 0, 2, 2: 1, 3, 3: 0, 2, 9: 0 False isbipartitedfs0: 1, 3, 1: 0, 2 Traceback most recent call last: ... KeyError: 1 isbipartitedfs1: 0, 2, 0: 1, 1, 1: 0, 2, 2: 1, 1 True isbipartitedfs0.9: 1, 3, 1: 0, 2, 2: 1, 3, 3: 0, 2 Traceback most recent call last: ... KeyError: 0 FIXME: This test should fails with TypeError: list indices must be integers or... isbipartitedfs0: 1.0, 3.0, 1.0: 0, 2.0, 2.0: 1.0, 3.0, 3.0: 0, 2.0 True isbipartitedfsa: 1, 3, b: 0, 2, c: 1, 3, d: 0, 2 Traceback most recent call last: ... KeyError: 1 isbipartitedfs0: b, d, 1: a, c, 2: b, d, 3: a, c Traceback most recent call last: ... KeyError: 'b' Perform DepthFirst Search DFS on the graph starting from a node. Args: node: The current node being visited. color: The color assigned to the current node. Returns: True if the graph is bipartite starting from the current node, False otherwise. Check if a graph is bipartite using a breadthfirst search BFS. Args: graph: Adjacency list representing the graph. Returns: True if bipartite, False otherwise. Check if the graph can be divided into two sets of vertices, such that no two vertices within the same set are connected by an edge. Examples: FIXME: This test should pass. isbipartitebfsdefaultdictlist, 0: 1, 2, 1: 0, 3, 2: 0, 4 Traceback most recent call last: ... RuntimeError: dictionary changed size during iteration isbipartitebfsdefaultdictlist, 0: 1, 2, 1: 0, 2, 2: 0, 1 False isbipartitebfs True isbipartitebfs0: 1, 3, 1: 0, 2, 2: 1, 3, 3: 0, 2 True isbipartitebfs0: 1, 2, 3, 1: 0, 2, 2: 0, 1, 3, 3: 0, 2 False isbipartitebfs0: 4, 1: , 2: 4, 3: 4, 4: 0, 2, 3 True isbipartitebfs0: 1, 3, 1: 0, 2, 2: 1, 3, 3: 0, 2, 4: 0 False isbipartitebfs7: 1, 3, 1: 0, 2, 2: 1, 3, 3: 0, 2, 4: 0 Traceback most recent call last: ... KeyError: 0 FIXME: This test should fails with KeyError: 4. isbipartitebfs0: 1, 3, 1: 0, 2, 2: 1, 3, 3: 0, 2, 9: 0 False isbipartitebfs0: 1, 3, 1: 0, 2 Traceback most recent call last: ... KeyError: 1 isbipartitebfs1: 0, 2, 0: 1, 1, 1: 0, 2, 2: 1, 1 True isbipartitebfs0.9: 1, 3, 1: 0, 2, 2: 1, 3, 3: 0, 2 Traceback most recent call last: ... KeyError: 0 FIXME: This test should fails with TypeError: list indices must be integers or... isbipartitebfs0: 1.0, 3.0, 1.0: 0, 2.0, 2.0: 1.0, 3.0, 3.0: 0, 2.0 True isbipartitebfsa: 1, 3, b: 0, 2, c: 1, 3, d: 0, 2 Traceback most recent call last: ... KeyError: 1 isbipartitebfs0: b, d, 1: a, c, 2: b, d, 3: a, c Traceback most recent call last: ... KeyError: 'b' | from collections import defaultdict, deque
def is_bipartite_dfs(graph: defaultdict[int, list[int]]) -> bool:
"""
Check if a graph is bipartite using depth-first search (DFS).
Args:
graph: Adjacency list representing the graph.
Returns:
True if bipartite, False otherwise.
Checks if the graph can be divided into two sets of vertices, such that no two
vertices within the same set are connected by an edge.
Examples:
# FIXME: This test should pass.
>>> is_bipartite_dfs(defaultdict(list, {0: [1, 2], 1: [0, 3], 2: [0, 4]}))
Traceback (most recent call last):
...
RuntimeError: dictionary changed size during iteration
>>> is_bipartite_dfs(defaultdict(list, {0: [1, 2], 1: [0, 3], 2: [0, 1]}))
False
>>> is_bipartite_dfs({})
True
>>> is_bipartite_dfs({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]})
True
>>> is_bipartite_dfs({0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]})
False
>>> is_bipartite_dfs({0: [4], 1: [], 2: [4], 3: [4], 4: [0, 2, 3]})
True
>>> is_bipartite_dfs({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2], 4: [0]})
False
>>> is_bipartite_dfs({7: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2], 4: [0]})
Traceback (most recent call last):
...
KeyError: 0
    # FIXME: This test should fail with KeyError: 4.
>>> is_bipartite_dfs({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2], 9: [0]})
False
>>> is_bipartite_dfs({0: [-1, 3], 1: [0, -2]})
Traceback (most recent call last):
...
KeyError: -1
>>> is_bipartite_dfs({-1: [0, 2], 0: [-1, 1], 1: [0, 2], 2: [-1, 1]})
True
>>> is_bipartite_dfs({0.9: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]})
Traceback (most recent call last):
...
KeyError: 0
    # FIXME: This test should fail with TypeError: list indices must be integers or...
>>> is_bipartite_dfs({0: [1.0, 3.0], 1.0: [0, 2.0], 2.0: [1.0, 3.0], 3.0: [0, 2.0]})
True
>>> is_bipartite_dfs({"a": [1, 3], "b": [0, 2], "c": [1, 3], "d": [0, 2]})
Traceback (most recent call last):
...
KeyError: 1
>>> is_bipartite_dfs({0: ["b", "d"], 1: ["a", "c"], 2: ["b", "d"], 3: ["a", "c"]})
Traceback (most recent call last):
...
KeyError: 'b'
"""
def depth_first_search(node: int, color: int) -> bool:
"""
Perform Depth-First Search (DFS) on the graph starting from a node.
Args:
node: The current node being visited.
color: The color assigned to the current node.
Returns:
True if the graph is bipartite starting from the current node,
False otherwise.
"""
if visited[node] == -1:
visited[node] = color
for neighbor in graph[node]:
if not depth_first_search(neighbor, 1 - color):
return False
return visited[node] == color
visited: defaultdict[int, int] = defaultdict(lambda: -1)
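    # NOTE: if `graph` is a defaultdict, looking up a neighbour that has no entry of
    # its own inserts a new key while `graph` is still being iterated, which is the
    # RuntimeError documented in the first doctest above.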
for node in graph:
if visited[node] == -1 and not depth_first_search(node, 0):
return False
return True
def is_bipartite_bfs(graph: defaultdict[int, list[int]]) -> bool:
"""
Check if a graph is bipartite using a breadth-first search (BFS).
Args:
graph: Adjacency list representing the graph.
Returns:
True if bipartite, False otherwise.
Check if the graph can be divided into two sets of vertices, such that no two
vertices within the same set are connected by an edge.
Examples:
# FIXME: This test should pass.
>>> is_bipartite_bfs(defaultdict(list, {0: [1, 2], 1: [0, 3], 2: [0, 4]}))
Traceback (most recent call last):
...
RuntimeError: dictionary changed size during iteration
>>> is_bipartite_bfs(defaultdict(list, {0: [1, 2], 1: [0, 2], 2: [0, 1]}))
False
>>> is_bipartite_bfs({})
True
>>> is_bipartite_bfs({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]})
True
>>> is_bipartite_bfs({0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]})
False
>>> is_bipartite_bfs({0: [4], 1: [], 2: [4], 3: [4], 4: [0, 2, 3]})
True
>>> is_bipartite_bfs({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2], 4: [0]})
False
>>> is_bipartite_bfs({7: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2], 4: [0]})
Traceback (most recent call last):
...
KeyError: 0
    # FIXME: This test should fail with KeyError: 4.
>>> is_bipartite_bfs({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2], 9: [0]})
False
>>> is_bipartite_bfs({0: [-1, 3], 1: [0, -2]})
Traceback (most recent call last):
...
KeyError: -1
>>> is_bipartite_bfs({-1: [0, 2], 0: [-1, 1], 1: [0, 2], 2: [-1, 1]})
True
>>> is_bipartite_bfs({0.9: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]})
Traceback (most recent call last):
...
KeyError: 0
    # FIXME: This test should fail with TypeError: list indices must be integers or...
>>> is_bipartite_bfs({0: [1.0, 3.0], 1.0: [0, 2.0], 2.0: [1.0, 3.0], 3.0: [0, 2.0]})
True
>>> is_bipartite_bfs({"a": [1, 3], "b": [0, 2], "c": [1, 3], "d": [0, 2]})
Traceback (most recent call last):
...
KeyError: 1
>>> is_bipartite_bfs({0: ["b", "d"], 1: ["a", "c"], 2: ["b", "d"], 3: ["a", "c"]})
Traceback (most recent call last):
...
KeyError: 'b'
"""
visited: defaultdict[int, int] = defaultdict(lambda: -1)
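    # NOTE: the defaultdict caveat described in is_bipartite_dfs applies here as well.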
for node in graph:
if visited[node] == -1:
queue: deque[int] = deque()
queue.append(node)
visited[node] = 0
while queue:
curr_node = queue.popleft()
for neighbor in graph[curr_node]:
if visited[neighbor] == -1:
visited[neighbor] = 1 - visited[curr_node]
queue.append(neighbor)
elif visited[neighbor] == visited[curr_node]:
return False
return True
if __name__ == "__main":
import doctest
result = doctest.testmod()
if result.failed:
print(f"{result.failed} test(s) failed.")
else:
print("All tests passed!")
|
Program to check if a cycle is present in a given graph Returns True if graph is cyclic else False checkcyclegraph0:, 1:0, 3, 2:0, 4, 3:5, 4:5, 5: False checkcyclegraph0:1, 2, 1:2, 2:0, 3, 3:3 True Keep track of visited nodes To detect a back edge, keep track of vertices currently in the recursion stack Recur for all neighbours. If any neighbour is visited and in recstk then graph is cyclic. graph 0:, 1:0, 3, 2:0, 4, 3:5, 4:5, 5: vertex, visited, recstk 0, set, set depthfirstsearchgraph, vertex, visited, recstk False Mark current node as visited and add to recursion stack The node needs to be removed from recursion stack before function ends | def check_cycle(graph: dict) -> bool:
"""
Returns True if graph is cyclic else False
>>> check_cycle(graph={0:[], 1:[0, 3], 2:[0, 4], 3:[5], 4:[5], 5:[]})
False
>>> check_cycle(graph={0:[1, 2], 1:[2], 2:[0, 3], 3:[3]})
True
"""
# Keep track of visited nodes
visited: set[int] = set()
# To detect a back edge, keep track of vertices currently in the recursion stack
rec_stk: set[int] = set()
return any(
node not in visited and depth_first_search(graph, node, visited, rec_stk)
for node in graph
)
def depth_first_search(graph: dict, vertex: int, visited: set, rec_stk: set) -> bool:
"""
Recur for all neighbours.
If any neighbour is visited and in rec_stk then graph is cyclic.
>>> graph = {0:[], 1:[0, 3], 2:[0, 4], 3:[5], 4:[5], 5:[]}
>>> vertex, visited, rec_stk = 0, set(), set()
>>> depth_first_search(graph, vertex, visited, rec_stk)
False
"""
# Mark current node as visited and add to recursion stack
visited.add(vertex)
rec_stk.add(vertex)
for node in graph[vertex]:
if node not in visited:
if depth_first_search(graph, node, visited, rec_stk):
return True
elif node in rec_stk:
return True
# The node needs to be removed from recursion stack before function ends
rec_stk.remove(vertex)
return False
if __name__ == "__main__":
from doctest import testmod
testmod()
|
https:en.wikipedia.orgwikiComponentgraphtheory Finding connected components in graph Use depth first search to find all vertices being in the same component as initial vertex dfstestgraph1, 0, 5 False 0, 1, 3, 2 dfstestgraph2, 0, 6 False 0, 1, 3, 2 This function takes graph as a parameter and then returns the list of connected components connectedcomponentstestgraph1 0, 1, 3, 2, 4, 5, 6 connectedcomponentstestgraph2 0, 1, 3, 2, 4, 5 | test_graph_1 = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1], 4: [5, 6], 5: [4, 6], 6: [4, 5]}
test_graph_2 = {0: [1, 2, 3], 1: [0, 3], 2: [0], 3: [0, 1], 4: [], 5: []}
def dfs(graph: dict, vert: int, visited: list) -> list:
"""
Use depth first search to find all vertices
being in the same component as initial vertex
>>> dfs(test_graph_1, 0, 5 * [False])
[0, 1, 3, 2]
>>> dfs(test_graph_2, 0, 6 * [False])
[0, 1, 3, 2]
"""
visited[vert] = True
connected_verts = []
for neighbour in graph[vert]:
if not visited[neighbour]:
connected_verts += dfs(graph, neighbour, visited)
return [vert, *connected_verts]
def connected_components(graph: dict) -> list:
"""
This function takes graph as a parameter
and then returns the list of connected components
>>> connected_components(test_graph_1)
[[0, 1, 3, 2], [4, 5, 6]]
>>> connected_components(test_graph_2)
[[0, 1, 3, 2], [4], [5]]
"""
graph_size = len(graph)
visited = graph_size * [False]
components_list = []
for i in range(graph_size):
if not visited[i]:
i_connected = dfs(graph, i, visited)
components_list.append(i_connected)
return components_list
if __name__ == "__main__":
import doctest
doctest.testmod()
|
LeetCode 133. Clone Graph https:leetcode.comproblemsclonegraph Given a reference of a node in a connected undirected graph. Return a deep copy clone of the graph. Each node in the graph contains a value int and a list ListNode of its neighbors. Node3.neighbors hashNode3 ! 0 True This function returns a clone of a connected undirected graph. clonegraphNode1 Nodevalue1, neighbors clonegraphNode1, Node2 Nodevalue1, neighborsNodevalue2, neighbors clonegraphNone is None True | from dataclasses import dataclass
@dataclass
class Node:
value: int = 0
neighbors: list["Node"] | None = None
def __post_init__(self) -> None:
"""
>>> Node(3).neighbors
[]
"""
self.neighbors = self.neighbors or []
def __hash__(self) -> int:
"""
>>> hash(Node(3)) != 0
True
"""
return id(self)
def clone_graph(node: Node | None) -> Node | None:
"""
This function returns a clone of a connected undirected graph.
>>> clone_graph(Node(1))
Node(value=1, neighbors=[])
>>> clone_graph(Node(1, [Node(2)]))
Node(value=1, neighbors=[Node(value=2, neighbors=[])])
>>> clone_graph(None) is None
True
"""
if not node:
return None
originals_to_clones = {} # map nodes to clones
stack = [node]
while stack:
original = stack.pop()
if original in originals_to_clones:
continue
originals_to_clones[original] = Node(original.value)
stack.extend(original.neighbors or [])
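    # Second pass: wire each clone to the clones of its original's neighbors.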
for original, clone in originals_to_clones.items():
for neighbor in original.neighbors or []:
cloned_neighbor = originals_to_clones[neighbor]
if not clone.neighbors:
clone.neighbors = []
clone.neighbors.append(cloned_neighbor)
return originals_to_clones[node]
if __name__ == "__main__":
import doctest
doctest.testmod()
|
Non recursive implementation of a DFS algorithm. from future import annotations def depthfirstsearchgraph: dict, start: str setstr: explored, stack setstart, start while stack: v stack.pop explored.addv Differences from BFS: 1 pop last element instead of first one 2 add adjacent elements to stack without exploring them for adj in reversedgraphv: if adj not in explored: stack.appendadj return explored G A: B, C, D, B: A, D, E, C: A, F, D: B, D, E: B, F, F: C, E, G, G: F, if name main: import doctest doctest.testmod printdepthfirstsearchG, A | from __future__ import annotations
def depth_first_search(graph: dict, start: str) -> set[str]:
"""Depth First Search on Graph
:param graph: directed graph in dictionary format
:param start: starting vertex as a string
:returns: the trace of the search
>>> input_G = { "A": ["B", "C", "D"], "B": ["A", "D", "E"],
... "C": ["A", "F"], "D": ["B", "D"], "E": ["B", "F"],
... "F": ["C", "E", "G"], "G": ["F"] }
>>> output_G = list({'A', 'B', 'C', 'D', 'E', 'F', 'G'})
>>> all(x in output_G for x in list(depth_first_search(input_G, "A")))
True
>>> all(x in output_G for x in list(depth_first_search(input_G, "G")))
True
"""
explored, stack = set(start), [start]
while stack:
v = stack.pop()
explored.add(v)
# Differences from BFS:
# 1) pop last element instead of first one
# 2) add adjacent elements to stack without exploring them
for adj in reversed(graph[v]):
if adj not in explored:
stack.append(adj)
return explored
G = {
"A": ["B", "C", "D"],
"B": ["A", "D", "E"],
"C": ["A", "F"],
"D": ["B", "D"],
"E": ["B", "F"],
"F": ["C", "E", "G"],
"G": ["F"],
}
if __name__ == "__main__":
import doctest
doctest.testmod()
print(depth_first_search(G, "A"))
|
!usrbinpython Author: OMKAR PATHAK class Graph: def initself: self.vertex for printing the Graph vertices def printgraphself None: printself.vertex for i in self.vertex: printi, , .joinstrj for j in self.vertexi for adding the edge between two vertices def addedgeself, fromvertex: int, tovertex: int None: check if vertex is already present, if fromvertex in self.vertex: self.vertexfromvertex.appendtovertex else: else make a new vertex self.vertexfromvertex tovertex def dfsself None: visited array for storing already visited nodes visited False lenself.vertex call the recursive helper function for i in rangelenself.vertex: if not visitedi: self.dfsrecursivei, visited def dfsrecursiveself, startvertex: int, visited: list None: mark start vertex as visited visitedstartvertex True printstartvertex, end Recur for all the vertices that are adjacent to this node for i in self.vertex: if not visitedi: print , end self.dfsrecursivei, visited if name main: import doctest doctest.testmod g Graph g.addedge0, 1 g.addedge0, 2 g.addedge1, 2 g.addedge2, 0 g.addedge2, 3 g.addedge3, 3 g.printgraph printDFS: g.dfs | #!/usr/bin/python
""" Author: OMKAR PATHAK """
class Graph:
def __init__(self):
self.vertex = {}
# for printing the Graph vertices
def print_graph(self) -> None:
"""
Print the graph vertices.
Example:
>>> g = Graph()
>>> g.add_edge(0, 1)
>>> g.add_edge(0, 2)
>>> g.add_edge(1, 2)
>>> g.add_edge(2, 0)
>>> g.add_edge(2, 3)
>>> g.add_edge(3, 3)
>>> g.print_graph()
{0: [1, 2], 1: [2], 2: [0, 3], 3: [3]}
0 -> 1 -> 2
1 -> 2
2 -> 0 -> 3
3 -> 3
"""
print(self.vertex)
for i in self.vertex:
print(i, " -> ", " -> ".join([str(j) for j in self.vertex[i]]))
# for adding the edge between two vertices
def add_edge(self, from_vertex: int, to_vertex: int) -> None:
"""
Add an edge between two vertices.
:param from_vertex: The source vertex.
:param to_vertex: The destination vertex.
Example:
>>> g = Graph()
>>> g.add_edge(0, 1)
>>> g.add_edge(0, 2)
>>> g.print_graph()
{0: [1, 2]}
0 -> 1 -> 2
"""
# check if vertex is already present,
if from_vertex in self.vertex:
self.vertex[from_vertex].append(to_vertex)
else:
# else make a new vertex
self.vertex[from_vertex] = [to_vertex]
def dfs(self) -> None:
"""
Perform depth-first search (DFS) traversal on the graph
and print the visited vertices.
Example:
>>> g = Graph()
>>> g.add_edge(0, 1)
>>> g.add_edge(0, 2)
>>> g.add_edge(1, 2)
>>> g.add_edge(2, 0)
>>> g.add_edge(2, 3)
>>> g.add_edge(3, 3)
>>> g.dfs()
0 1 2 3
"""
# visited array for storing already visited nodes
visited = [False] * len(self.vertex)
# call the recursive helper function
for i in range(len(self.vertex)):
if not visited[i]:
self.dfs_recursive(i, visited)
def dfs_recursive(self, start_vertex: int, visited: list) -> None:
"""
Perform a recursive depth-first search (DFS) traversal on the graph.
:param start_vertex: The starting vertex for the traversal.
:param visited: A list to track visited vertices.
Example:
>>> g = Graph()
>>> g.add_edge(0, 1)
>>> g.add_edge(0, 2)
>>> g.add_edge(1, 2)
>>> g.add_edge(2, 0)
>>> g.add_edge(2, 3)
>>> g.add_edge(3, 3)
>>> visited = [False] * len(g.vertex)
>>> g.dfs_recursive(0, visited)
0 1 2 3
"""
# mark start vertex as visited
visited[start_vertex] = True
print(start_vertex, end="")
# Recur for all the vertices that are adjacent to this node
        for i in self.vertex[start_vertex]:
if not visited[i]:
print(" ", end="")
self.dfs_recursive(i, visited)
if __name__ == "__main__":
import doctest
doctest.testmod()
g = Graph()
g.add_edge(0, 1)
g.add_edge(0, 2)
g.add_edge(1, 2)
g.add_edge(2, 0)
g.add_edge(2, 3)
g.add_edge(3, 3)
g.print_graph()
print("DFS:")
g.dfs()
|
pseudocode DIJKSTRAgraph G, start vertex s, destination vertex d: all nodes initially unexplored 1 let H min heap data structure, initialized with 0 and s here 0 indicates the distance from start vertex s 2 while H is nonempty: 3 remove the first node and cost of H, call it U and cost 4 if U has been previously explored: 5 go to the while loop, line 2 Once a node is explored there is no need to make it again 6 mark U as explored 7 if U is d: 8 return cost total cost from start to destination vertex 9 for each edgeU, V: ccost of edgeU,V for V in graphU 10 if V explored: 11 go to next V in line 9 12 totalcost cost c 13 add totalcost,V to H You can think at cost as a distance where Dijkstra finds the shortest distance between vertices s and v in a graph G. The use of a min heap as H guarantees that if a vertex has already been explored there will be no other path with shortest distance, that happens because heapq.heappop will always return the next vertex with the shortest distance, considering that the heap stores not only the distance between previous vertex and current vertex but the entire distance between each vertex that makes up the path from start vertex to target vertex. Return the cost of the shortest path between vertices start and end. dijkstraG, E, C 6 dijkstraG2, E, F 3 dijkstraG3, E, F 3 G2 B: C, 1, C: D, 1, D: F, 1, E: B, 1, F, 3, F: , r Layout of G3: E 1 B 1 C 1 D 1 F 2 G 1 | import heapq
def dijkstra(graph, start, end):
"""Return the cost of the shortest path between vertices start and end.
>>> dijkstra(G, "E", "C")
6
>>> dijkstra(G2, "E", "F")
3
>>> dijkstra(G3, "E", "F")
3
"""
    heap = [(0, start)]  # (cost from start node, current node)
visited = set()
while heap:
(cost, u) = heapq.heappop(heap)
if u in visited:
continue
visited.add(u)
if u == end:
return cost
for v, c in graph[u]:
if v in visited:
continue
next_item = cost + c
heapq.heappush(heap, (next_item, v))
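    # The heap was exhausted without reaching `end`, so no path exists.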
return -1
G = {
"A": [["B", 2], ["C", 5]],
"B": [["A", 2], ["D", 3], ["E", 1], ["F", 1]],
"C": [["A", 5], ["F", 3]],
"D": [["B", 3]],
"E": [["B", 4], ["F", 3]],
"F": [["C", 3], ["E", 3]],
}
r"""
Layout of G2:
E -- 1 --> B -- 1 --> C -- 1 --> D -- 1 --> F
 \                                         /\
  \                                        ||
   ----------------- 3 --------------------
"""
G2 = {
"B": [["C", 1]],
"C": [["D", 1]],
"D": [["F", 1]],
"E": [["B", 1], ["F", 3]],
"F": [],
}
r"""
Layout of G3:
E -- 1 --> B -- 1 --> C -- 1 --> D -- 1 --> F
 \                                         /\
  \                                        ||
   -------- 2 ---------> G ------- 1 ------
"""
G3 = {
"B": [["C", 1]],
"C": [["D", 1]],
"D": [["F", 1]],
"E": [["B", 1], ["G", 2]],
"F": [],
"G": [["F", 1]],
}
short_distance = dijkstra(G, "E", "C")
print(short_distance) # E -- 3 --> F -- 3 --> C == 6
short_distance = dijkstra(G2, "E", "F")
print(short_distance) # E -- 3 --> F == 3
short_distance = dijkstra(G3, "E", "F")
print(short_distance) # E -- 2 --> G -- 1 --> F == 3
if __name__ == "__main__":
import doctest
doctest.testmod()
|
Title: Dijkstra's Algorithm for finding single source shortest path from scratch Author: Shubham Malik References: https:en.wikipedia.orgwikiDijkstra27salgorithm For storing the vertex set to retrieve node with the lowest distance Based on Min Heap Priority queue class constructor method. Examples: priorityqueuetest PriorityQueue priorityqueuetest.cursize 0 priorityqueuetest.array priorityqueuetest.pos Conditional boolean method to determine if the priority queue is empty or not. Examples: priorityqueuetest PriorityQueue priorityqueuetest.isempty True priorityqueuetest.insert2, 'A' priorityqueuetest.isempty False Sorts the queue array so that the minimum element is root. Examples: priorityqueuetest PriorityQueue priorityqueuetest.cursize 3 priorityqueuetest.pos 'A': 0, 'B': 1, 'C': 2 priorityqueuetest.array 5, 'A', 10, 'B', 15, 'C' priorityqueuetest.minheapify0 Traceback most recent call last: ... TypeError: 'list' object is not callable priorityqueuetest.array 5, 'A', 10, 'B', 15, 'C' priorityqueuetest.array 10, 'A', 5, 'B', 15, 'C' priorityqueuetest.minheapify0 Traceback most recent call last: ... TypeError: 'list' object is not callable priorityqueuetest.array 10, 'A', 5, 'B', 15, 'C' priorityqueuetest.array 10, 'A', 15, 'B', 5, 'C' priorityqueuetest.minheapify0 Traceback most recent call last: ... TypeError: 'list' object is not callable priorityqueuetest.array 10, 'A', 15, 'B', 5, 'C' priorityqueuetest.array 10, 'A', 5, 'B' priorityqueuetest.cursize lenpriorityqueuetest.array priorityqueuetest.pos 'A': 0, 'B': 1 priorityqueuetest.minheapify0 Traceback most recent call last: ... TypeError: 'list' object is not callable priorityqueuetest.array 10, 'A', 5, 'B' Inserts a node into the Priority Queue. Examples: priorityqueuetest PriorityQueue priorityqueuetest.insert10, 'A' priorityqueuetest.array 10, 'A' priorityqueuetest.insert15, 'B' priorityqueuetest.array 10, 'A', 15, 'B' priorityqueuetest.insert5, 'C' priorityqueuetest.array 5, 'C', 10, 'A', 15, 'B' Removes and returns the min element at top of priority queue. Examples: priorityqueuetest PriorityQueue priorityqueuetest.array 10, 'A', 15, 'B' priorityqueuetest.cursize lenpriorityqueuetest.array priorityqueuetest.pos 'A': 0, 'B': 1 priorityqueuetest.insert5, 'C' priorityqueuetest.extractmin 'C' priorityqueuetest.array0 15, 'B' Returns the index of left child Examples: priorityqueuetest PriorityQueue priorityqueuetest.left0 1 priorityqueuetest.left1 3 Returns the index of right child Examples: priorityqueuetest PriorityQueue priorityqueuetest.right0 2 priorityqueuetest.right1 4 Returns the index of parent Examples: priorityqueuetest PriorityQueue priorityqueuetest.par1 0 priorityqueuetest.par2 1 priorityqueuetest.par4 2 Swaps array elements at indices i and j, update the pos Examples: priorityqueuetest PriorityQueue priorityqueuetest.array 10, 'A', 15, 'B' priorityqueuetest.cursize lenpriorityqueuetest.array priorityqueuetest.pos 'A': 0, 'B': 1 priorityqueuetest.swap0, 1 priorityqueuetest.array 15, 'B', 10, 'A' priorityqueuetest.pos 'A': 1, 'B': 0 Decrease the key value for a given tuple, assuming the newd is at most oldd. 
Examples: priorityqueuetest PriorityQueue priorityqueuetest.array 10, 'A', 15, 'B' priorityqueuetest.cursize lenpriorityqueuetest.array priorityqueuetest.pos 'A': 0, 'B': 1 priorityqueuetest.decreasekey10, 'A', 5 priorityqueuetest.array 5, 'A', 15, 'B' assuming the newd is atmost oldd Graph class constructor Examples: graphtest Graph1 graphtest.numnodes 1 graphtest.dist 0 graphtest.par 1 graphtest.adjList To store the distance from source vertex Add edge going from node u to v and v to u with weight w: u w v, v w u Examples: graphtest Graph1 graphtest.addedge1, 2, 1 graphtest.addedge2, 3, 2 graphtest.adjList 1: 2, 1, 2: 1, 1, 3, 2, 3: 2, 2 Check if u already in graph Assuming undirected graph Show the graph: u vw Examples: graphtest Graph1 graphtest.addedge1, 2, 1 graphtest.showgraph 1 21 2 11 graphtest.addedge2, 3, 2 graphtest.showgraph 1 21 2 11 32 3 22 Dijkstra algorithm Examples: graphtest Graph3 graphtest.addedge0, 1, 2 graphtest.addedge1, 2, 2 graphtest.dijkstra0 Distance from node: 0 Node 0 has distance: 0 Node 1 has distance: 2 Node 2 has distance: 4 graphtest.dist 0, 2, 4 graphtest Graph2 graphtest.addedge0, 1, 2 graphtest.dijkstra0 Distance from node: 0 Node 0 has distance: 0 Node 1 has distance: 2 graphtest.dist 0, 2 graphtest Graph3 graphtest.addedge0, 1, 2 graphtest.dijkstra0 Distance from node: 0 Node 0 has distance: 0 Node 1 has distance: 2 Node 2 has distance: 0 graphtest.dist 0, 2, 0 graphtest Graph3 graphtest.addedge0, 1, 2 graphtest.addedge1, 2, 2 graphtest.addedge0, 2, 1 graphtest.dijkstra0 Distance from node: 0 Node 0 has distance: 0 Node 1 has distance: 2 Node 2 has distance: 1 graphtest.dist 0, 2, 1 graphtest Graph4 graphtest.addedge0, 1, 4 graphtest.addedge1, 2, 2 graphtest.addedge2, 3, 1 graphtest.addedge0, 2, 3 graphtest.dijkstra0 Distance from node: 0 Node 0 has distance: 0 Node 1 has distance: 4 Node 2 has distance: 3 Node 3 has distance: 4 graphtest.dist 0, 4, 3, 4 graphtest Graph4 graphtest.addedge0, 1, 4 graphtest.addedge1, 2, 2 graphtest.addedge2, 3, 1 graphtest.addedge0, 2, 7 graphtest.dijkstra0 Distance from node: 0 Node 0 has distance: 0 Node 1 has distance: 4 Node 2 has distance: 6 Node 3 has distance: 7 graphtest.dist 0, 4, 6, 7 Flush old junk values in par src is the source node Update the distance of all the neighbours of u and if their prev dist was INFINITY then push them in Q Show the shortest distances from src Show the distances from src to all other nodes in a graph Examples: graphtest Graph1 graphtest.showdistances0 Distance from node: 0 Node 0 has distance: 0 Shows the shortest path from src to dest. WARNING: Use it after calling dijkstra. Examples: graphtest Graph4 graphtest.addedge0, 1, 1 graphtest.addedge1, 2, 2 graphtest.addedge2, 3, 3 graphtest.dijkstra0 Distance from node: 0 Node 0 has distance: 0 Node 1 has distance: 1 Node 2 has distance: 3 Node 3 has distance: 6 graphtest.showpath0, 3 doctest: NORMALIZEWHITESPACE Path to reach 3 from 0 0 1 2 3 Total cost of path: 6 Backtracking from dest to src OUTPUT 0 14 78 1 04 28 711 7 08 111 61 87 2 18 37 82 54 3 27 49 514 8 22 66 77 5 24 314 410 62 4 39 510 6 52 71 86 Distance from node: 0 Node 0 has distance: 0 Node 1 has distance: 4 Node 2 has distance: 12 Node 3 has distance: 19 Node 4 has distance: 21 Node 5 has distance: 11 Node 6 has distance: 9 Node 7 has distance: 8 Node 8 has distance: 14 Path to reach 4 from 0 0 7 6 5 4 Total cost of path: 21 | # Title: Dijkstra's Algorithm for finding single source shortest path from scratch
# Author: Shubham Malik
# References: https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
import math
import sys
# For storing the vertex set to retrieve node with the lowest distance
class PriorityQueue:
# Based on Min Heap
def __init__(self):
"""
Priority queue class constructor method.
Examples:
>>> priority_queue_test = PriorityQueue()
>>> priority_queue_test.cur_size
0
>>> priority_queue_test.array
[]
>>> priority_queue_test.pos
{}
"""
self.cur_size = 0
self.array = []
self.pos = {} # To store the pos of node in array
def is_empty(self):
"""
Conditional boolean method to determine if the priority queue is empty or not.
Examples:
>>> priority_queue_test = PriorityQueue()
>>> priority_queue_test.is_empty()
True
>>> priority_queue_test.insert((2, 'A'))
>>> priority_queue_test.is_empty()
False
"""
return self.cur_size == 0
def min_heapify(self, idx):
"""
Sorts the queue array so that the minimum element is root.
Examples:
>>> priority_queue_test = PriorityQueue()
>>> priority_queue_test.cur_size = 3
>>> priority_queue_test.pos = {'A': 0, 'B': 1, 'C': 2}
>>> priority_queue_test.array = [(5, 'A'), (10, 'B'), (15, 'C')]
>>> priority_queue_test.min_heapify(0)
Traceback (most recent call last):
...
TypeError: 'list' object is not callable
>>> priority_queue_test.array
[(5, 'A'), (10, 'B'), (15, 'C')]
>>> priority_queue_test.array = [(10, 'A'), (5, 'B'), (15, 'C')]
>>> priority_queue_test.min_heapify(0)
Traceback (most recent call last):
...
TypeError: 'list' object is not callable
>>> priority_queue_test.array
[(10, 'A'), (5, 'B'), (15, 'C')]
>>> priority_queue_test.array = [(10, 'A'), (15, 'B'), (5, 'C')]
>>> priority_queue_test.min_heapify(0)
Traceback (most recent call last):
...
TypeError: 'list' object is not callable
>>> priority_queue_test.array
[(10, 'A'), (15, 'B'), (5, 'C')]
>>> priority_queue_test.array = [(10, 'A'), (5, 'B')]
>>> priority_queue_test.cur_size = len(priority_queue_test.array)
>>> priority_queue_test.pos = {'A': 0, 'B': 1}
>>> priority_queue_test.min_heapify(0)
Traceback (most recent call last):
...
TypeError: 'list' object is not callable
>>> priority_queue_test.array
[(10, 'A'), (5, 'B')]
"""
lc = self.left(idx)
rc = self.right(idx)
if lc < self.cur_size and self.array(lc)[0] < self.array[idx][0]:
smallest = lc
else:
smallest = idx
if rc < self.cur_size and self.array(rc)[0] < self.array[smallest][0]:
smallest = rc
if smallest != idx:
self.swap(idx, smallest)
self.min_heapify(smallest)
def insert(self, tup):
"""
Inserts a node into the Priority Queue.
Examples:
>>> priority_queue_test = PriorityQueue()
>>> priority_queue_test.insert((10, 'A'))
>>> priority_queue_test.array
[(10, 'A')]
>>> priority_queue_test.insert((15, 'B'))
>>> priority_queue_test.array
[(10, 'A'), (15, 'B')]
>>> priority_queue_test.insert((5, 'C'))
>>> priority_queue_test.array
[(5, 'C'), (10, 'A'), (15, 'B')]
"""
self.pos[tup[1]] = self.cur_size
self.cur_size += 1
self.array.append((sys.maxsize, tup[1]))
self.decrease_key((sys.maxsize, tup[1]), tup[0])
def extract_min(self):
"""
Removes and returns the min element at top of priority queue.
Examples:
>>> priority_queue_test = PriorityQueue()
>>> priority_queue_test.array = [(10, 'A'), (15, 'B')]
>>> priority_queue_test.cur_size = len(priority_queue_test.array)
>>> priority_queue_test.pos = {'A': 0, 'B': 1}
>>> priority_queue_test.insert((5, 'C'))
>>> priority_queue_test.extract_min()
'C'
>>> priority_queue_test.array[0]
(15, 'B')
"""
min_node = self.array[0][1]
self.array[0] = self.array[self.cur_size - 1]
self.cur_size -= 1
self.min_heapify(1)
del self.pos[min_node]
return min_node
def left(self, i):
"""
Returns the index of left child
Examples:
>>> priority_queue_test = PriorityQueue()
>>> priority_queue_test.left(0)
1
>>> priority_queue_test.left(1)
3
"""
return 2 * i + 1
def right(self, i):
"""
Returns the index of right child
Examples:
>>> priority_queue_test = PriorityQueue()
>>> priority_queue_test.right(0)
2
>>> priority_queue_test.right(1)
4
"""
return 2 * i + 2
def par(self, i):
"""
Returns the index of parent
Examples:
>>> priority_queue_test = PriorityQueue()
>>> priority_queue_test.par(1)
0
>>> priority_queue_test.par(2)
1
>>> priority_queue_test.par(4)
2
"""
return math.floor(i / 2)
def swap(self, i, j):
"""
Swaps array elements at indices i and j, update the pos{}
Examples:
>>> priority_queue_test = PriorityQueue()
>>> priority_queue_test.array = [(10, 'A'), (15, 'B')]
>>> priority_queue_test.cur_size = len(priority_queue_test.array)
>>> priority_queue_test.pos = {'A': 0, 'B': 1}
>>> priority_queue_test.swap(0, 1)
>>> priority_queue_test.array
[(15, 'B'), (10, 'A')]
>>> priority_queue_test.pos
{'A': 1, 'B': 0}
"""
self.pos[self.array[i][1]] = j
self.pos[self.array[j][1]] = i
temp = self.array[i]
self.array[i] = self.array[j]
self.array[j] = temp
def decrease_key(self, tup, new_d):
"""
Decrease the key value for a given tuple, assuming the new_d is at most old_d.
Examples:
>>> priority_queue_test = PriorityQueue()
>>> priority_queue_test.array = [(10, 'A'), (15, 'B')]
>>> priority_queue_test.cur_size = len(priority_queue_test.array)
>>> priority_queue_test.pos = {'A': 0, 'B': 1}
>>> priority_queue_test.decrease_key((10, 'A'), 5)
>>> priority_queue_test.array
[(5, 'A'), (15, 'B')]
"""
idx = self.pos[tup[1]]
        # assuming the new_d is at most old_d
self.array[idx] = (new_d, tup[1])
while idx > 0 and self.array[self.par(idx)][0] > self.array[idx][0]:
self.swap(idx, self.par(idx))
idx = self.par(idx)
class Graph:
def __init__(self, num):
"""
Graph class constructor
Examples:
>>> graph_test = Graph(1)
>>> graph_test.num_nodes
1
>>> graph_test.dist
[0]
>>> graph_test.par
[-1]
>>> graph_test.adjList
{}
"""
self.adjList = {} # To store graph: u -> (v,w)
self.num_nodes = num # Number of nodes in graph
# To store the distance from source vertex
self.dist = [0] * self.num_nodes
self.par = [-1] * self.num_nodes # To store the path
def add_edge(self, u, v, w):
"""
Add edge going from node u to v and v to u with weight w: u (w)-> v, v (w) -> u
Examples:
>>> graph_test = Graph(1)
>>> graph_test.add_edge(1, 2, 1)
>>> graph_test.add_edge(2, 3, 2)
>>> graph_test.adjList
{1: [(2, 1)], 2: [(1, 1), (3, 2)], 3: [(2, 2)]}
"""
# Check if u already in graph
if u in self.adjList:
self.adjList[u].append((v, w))
else:
self.adjList[u] = [(v, w)]
# Assuming undirected graph
if v in self.adjList:
self.adjList[v].append((u, w))
else:
self.adjList[v] = [(u, w)]
def show_graph(self):
"""
Show the graph: u -> v(w)
Examples:
>>> graph_test = Graph(1)
>>> graph_test.add_edge(1, 2, 1)
>>> graph_test.show_graph()
1 -> 2(1)
2 -> 1(1)
>>> graph_test.add_edge(2, 3, 2)
>>> graph_test.show_graph()
1 -> 2(1)
2 -> 1(1) -> 3(2)
3 -> 2(2)
"""
for u in self.adjList:
print(u, "->", " -> ".join(str(f"{v}({w})") for v, w in self.adjList[u]))
def dijkstra(self, src):
"""
Dijkstra algorithm
Examples:
>>> graph_test = Graph(3)
>>> graph_test.add_edge(0, 1, 2)
>>> graph_test.add_edge(1, 2, 2)
>>> graph_test.dijkstra(0)
Distance from node: 0
Node 0 has distance: 0
Node 1 has distance: 2
Node 2 has distance: 4
>>> graph_test.dist
[0, 2, 4]
>>> graph_test = Graph(2)
>>> graph_test.add_edge(0, 1, 2)
>>> graph_test.dijkstra(0)
Distance from node: 0
Node 0 has distance: 0
Node 1 has distance: 2
>>> graph_test.dist
[0, 2]
>>> graph_test = Graph(3)
>>> graph_test.add_edge(0, 1, 2)
>>> graph_test.dijkstra(0)
Distance from node: 0
Node 0 has distance: 0
Node 1 has distance: 2
Node 2 has distance: 0
>>> graph_test.dist
[0, 2, 0]
>>> graph_test = Graph(3)
>>> graph_test.add_edge(0, 1, 2)
>>> graph_test.add_edge(1, 2, 2)
>>> graph_test.add_edge(0, 2, 1)
>>> graph_test.dijkstra(0)
Distance from node: 0
Node 0 has distance: 0
Node 1 has distance: 2
Node 2 has distance: 1
>>> graph_test.dist
[0, 2, 1]
>>> graph_test = Graph(4)
>>> graph_test.add_edge(0, 1, 4)
>>> graph_test.add_edge(1, 2, 2)
>>> graph_test.add_edge(2, 3, 1)
>>> graph_test.add_edge(0, 2, 3)
>>> graph_test.dijkstra(0)
Distance from node: 0
Node 0 has distance: 0
Node 1 has distance: 4
Node 2 has distance: 3
Node 3 has distance: 4
>>> graph_test.dist
[0, 4, 3, 4]
>>> graph_test = Graph(4)
>>> graph_test.add_edge(0, 1, 4)
>>> graph_test.add_edge(1, 2, 2)
>>> graph_test.add_edge(2, 3, 1)
>>> graph_test.add_edge(0, 2, 7)
>>> graph_test.dijkstra(0)
Distance from node: 0
Node 0 has distance: 0
Node 1 has distance: 4
Node 2 has distance: 6
Node 3 has distance: 7
>>> graph_test.dist
[0, 4, 6, 7]
"""
# Flush old junk values in par[]
self.par = [-1] * self.num_nodes
# src is the source node
self.dist[src] = 0
q = PriorityQueue()
q.insert((0, src)) # (dist from src, node)
for u in self.adjList:
if u != src:
self.dist[u] = sys.maxsize # Infinity
self.par[u] = -1
while not q.is_empty():
u = q.extract_min() # Returns node with the min dist from source
# Update the distance of all the neighbours of u and
# if their prev dist was INFINITY then push them in Q
for v, w in self.adjList[u]:
new_dist = self.dist[u] + w
if self.dist[v] > new_dist:
if self.dist[v] == sys.maxsize:
q.insert((new_dist, v))
else:
q.decrease_key((self.dist[v], v), new_dist)
self.dist[v] = new_dist
self.par[v] = u
# Show the shortest distances from src
self.show_distances(src)
def show_distances(self, src):
"""
Show the distances from src to all other nodes in a graph
Examples:
>>> graph_test = Graph(1)
>>> graph_test.show_distances(0)
Distance from node: 0
Node 0 has distance: 0
"""
print(f"Distance from node: {src}")
for u in range(self.num_nodes):
print(f"Node {u} has distance: {self.dist[u]}")
def show_path(self, src, dest):
"""
Shows the shortest path from src to dest.
WARNING: Use it *after* calling dijkstra.
Examples:
>>> graph_test = Graph(4)
>>> graph_test.add_edge(0, 1, 1)
>>> graph_test.add_edge(1, 2, 2)
>>> graph_test.add_edge(2, 3, 3)
>>> graph_test.dijkstra(0)
Distance from node: 0
Node 0 has distance: 0
Node 1 has distance: 1
Node 2 has distance: 3
Node 3 has distance: 6
>>> graph_test.show_path(0, 3) # doctest: +NORMALIZE_WHITESPACE
----Path to reach 3 from 0----
0 -> 1 -> 2 -> 3
Total cost of path: 6
"""
path = []
cost = 0
temp = dest
# Backtracking from dest to src
while self.par[temp] != -1:
path.append(temp)
if temp != src:
for v, w in self.adjList[temp]:
if v == self.par[temp]:
cost += w
break
temp = self.par[temp]
path.append(src)
path.reverse()
print(f"----Path to reach {dest} from {src}----")
for u in path:
print(f"{u}", end=" ")
if u != dest:
print("-> ", end="")
print("\nTotal cost of path: ", cost)
if __name__ == "__main__":
from doctest import testmod
testmod()
graph = Graph(9)
graph.add_edge(0, 1, 4)
graph.add_edge(0, 7, 8)
graph.add_edge(1, 2, 8)
graph.add_edge(1, 7, 11)
graph.add_edge(2, 3, 7)
graph.add_edge(2, 8, 2)
graph.add_edge(2, 5, 4)
graph.add_edge(3, 4, 9)
graph.add_edge(3, 5, 14)
graph.add_edge(4, 5, 10)
graph.add_edge(5, 6, 2)
graph.add_edge(6, 7, 1)
graph.add_edge(6, 8, 6)
graph.add_edge(7, 8, 7)
graph.show_graph()
graph.dijkstra(0)
graph.show_path(0, 4)
# OUTPUT
# 0 -> 1(4) -> 7(8)
# 1 -> 0(4) -> 2(8) -> 7(11)
# 7 -> 0(8) -> 1(11) -> 6(1) -> 8(7)
# 2 -> 1(8) -> 3(7) -> 8(2) -> 5(4)
# 3 -> 2(7) -> 4(9) -> 5(14)
# 8 -> 2(2) -> 6(6) -> 7(7)
# 5 -> 2(4) -> 3(14) -> 4(10) -> 6(2)
# 4 -> 3(9) -> 5(10)
# 6 -> 5(2) -> 7(1) -> 8(6)
# Distance from node: 0
# Node 0 has distance: 0
# Node 1 has distance: 4
# Node 2 has distance: 12
# Node 3 has distance: 19
# Node 4 has distance: 21
# Node 5 has distance: 11
# Node 6 has distance: 9
# Node 7 has distance: 8
# Node 8 has distance: 14
# ----Path to reach 4 from 0----
# 0 -> 7 -> 6 -> 5 -> 4
# Total cost of path: 21
|
graph Graph2 graph.vertices 2 lengraph.graph 2 lengraph.graph0 2 Graph0.printsolution doctest: NORMALIZEWHITESPACE Vertex Distance from Source A utility function to find the vertex with minimum distance value, from the set of vertices not yet included in shortest path tree. Graph3.minimumdistance1, 2, 3, False, False, True 0 Initialize minimum distance for next node Search not nearest vertex not in the shortest path tree Function that implements Dijkstra's single source shortest path algorithm for a graph represented using adjacency matrix representation. Graph4.dijkstra1 doctest: NORMALIZEWHITESPACE Vertex Distance from Source 0 10000000 1 0 2 10000000 3 10000000 Update dist value of the adjacent vertices of the picked vertex only if the current distance is greater than new distance and the vertex in not in the shortest path tree | from __future__ import annotations
class Graph:
def __init__(self, vertices: int) -> None:
"""
>>> graph = Graph(2)
>>> graph.vertices
2
>>> len(graph.graph)
2
>>> len(graph.graph[0])
2
"""
self.vertices = vertices
self.graph = [[0] * vertices for _ in range(vertices)]
def print_solution(self, distances_from_source: list[int]) -> None:
"""
>>> Graph(0).print_solution([]) # doctest: +NORMALIZE_WHITESPACE
Vertex Distance from Source
"""
print("Vertex \t Distance from Source")
for vertex in range(self.vertices):
print(vertex, "\t\t", distances_from_source[vertex])
def minimum_distance(
self, distances_from_source: list[int], visited: list[bool]
) -> int:
"""
A utility function to find the vertex with minimum distance value, from the set
of vertices not yet included in shortest path tree.
>>> Graph(3).minimum_distance([1, 2, 3], [False, False, True])
0
"""
# Initialize minimum distance for next node
minimum = 1e7
min_index = 0
        # Search for the nearest vertex not yet in the shortest path tree
for vertex in range(self.vertices):
if distances_from_source[vertex] < minimum and visited[vertex] is False:
minimum = distances_from_source[vertex]
min_index = vertex
return min_index
def dijkstra(self, source: int) -> None:
"""
Function that implements Dijkstra's single source shortest path algorithm for a
graph represented using adjacency matrix representation.
>>> Graph(4).dijkstra(1) # doctest: +NORMALIZE_WHITESPACE
Vertex Distance from Source
0 10000000
1 0
2 10000000
3 10000000
"""
distances = [int(1e7)] * self.vertices # distances from the source
distances[source] = 0
visited = [False] * self.vertices
for _ in range(self.vertices):
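            # Greedily settle the closest unvisited vertex, then relax its edges below.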
u = self.minimum_distance(distances, visited)
visited[u] = True
# Update dist value of the adjacent vertices
# of the picked vertex only if the current
# distance is greater than new distance and
            # the vertex is not in the shortest path tree
for v in range(self.vertices):
if (
self.graph[u][v] > 0
and visited[v] is False
and distances[v] > distances[u] + self.graph[u][v]
):
distances[v] = distances[u] + self.graph[u][v]
self.print_solution(distances)
if __name__ == "__main__":
graph = Graph(9)
graph.graph = [
[0, 4, 0, 0, 0, 0, 0, 8, 0],
[4, 0, 8, 0, 0, 0, 0, 11, 0],
[0, 8, 0, 7, 0, 4, 0, 0, 2],
[0, 0, 7, 0, 9, 14, 0, 0, 0],
[0, 0, 0, 9, 0, 10, 0, 0, 0],
[0, 0, 4, 14, 10, 0, 2, 0, 0],
[0, 0, 0, 0, 0, 2, 0, 1, 6],
[8, 11, 0, 0, 0, 0, 1, 0, 7],
[0, 0, 2, 0, 0, 0, 6, 7, 0],
]
graph.dijkstra(0)
|
This script implements the Dijkstra algorithm on a binary grid. The grid consists of 0s and 1s, where 1 represents a walkable node and 0 represents an obstacle. The algorithm finds the shortest path from a start node to a destination node. Diagonal movement can be allowed or disallowed. Implements Dijkstra's algorithm on a binary grid. Args: grid np.ndarray: A 2D numpy array representing the grid. 1 represents a walkable node and 0 represents an obstacle. source Tupleint, int: A tuple representing the start node. destination Tupleint, int: A tuple representing the destination node. allowdiagonal bool: A boolean determining whether diagonal movements are allowed. Returns: TupleUnionfloat, int, ListTupleint, int: The shortest distance from the start node to the destination node and the shortest path as a list of nodes. dijkstranp.array1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 2, 2, False 4.0, 0, 0, 0, 1, 1, 1, 2, 1, 2, 2 dijkstranp.array1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 2, 2, True 2.0, 0, 0, 1, 1, 2, 2 dijkstranp.array1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 2, 2, False 4.0, 0, 0, 0, 1, 0, 2, 1, 2, 2, 2 | from heapq import heappop, heappush
import numpy as np
def dijkstra(
grid: np.ndarray,
source: tuple[int, int],
destination: tuple[int, int],
allow_diagonal: bool,
) -> tuple[float | int, list[tuple[int, int]]]:
"""
Implements Dijkstra's algorithm on a binary grid.
Args:
grid (np.ndarray): A 2D numpy array representing the grid.
1 represents a walkable node and 0 represents an obstacle.
source (Tuple[int, int]): A tuple representing the start node.
destination (Tuple[int, int]): A tuple representing the
destination node.
allow_diagonal (bool): A boolean determining whether
diagonal movements are allowed.
Returns:
Tuple[Union[float, int], List[Tuple[int, int]]]:
The shortest distance from the start node to the destination node
and the shortest path as a list of nodes.
>>> dijkstra(np.array([[1, 1, 1], [0, 1, 0], [0, 1, 1]]), (0, 0), (2, 2), False)
(4.0, [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)])
>>> dijkstra(np.array([[1, 1, 1], [0, 1, 0], [0, 1, 1]]), (0, 0), (2, 2), True)
(2.0, [(0, 0), (1, 1), (2, 2)])
>>> dijkstra(np.array([[1, 1, 1], [0, 0, 1], [0, 1, 1]]), (0, 0), (2, 2), False)
(4.0, [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)])
"""
rows, cols = grid.shape
dx = [-1, 1, 0, 0]
dy = [0, 0, -1, 1]
if allow_diagonal:
dx += [-1, -1, 1, 1]
dy += [-1, 1, -1, 1]
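    # NOTE: diagonal moves are also given cost 1 (uniform edge weight), which is why
    # the second doctest reports a distance of 2.0.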
queue, visited = [(0, source)], set()
matrix = np.full((rows, cols), np.inf)
matrix[source] = 0
predecessors = np.empty((rows, cols), dtype=object)
predecessors[source] = None
while queue:
(dist, (x, y)) = heappop(queue)
if (x, y) in visited:
continue
visited.add((x, y))
if (x, y) == destination:
path = []
while (x, y) != source:
path.append((x, y))
x, y = predecessors[x, y]
path.append(source) # add the source manually
path.reverse()
return matrix[destination], path
for i in range(len(dx)):
nx, ny = x + dx[i], y + dy[i]
if 0 <= nx < rows and 0 <= ny < cols:
next_node = grid[nx][ny]
if next_node == 1 and matrix[nx, ny] > dist + 1:
heappush(queue, (dist + 1, (nx, ny)))
matrix[nx, ny] = dist + 1
predecessors[nx, ny] = (x, y)
return np.inf, []
if __name__ == "__main__":
import doctest
doctest.testmod()
|
Here we will add our edges containing with the following parameters: vertex closest to source, vertex closest to sink and flow capacity through that edge ... This is a sample depth first search to be used at maxflow Here we calculate the flow that reaches the sink Example to use Will be a bipartite graph, than it has the vertices near the source4 and the vertices near the sink4 Here we make a graphs with 10 vertexsource and sink includes Now we add the vertices next to the font in the font with 1 capacity in this edge source source vertices We will do the same thing for the vertices near the sink, but from vertex to sink sink vertices sink Finally we add the verices near the sink to the vertices near the source. source vertices sink vertices Now we can know that is the maximum flowsource sink | INF = float("inf")
class Dinic:
def __init__(self, n):
self.lvl = [0] * n
self.ptr = [0] * n
self.q = [0] * n
self.adj = [[] for _ in range(n)]
"""
    Here we add our edges with the following parameters:
vertex closest to source, vertex closest to sink and flow capacity
through that edge ...
"""
def add_edge(self, a, b, c, rcap=0):
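        # Each edge is stored as [destination, index of the paired reverse edge
        # in adj[destination], capacity, flow].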
self.adj[a].append([b, len(self.adj[b]), c, 0])
self.adj[b].append([a, len(self.adj[a]) - 1, rcap, 0])
    # This is a sample depth first search used by max_flow
def depth_first_search(self, vertex, sink, flow):
if vertex == sink or not flow:
return flow
for i in range(self.ptr[vertex], len(self.adj[vertex])):
e = self.adj[vertex][i]
if self.lvl[e[0]] == self.lvl[vertex] + 1:
p = self.depth_first_search(e[0], sink, min(flow, e[2] - e[3]))
if p:
self.adj[vertex][i][3] += p
self.adj[e[0]][e[1]][3] -= p
return p
self.ptr[vertex] = self.ptr[vertex] + 1
return 0
# Here we calculate the flow that reaches the sink
def max_flow(self, source, sink):
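        # A BFS from the source builds a level graph (self.lvl); depth_first_search
        # then pushes blocking flow along level-increasing edges. The
        # (e[2] - e[3]) >> (30 - l) test is capacity scaling: early rounds only use
        # edges whose residual capacity is at least 2**(30 - l).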
flow, self.q[0] = 0, source
for l in range(31): # noqa: E741 l = 30 maybe faster for random data
while True:
self.lvl, self.ptr = [0] * len(self.q), [0] * len(self.q)
qi, qe, self.lvl[source] = 0, 1, 1
while qi < qe and not self.lvl[sink]:
v = self.q[qi]
qi += 1
for e in self.adj[v]:
if not self.lvl[e[0]] and (e[2] - e[3]) >> (30 - l):
self.q[qe] = e[0]
qe += 1
self.lvl[e[0]] = self.lvl[v] + 1
p = self.depth_first_search(source, sink, INF)
while p:
flow += p
p = self.depth_first_search(source, sink, INF)
if not self.lvl[sink]:
break
return flow
# Example to use
"""
The example below is a bipartite graph: it has 4 vertices near the source
and 4 vertices near the sink.
"""
# Here we make a graph with 10 vertices (source and sink included)
graph = Dinic(10)
source = 0
sink = 9
"""
Now we connect the source to each source-side vertex with capacity 1 on the edge
(source -> source vertices)
"""
for vertex in range(1, 5):
graph.add_edge(source, vertex, 1)
"""
We will do the same thing for the vertices near the sink, but from vertex to sink
(sink vertices -> sink)
"""
for vertex in range(5, 9):
graph.add_edge(vertex, sink, 1)
"""
Finally we connect the vertices near the source to the vertices near the sink.
(source vertices -> sink vertices)
"""
for vertex in range(1, 5):
graph.add_edge(vertex, vertex + 4, 1)
# Now we can compute the maximum flow (source -> sink)
print(graph.max_flow(source, sink))
|
the default weight is 1 if not assigned but all the implementation is weighted adding vertices and edges adding the weight is optional handles repetition handles if the input does not exist if no destination is meant the default value is 1 check if there is any non isolated nodes check if all the children are visited check if se have reached the starting point c is the count of nodes you want and if you leave it or pass 1 to the function the count will be random from 10 to 10000 every vertex has max 100 edges check if there is any non isolated nodes check if all the children are visited check if se have reached the starting point check if there is any non isolated nodes check if all the children are visited check if se have reached the starting point check if there is any non isolated nodes check if all the children are visited check if se have reached the starting point adding vertices and edges adding the weight is optional handles repetition check if the u exists if there already is a edge if u does not exist add the other way if there already is a edge if u does not exist handles if the input does not exist the other way round if no destination is meant the default value is 1 check if there is any non isolated nodes check if all the children are visited check if se have reached the starting point c is the count of nodes you want and if you leave it or pass 1 to the function the count will be random from 10 to 10000 every vertex has max 100 edges check if there is any non isolated nodes check if all the children are visited check if se have reached the starting point check if there is any non isolated nodes check if all the children are visited check if se have reached the starting point | from collections import deque
from math import floor
from random import random
from time import time
# the default weight is 1 if not assigned but all the implementation is weighted
class DirectedGraph:
def __init__(self):
self.graph = {}
# adding vertices and edges
# adding the weight is optional
# handles repetition
def add_pair(self, u, v, w=1):
if self.graph.get(u):
if self.graph[u].count([w, v]) == 0:
self.graph[u].append([w, v])
else:
self.graph[u] = [[w, v]]
if not self.graph.get(v):
self.graph[v] = []
def all_nodes(self):
return list(self.graph)
# handles if the input does not exist
def remove_pair(self, u, v):
if self.graph.get(u):
for _ in self.graph[u]:
if _[1] == v:
self.graph[u].remove(_)
# if no destination is meant the default value is -1
def dfs(self, s=-2, d=-1):
if s == d:
return []
stack = []
visited = []
if s == -2:
s = next(iter(self.graph))
stack.append(s)
visited.append(s)
ss = s
while True:
            # check if the current node has any neighbours
if len(self.graph[s]) != 0:
ss = s
for node in self.graph[s]:
if visited.count(node[1]) < 1:
if node[1] == d:
visited.append(d)
return visited
else:
stack.append(node[1])
visited.append(node[1])
ss = node[1]
break
# check if all the children are visited
if s == ss:
stack.pop()
if len(stack) != 0:
s = stack[len(stack) - 1]
else:
s = ss
            # check if we have reached the starting point
if len(stack) == 0:
return visited
# c is the count of nodes you want and if you leave it or pass -1 to the function
# the count will be random from 10 to 10000
def fill_graph_randomly(self, c=-1):
if c == -1:
c = floor(random() * 10000) + 10
for i in range(c):
            # every vertex gets a random number of outgoing edges (1 to 102)
for _ in range(floor(random() * 102) + 1):
n = floor(random() * c) + 1
if n != i:
self.add_pair(i, n, 1)
def bfs(self, s=-2):
d = deque()
visited = []
if s == -2:
s = next(iter(self.graph))
d.append(s)
visited.append(s)
while d:
s = d.popleft()
if len(self.graph[s]) != 0:
for node in self.graph[s]:
if visited.count(node[1]) < 1:
d.append(node[1])
visited.append(node[1])
return visited
def in_degree(self, u):
count = 0
for x in self.graph:
for y in self.graph[x]:
if y[1] == u:
count += 1
return count
def out_degree(self, u):
return len(self.graph[u])
def topological_sort(self, s=-2):
stack = []
visited = []
if s == -2:
s = next(iter(self.graph))
stack.append(s)
visited.append(s)
ss = s
sorted_nodes = []
while True:
# check if there is any non isolated nodes
if len(self.graph[s]) != 0:
ss = s
for node in self.graph[s]:
if visited.count(node[1]) < 1:
stack.append(node[1])
visited.append(node[1])
ss = node[1]
break
# check if all the children are visited
if s == ss:
sorted_nodes.append(stack.pop())
if len(stack) != 0:
s = stack[len(stack) - 1]
else:
s = ss
            # check if we have reached the starting point
if len(stack) == 0:
return sorted_nodes
def cycle_nodes(self):
stack = []
visited = []
s = next(iter(self.graph))
stack.append(s)
visited.append(s)
parent = -2
indirect_parents = []
ss = s
on_the_way_back = False
anticipating_nodes = set()
while True:
# check if there is any non isolated nodes
if len(self.graph[s]) != 0:
ss = s
for node in self.graph[s]:
if (
visited.count(node[1]) > 0
and node[1] != parent
and indirect_parents.count(node[1]) > 0
and not on_the_way_back
):
len_stack = len(stack) - 1
while len_stack >= 0:
if stack[len_stack] == node[1]:
anticipating_nodes.add(node[1])
break
else:
anticipating_nodes.add(stack[len_stack])
len_stack -= 1
if visited.count(node[1]) < 1:
stack.append(node[1])
visited.append(node[1])
ss = node[1]
break
# check if all the children are visited
if s == ss:
stack.pop()
on_the_way_back = True
if len(stack) != 0:
s = stack[len(stack) - 1]
else:
on_the_way_back = False
indirect_parents.append(parent)
parent = s
s = ss
            # check if we have reached the starting point
if len(stack) == 0:
return list(anticipating_nodes)
def has_cycle(self):
stack = []
visited = []
s = next(iter(self.graph))
stack.append(s)
visited.append(s)
parent = -2
indirect_parents = []
ss = s
on_the_way_back = False
anticipating_nodes = set()
while True:
# check if there is any non isolated nodes
if len(self.graph[s]) != 0:
ss = s
for node in self.graph[s]:
if (
visited.count(node[1]) > 0
and node[1] != parent
and indirect_parents.count(node[1]) > 0
and not on_the_way_back
):
len_stack_minus_one = len(stack) - 1
while len_stack_minus_one >= 0:
if stack[len_stack_minus_one] == node[1]:
anticipating_nodes.add(node[1])
break
else:
return True
if visited.count(node[1]) < 1:
stack.append(node[1])
visited.append(node[1])
ss = node[1]
break
# check if all the children are visited
if s == ss:
stack.pop()
on_the_way_back = True
if len(stack) != 0:
s = stack[len(stack) - 1]
else:
on_the_way_back = False
indirect_parents.append(parent)
parent = s
s = ss
            # check if we have reached the starting point
if len(stack) == 0:
return False
def dfs_time(self, s=-2, e=-1):
begin = time()
self.dfs(s, e)
end = time()
return end - begin
def bfs_time(self, s=-2):
begin = time()
self.bfs(s)
end = time()
return end - begin
class Graph:
def __init__(self):
self.graph = {}
# adding vertices and edges
# adding the weight is optional
# handles repetition
def add_pair(self, u, v, w=1):
# check if the u exists
if self.graph.get(u):
            # if there already is an edge
if self.graph[u].count([w, v]) == 0:
self.graph[u].append([w, v])
else:
# if u does not exist
self.graph[u] = [[w, v]]
# add the other way
if self.graph.get(v):
            # if there already is an edge
if self.graph[v].count([w, u]) == 0:
self.graph[v].append([w, u])
else:
            # if v does not exist
self.graph[v] = [[w, u]]
# handles if the input does not exist
def remove_pair(self, u, v):
if self.graph.get(u):
for _ in self.graph[u]:
if _[1] == v:
self.graph[u].remove(_)
# the other way round
if self.graph.get(v):
for _ in self.graph[v]:
if _[1] == u:
self.graph[v].remove(_)
    # if no destination is given, the default value is -1
def dfs(self, s=-2, d=-1):
if s == d:
return []
stack = []
visited = []
if s == -2:
s = next(iter(self.graph))
stack.append(s)
visited.append(s)
ss = s
while True:
# check if there is any non isolated nodes
if len(self.graph[s]) != 0:
ss = s
for node in self.graph[s]:
if visited.count(node[1]) < 1:
if node[1] == d:
visited.append(d)
return visited
else:
stack.append(node[1])
visited.append(node[1])
ss = node[1]
break
# check if all the children are visited
if s == ss:
stack.pop()
if len(stack) != 0:
s = stack[len(stack) - 1]
else:
s = ss
            # check if we have reached the starting point
if len(stack) == 0:
return visited
    # c is the number of nodes you want; if you omit it or pass -1 to the function,
    # a random count from 10 to about 10000 is used
def fill_graph_randomly(self, c=-1):
if c == -1:
c = floor(random() * 10000) + 10
for i in range(c):
            # every vertex gets up to 102 randomly chosen edges
for _ in range(floor(random() * 102) + 1):
n = floor(random() * c) + 1
if n != i:
self.add_pair(i, n, 1)
def bfs(self, s=-2):
d = deque()
visited = []
if s == -2:
s = next(iter(self.graph))
d.append(s)
visited.append(s)
while d:
s = d.popleft()
if len(self.graph[s]) != 0:
for node in self.graph[s]:
if visited.count(node[1]) < 1:
d.append(node[1])
visited.append(node[1])
return visited
def degree(self, u):
return len(self.graph[u])
def cycle_nodes(self):
stack = []
visited = []
s = next(iter(self.graph))
stack.append(s)
visited.append(s)
parent = -2
indirect_parents = []
ss = s
on_the_way_back = False
anticipating_nodes = set()
while True:
# check if there is any non isolated nodes
if len(self.graph[s]) != 0:
ss = s
for node in self.graph[s]:
if (
visited.count(node[1]) > 0
and node[1] != parent
and indirect_parents.count(node[1]) > 0
and not on_the_way_back
):
len_stack = len(stack) - 1
while len_stack >= 0:
if stack[len_stack] == node[1]:
anticipating_nodes.add(node[1])
break
else:
anticipating_nodes.add(stack[len_stack])
len_stack -= 1
if visited.count(node[1]) < 1:
stack.append(node[1])
visited.append(node[1])
ss = node[1]
break
# check if all the children are visited
if s == ss:
stack.pop()
on_the_way_back = True
if len(stack) != 0:
s = stack[len(stack) - 1]
else:
on_the_way_back = False
indirect_parents.append(parent)
parent = s
s = ss
            # check if we have reached the starting point
if len(stack) == 0:
return list(anticipating_nodes)
def has_cycle(self):
stack = []
visited = []
s = next(iter(self.graph))
stack.append(s)
visited.append(s)
parent = -2
indirect_parents = []
ss = s
on_the_way_back = False
anticipating_nodes = set()
while True:
# check if there is any non isolated nodes
if len(self.graph[s]) != 0:
ss = s
for node in self.graph[s]:
if (
visited.count(node[1]) > 0
and node[1] != parent
and indirect_parents.count(node[1]) > 0
and not on_the_way_back
):
len_stack_minus_one = len(stack) - 1
while len_stack_minus_one >= 0:
if stack[len_stack_minus_one] == node[1]:
anticipating_nodes.add(node[1])
break
else:
return True
if visited.count(node[1]) < 1:
stack.append(node[1])
visited.append(node[1])
ss = node[1]
break
# check if all the children are visited
if s == ss:
stack.pop()
on_the_way_back = True
if len(stack) != 0:
s = stack[len(stack) - 1]
else:
on_the_way_back = False
indirect_parents.append(parent)
parent = s
s = ss
            # check if we have reached the starting point
if len(stack) == 0:
return False
def all_nodes(self):
return list(self.graph)
def dfs_time(self, s=-2, e=-1):
begin = time()
self.dfs(s, e)
end = time()
return end - begin
def bfs_time(self, s=-2):
begin = time()
self.bfs(s)
end = time()
return end - begin
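# A minimal usage sketch (assumed example, not part of the original module): build a
# small DirectedGraph and run the traversals defined above.
if __name__ == "__main__":
    d_graph = DirectedGraph()
    d_graph.add_pair(0, 1)
    d_graph.add_pair(0, 2)
    d_graph.add_pair(1, 3)
    print(d_graph.all_nodes())  # [0, 1, 2, 3]
    print(d_graph.dfs(0))  # [0, 1, 3, 2]
    print(d_graph.bfs(0))  # [0, 1, 2, 3]
    print(d_graph.in_degree(3))  # 1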
|
make only one source and one sink make fake vertex if there are more than one source or sink it's just a reference, so you shouldn't change it in your algorithms, use deep copy before doing that You should override it use this to save your result push some substance to graph Relabeltofront selection rule move through list if it was relabeled, swap elements and start from 0 index if it's neighbour and current vertex is higher graph 0, 0, 4, 6, 0, 0, 0, 0, 5, 2, 0, 0, 0, 0, 0, 0, 4, 4, 0, 0, 0, 0, 6, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, prepare our network set algorithm and calculate | class FlowNetwork:
def __init__(self, graph, sources, sinks):
self.source_index = None
self.sink_index = None
self.graph = graph
self._normalize_graph(sources, sinks)
self.vertices_count = len(graph)
self.maximum_flow_algorithm = None
# make only one source and one sink
def _normalize_graph(self, sources, sinks):
        if isinstance(sources, int):
            sources = [sources]
        if isinstance(sinks, int):
            sinks = [sinks]
if len(sources) == 0 or len(sinks) == 0:
return
self.source_index = sources[0]
self.sink_index = sinks[0]
# make fake vertex if there are more
# than one source or sink
if len(sources) > 1 or len(sinks) > 1:
max_input_flow = 0
for i in sources:
max_input_flow += sum(self.graph[i])
size = len(self.graph) + 1
for room in self.graph:
room.insert(0, 0)
self.graph.insert(0, [0] * size)
for i in sources:
self.graph[0][i + 1] = max_input_flow
self.source_index = 0
size = len(self.graph) + 1
for room in self.graph:
room.append(0)
self.graph.append([0] * size)
for i in sinks:
self.graph[i + 1][size - 1] = max_input_flow
self.sink_index = size - 1
def find_maximum_flow(self):
if self.maximum_flow_algorithm is None:
raise Exception("You need to set maximum flow algorithm before.")
if self.source_index is None or self.sink_index is None:
return 0
self.maximum_flow_algorithm.execute()
        return self.maximum_flow_algorithm.get_maximum_flow()
def set_maximum_flow_algorithm(self, algorithm):
self.maximum_flow_algorithm = algorithm(self)
class FlowNetworkAlgorithmExecutor:
def __init__(self, flow_network):
self.flow_network = flow_network
        self.verticies_count = flow_network.vertices_count
        self.source_index = flow_network.source_index
        self.sink_index = flow_network.sink_index
# it's just a reference, so you shouldn't change
# it in your algorithms, use deep copy before doing that
self.graph = flow_network.graph
self.executed = False
def execute(self):
if not self.executed:
self._algorithm()
self.executed = True
# You should override it
def _algorithm(self):
pass
class MaximumFlowAlgorithmExecutor(FlowNetworkAlgorithmExecutor):
def __init__(self, flow_network):
super().__init__(flow_network)
# use this to save your result
self.maximum_flow = -1
def get_maximum_flow(self):
if not self.executed:
raise Exception("You should execute algorithm before using its result!")
return self.maximum_flow
class PushRelabelExecutor(MaximumFlowAlgorithmExecutor):
def __init__(self, flow_network):
super().__init__(flow_network)
self.preflow = [[0] * self.verticies_count for i in range(self.verticies_count)]
self.heights = [0] * self.verticies_count
self.excesses = [0] * self.verticies_count
def _algorithm(self):
self.heights[self.source_index] = self.verticies_count
# push some substance to graph
for nextvertex_index, bandwidth in enumerate(self.graph[self.source_index]):
self.preflow[self.source_index][nextvertex_index] += bandwidth
self.preflow[nextvertex_index][self.source_index] -= bandwidth
self.excesses[nextvertex_index] += bandwidth
# Relabel-to-front selection rule
vertices_list = [
i
for i in range(self.verticies_count)
if i not in {self.source_index, self.sink_index}
]
# move through list
i = 0
while i < len(vertices_list):
vertex_index = vertices_list[i]
previous_height = self.heights[vertex_index]
self.process_vertex(vertex_index)
if self.heights[vertex_index] > previous_height:
# if it was relabeled, swap elements
# and start from 0 index
vertices_list.insert(0, vertices_list.pop(i))
i = 0
else:
i += 1
self.maximum_flow = sum(self.preflow[self.source_index])
def process_vertex(self, vertex_index):
while self.excesses[vertex_index] > 0:
for neighbour_index in range(self.verticies_count):
# if it's neighbour and current vertex is higher
if (
self.graph[vertex_index][neighbour_index]
- self.preflow[vertex_index][neighbour_index]
> 0
and self.heights[vertex_index] > self.heights[neighbour_index]
):
self.push(vertex_index, neighbour_index)
self.relabel(vertex_index)
def push(self, from_index, to_index):
preflow_delta = min(
self.excesses[from_index],
self.graph[from_index][to_index] - self.preflow[from_index][to_index],
)
self.preflow[from_index][to_index] += preflow_delta
self.preflow[to_index][from_index] -= preflow_delta
self.excesses[from_index] -= preflow_delta
self.excesses[to_index] += preflow_delta
def relabel(self, vertex_index):
min_height = None
for to_index in range(self.verticies_count):
if (
self.graph[vertex_index][to_index]
- self.preflow[vertex_index][to_index]
> 0
) and (min_height is None or self.heights[to_index] < min_height):
min_height = self.heights[to_index]
if min_height is not None:
self.heights[vertex_index] = min_height + 1
if __name__ == "__main__":
entrances = [0]
exits = [3]
# graph = [
# [0, 0, 4, 6, 0, 0],
# [0, 0, 5, 2, 0, 0],
# [0, 0, 0, 0, 4, 4],
# [0, 0, 0, 0, 6, 6],
# [0, 0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0, 0],
# ]
graph = [[0, 7, 0, 0], [0, 0, 6, 0], [0, 0, 0, 8], [9, 0, 0, 0]]
# prepare our network
flow_network = FlowNetwork(graph, entrances, exits)
# set algorithm
flow_network.set_maximum_flow_algorithm(PushRelabelExecutor)
# and calculate
maximum_flow = flow_network.find_maximum_flow()
print(f"maximum flow is {maximum_flow}")
|
Eulerian Path is a path in graph that visits every edge exactly once. Eulerian Circuit is an Eulerian Path which starts and ends on the same vertex. time complexity is OVE space complexity is OVE using dfs for finding eulerian path traversal for checking in graph has euler path or circuit all degree is zero | # Eulerian Path is a path in graph that visits every edge exactly once.
# Eulerian Circuit is an Eulerian Path which starts and ends on the same
# vertex.
# time complexity is O(V+E)
# space complexity is O(V^2) for the visited_edge matrix
# using dfs for finding eulerian path traversal
def dfs(u, graph, visited_edge, path=None):
path = (path or []) + [u]
for v in graph[u]:
if visited_edge[u][v] is False:
visited_edge[u][v], visited_edge[v][u] = True, True
path = dfs(v, graph, visited_edge, path)
return path
# for checking if the graph has an Euler path or circuit
def check_circuit_or_path(graph, max_node):
odd_degree_nodes = 0
odd_node = -1
for i in range(max_node):
if i not in graph:
continue
if len(graph[i]) % 2 == 1:
odd_degree_nodes += 1
odd_node = i
if odd_degree_nodes == 0:
return 1, odd_node
if odd_degree_nodes == 2:
return 2, odd_node
return 3, odd_node
def check_euler(graph, max_node):
visited_edge = [[False for _ in range(max_node + 1)] for _ in range(max_node + 1)]
check, odd_node = check_circuit_or_path(graph, max_node)
if check == 3:
print("graph is not Eulerian")
print("no path")
return
start_node = 1
if check == 2:
start_node = odd_node
print("graph has a Euler path")
if check == 1:
print("graph has a Euler cycle")
path = dfs(start_node, graph, visited_edge)
print(path)
def main():
g1 = {1: [2, 3, 4], 2: [1, 3], 3: [1, 2], 4: [1, 5], 5: [4]}
g2 = {1: [2, 3, 4, 5], 2: [1, 3], 3: [1, 2], 4: [1, 5], 5: [1, 4]}
g3 = {1: [2, 3, 4], 2: [1, 3, 4], 3: [1, 2], 4: [1, 2, 5], 5: [4]}
g4 = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
g5 = {
1: [],
2: []
        # all degrees are zero
}
max_node = 10
check_euler(g1, max_node)
check_euler(g2, max_node)
check_euler(g3, max_node)
check_euler(g4, max_node)
check_euler(g5, max_node)
if __name__ == "__main__":
main()
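    # A small extra check (assumed example, not in the original script): a 4-cycle has
    # only even-degree vertices, so check_circuit_or_path should report an Euler
    # cycle, i.e. return (1, -1).
    square = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]}
    print(check_circuit_or_path(square, 5))  # expected: (1, -1)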
|
You are given a treea simple connected graph with no cycles. The tree has N nodes numbered from 1 to N and is rooted at node 1. Find the maximum number of edges you can remove from the tree to get a forest such that each connected component of the forest contains an even number of nodes. Constraints 2 2 100 Note: The tree input will be such that it can always be decomposed into components containing an even number of nodes. pylint: disableinvalidname DFS traversal pylint: disableredefinedoutername ret 1 visitedstart True for v in treestart: if v not in visited: ret dfsv if ret 2 0: cuts.appendstart return ret def eventree: dfs1 if name main: n, m 10, 9 tree defaultdictlist visited: dictint, bool cuts: listint count 0 edges 2, 1, 3, 1, 4, 3, 5, 2, 6, 1, 7, 2, 8, 6, 9, 8, 10, 8 for u, v in edges: treeu.appendv treev.appendu eventree printlencuts 1 | # pylint: disable=invalid-name
from collections import defaultdict
def dfs(start: int) -> int:
"""DFS traversal"""
# pylint: disable=redefined-outer-name
ret = 1
visited[start] = True
for v in tree[start]:
if v not in visited:
ret += dfs(v)
if ret % 2 == 0:
cuts.append(start)
return ret
def even_tree():
"""
2 1
3 1
4 3
5 2
6 1
7 2
8 6
9 8
10 8
On removing edges (1,3) and (1,6), we can get the desired result 2.
"""
dfs(1)
if __name__ == "__main__":
n, m = 10, 9
tree = defaultdict(list)
visited: dict[int, bool] = {}
cuts: list[int] = []
count = 0
edges = [(2, 1), (3, 1), (4, 3), (5, 2), (6, 1), (7, 2), (8, 6), (9, 8), (10, 8)]
for u, v in edges:
tree[u].append(v)
tree[v].append(u)
even_tree()
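    # Every vertex recorded in `cuts` roots an even-sized subtree; the root of the
    # whole tree is always among them but has no parent edge to remove, hence the -1.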
print(len(cuts) - 1)
|
An edge is a bridge if, after removing it count of connected components in graph will be increased by one. Bridges represent vulnerabilities in a connected network and are useful for designing reliable networks. For example, in a wired computer network, an articulation point indicates the critical computers and a bridge indicates the critical wires or connections. For more details, refer this article: https:www.geeksforgeeks.orgbridgeinagraph Return the list of undirected graph bridges a1, b1, ..., ak, bk; ai bi computebridgesgetdemograph0 3, 4, 2, 3, 2, 5 computebridgesgetdemograph1 6, 7, 0, 6, 1, 9, 3, 4, 2, 4, 2, 5 computebridgesgetdemograph2 1, 6, 4, 6, 0, 4 computebridgesgetdemograph3 computebridges This edge is a back edge and cannot be a bridge | def __get_demo_graph(index):
return [
{
0: [1, 2],
1: [0, 2],
2: [0, 1, 3, 5],
3: [2, 4],
4: [3],
5: [2, 6, 8],
6: [5, 7],
7: [6, 8],
8: [5, 7],
},
{
0: [6],
1: [9],
2: [4, 5],
3: [4],
4: [2, 3],
5: [2],
6: [0, 7],
7: [6],
8: [],
9: [1],
},
{
0: [4],
1: [6],
2: [],
3: [5, 6, 7],
4: [0, 6],
5: [3, 8, 9],
6: [1, 3, 4, 7],
7: [3, 6, 8, 9],
8: [5, 7],
9: [5, 7],
},
{
0: [1, 3],
1: [0, 2, 4],
2: [1, 3, 4],
3: [0, 2, 4],
4: [1, 2, 3],
},
][index]
def compute_bridges(graph: dict[int, list[int]]) -> list[tuple[int, int]]:
"""
Return the list of undirected graph bridges [(a1, b1), ..., (ak, bk)]; ai <= bi
>>> compute_bridges(__get_demo_graph(0))
[(3, 4), (2, 3), (2, 5)]
>>> compute_bridges(__get_demo_graph(1))
[(6, 7), (0, 6), (1, 9), (3, 4), (2, 4), (2, 5)]
>>> compute_bridges(__get_demo_graph(2))
[(1, 6), (4, 6), (0, 4)]
>>> compute_bridges(__get_demo_graph(3))
[]
>>> compute_bridges({})
[]
"""
id_ = 0
n = len(graph) # No of vertices in graph
low = [0] * n
visited = [False] * n
def dfs(at, parent, bridges, id_):
visited[at] = True
low[at] = id_
id_ += 1
for to in graph[at]:
if to == parent:
pass
elif not visited[to]:
dfs(to, at, bridges, id_)
low[at] = min(low[at], low[to])
if id_ <= low[to]:
bridges.append((at, to) if at < to else (to, at))
else:
# This edge is a back edge and cannot be a bridge
low[at] = min(low[at], low[to])
bridges: list[tuple[int, int]] = []
for i in range(n):
if not visited[i]:
dfs(i, -1, bridges, id_)
return bridges
if __name__ == "__main__":
import doctest
doctest.testmod()
|
FPGraphMiner A Fast Frequent Pattern Mining Algorithm for Network Graphs A novel Frequent Pattern Graph Mining algorithm, FPGraphMiner, that compactly represents a set of network graphs as a Frequent Pattern Graph or FPGraph. This graph can be used to efficiently mine frequent subgraphs including maximal frequent subgraphs and maximum common subgraphs. URL: https:www.researchgate.netpublication235255851 fmt: off fmt: on Return Distinct edges from edge array of multiple graphs sortedgetdistinctedgeedgearray 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h' Return bitcode of distinctedge Returns Frequency Table print'bit',bit bt''.joinbit Store Distinct edge, WTBitcode, Bitcode in descending order Returns nodes format nodesbitcode:edges that represent the bitcode getnodes'ab', 5, '11111', 'ac', 5, '11111', 'df', 5, '11111', ... 'bd', 5, '11111', 'bc', 5, '11111' '11111': 'ab', 'ac', 'df', 'bd', 'bc' Returns cluster format cluster:WTbitcode:nodes with same WT Returns support getsupport5: '11111': 'ab', 'ac', 'df', 'bd', 'bc', ... 4: '11101': 'ef', 'eg', 'de', 'fg', '11011': 'cd', ... 3: '11001': 'ad', '10101': 'dg', ... 2: '10010': 'dh', 'bh', '11000': 'be', '10100': 'gh', ... '10001': 'ce', ... 1: '00100': 'fh', 'eh', '10000': 'hi' 100.0, 80.0, 60.0, 40.0, 20.0 create edge between the nodes creates edge only if the condition satisfies find different DFS walk from given node to Header node find edges of multiple frequent subgraphs returns Edge list for frequent subgraphs Preprocess the edge array preprocess'abe1', 'ace3', 'ade5', 'bce4', 'bde2', 'bee6', 'bhe12', ... 'cde2', 'cee4', 'dee1', 'dfe8', 'dge5', 'dhe10', 'efe3', ... 'ege2', 'fge6', 'ghe6', 'hie3' | # fmt: off
edge_array = [
['ab-e1', 'ac-e3', 'ad-e5', 'bc-e4', 'bd-e2', 'be-e6', 'bh-e12', 'cd-e2', 'ce-e4',
'de-e1', 'df-e8', 'dg-e5', 'dh-e10', 'ef-e3', 'eg-e2', 'fg-e6', 'gh-e6', 'hi-e3'],
['ab-e1', 'ac-e3', 'ad-e5', 'bc-e4', 'bd-e2', 'be-e6', 'cd-e2', 'de-e1', 'df-e8',
'ef-e3', 'eg-e2', 'fg-e6'],
['ab-e1', 'ac-e3', 'bc-e4', 'bd-e2', 'de-e1', 'df-e8', 'dg-e5', 'ef-e3', 'eg-e2',
'eh-e12', 'fg-e6', 'fh-e10', 'gh-e6'],
['ab-e1', 'ac-e3', 'bc-e4', 'bd-e2', 'bh-e12', 'cd-e2', 'df-e8', 'dh-e10'],
['ab-e1', 'ac-e3', 'ad-e5', 'bc-e4', 'bd-e2', 'cd-e2', 'ce-e4', 'de-e1', 'df-e8',
'dg-e5', 'ef-e3', 'eg-e2', 'fg-e6']
]
# fmt: on
def get_distinct_edge(edge_array):
"""
Return Distinct edges from edge array of multiple graphs
>>> sorted(get_distinct_edge(edge_array))
['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']
"""
distinct_edge = set()
for row in edge_array:
for item in row:
distinct_edge.add(item[0])
return list(distinct_edge)
def get_bitcode(edge_array, distinct_edge):
"""
Return bitcode of distinct_edge
"""
bitcode = ["0"] * len(edge_array)
for i, row in enumerate(edge_array):
for item in row:
if distinct_edge in item[0]:
bitcode[i] = "1"
break
return "".join(bitcode)
def get_frequency_table(edge_array):
"""
Returns Frequency Table
"""
distinct_edge = get_distinct_edge(edge_array)
frequency_table = {}
for item in distinct_edge:
bit = get_bitcode(edge_array, item)
# print('bit',bit)
# bt=''.join(bit)
s = bit.count("1")
frequency_table[item] = [s, bit]
# Store [Distinct edge, WT(Bitcode), Bitcode] in descending order
sorted_frequency_table = [
[k, v[0], v[1]]
for k, v in sorted(frequency_table.items(), key=lambda v: v[1][0], reverse=True)
]
return sorted_frequency_table
def get_nodes(frequency_table):
"""
Returns nodes
format nodes={bitcode:edges that represent the bitcode}
>>> get_nodes([['ab', 5, '11111'], ['ac', 5, '11111'], ['df', 5, '11111'],
... ['bd', 5, '11111'], ['bc', 5, '11111']])
{'11111': ['ab', 'ac', 'df', 'bd', 'bc']}
"""
nodes = {}
for _, item in enumerate(frequency_table):
nodes.setdefault(item[2], []).append(item[0])
return nodes
def get_cluster(nodes):
"""
Returns cluster
format cluster:{WT(bitcode):nodes with same WT}
"""
cluster = {}
for key, value in nodes.items():
cluster.setdefault(key.count("1"), {})[key] = value
return cluster
def get_support(cluster):
"""
Returns support
>>> get_support({5: {'11111': ['ab', 'ac', 'df', 'bd', 'bc']},
... 4: {'11101': ['ef', 'eg', 'de', 'fg'], '11011': ['cd']},
... 3: {'11001': ['ad'], '10101': ['dg']},
... 2: {'10010': ['dh', 'bh'], '11000': ['be'], '10100': ['gh'],
... '10001': ['ce']},
... 1: {'00100': ['fh', 'eh'], '10000': ['hi']}})
[100.0, 80.0, 60.0, 40.0, 20.0]
"""
return [i * 100 / len(cluster) for i in cluster]
def print_all() -> None:
print("\nNodes\n")
for key, value in nodes.items():
print(key, value)
print("\nSupport\n")
print(support)
print("\n Cluster \n")
for key, value in sorted(cluster.items(), reverse=True):
print(key, value)
print("\n Graph\n")
for key, value in graph.items():
print(key, value)
print("\n Edge List of Frequent subgraphs \n")
for edge_list in freq_subgraph_edge_list:
print(edge_list)
def create_edge(nodes, graph, cluster, c1):
"""
create edge between the nodes
"""
for i in cluster[c1]:
count = 0
c2 = c1 + 1
while c2 < max(cluster.keys()):
for j in cluster[c2]:
"""
creates edge only if the condition satisfies
"""
if int(i, 2) & int(j, 2) == int(i, 2):
if tuple(nodes[i]) in graph:
graph[tuple(nodes[i])].append(nodes[j])
else:
graph[tuple(nodes[i])] = [nodes[j]]
count += 1
if count == 0:
c2 = c2 + 1
else:
break
def construct_graph(cluster, nodes):
x = cluster[max(cluster.keys())]
cluster[max(cluster.keys()) + 1] = "Header"
graph = {}
for i in x:
if (["Header"],) in graph:
graph[(["Header"],)].append(x[i])
else:
graph[(["Header"],)] = [x[i]]
for i in x:
graph[(x[i],)] = [["Header"]]
i = 1
while i < max(cluster) - 1:
create_edge(nodes, graph, cluster, i)
i = i + 1
return graph
def my_dfs(graph, start, end, path=None):
"""
find different DFS walk from given node to Header node
"""
path = (path or []) + [start]
if start == end:
paths.append(path)
for node in graph[start]:
if tuple(node) not in path:
my_dfs(graph, tuple(node), end, path)
def find_freq_subgraph_given_support(s, cluster, graph):
"""
find edges of multiple frequent subgraphs
"""
k = int(s / 100 * (len(cluster) - 1))
for i in cluster[k]:
my_dfs(graph, tuple(cluster[k][i]), (["Header"],))
def freq_subgraphs_edge_list(paths):
"""
returns Edge list for frequent subgraphs
"""
freq_sub_el = []
for edges in paths:
el = []
for j in range(len(edges) - 1):
temp = list(edges[j])
for e in temp:
edge = (e[0], e[1])
el.append(edge)
freq_sub_el.append(el)
return freq_sub_el
def preprocess(edge_array):
"""
Preprocess the edge array
>>> preprocess([['ab-e1', 'ac-e3', 'ad-e5', 'bc-e4', 'bd-e2', 'be-e6', 'bh-e12',
... 'cd-e2', 'ce-e4', 'de-e1', 'df-e8', 'dg-e5', 'dh-e10', 'ef-e3',
... 'eg-e2', 'fg-e6', 'gh-e6', 'hi-e3']])
"""
for i in range(len(edge_array)):
for j in range(len(edge_array[i])):
t = edge_array[i][j].split("-")
edge_array[i][j] = t
if __name__ == "__main__":
preprocess(edge_array)
frequency_table = get_frequency_table(edge_array)
nodes = get_nodes(frequency_table)
cluster = get_cluster(nodes)
support = get_support(cluster)
graph = construct_graph(cluster, nodes)
find_freq_subgraph_given_support(60, cluster, graph)
paths: list = []
freq_subgraph_edge_list = freq_subgraphs_edge_list(paths)
print_all()
|
Author: Phyllipe Bezerra https:github.compmba | # Author: Phyllipe Bezerra (https://github.com/pmba)
clothes = {
0: "underwear",
1: "pants",
2: "belt",
3: "suit",
4: "shoe",
5: "socks",
6: "shirt",
7: "tie",
8: "watch",
}
graph = [[1, 4], [2, 4], [3], [], [], [4], [2, 7], [3], []]
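# graph[i] lists the garments that must go on after garment i, e.g. underwear (0)
# precedes pants (1) and shoes (4); an empty list means that item constrains nothing.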
visited = [0 for x in range(len(graph))]
stack = []
def print_stack(stack, clothes):
order = 1
while stack:
current_clothing = stack.pop()
print(order, clothes[current_clothing])
order += 1
def depth_first_search(u, visited, graph):
visited[u] = 1
for v in graph[u]:
if not visited[v]:
depth_first_search(v, visited, graph)
stack.append(u)
def topological_sort(graph, visited):
for v in range(len(graph)):
if not visited[v]:
depth_first_search(v, visited, graph)
if __name__ == "__main__":
topological_sort(graph, visited)
print(stack)
print_stack(stack, clothes)
|
Finds the stable match in any bipartite graph, i.e a pairing where no 2 objects prefer each other over their partner. The function accepts the preferences of oegan donors and recipients where both are assigned numbers from 0 to n1 and returns a list where the index position corresponds to the donor and value at the index is the organ recipient. To better understand the algorithm, see also: https:github.comakashvshroffGaleShapleyStableMatching README. https:www.youtube.comwatch?vQcv1IqHWAzgt13s Numberphile YouTube. donorpref 0, 1, 3, 2, 0, 2, 3, 1, 1, 0, 2, 3, 0, 3, 1, 2 recipientpref 3, 1, 2, 0, 3, 1, 0, 2, 0, 3, 1, 2, 1, 0, 3, 2 stablematchingdonorpref, recipientpref 1, 2, 3, 0 | from __future__ import annotations
def stable_matching(
donor_pref: list[list[int]], recipient_pref: list[list[int]]
) -> list[int]:
"""
Finds the stable match in any bipartite graph, i.e a pairing where no 2 objects
prefer each other over their partner. The function accepts the preferences of
    organ donors and recipients (where both are assigned numbers from 0 to n-1) and
returns a list where the index position corresponds to the donor and value at the
index is the organ recipient.
To better understand the algorithm, see also:
https://github.com/akashvshroff/Gale_Shapley_Stable_Matching (README).
https://www.youtube.com/watch?v=Qcv1IqHWAzg&t=13s (Numberphile YouTube).
>>> donor_pref = [[0, 1, 3, 2], [0, 2, 3, 1], [1, 0, 2, 3], [0, 3, 1, 2]]
>>> recipient_pref = [[3, 1, 2, 0], [3, 1, 0, 2], [0, 3, 1, 2], [1, 0, 3, 2]]
>>> stable_matching(donor_pref, recipient_pref)
[1, 2, 3, 0]
"""
assert len(donor_pref) == len(recipient_pref)
n = len(donor_pref)
unmatched_donors = list(range(n))
donor_record = [-1] * n # who the donor has donated to
rec_record = [-1] * n # who the recipient has received from
num_donations = [0] * n
while unmatched_donors:
donor = unmatched_donors[0]
donor_preference = donor_pref[donor]
recipient = donor_preference[num_donations[donor]]
num_donations[donor] += 1
rec_preference = recipient_pref[recipient]
prev_donor = rec_record[recipient]
if prev_donor != -1:
if rec_preference.index(prev_donor) > rec_preference.index(donor):
rec_record[recipient] = donor
donor_record[donor] = recipient
unmatched_donors.append(prev_donor)
unmatched_donors.remove(donor)
else:
rec_record[recipient] = donor
donor_record[donor] = recipient
unmatched_donors.remove(donor)
return donor_record
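# A minimal runner (assumed, mirroring the doctest convention used elsewhere in this
# repository; not part of the original function) to exercise the example above.
if __name__ == "__main__":
    import doctest

    doctest.testmod()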
|
!usrbinenv python3 Author: Vikram Nithyanandam Description: The following implementation is a robust unweighted Graph data structure implemented using an adjacency list. This vertices and edges of this graph can be effectively initialized and modified while storing your chosen generic value in each vertex. Adjacency List: https:en.wikipedia.orgwikiAdjacencylist Potential Future Ideas: Add a flag to set edge weights on and set edge weights Make edge weights and vertex values customizable to store whatever the client wants Support multigraph functionality if the client wants it Parameters: vertices: listT The list of vertex names the client wants to pass in. Default is empty. edges: listlistT The list of edges the client wants to pass in. Each edge is a 2element list. Default is empty. directed: bool Indicates if graph is directed or undirected. Default is True. Falsey checks Adds a vertex to the graph. If the given vertex already exists, a ValueError will be thrown. Creates an edge from source vertex to destination vertex. If any given vertex doesn't exist or the edge already exists, a ValueError will be thrown. add the destination vertex to the list associated with the source vertex and vice versa if not directed Removes the given vertex from the graph and deletes all incoming and outgoing edges from the given vertex as well. If the given vertex does not exist, a ValueError will be thrown. If not directed, find all neighboring vertices and delete all references of edges connecting to the given vertex If directed, search all neighbors of all vertices and delete all references of edges connecting to the given vertex Finally, delete the given vertex and all of its outgoing edge references Removes the edge between the two vertices. If any given vertex doesn't exist or the edge does not exist, a ValueError will be thrown. remove the destination vertex from the list associated with the source vertex and vice versa if not directed Returns True if the graph contains the vertex, False otherwise. Returns True if the graph contains the edge from the sourcevertex to the destinationvertex, False otherwise. If any given vertex doesn't exist, a ValueError will be thrown. Clears all vertices and edges. generate graph input build graphs test graph initialization with vertices and edges Build graphs WITHOUT edges Test containsvertex build empty graphs run addvertex test addvertex worked build graphs WITHOUT edges test removevertex worked build graphs WITHOUT edges test adding and removing vertices remove all vertices generate graphs and graph input generate all possible edges for testing test containsedge function since this edge exists for undirected but the reverse may not exist for directed generate graph input build graphs WITHOUT edges run and test addedge generate graph input and graphs run and test removeedge make some more edge options! | #!/usr/bin/env python3
"""
Author: Vikram Nithyanandam
Description:
The following implementation is a robust unweighted Graph data structure
implemented using an adjacency list. The vertices and edges of this graph can be
effectively initialized and modified while storing your chosen generic
value in each vertex.
Adjacency List: https://en.wikipedia.org/wiki/Adjacency_list
Potential Future Ideas:
- Add a flag to set edge weights on and set edge weights
- Make edge weights and vertex values customizable to store whatever the client wants
- Support multigraph functionality if the client wants it
"""
from __future__ import annotations
import random
import unittest
from pprint import pformat
from typing import Generic, TypeVar
import pytest
T = TypeVar("T")
class GraphAdjacencyList(Generic[T]):
def __init__(
self, vertices: list[T], edges: list[list[T]], directed: bool = True
) -> None:
"""
Parameters:
- vertices: (list[T]) The list of vertex names the client wants to
pass in. Default is empty.
- edges: (list[list[T]]) The list of edges the client wants to
pass in. Each edge is a 2-element list. Default is empty.
- directed: (bool) Indicates if graph is directed or undirected.
Default is True.
"""
self.adj_list: dict[T, list[T]] = {} # dictionary of lists of T
self.directed = directed
# Falsey checks
edges = edges or []
vertices = vertices or []
for vertex in vertices:
self.add_vertex(vertex)
for edge in edges:
if len(edge) != 2:
msg = f"Invalid input: {edge} is the wrong length."
raise ValueError(msg)
self.add_edge(edge[0], edge[1])
def add_vertex(self, vertex: T) -> None:
"""
Adds a vertex to the graph. If the given vertex already exists,
a ValueError will be thrown.
"""
if self.contains_vertex(vertex):
msg = f"Incorrect input: {vertex} is already in the graph."
raise ValueError(msg)
self.adj_list[vertex] = []
def add_edge(self, source_vertex: T, destination_vertex: T) -> None:
"""
Creates an edge from source vertex to destination vertex. If any
given vertex doesn't exist or the edge already exists, a ValueError
will be thrown.
"""
if not (
self.contains_vertex(source_vertex)
and self.contains_vertex(destination_vertex)
):
msg = (
f"Incorrect input: Either {source_vertex} or "
f"{destination_vertex} does not exist"
)
raise ValueError(msg)
if self.contains_edge(source_vertex, destination_vertex):
msg = (
"Incorrect input: The edge already exists between "
f"{source_vertex} and {destination_vertex}"
)
raise ValueError(msg)
# add the destination vertex to the list associated with the source vertex
# and vice versa if not directed
self.adj_list[source_vertex].append(destination_vertex)
if not self.directed:
self.adj_list[destination_vertex].append(source_vertex)
def remove_vertex(self, vertex: T) -> None:
"""
Removes the given vertex from the graph and deletes all incoming and
outgoing edges from the given vertex as well. If the given vertex
does not exist, a ValueError will be thrown.
"""
if not self.contains_vertex(vertex):
msg = f"Incorrect input: {vertex} does not exist in this graph."
raise ValueError(msg)
if not self.directed:
# If not directed, find all neighboring vertices and delete all references
# of edges connecting to the given vertex
for neighbor in self.adj_list[vertex]:
self.adj_list[neighbor].remove(vertex)
else:
# If directed, search all neighbors of all vertices and delete all
# references of edges connecting to the given vertex
for edge_list in self.adj_list.values():
if vertex in edge_list:
edge_list.remove(vertex)
# Finally, delete the given vertex and all of its outgoing edge references
self.adj_list.pop(vertex)
def remove_edge(self, source_vertex: T, destination_vertex: T) -> None:
"""
Removes the edge between the two vertices. If any given vertex
doesn't exist or the edge does not exist, a ValueError will be thrown.
"""
if not (
self.contains_vertex(source_vertex)
and self.contains_vertex(destination_vertex)
):
msg = (
f"Incorrect input: Either {source_vertex} or "
f"{destination_vertex} does not exist"
)
raise ValueError(msg)
if not self.contains_edge(source_vertex, destination_vertex):
msg = (
"Incorrect input: The edge does NOT exist between "
f"{source_vertex} and {destination_vertex}"
)
raise ValueError(msg)
# remove the destination vertex from the list associated with the source
# vertex and vice versa if not directed
self.adj_list[source_vertex].remove(destination_vertex)
if not self.directed:
self.adj_list[destination_vertex].remove(source_vertex)
def contains_vertex(self, vertex: T) -> bool:
"""
Returns True if the graph contains the vertex, False otherwise.
"""
return vertex in self.adj_list
def contains_edge(self, source_vertex: T, destination_vertex: T) -> bool:
"""
Returns True if the graph contains the edge from the source_vertex to the
destination_vertex, False otherwise. If any given vertex doesn't exist, a
ValueError will be thrown.
"""
if not (
self.contains_vertex(source_vertex)
and self.contains_vertex(destination_vertex)
):
msg = (
f"Incorrect input: Either {source_vertex} "
f"or {destination_vertex} does not exist."
)
raise ValueError(msg)
return destination_vertex in self.adj_list[source_vertex]
def clear_graph(self) -> None:
"""
Clears all vertices and edges.
"""
self.adj_list = {}
def __repr__(self) -> str:
return pformat(self.adj_list)
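# A minimal usage sketch (assumed example, separate from the unit tests below): build
# a small undirected graph and exercise the basic operations defined above.
def _example_usage() -> None:
    graph = GraphAdjacencyList(
        vertices=["a", "b", "c"], edges=[["a", "b"]], directed=False
    )
    graph.add_edge("b", "c")
    assert graph.contains_edge("c", "b")  # undirected, so the reverse edge exists too
    graph.remove_vertex("a")  # also deletes the a-b edge
    assert not graph.contains_vertex("a")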
class TestGraphAdjacencyList(unittest.TestCase):
def __assert_graph_edge_exists_check(
self,
undirected_graph: GraphAdjacencyList,
directed_graph: GraphAdjacencyList,
edge: list[int],
) -> None:
assert undirected_graph.contains_edge(edge[0], edge[1])
assert undirected_graph.contains_edge(edge[1], edge[0])
assert directed_graph.contains_edge(edge[0], edge[1])
def __assert_graph_edge_does_not_exist_check(
self,
undirected_graph: GraphAdjacencyList,
directed_graph: GraphAdjacencyList,
edge: list[int],
) -> None:
assert not undirected_graph.contains_edge(edge[0], edge[1])
assert not undirected_graph.contains_edge(edge[1], edge[0])
assert not directed_graph.contains_edge(edge[0], edge[1])
def __assert_graph_vertex_exists_check(
self,
undirected_graph: GraphAdjacencyList,
directed_graph: GraphAdjacencyList,
vertex: int,
) -> None:
assert undirected_graph.contains_vertex(vertex)
assert directed_graph.contains_vertex(vertex)
def __assert_graph_vertex_does_not_exist_check(
self,
undirected_graph: GraphAdjacencyList,
directed_graph: GraphAdjacencyList,
vertex: int,
) -> None:
assert not undirected_graph.contains_vertex(vertex)
assert not directed_graph.contains_vertex(vertex)
def __generate_random_edges(
self, vertices: list[int], edge_pick_count: int
) -> list[list[int]]:
assert edge_pick_count <= len(vertices)
random_source_vertices: list[int] = random.sample(
vertices[0 : int(len(vertices) / 2)], edge_pick_count
)
random_destination_vertices: list[int] = random.sample(
vertices[int(len(vertices) / 2) :], edge_pick_count
)
random_edges: list[list[int]] = []
for source in random_source_vertices:
for dest in random_destination_vertices:
random_edges.append([source, dest])
return random_edges
def __generate_graphs(
self, vertex_count: int, min_val: int, max_val: int, edge_pick_count: int
) -> tuple[GraphAdjacencyList, GraphAdjacencyList, list[int], list[list[int]]]:
if max_val - min_val + 1 < vertex_count:
raise ValueError(
"Will result in duplicate vertices. Either increase range "
"between min_val and max_val or decrease vertex count."
)
# generate graph input
random_vertices: list[int] = random.sample(
range(min_val, max_val + 1), vertex_count
)
random_edges: list[list[int]] = self.__generate_random_edges(
random_vertices, edge_pick_count
)
# build graphs
undirected_graph = GraphAdjacencyList(
vertices=random_vertices, edges=random_edges, directed=False
)
directed_graph = GraphAdjacencyList(
vertices=random_vertices, edges=random_edges, directed=True
)
return undirected_graph, directed_graph, random_vertices, random_edges
def test_init_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
# test graph initialization with vertices and edges
for num in random_vertices:
self.__assert_graph_vertex_exists_check(
undirected_graph, directed_graph, num
)
for edge in random_edges:
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, edge
)
assert not undirected_graph.directed
assert directed_graph.directed
def test_contains_vertex(self) -> None:
random_vertices: list[int] = random.sample(range(101), 20)
# Build graphs WITHOUT edges
undirected_graph = GraphAdjacencyList(
vertices=random_vertices, edges=[], directed=False
)
directed_graph = GraphAdjacencyList(
vertices=random_vertices, edges=[], directed=True
)
# Test contains_vertex
for num in range(101):
assert (num in random_vertices) == undirected_graph.contains_vertex(num)
assert (num in random_vertices) == directed_graph.contains_vertex(num)
def test_add_vertices(self) -> None:
random_vertices: list[int] = random.sample(range(101), 20)
# build empty graphs
undirected_graph: GraphAdjacencyList = GraphAdjacencyList(
vertices=[], edges=[], directed=False
)
directed_graph: GraphAdjacencyList = GraphAdjacencyList(
vertices=[], edges=[], directed=True
)
# run add_vertex
for num in random_vertices:
undirected_graph.add_vertex(num)
for num in random_vertices:
directed_graph.add_vertex(num)
# test add_vertex worked
for num in random_vertices:
self.__assert_graph_vertex_exists_check(
undirected_graph, directed_graph, num
)
def test_remove_vertices(self) -> None:
random_vertices: list[int] = random.sample(range(101), 20)
# build graphs WITHOUT edges
undirected_graph = GraphAdjacencyList(
vertices=random_vertices, edges=[], directed=False
)
directed_graph = GraphAdjacencyList(
vertices=random_vertices, edges=[], directed=True
)
# test remove_vertex worked
for num in random_vertices:
self.__assert_graph_vertex_exists_check(
undirected_graph, directed_graph, num
)
undirected_graph.remove_vertex(num)
directed_graph.remove_vertex(num)
self.__assert_graph_vertex_does_not_exist_check(
undirected_graph, directed_graph, num
)
def test_add_and_remove_vertices_repeatedly(self) -> None:
random_vertices1: list[int] = random.sample(range(51), 20)
random_vertices2: list[int] = random.sample(range(51, 101), 20)
# build graphs WITHOUT edges
undirected_graph = GraphAdjacencyList(
vertices=random_vertices1, edges=[], directed=False
)
directed_graph = GraphAdjacencyList(
vertices=random_vertices1, edges=[], directed=True
)
# test adding and removing vertices
for i, _ in enumerate(random_vertices1):
undirected_graph.add_vertex(random_vertices2[i])
directed_graph.add_vertex(random_vertices2[i])
self.__assert_graph_vertex_exists_check(
undirected_graph, directed_graph, random_vertices2[i]
)
undirected_graph.remove_vertex(random_vertices1[i])
directed_graph.remove_vertex(random_vertices1[i])
self.__assert_graph_vertex_does_not_exist_check(
undirected_graph, directed_graph, random_vertices1[i]
)
# remove all vertices
for i, _ in enumerate(random_vertices1):
undirected_graph.remove_vertex(random_vertices2[i])
directed_graph.remove_vertex(random_vertices2[i])
self.__assert_graph_vertex_does_not_exist_check(
undirected_graph, directed_graph, random_vertices2[i]
)
def test_contains_edge(self) -> None:
# generate graphs and graph input
vertex_count = 20
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(vertex_count, 0, 100, 4)
# generate all possible edges for testing
all_possible_edges: list[list[int]] = []
for i in range(vertex_count - 1):
for j in range(i + 1, vertex_count):
all_possible_edges.append([random_vertices[i], random_vertices[j]])
all_possible_edges.append([random_vertices[j], random_vertices[i]])
# test contains_edge function
for edge in all_possible_edges:
if edge in random_edges:
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, edge
)
elif [edge[1], edge[0]] in random_edges:
# since this edge exists for undirected but the reverse
# may not exist for directed
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, [edge[1], edge[0]]
)
else:
self.__assert_graph_edge_does_not_exist_check(
undirected_graph, directed_graph, edge
)
def test_add_edge(self) -> None:
# generate graph input
random_vertices: list[int] = random.sample(range(101), 15)
random_edges: list[list[int]] = self.__generate_random_edges(random_vertices, 4)
# build graphs WITHOUT edges
undirected_graph = GraphAdjacencyList(
vertices=random_vertices, edges=[], directed=False
)
directed_graph = GraphAdjacencyList(
vertices=random_vertices, edges=[], directed=True
)
# run and test add_edge
for edge in random_edges:
undirected_graph.add_edge(edge[0], edge[1])
directed_graph.add_edge(edge[0], edge[1])
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, edge
)
def test_remove_edge(self) -> None:
# generate graph input and graphs
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
# run and test remove_edge
for edge in random_edges:
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, edge
)
undirected_graph.remove_edge(edge[0], edge[1])
directed_graph.remove_edge(edge[0], edge[1])
self.__assert_graph_edge_does_not_exist_check(
undirected_graph, directed_graph, edge
)
def test_add_and_remove_edges_repeatedly(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
# make some more edge options!
more_random_edges: list[list[int]] = []
while len(more_random_edges) != len(random_edges):
edges: list[list[int]] = self.__generate_random_edges(random_vertices, 4)
for edge in edges:
if len(more_random_edges) == len(random_edges):
break
elif edge not in more_random_edges and edge not in random_edges:
more_random_edges.append(edge)
for i, _ in enumerate(random_edges):
undirected_graph.add_edge(more_random_edges[i][0], more_random_edges[i][1])
directed_graph.add_edge(more_random_edges[i][0], more_random_edges[i][1])
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, more_random_edges[i]
)
undirected_graph.remove_edge(random_edges[i][0], random_edges[i][1])
directed_graph.remove_edge(random_edges[i][0], random_edges[i][1])
self.__assert_graph_edge_does_not_exist_check(
undirected_graph, directed_graph, random_edges[i]
)
def test_add_vertex_exception_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
for vertex in random_vertices:
with pytest.raises(ValueError):
undirected_graph.add_vertex(vertex)
with pytest.raises(ValueError):
directed_graph.add_vertex(vertex)
def test_remove_vertex_exception_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
for i in range(101):
if i not in random_vertices:
with pytest.raises(ValueError):
undirected_graph.remove_vertex(i)
with pytest.raises(ValueError):
directed_graph.remove_vertex(i)
def test_add_edge_exception_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
for edge in random_edges:
with pytest.raises(ValueError):
undirected_graph.add_edge(edge[0], edge[1])
with pytest.raises(ValueError):
directed_graph.add_edge(edge[0], edge[1])
def test_remove_edge_exception_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
more_random_edges: list[list[int]] = []
while len(more_random_edges) != len(random_edges):
edges: list[list[int]] = self.__generate_random_edges(random_vertices, 4)
for edge in edges:
if len(more_random_edges) == len(random_edges):
break
elif edge not in more_random_edges and edge not in random_edges:
more_random_edges.append(edge)
for edge in more_random_edges:
with pytest.raises(ValueError):
undirected_graph.remove_edge(edge[0], edge[1])
with pytest.raises(ValueError):
directed_graph.remove_edge(edge[0], edge[1])
def test_contains_edge_exception_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
for vertex in random_vertices:
with pytest.raises(ValueError):
undirected_graph.contains_edge(vertex, 102)
with pytest.raises(ValueError):
directed_graph.contains_edge(vertex, 102)
with pytest.raises(ValueError):
undirected_graph.contains_edge(103, 102)
with pytest.raises(ValueError):
directed_graph.contains_edge(103, 102)
if __name__ == "__main__":
unittest.main()
|
!usrbinenv python3 Author: Vikram Nithyanandam Description: The following implementation is a robust unweighted Graph data structure implemented using an adjacency matrix. This vertices and edges of this graph can be effectively initialized and modified while storing your chosen generic value in each vertex. Adjacency Matrix: https:mathworld.wolfram.comAdjacencyMatrix.html Potential Future Ideas: Add a flag to set edge weights on and set edge weights Make edge weights and vertex values customizable to store whatever the client wants Support multigraph functionality if the client wants it Parameters: vertices: listT The list of vertex names the client wants to pass in. Default is empty. edges: listlistT The list of edges the client wants to pass in. Each edge is a 2element list. Default is empty. directed: bool Indicates if graph is directed or undirected. Default is True. Falsey checks Creates an edge from source vertex to destination vertex. If any given vertex doesn't exist or the edge already exists, a ValueError will be thrown. Get the indices of the corresponding vertices and set their edge value to 1. Removes the edge between the two vertices. If any given vertex doesn't exist or the edge does not exist, a ValueError will be thrown. Get the indices of the corresponding vertices and set their edge value to 0. Adds a vertex to the graph. If the given vertex already exists, a ValueError will be thrown. build column for vertex build row for vertex and update other data structures Removes the given vertex from the graph and deletes all incoming and outgoing edges from the given vertex as well. If the given vertex does not exist, a ValueError will be thrown. first slide up the rows by deleting the row corresponding to the vertex being deleted. next, slide the columns to the left by deleting the values in the column corresponding to the vertex being deleted final clean up decrement indices for vertices shifted by the deleted vertex in the adj matrix Returns True if the graph contains the vertex, False otherwise. Returns True if the graph contains the edge from the sourcevertex to the destinationvertex, False otherwise. If any given vertex doesn't exist, a ValueError will be thrown. Clears all vertices and edges. generate graph input build graphs test graph initialization with vertices and edges Build graphs WITHOUT edges Test containsvertex build empty graphs run addvertex test addvertex worked build graphs WITHOUT edges test removevertex worked build graphs WITHOUT edges test adding and removing vertices remove all vertices generate graphs and graph input generate all possible edges for testing test containsedge function since this edge exists for undirected but the reverse may not exist for directed generate graph input build graphs WITHOUT edges run and test addedge generate graph input and graphs run and test removeedge make some more edge options! | #!/usr/bin/env python3
"""
Author: Vikram Nithyanandam
Description:
The following implementation is a robust unweighted Graph data structure
implemented using an adjacency matrix. The vertices and edges of this graph can be
effectively initialized and modified while storing your chosen generic
value in each vertex.
Adjacency Matrix: https://mathworld.wolfram.com/AdjacencyMatrix.html
Potential Future Ideas:
- Add a flag to set edge weights on and set edge weights
- Make edge weights and vertex values customizable to store whatever the client wants
- Support multigraph functionality if the client wants it
"""
from __future__ import annotations
import random
import unittest
from pprint import pformat
from typing import Generic, TypeVar
import pytest
T = TypeVar("T")
class GraphAdjacencyMatrix(Generic[T]):
def __init__(
self, vertices: list[T], edges: list[list[T]], directed: bool = True
) -> None:
"""
Parameters:
- vertices: (list[T]) The list of vertex names the client wants to
pass in. Default is empty.
- edges: (list[list[T]]) The list of edges the client wants to
pass in. Each edge is a 2-element list. Default is empty.
- directed: (bool) Indicates if graph is directed or undirected.
Default is True.
"""
self.directed = directed
self.vertex_to_index: dict[T, int] = {}
self.adj_matrix: list[list[int]] = []
# Falsey checks
edges = edges or []
vertices = vertices or []
for vertex in vertices:
self.add_vertex(vertex)
for edge in edges:
if len(edge) != 2:
msg = f"Invalid input: {edge} must have length 2."
raise ValueError(msg)
self.add_edge(edge[0], edge[1])
def add_edge(self, source_vertex: T, destination_vertex: T) -> None:
"""
Creates an edge from source vertex to destination vertex. If any
given vertex doesn't exist or the edge already exists, a ValueError
will be thrown.
"""
if not (
self.contains_vertex(source_vertex)
and self.contains_vertex(destination_vertex)
):
msg = (
f"Incorrect input: Either {source_vertex} or "
f"{destination_vertex} does not exist"
)
raise ValueError(msg)
if self.contains_edge(source_vertex, destination_vertex):
msg = (
"Incorrect input: The edge already exists between "
f"{source_vertex} and {destination_vertex}"
)
raise ValueError(msg)
# Get the indices of the corresponding vertices and set their edge value to 1.
u: int = self.vertex_to_index[source_vertex]
v: int = self.vertex_to_index[destination_vertex]
self.adj_matrix[u][v] = 1
if not self.directed:
self.adj_matrix[v][u] = 1
def remove_edge(self, source_vertex: T, destination_vertex: T) -> None:
"""
Removes the edge between the two vertices. If any given vertex
doesn't exist or the edge does not exist, a ValueError will be thrown.
"""
if not (
self.contains_vertex(source_vertex)
and self.contains_vertex(destination_vertex)
):
msg = (
f"Incorrect input: Either {source_vertex} or "
f"{destination_vertex} does not exist"
)
raise ValueError(msg)
if not self.contains_edge(source_vertex, destination_vertex):
msg = (
"Incorrect input: The edge does NOT exist between "
f"{source_vertex} and {destination_vertex}"
)
raise ValueError(msg)
# Get the indices of the corresponding vertices and set their edge value to 0.
u: int = self.vertex_to_index[source_vertex]
v: int = self.vertex_to_index[destination_vertex]
self.adj_matrix[u][v] = 0
if not self.directed:
self.adj_matrix[v][u] = 0
def add_vertex(self, vertex: T) -> None:
"""
Adds a vertex to the graph. If the given vertex already exists,
a ValueError will be thrown.
"""
if self.contains_vertex(vertex):
msg = f"Incorrect input: {vertex} already exists in this graph."
raise ValueError(msg)
# build column for vertex
for row in self.adj_matrix:
row.append(0)
# build row for vertex and update other data structures
self.adj_matrix.append([0] * (len(self.adj_matrix) + 1))
self.vertex_to_index[vertex] = len(self.adj_matrix) - 1
def remove_vertex(self, vertex: T) -> None:
"""
Removes the given vertex from the graph and deletes all incoming and
outgoing edges from the given vertex as well. If the given vertex
does not exist, a ValueError will be thrown.
"""
if not self.contains_vertex(vertex):
msg = f"Incorrect input: {vertex} does not exist in this graph."
raise ValueError(msg)
# first slide up the rows by deleting the row corresponding to
# the vertex being deleted.
start_index = self.vertex_to_index[vertex]
self.adj_matrix.pop(start_index)
# next, slide the columns to the left by deleting the values in
# the column corresponding to the vertex being deleted
for lst in self.adj_matrix:
lst.pop(start_index)
# final clean up
self.vertex_to_index.pop(vertex)
# decrement indices for vertices shifted by the deleted vertex in the adj matrix
for vertex in self.vertex_to_index:
if self.vertex_to_index[vertex] >= start_index:
self.vertex_to_index[vertex] = self.vertex_to_index[vertex] - 1
def contains_vertex(self, vertex: T) -> bool:
"""
Returns True if the graph contains the vertex, False otherwise.
"""
return vertex in self.vertex_to_index
def contains_edge(self, source_vertex: T, destination_vertex: T) -> bool:
"""
Returns True if the graph contains the edge from the source_vertex to the
destination_vertex, False otherwise. If any given vertex doesn't exist, a
ValueError will be thrown.
"""
if not (
self.contains_vertex(source_vertex)
and self.contains_vertex(destination_vertex)
):
msg = (
f"Incorrect input: Either {source_vertex} "
f"or {destination_vertex} does not exist."
)
raise ValueError(msg)
u = self.vertex_to_index[source_vertex]
v = self.vertex_to_index[destination_vertex]
return self.adj_matrix[u][v] == 1
def clear_graph(self) -> None:
"""
Clears all vertices and edges.
"""
self.vertex_to_index = {}
self.adj_matrix = []
def __repr__(self) -> str:
first = "Adj Matrix:\n" + pformat(self.adj_matrix)
second = "\nVertex to index mapping:\n" + pformat(self.vertex_to_index)
return first + second
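# Hedged usage sketch (not part of the original module): a tiny demonstration of the
# GraphAdjacencyMatrix API exercised by the tests below. The helper name
# _demo_adjacency_matrix is hypothetical.
def _demo_adjacency_matrix() -> None:
    demo = GraphAdjacencyMatrix(vertices=[1, 2, 3], edges=[[1, 2]], directed=False)
    demo.add_edge(2, 3)  # undirected, so entries (2, 3) and (3, 2) both become 1
    assert demo.contains_edge(1, 2)
    assert demo.contains_edge(3, 2)
    demo.remove_vertex(2)  # drops vertex 2's row and column from the matrix
    assert not demo.contains_vertex(2)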
class TestGraphMatrix(unittest.TestCase):
def __assert_graph_edge_exists_check(
self,
undirected_graph: GraphAdjacencyMatrix,
directed_graph: GraphAdjacencyMatrix,
edge: list[int],
) -> None:
assert undirected_graph.contains_edge(edge[0], edge[1])
assert undirected_graph.contains_edge(edge[1], edge[0])
assert directed_graph.contains_edge(edge[0], edge[1])
def __assert_graph_edge_does_not_exist_check(
self,
undirected_graph: GraphAdjacencyMatrix,
directed_graph: GraphAdjacencyMatrix,
edge: list[int],
) -> None:
assert not undirected_graph.contains_edge(edge[0], edge[1])
assert not undirected_graph.contains_edge(edge[1], edge[0])
assert not directed_graph.contains_edge(edge[0], edge[1])
def __assert_graph_vertex_exists_check(
self,
undirected_graph: GraphAdjacencyMatrix,
directed_graph: GraphAdjacencyMatrix,
vertex: int,
) -> None:
assert undirected_graph.contains_vertex(vertex)
assert directed_graph.contains_vertex(vertex)
def __assert_graph_vertex_does_not_exist_check(
self,
undirected_graph: GraphAdjacencyMatrix,
directed_graph: GraphAdjacencyMatrix,
vertex: int,
) -> None:
assert not undirected_graph.contains_vertex(vertex)
assert not directed_graph.contains_vertex(vertex)
def __generate_random_edges(
self, vertices: list[int], edge_pick_count: int
) -> list[list[int]]:
assert edge_pick_count <= len(vertices)
random_source_vertices: list[int] = random.sample(
vertices[0 : int(len(vertices) / 2)], edge_pick_count
)
random_destination_vertices: list[int] = random.sample(
vertices[int(len(vertices) / 2) :], edge_pick_count
)
random_edges: list[list[int]] = []
for source in random_source_vertices:
for dest in random_destination_vertices:
random_edges.append([source, dest])
return random_edges
def __generate_graphs(
self, vertex_count: int, min_val: int, max_val: int, edge_pick_count: int
) -> tuple[GraphAdjacencyMatrix, GraphAdjacencyMatrix, list[int], list[list[int]]]:
if max_val - min_val + 1 < vertex_count:
raise ValueError(
"Will result in duplicate vertices. Either increase "
"range between min_val and max_val or decrease vertex count"
)
# generate graph input
random_vertices: list[int] = random.sample(
range(min_val, max_val + 1), vertex_count
)
random_edges: list[list[int]] = self.__generate_random_edges(
random_vertices, edge_pick_count
)
# build graphs
undirected_graph = GraphAdjacencyMatrix(
vertices=random_vertices, edges=random_edges, directed=False
)
directed_graph = GraphAdjacencyMatrix(
vertices=random_vertices, edges=random_edges, directed=True
)
return undirected_graph, directed_graph, random_vertices, random_edges
def test_init_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
# test graph initialization with vertices and edges
for num in random_vertices:
self.__assert_graph_vertex_exists_check(
undirected_graph, directed_graph, num
)
for edge in random_edges:
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, edge
)
assert not undirected_graph.directed
assert directed_graph.directed
def test_contains_vertex(self) -> None:
random_vertices: list[int] = random.sample(range(101), 20)
# Build graphs WITHOUT edges
undirected_graph = GraphAdjacencyMatrix(
vertices=random_vertices, edges=[], directed=False
)
directed_graph = GraphAdjacencyMatrix(
vertices=random_vertices, edges=[], directed=True
)
# Test contains_vertex
for num in range(101):
assert (num in random_vertices) == undirected_graph.contains_vertex(num)
assert (num in random_vertices) == directed_graph.contains_vertex(num)
def test_add_vertices(self) -> None:
random_vertices: list[int] = random.sample(range(101), 20)
# build empty graphs
undirected_graph: GraphAdjacencyMatrix = GraphAdjacencyMatrix(
vertices=[], edges=[], directed=False
)
directed_graph: GraphAdjacencyMatrix = GraphAdjacencyMatrix(
vertices=[], edges=[], directed=True
)
# run add_vertex
for num in random_vertices:
undirected_graph.add_vertex(num)
for num in random_vertices:
directed_graph.add_vertex(num)
# test add_vertex worked
for num in random_vertices:
self.__assert_graph_vertex_exists_check(
undirected_graph, directed_graph, num
)
def test_remove_vertices(self) -> None:
random_vertices: list[int] = random.sample(range(101), 20)
# build graphs WITHOUT edges
undirected_graph = GraphAdjacencyMatrix(
vertices=random_vertices, edges=[], directed=False
)
directed_graph = GraphAdjacencyMatrix(
vertices=random_vertices, edges=[], directed=True
)
# test remove_vertex worked
for num in random_vertices:
self.__assert_graph_vertex_exists_check(
undirected_graph, directed_graph, num
)
undirected_graph.remove_vertex(num)
directed_graph.remove_vertex(num)
self.__assert_graph_vertex_does_not_exist_check(
undirected_graph, directed_graph, num
)
def test_add_and_remove_vertices_repeatedly(self) -> None:
random_vertices1: list[int] = random.sample(range(51), 20)
random_vertices2: list[int] = random.sample(range(51, 101), 20)
# build graphs WITHOUT edges
undirected_graph = GraphAdjacencyMatrix(
vertices=random_vertices1, edges=[], directed=False
)
directed_graph = GraphAdjacencyMatrix(
vertices=random_vertices1, edges=[], directed=True
)
# test adding and removing vertices
for i, _ in enumerate(random_vertices1):
undirected_graph.add_vertex(random_vertices2[i])
directed_graph.add_vertex(random_vertices2[i])
self.__assert_graph_vertex_exists_check(
undirected_graph, directed_graph, random_vertices2[i]
)
undirected_graph.remove_vertex(random_vertices1[i])
directed_graph.remove_vertex(random_vertices1[i])
self.__assert_graph_vertex_does_not_exist_check(
undirected_graph, directed_graph, random_vertices1[i]
)
# remove all vertices
for i, _ in enumerate(random_vertices1):
undirected_graph.remove_vertex(random_vertices2[i])
directed_graph.remove_vertex(random_vertices2[i])
self.__assert_graph_vertex_does_not_exist_check(
undirected_graph, directed_graph, random_vertices2[i]
)
def test_contains_edge(self) -> None:
# generate graphs and graph input
vertex_count = 20
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(vertex_count, 0, 100, 4)
# generate all possible edges for testing
all_possible_edges: list[list[int]] = []
for i in range(vertex_count - 1):
for j in range(i + 1, vertex_count):
all_possible_edges.append([random_vertices[i], random_vertices[j]])
all_possible_edges.append([random_vertices[j], random_vertices[i]])
# test contains_edge function
for edge in all_possible_edges:
if edge in random_edges:
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, edge
)
elif [edge[1], edge[0]] in random_edges:
# since this edge exists for undirected but the reverse may
# not exist for directed
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, [edge[1], edge[0]]
)
else:
self.__assert_graph_edge_does_not_exist_check(
undirected_graph, directed_graph, edge
)
def test_add_edge(self) -> None:
# generate graph input
random_vertices: list[int] = random.sample(range(101), 15)
random_edges: list[list[int]] = self.__generate_random_edges(random_vertices, 4)
# build graphs WITHOUT edges
undirected_graph = GraphAdjacencyMatrix(
vertices=random_vertices, edges=[], directed=False
)
directed_graph = GraphAdjacencyMatrix(
vertices=random_vertices, edges=[], directed=True
)
# run and test add_edge
for edge in random_edges:
undirected_graph.add_edge(edge[0], edge[1])
directed_graph.add_edge(edge[0], edge[1])
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, edge
)
def test_remove_edge(self) -> None:
# generate graph input and graphs
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
# run and test remove_edge
for edge in random_edges:
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, edge
)
undirected_graph.remove_edge(edge[0], edge[1])
directed_graph.remove_edge(edge[0], edge[1])
self.__assert_graph_edge_does_not_exist_check(
undirected_graph, directed_graph, edge
)
def test_add_and_remove_edges_repeatedly(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
# make some more edge options!
more_random_edges: list[list[int]] = []
while len(more_random_edges) != len(random_edges):
edges: list[list[int]] = self.__generate_random_edges(random_vertices, 4)
for edge in edges:
if len(more_random_edges) == len(random_edges):
break
elif edge not in more_random_edges and edge not in random_edges:
more_random_edges.append(edge)
for i, _ in enumerate(random_edges):
undirected_graph.add_edge(more_random_edges[i][0], more_random_edges[i][1])
directed_graph.add_edge(more_random_edges[i][0], more_random_edges[i][1])
self.__assert_graph_edge_exists_check(
undirected_graph, directed_graph, more_random_edges[i]
)
undirected_graph.remove_edge(random_edges[i][0], random_edges[i][1])
directed_graph.remove_edge(random_edges[i][0], random_edges[i][1])
self.__assert_graph_edge_does_not_exist_check(
undirected_graph, directed_graph, random_edges[i]
)
def test_add_vertex_exception_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
for vertex in random_vertices:
with pytest.raises(ValueError):
undirected_graph.add_vertex(vertex)
with pytest.raises(ValueError):
directed_graph.add_vertex(vertex)
def test_remove_vertex_exception_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
for i in range(101):
if i not in random_vertices:
with pytest.raises(ValueError):
undirected_graph.remove_vertex(i)
with pytest.raises(ValueError):
directed_graph.remove_vertex(i)
def test_add_edge_exception_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
for edge in random_edges:
with pytest.raises(ValueError):
undirected_graph.add_edge(edge[0], edge[1])
with pytest.raises(ValueError):
directed_graph.add_edge(edge[0], edge[1])
def test_remove_edge_exception_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
more_random_edges: list[list[int]] = []
while len(more_random_edges) != len(random_edges):
edges: list[list[int]] = self.__generate_random_edges(random_vertices, 4)
for edge in edges:
if len(more_random_edges) == len(random_edges):
break
elif edge not in more_random_edges and edge not in random_edges:
more_random_edges.append(edge)
for edge in more_random_edges:
with pytest.raises(ValueError):
undirected_graph.remove_edge(edge[0], edge[1])
with pytest.raises(ValueError):
directed_graph.remove_edge(edge[0], edge[1])
def test_contains_edge_exception_check(self) -> None:
(
undirected_graph,
directed_graph,
random_vertices,
random_edges,
) = self.__generate_graphs(20, 0, 100, 4)
for vertex in random_vertices:
with pytest.raises(ValueError):
undirected_graph.contains_edge(vertex, 102)
with pytest.raises(ValueError):
directed_graph.contains_edge(vertex, 102)
with pytest.raises(ValueError):
undirected_graph.contains_edge(103, 102)
with pytest.raises(ValueError):
directed_graph.contains_edge(103, 102)
if __name__ == "__main__":
unittest.main()
|
!usrbinenv python3 Author: OMKAR PATHAK, Nwachukwu Chidiebere Use a Python dictionary to construct the graph. Adjacency List type Graph Data Structure that accounts for directed and undirected Graphs. Initialize graph object indicating whether it's directed or undirected. Directed graph example: dgraph GraphAdjacencyList printdgraph dgraph.addedge0, 1 0: 1, 1: dgraph.addedge1, 2.addedge1, 4.addedge1, 5 0: 1, 1: 2, 4, 5, 2: , 4: , 5: dgraph.addedge2, 0.addedge2, 6.addedge2, 7 0: 1, 1: 2, 4, 5, 2: 0, 6, 7, 4: , 5: , 6: , 7: dgraph 0: 1, 1: 2, 4, 5, 2: 0, 6, 7, 4: , 5: , 6: , 7: printreprdgraph 0: 1, 1: 2, 4, 5, 2: 0, 6, 7, 4: , 5: , 6: , 7: Undirected graph example: ugraph GraphAdjacencyListdirectedFalse ugraph.addedge0, 1 0: 1, 1: 0 ugraph.addedge1, 2.addedge1, 4.addedge1, 5 0: 1, 1: 0, 2, 4, 5, 2: 1, 4: 1, 5: 1 ugraph.addedge2, 0.addedge2, 6.addedge2, 7 0: 1, 2, 1: 0, 2, 4, 5, 2: 1, 0, 6, 7, 4: 1, 5: 1, 6: 2, 7: 2 ugraph.addedge4, 5 0: 1, 2, 1: 0, 2, 4, 5, 2: 1, 0, 6, 7, 4: 1, 5, 5: 1, 4, 6: 2, 7: 2 printugraph 0: 1, 2, 1: 0, 2, 4, 5, 2: 1, 0, 6, 7, 4: 1, 5, 5: 1, 4, 6: 2, 7: 2 printreprugraph 0: 1, 2, 1: 0, 2, 4, 5, 2: 1, 0, 6, 7, 4: 1, 5, 5: 1, 4, 6: 2, 7: 2 chargraph GraphAdjacencyListdirectedFalse chargraph.addedge'a', 'b' 'a': 'b', 'b': 'a' chargraph.addedge'b', 'c'.addedge'b', 'e'.addedge'b', 'f' 'a': 'b', 'b': 'a', 'c', 'e', 'f', 'c': 'b', 'e': 'b', 'f': 'b' chargraph 'a': 'b', 'b': 'a', 'c', 'e', 'f', 'c': 'b', 'e': 'b', 'f': 'b' Parameters: directed: bool Indicates if graph is directed or undirected. Default is True. Connects vertices together. Creates and Edge from source vertex to destination vertex. Vertices will be created if not found in graph if both source vertex and destination vertex are both present in the adjacency list, add destination vertex to source vertex list of adjacent vertices and add source vertex to destination vertex list of adjacent vertices. if only source vertex is present in adjacency list, add destination vertex to source vertex list of adjacent vertices, then create a new vertex with destination vertex as key and assign a list containing the source vertex as it's first adjacent vertex. if only destination vertex is present in adjacency list, add source vertex to destination vertex list of adjacent vertices, then create a new vertex with source vertex as key and assign a list containing the source vertex as it's first adjacent vertex. if both source vertex and destination vertex are not present in adjacency list, create a new vertex with source vertex as key and assign a list containing the destination vertex as it's first adjacent vertex also create a new vertex with destination vertex as key and assign a list containing the source vertex as it's first adjacent vertex. if both source vertex and destination vertex are present in adjacency list, add destination vertex to source vertex list of adjacent vertices. if only source vertex is present in adjacency list, add destination vertex to source vertex list of adjacent vertices and create a new vertex with destination vertex as key, which has no adjacent vertex if only destination vertex is present in adjacency list, create a new vertex with source vertex as key and assign a list containing destination vertex as first adjacent vertex if both source vertex and destination vertex are not present in adjacency list, create a new vertex with source vertex as key and a list containing destination vertex as it's first adjacent vertex. 
Then create a new vertex with destination vertex as key, which has no adjacent vertex | #!/usr/bin/env python3
# Author: OMKAR PATHAK, Nwachukwu Chidiebere
# Use a Python dictionary to construct the graph.
from __future__ import annotations
from pprint import pformat
from typing import Generic, TypeVar
T = TypeVar("T")
class GraphAdjacencyList(Generic[T]):
"""
Adjacency List type Graph Data Structure that accounts for directed and undirected
Graphs. Initialize graph object indicating whether it's directed or undirected.
Directed graph example:
>>> d_graph = GraphAdjacencyList()
>>> print(d_graph)
{}
>>> d_graph.add_edge(0, 1)
{0: [1], 1: []}
>>> d_graph.add_edge(1, 2).add_edge(1, 4).add_edge(1, 5)
{0: [1], 1: [2, 4, 5], 2: [], 4: [], 5: []}
>>> d_graph.add_edge(2, 0).add_edge(2, 6).add_edge(2, 7)
{0: [1], 1: [2, 4, 5], 2: [0, 6, 7], 4: [], 5: [], 6: [], 7: []}
>>> d_graph
{0: [1], 1: [2, 4, 5], 2: [0, 6, 7], 4: [], 5: [], 6: [], 7: []}
>>> print(repr(d_graph))
{0: [1], 1: [2, 4, 5], 2: [0, 6, 7], 4: [], 5: [], 6: [], 7: []}
Undirected graph example:
>>> u_graph = GraphAdjacencyList(directed=False)
>>> u_graph.add_edge(0, 1)
{0: [1], 1: [0]}
>>> u_graph.add_edge(1, 2).add_edge(1, 4).add_edge(1, 5)
{0: [1], 1: [0, 2, 4, 5], 2: [1], 4: [1], 5: [1]}
>>> u_graph.add_edge(2, 0).add_edge(2, 6).add_edge(2, 7)
{0: [1, 2], 1: [0, 2, 4, 5], 2: [1, 0, 6, 7], 4: [1], 5: [1], 6: [2], 7: [2]}
>>> u_graph.add_edge(4, 5)
{0: [1, 2],
1: [0, 2, 4, 5],
2: [1, 0, 6, 7],
4: [1, 5],
5: [1, 4],
6: [2],
7: [2]}
>>> print(u_graph)
{0: [1, 2],
1: [0, 2, 4, 5],
2: [1, 0, 6, 7],
4: [1, 5],
5: [1, 4],
6: [2],
7: [2]}
>>> print(repr(u_graph))
{0: [1, 2],
1: [0, 2, 4, 5],
2: [1, 0, 6, 7],
4: [1, 5],
5: [1, 4],
6: [2],
7: [2]}
>>> char_graph = GraphAdjacencyList(directed=False)
>>> char_graph.add_edge('a', 'b')
{'a': ['b'], 'b': ['a']}
>>> char_graph.add_edge('b', 'c').add_edge('b', 'e').add_edge('b', 'f')
{'a': ['b'], 'b': ['a', 'c', 'e', 'f'], 'c': ['b'], 'e': ['b'], 'f': ['b']}
>>> char_graph
{'a': ['b'], 'b': ['a', 'c', 'e', 'f'], 'c': ['b'], 'e': ['b'], 'f': ['b']}
"""
def __init__(self, directed: bool = True) -> None:
"""
Parameters:
directed: (bool) Indicates if graph is directed or undirected. Default is True.
"""
self.adj_list: dict[T, list[T]] = {} # dictionary of lists
self.directed = directed
def add_edge(
self, source_vertex: T, destination_vertex: T
) -> GraphAdjacencyList[T]:
"""
Connects vertices together. Creates an edge from source vertex to destination
vertex.
Vertices will be created if not found in graph
"""
if not self.directed: # For undirected graphs
# if both source vertex and destination vertex are both present in the
# adjacency list, add destination vertex to source vertex list of adjacent
# vertices and add source vertex to destination vertex list of adjacent
# vertices.
if source_vertex in self.adj_list and destination_vertex in self.adj_list:
self.adj_list[source_vertex].append(destination_vertex)
self.adj_list[destination_vertex].append(source_vertex)
# if only source vertex is present in adjacency list, add destination vertex
# to source vertex list of adjacent vertices, then create a new vertex with
# destination vertex as key and assign a list containing the source vertex
# as its first adjacent vertex.
elif source_vertex in self.adj_list:
self.adj_list[source_vertex].append(destination_vertex)
self.adj_list[destination_vertex] = [source_vertex]
# if only destination vertex is present in adjacency list, add source vertex
# to destination vertex list of adjacent vertices, then create a new vertex
# with source vertex as key and assign a list containing the destination vertex
# as its first adjacent vertex.
elif destination_vertex in self.adj_list:
self.adj_list[destination_vertex].append(source_vertex)
self.adj_list[source_vertex] = [destination_vertex]
# if both source vertex and destination vertex are not present in adjacency
# list, create a new vertex with source vertex as key and assign a list
# containing the destination vertex as its first adjacent vertex; also
# create a new vertex with destination vertex as key and assign a list
# containing the source vertex as its first adjacent vertex.
else:
self.adj_list[source_vertex] = [destination_vertex]
self.adj_list[destination_vertex] = [source_vertex]
else: # For directed graphs
# if both source vertex and destination vertex are present in adjacency
# list, add destination vertex to source vertex list of adjacent vertices.
if source_vertex in self.adj_list and destination_vertex in self.adj_list:
self.adj_list[source_vertex].append(destination_vertex)
# if only source vertex is present in adjacency list, add destination
# vertex to source vertex list of adjacent vertices and create a new vertex
# with destination vertex as key, which has no adjacent vertex
elif source_vertex in self.adj_list:
self.adj_list[source_vertex].append(destination_vertex)
self.adj_list[destination_vertex] = []
# if only destination vertex is present in adjacency list, create a new
# vertex with source vertex as key and assign a list containing destination
# vertex as first adjacent vertex
elif destination_vertex in self.adj_list:
self.adj_list[source_vertex] = [destination_vertex]
# if both source vertex and destination vertex are not present in adjacency
# list, create a new vertex with source vertex as key and a list containing
# destination vertex as its first adjacent vertex. Then create a new vertex
# with destination vertex as key, which has no adjacent vertex
else:
self.adj_list[source_vertex] = [destination_vertex]
self.adj_list[destination_vertex] = []
return self
def __repr__(self) -> str:
return pformat(self.adj_list)
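# Hedged usage sketch (not part of the original module): a small breadth-first
# traversal over the adjacency list maintained by GraphAdjacencyList. The helper
# name bfs_order is hypothetical.
from collections import deque

def bfs_order(graph: GraphAdjacencyList[T], start: T) -> list[T]:
    """
    Return the vertices reachable from start, in breadth-first order.
    >>> g = GraphAdjacencyList(directed=False)
    >>> _ = g.add_edge(0, 1).add_edge(1, 2).add_edge(2, 3)
    >>> bfs_order(g, 0)
    [0, 1, 2, 3]
    """
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        vertex = queue.popleft()
        order.append(vertex)
        for neighbor in graph.adj_list[vertex]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order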
|
floydwarshall.py The problem is to find the shortest distance between all pairs of vertices in a weighted directed graph that can have negative edge weights. :param graph: 2D array calculated from weightedgei, j :type graph: ListListfloat :param v: number of vertices :type v: int :return: shortest distance between all vertex pairs distanceuv will contain the shortest distance from vertex u to v. 1. For all edges from v to n, distanceij weightedgei, j. 3. The algorithm then performs distanceij mindistanceij, distanceik distancekj for each possible pair i, j of vertices. 4. The above is repeated for each vertex k in the graph. 5. Whenever distanceij is given a new minimum value, next vertexij is updated to the next vertexik. check vertex k against all other vertices i, j looping through rows of graph array looping through columns of graph array src and dst are indices that must be within the array size graphev failure to follow this will result in an error Example Input Enter number of vertices: 3 Enter number of edges: 2 generated graph from vertex and edge inputs inf, inf, inf, inf, inf, inf, inf, inf, inf 0.0, inf, inf, inf, 0.0, inf, inf, inf, 0.0 specify source, destination and weight for edge 1 Edge 1 Enter source:1 Enter destination:2 Enter weight:2 specify source, destination and weight for edge 2 Edge 2 Enter source:2 Enter destination:1 Enter weight:1 Expected Output from the vertice, edge and src, dst, weight inputs!! 0 INF INF INF 0 2 INF 1 0 | # floyd_warshall.py
"""
The problem is to find the shortest distance between all pairs of vertices in a
weighted directed graph that can have negative edge weights.
"""
def _print_dist(dist, v):
print("\nThe shortest path matrix using Floyd Warshall algorithm\n")
for i in range(v):
for j in range(v):
if dist[i][j] != float("inf"):
print(int(dist[i][j]), end="\t")
else:
print("INF", end="\t")
print()
def floyd_warshall(graph, v):
"""
:param graph: 2D array calculated from weight[edge[i, j]]
:type graph: List[List[float]]
:param v: number of vertices
:type v: int
:return: shortest distance between all vertex pairs
distance[u][v] will contain the shortest distance from vertex u to v.
1. For every pair of vertices (i, j), initialize distance[i][j] = weight(edge(i, j)).
2. The algorithm then performs distance[i][j] = min(distance[i][j], distance[i][k] +
distance[k][j]) for each possible pair (i, j) of vertices.
3. The above is repeated for each intermediate vertex k in the graph.
4. Whenever distance[i][j] is given a new minimum value, next vertex[i][j] is
updated to the next vertex[i][k].
"""
dist = [[float("inf") for _ in range(v)] for _ in range(v)]
for i in range(v):
for j in range(v):
dist[i][j] = graph[i][j]
# check vertex k against all other vertices (i, j)
for k in range(v):
# looping through rows of graph array
for i in range(v):
# looping through columns of graph array
for j in range(v):
if (
dist[i][k] != float("inf")
and dist[k][j] != float("inf")
and dist[i][k] + dist[k][j] < dist[i][j]
):
dist[i][j] = dist[i][k] + dist[k][j]
_print_dist(dist, v)
return dist, v
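# Hedged, self-contained example (not part of the original script): the same
# three-vertex graph as the interactive session documented at the bottom of this
# file, built directly instead of via input(). The helper name is hypothetical.
def _floyd_warshall_example() -> None:
    inf = float("inf")
    graph = [
        [0.0, inf, inf],
        [inf, 0.0, 2.0],
        [inf, 1.0, 0.0],
    ]
    dist, _ = floyd_warshall(graph, 3)
    assert dist[1][2] == 2.0
    assert dist[2][1] == 1.0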
if __name__ == "__main__":
v = int(input("Enter number of vertices: "))
e = int(input("Enter number of edges: "))
graph = [[float("inf") for _ in range(v)] for _ in range(v)]
for i in range(v):
graph[i][i] = 0.0
# src and dst are 0-based vertex indices and must be less than v;
# failure to follow this will result in an IndexError
for i in range(e):
print("\nEdge ", i + 1)
src = int(input("Enter source:"))
dst = int(input("Enter destination:"))
weight = float(input("Enter weight:"))
graph[src][dst] = weight
floyd_warshall(graph, v)
# Example Input
# Enter number of vertices: 3
# Enter number of edges: 2
# # generated graph from vertex and edge inputs
# [[inf, inf, inf], [inf, inf, inf], [inf, inf, inf]]
# [[0.0, inf, inf], [inf, 0.0, inf], [inf, inf, 0.0]]
# specify source, destination and weight for edge #1
# Edge 1
# Enter source:1
# Enter destination:2
# Enter weight:2
# specify source, destination and weight for edge #2
# Edge 2
# Enter source:2
# Enter destination:1
# Enter weight:1
# # Expected Output from the vertex, edge and (src, dst, weight) inputs
# 0 INF INF
# INF 0 2
# INF 1 0
|
https:en.wikipedia.orgwikiBestfirstsearchGreedyBFS 0's are free path whereas 1's are obstacles k Node0, 0, 4, 5, 0, None k.calculateheuristic 9 n Node1, 4, 3, 4, 2, None n.calculateheuristic 2 l k, n n l0 False l.sort n l0 True The heuristic here is the Manhattan Distance Could elaborate to offer more than one choice grid TESTGRIDS2 gbf GreedyBestFirstgrid, 0, 0, lengrid 1, lengrid0 1 x.pos for x in gbf.getsuccessorsgbf.start 1, 0, 0, 1 gbf.start.posy delta30, gbf.start.posx delta31 0, 1 gbf.start.posy delta20, gbf.start.posx delta21 1, 0 gbf.retracepathgbf.start 0, 0 gbf.search doctest: NORMALIZEWHITESPACE 0, 0, 1, 0, 2, 0, 2, 1, 3, 1, 4, 1, 4, 2, 4, 3, 4, 4 Search for the path, if a path is not found, only the starting position is returned Open Nodes are sorted using lt Returns a list of successors both in the grid and free spaces Retrace the path from parents to parents until start node | from __future__ import annotations
Path = list[tuple[int, int]]
# 0's are free path whereas 1's are obstacles
TEST_GRIDS = [
[
[0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0],
[1, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0],
],
[
[0, 0, 0, 1, 1, 0, 0],
[0, 0, 0, 0, 1, 0, 1],
[0, 0, 0, 1, 1, 0, 0],
[0, 1, 0, 0, 1, 0, 0],
[1, 0, 0, 1, 1, 0, 1],
[0, 0, 0, 0, 0, 0, 0],
],
[
[0, 0, 1, 0, 0],
[0, 1, 0, 0, 0],
[0, 0, 1, 0, 1],
[1, 0, 0, 1, 1],
[0, 0, 0, 0, 0],
],
]
delta = ([-1, 0], [0, -1], [1, 0], [0, 1]) # up, left, down, right
class Node:
"""
>>> k = Node(0, 0, 4, 5, 0, None)
>>> k.calculate_heuristic()
9
>>> n = Node(1, 4, 3, 4, 2, None)
>>> n.calculate_heuristic()
2
>>> l = [k, n]
>>> n == l[0]
False
>>> l.sort()
>>> n == l[0]
True
"""
def __init__(
self,
pos_x: int,
pos_y: int,
goal_x: int,
goal_y: int,
g_cost: float,
parent: Node | None,
):
self.pos_x = pos_x
self.pos_y = pos_y
self.pos = (pos_y, pos_x)
self.goal_x = goal_x
self.goal_y = goal_y
self.g_cost = g_cost
self.parent = parent
self.f_cost = self.calculate_heuristic()
def calculate_heuristic(self) -> float:
"""
The heuristic here is the Manhattan Distance
Could elaborate to offer more than one choice
"""
dx = abs(self.pos_x - self.goal_x)
dy = abs(self.pos_y - self.goal_y)
return dx + dy
def __lt__(self, other) -> bool:
return self.f_cost < other.f_cost
def __eq__(self, other) -> bool:
return self.pos == other.pos
class GreedyBestFirst:
"""
>>> grid = TEST_GRIDS[2]
>>> gbf = GreedyBestFirst(grid, (0, 0), (len(grid) - 1, len(grid[0]) - 1))
>>> [x.pos for x in gbf.get_successors(gbf.start)]
[(1, 0), (0, 1)]
>>> (gbf.start.pos_y + delta[3][0], gbf.start.pos_x + delta[3][1])
(0, 1)
>>> (gbf.start.pos_y + delta[2][0], gbf.start.pos_x + delta[2][1])
(1, 0)
>>> gbf.retrace_path(gbf.start)
[(0, 0)]
>>> gbf.search() # doctest: +NORMALIZE_WHITESPACE
[(0, 0), (1, 0), (2, 0), (2, 1), (3, 1), (4, 1), (4, 2), (4, 3),
(4, 4)]
"""
def __init__(
self, grid: list[list[int]], start: tuple[int, int], goal: tuple[int, int]
):
self.grid = grid
self.start = Node(start[1], start[0], goal[1], goal[0], 0, None)
self.target = Node(goal[1], goal[0], goal[1], goal[0], 99999, None)
self.open_nodes = [self.start]
self.closed_nodes: list[Node] = []
self.reached = False
def search(self) -> Path | None:
"""
Search for the path;
if a path is not found, only the starting position is returned
"""
while self.open_nodes:
# Open Nodes are sorted using __lt__
self.open_nodes.sort()
current_node = self.open_nodes.pop(0)
if current_node.pos == self.target.pos:
self.reached = True
return self.retrace_path(current_node)
self.closed_nodes.append(current_node)
successors = self.get_successors(current_node)
for child_node in successors:
if child_node in self.closed_nodes:
continue
if child_node not in self.open_nodes:
self.open_nodes.append(child_node)
if not self.reached:
return [self.start.pos]
return None
def get_successors(self, parent: Node) -> list[Node]:
"""
Returns a list of successors (neighbouring cells that are inside the grid and free)
"""
return [
Node(
pos_x,
pos_y,
self.target.pos_x,
self.target.pos_y,
parent.g_cost + 1,
parent,
)
for action in delta
if (
0 <= (pos_x := parent.pos_x + action[1]) < len(self.grid[0])
and 0 <= (pos_y := parent.pos_y + action[0]) < len(self.grid)
and self.grid[pos_y][pos_x] == 0
)
]
def retrace_path(self, node: Node | None) -> Path:
"""
Retrace the path from parents to parents until start node
"""
current_node = node
path = []
while current_node is not None:
path.append((current_node.pos_y, current_node.pos_x))
current_node = current_node.parent
path.reverse()
return path
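# Hedged variant (not part of the original class): calculate_heuristic() above uses the
# Manhattan distance; a Euclidean alternative for a Node could look like this. The
# function name euclidean_heuristic is hypothetical.
def euclidean_heuristic(node: Node) -> float:
    return ((node.pos_x - node.goal_x) ** 2 + (node.pos_y - node.goal_y) ** 2) ** 0.5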
if __name__ == "__main__":
for idx, grid in enumerate(TEST_GRIDS):
print(f"==grid-{idx + 1}==")
init = (0, 0)
goal = (len(grid) - 1, len(grid[0]) - 1)
for elem in grid:
print(elem)
print("------")
greedy_bf = GreedyBestFirst(grid, init, goal)
path = greedy_bf.search()
if path:
for pos_x, pos_y in path:
grid[pos_x][pos_y] = 2
for elem in grid:
print(elem)
|
Author: Manuel Di Lullo https:github.commanueldilullo Description: Approximization algorithm for minimum vertex cover problem. Greedy Approach. Uses graphs represented with an adjacency list URL: https:mathworld.wolfram.comMinimumVertexCover.html URL: https:cs.stackexchange.comquestions129017greedyalgorithmforvertexcover Greedy APX Algorithm for min Vertex Cover input: graph graph stored in an adjacency list where each vertex is represented with an integer example: graph 0: 1, 3, 1: 0, 3, 2: 0, 3, 4, 3: 0, 1, 2, 4: 2, 3 greedyminvertexcovergraph 0, 1, 2, 4 queue used to store nodes and their rank for each node and his adjacency list add them and the rank of the node to queue using heapq module the queue will be filled like a Priority Queue heapq works with a min priority queue, so I used 1lenv to build it Ologn chosenvertices set of chosen vertices while queue isn't empty and there are still edges queue00 is the rank of the node with max rank extract vertex with max rank from queue and add it to chosenvertices Remove all arcs adjacent to argmax if v haven't adjacent node, skip if argmax is reachable from elem remove argmax from elem's adjacent list and update his rank reorder the queue | import heapq
def greedy_min_vertex_cover(graph: dict) -> set[int]:
"""
Greedy APX Algorithm for min Vertex Cover
@input: graph (graph stored in an adjacency list where each vertex
is represented with an integer)
@example:
>>> graph = {0: [1, 3], 1: [0, 3], 2: [0, 3, 4], 3: [0, 1, 2], 4: [2, 3]}
>>> greedy_min_vertex_cover(graph)
{0, 1, 2, 4}
"""
# queue used to store nodes and their rank
queue: list[list] = []
# for each node and its adjacency list, push the node and its rank onto the queue;
# using the heapq module, the queue behaves like a priority queue
# heapq implements a min-priority queue, so -1 * len(value) is used as the rank
for key, value in graph.items():
# O(log(n))
heapq.heappush(queue, [-1 * len(value), (key, value)])
# chosen_vertices = set of chosen vertices
chosen_vertices = set()
# while queue isn't empty and there are still edges
# (queue[0][0] is the rank of the node with max rank)
while queue and queue[0][0] != 0:
# extract vertex with max rank from queue and add it to chosen_vertices
argmax = heapq.heappop(queue)[1][0]
chosen_vertices.add(argmax)
# Remove all arcs adjacent to argmax
for elem in queue:
# if the vertex has no adjacent nodes left, skip
if elem[0] == 0:
continue
# if argmax is reachable from elem
# remove argmax from elem's adjacency list and update its rank
if argmax in elem[1][1]:
index = elem[1][1].index(argmax)
del elem[1][1][index]
elem[0] += 1
# re-order the queue
heapq.heapify(queue)
return chosen_vertices
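# Hedged sanity check (not part of the original module): an exponential brute-force
# minimum vertex cover for tiny graphs, handy for comparing the greedy answer against
# the true optimum. The helper name is hypothetical.
def brute_force_min_vertex_cover(graph: dict) -> set:
    from itertools import combinations

    edges = {tuple(sorted((u, v))) for u, neighbors in graph.items() for v in neighbors}
    vertices = list(graph)
    for size in range(len(vertices) + 1):
        for subset in combinations(vertices, size):
            chosen = set(subset)
            if all(u in chosen or v in chosen for u, v in edges):
                return chosen
    return set(vertices)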
if __name__ == "__main__":
import doctest
doctest.testmod()
graph = {0: [1, 3], 1: [0, 3], 2: [0, 3, 4], 3: [0, 1, 2], 4: [2, 3]}
print(f"Minimum vertex cover:\n{greedy_min_vertex_cover(graph)}")
|
Finding longest distance in Directed Acyclic Graph using KahnsAlgorithm Adjacency list of Graph | # Finding longest distance in Directed Acyclic Graph using KahnsAlgorithm
def longest_distance(graph):
indegree = [0] * len(graph)
queue = []
long_dist = [1] * len(graph)
for values in graph.values():
for i in values:
indegree[i] += 1
for i in range(len(indegree)):
if indegree[i] == 0:
queue.append(i)
while queue:
vertex = queue.pop(0)
for x in graph[vertex]:
indegree[x] -= 1
if long_dist[vertex] + 1 > long_dist[x]:
long_dist[x] = long_dist[vertex] + 1
if indegree[x] == 0:
queue.append(x)
print(max(long_dist))
# Adjacency list of Graph
graph = {0: [2, 3, 4], 1: [2, 7], 2: [5], 3: [5, 7], 4: [7], 5: [6], 6: [7], 7: []}
longest_distance(graph)
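# Worked example: for the adjacency list above the call prints 5, i.e. the longest
# chain of vertices 0 -> 2 -> 5 -> 6 -> 7 (equivalently 0 -> 3 -> 5 -> 6 -> 7),
# counted in vertices rather than edges.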
|
Kahn's Algorithm is used to find Topological ordering of Directed Acyclic Graph using BFS Adjacency List of Graph | def topological_sort(graph):
"""
Kahn's Algorithm is used to find Topological ordering of Directed Acyclic Graph
using BFS
"""
indegree = [0] * len(graph)
queue = []
topo = []
cnt = 0
for values in graph.values():
for i in values:
indegree[i] += 1
for i in range(len(indegree)):
if indegree[i] == 0:
queue.append(i)
while queue:
vertex = queue.pop(0)
cnt += 1
topo.append(vertex)
for x in graph[vertex]:
indegree[x] -= 1
if indegree[x] == 0:
queue.append(x)
if cnt != len(graph):
print("Cycle exists")
else:
print(topo)
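# Hedged variant (not part of the original file): the same Kahn's algorithm using
# collections.deque so the queue pop is O(1) instead of list.pop(0)'s O(n), returning
# the ordering instead of printing it. The function name is hypothetical.
def topological_sort_deque(graph: dict) -> list:
    from collections import deque

    indegree = {node: 0 for node in graph}
    for neighbors in graph.values():
        for node in neighbors:
            indegree[node] += 1
    queue = deque(node for node, degree in indegree.items() if degree == 0)
    order = []
    while queue:
        vertex = queue.popleft()
        order.append(vertex)
        for node in graph[vertex]:
            indegree[node] -= 1
            if indegree[node] == 0:
                queue.append(node)
    if len(order) != len(graph):
        raise ValueError("Cycle exists")
    return order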
# Adjacency List of Graph
graph = {0: [1, 2], 1: [3], 2: [3], 3: [4, 5], 4: [], 5: []}
topological_sort(graph)
|
An implementation of Karger's Algorithm for partitioning a graph. Adjacency list representation of this graph: https:en.wikipedia.orgwikiFile:SinglerunofKargerE28099sMincutalgorithm.svg Partitions a graph using Karger's Algorithm. Implemented from pseudocode found here: https:en.wikipedia.orgwikiKarger27salgorithm. This function involves random choices, meaning it will not give consistent outputs. Args: graph: A dictionary containing adacency lists for the graph. Nodes must be strings. Returns: The cutset of the cut found by Karger's Algorithm. graph '0':'1', '1':'0' partitiongraphgraph '0', '1' Dict that maps contracted nodes to a list of all the nodes it contains. Choose a random edge. Contract edge u, v to new node uv Remove nodes u and v. Find cutset. | from __future__ import annotations
import random
# Adjacency list representation of this graph:
# https://en.wikipedia.org/wiki/File:Single_run_of_Karger%E2%80%99s_Mincut_algorithm.svg
TEST_GRAPH = {
"1": ["2", "3", "4", "5"],
"2": ["1", "3", "4", "5"],
"3": ["1", "2", "4", "5", "10"],
"4": ["1", "2", "3", "5", "6"],
"5": ["1", "2", "3", "4", "7"],
"6": ["7", "8", "9", "10", "4"],
"7": ["6", "8", "9", "10", "5"],
"8": ["6", "7", "9", "10"],
"9": ["6", "7", "8", "10"],
"10": ["6", "7", "8", "9", "3"],
}
def partition_graph(graph: dict[str, list[str]]) -> set[tuple[str, str]]:
"""
Partitions a graph using Karger's Algorithm. Implemented from
pseudocode found here:
https://en.wikipedia.org/wiki/Karger%27s_algorithm.
This function involves random choices, meaning it will not give
consistent outputs.
Args:
graph: A dictionary containing adjacency lists for the graph.
Nodes must be strings.
Returns:
The cutset of the cut found by Karger's Algorithm.
>>> graph = {'0':['1'], '1':['0']}
>>> partition_graph(graph)
{('0', '1')}
"""
# Dict that maps each contracted node to the set of all the nodes it "contains."
contracted_nodes = {node: {node} for node in graph}
graph_copy = {node: graph[node][:] for node in graph}
while len(graph_copy) > 2:
# Choose a random edge.
u = random.choice(list(graph_copy.keys()))
v = random.choice(graph_copy[u])
# Contract edge (u, v) to new node uv
uv = u + v
uv_neighbors = list(set(graph_copy[u] + graph_copy[v]))
uv_neighbors.remove(u)
uv_neighbors.remove(v)
graph_copy[uv] = uv_neighbors
for neighbor in uv_neighbors:
graph_copy[neighbor].append(uv)
contracted_nodes[uv] = set(contracted_nodes[u].union(contracted_nodes[v]))
# Remove nodes u and v.
del graph_copy[u]
del graph_copy[v]
for neighbor in uv_neighbors:
if u in graph_copy[neighbor]:
graph_copy[neighbor].remove(u)
if v in graph_copy[neighbor]:
graph_copy[neighbor].remove(v)
# Find cutset.
groups = [contracted_nodes[node] for node in graph_copy]
return {
(node, neighbor)
for node in groups[0]
for neighbor in graph[node]
if neighbor in groups[1]
}
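# Hedged helper (not part of the original module): a single Karger contraction is
# randomized and may miss the minimum cut, so the usual recipe is to repeat it and keep
# the smallest cut found. The helper name is hypothetical.
def estimate_min_cut(
    graph: dict[str, list[str]], repetitions: int = 100
) -> set[tuple[str, str]]:
    best_cut = partition_graph(graph)
    for _ in range(repetitions - 1):
        cut = partition_graph(graph)
        if len(cut) < len(best_cut):
            best_cut = cut
    return best_cut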
if __name__ == "__main__":
print(partition_graph(TEST_GRAPH))
|
Undirected Unweighted Graph for running Markov Chain Algorithm Running Markov Chain algorithm and calculating the number of times each node is visited transitions ... 'a', 'a', 0.9, ... 'a', 'b', 0.075, ... 'a', 'c', 0.025, ... 'b', 'a', 0.15, ... 'b', 'b', 0.8, ... 'b', 'c', 0.05, ... 'c', 'a', 0.25, ... 'c', 'b', 0.25, ... 'c', 'c', 0.5 ... result gettransitions'a', transitions, 5000 result'a' result'b' result'c' True | from __future__ import annotations
from collections import Counter
from random import random
class MarkovChainGraphUndirectedUnweighted:
"""
Undirected Unweighted Graph for running Markov Chain Algorithm
"""
def __init__(self):
self.connections = {}
def add_node(self, node: str) -> None:
self.connections[node] = {}
def add_transition_probability(
self, node1: str, node2: str, probability: float
) -> None:
if node1 not in self.connections:
self.add_node(node1)
if node2 not in self.connections:
self.add_node(node2)
self.connections[node1][node2] = probability
def get_nodes(self) -> list[str]:
return list(self.connections)
def transition(self, node: str) -> str:
current_probability = 0
random_value = random()
for dest in self.connections[node]:
current_probability += self.connections[node][dest]
if current_probability > random_value:
return dest
return ""
def get_transitions(
start: str, transitions: list[tuple[str, str, float]], steps: int
) -> dict[str, int]:
"""
Running Markov Chain algorithm and calculating the number of times each node is
visited
>>> transitions = [
... ('a', 'a', 0.9),
... ('a', 'b', 0.075),
... ('a', 'c', 0.025),
... ('b', 'a', 0.15),
... ('b', 'b', 0.8),
... ('b', 'c', 0.05),
... ('c', 'a', 0.25),
... ('c', 'b', 0.25),
... ('c', 'c', 0.5)
... ]
>>> result = get_transitions('a', transitions, 5000)
>>> result['a'] > result['b'] > result['c']
True
"""
graph = MarkovChainGraphUndirectedUnweighted()
for node1, node2, probability in transitions:
graph.add_transition_probability(node1, node2, probability)
visited = Counter(graph.get_nodes())
node = start
for _ in range(steps):
node = graph.transition(node)
visited[node] += 1
return visited
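# Hedged follow-up (not part of the original module): dividing the visit counts by
# their total gives a rough empirical estimate of the chain's stationary distribution.
# The helper name is hypothetical.
def visit_frequencies(
    start: str, transitions: list[tuple[str, str, float]], steps: int
) -> dict[str, float]:
    counts = get_transitions(start, transitions, steps)
    total = sum(counts.values())
    return {node: count / total for node, count in counts.items()}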
if __name__ == "__main__":
import doctest
doctest.testmod()
|
Author: Manuel Di Lullo https:github.commanueldilullo Description: Approximization algorithm for minimum vertex cover problem. Matching Approach. Uses graphs represented with an adjacency list URL: https:mathworld.wolfram.comMinimumVertexCover.html URL: https:www.princeton.eduaaaPublicTeachingORF523ORF523Lec6.pdf APX Algorithm for min Vertex Cover using Matching Approach input: graph graph stored in an adjacency list where each vertex is represented as an integer example: graph 0: 1, 3, 1: 0, 3, 2: 0, 3, 4, 3: 0, 1, 2, 4: 2, 3 matchingminvertexcovergraph 0, 1, 2, 4 chosenvertices set of chosen vertices edges list of graph's edges While there are still elements in edges list, take an arbitrary edge fromnode, tonode and add his extremity to chosenvertices and then remove all arcs adjacent to the fromnode and tonode Return a set of couples that represents all of the edges. input: graph graph stored in an adjacency list where each vertex is represented as an integer example: graph 0: 1, 3, 1: 0, 3, 2: 0, 3, 3: 0, 1, 2 getedgesgraph 0, 1, 3, 1, 0, 3, 2, 0, 3, 0, 2, 3, 1, 0, 3, 2, 1, 3 graph 0: 1, 3, 1: 0, 3, 2: 0, 3, 4, 3: 0, 1, 2, 4: 2, 3 printfMatching vertex cover:nmatchingminvertexcovergraph | def matching_min_vertex_cover(graph: dict) -> set:
"""
APX Algorithm for min Vertex Cover using Matching Approach
@input: graph (graph stored in an adjacency list where each vertex
is represented as an integer)
@example:
>>> graph = {0: [1, 3], 1: [0, 3], 2: [0, 3, 4], 3: [0, 1, 2], 4: [2, 3]}
>>> matching_min_vertex_cover(graph)
{0, 1, 2, 4}
"""
# chosen_vertices = set of chosen vertices
chosen_vertices = set()
# edges = set of the graph's edges
edges = get_edges(graph)
# While there are still edges left, take an arbitrary edge
# (from_node, to_node), add both of its endpoints to chosen_vertices and then
# remove all edges incident to from_node or to_node
while edges:
from_node, to_node = edges.pop()
chosen_vertices.add(from_node)
chosen_vertices.add(to_node)
for edge in edges.copy():
if from_node in edge or to_node in edge:
edges.discard(edge)
return chosen_vertices
def get_edges(graph: dict) -> set:
"""
Return a set of (from_node, to_node) pairs representing all of the edges.
@input: graph (graph stored in an adjacency list where each vertex is
represented as an integer)
@example:
>>> graph = {0: [1, 3], 1: [0, 3], 2: [0, 3], 3: [0, 1, 2]}
>>> get_edges(graph)
{(0, 1), (3, 1), (0, 3), (2, 0), (3, 0), (2, 3), (1, 0), (3, 2), (1, 3)}
"""
edges = set()
for from_node, to_nodes in graph.items():
for to_node in to_nodes:
edges.add((from_node, to_node))
return edges
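# Hedged check (not part of the original module): the matching-based cover above is a
# classic 2-approximation; a quick way to confirm that a returned set really covers
# every edge could look like this. The helper name is hypothetical.
def is_vertex_cover(graph: dict, cover: set) -> bool:
    return all(u in cover or v in cover for u, v in get_edges(graph))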
if __name__ == "__main__":
import doctest
doctest.testmod()
# graph = {0: [1, 3], 1: [0, 3], 2: [0, 3, 4], 3: [0, 1, 2], 4: [2, 3]}
# print(f"Matching vertex cover:\n{matching_min_vertex_cover(graph)}")
|
Find the path from top left to bottom right of array of numbers with the lowest possible sum and return the sum along this path. minpathsum ... 1, 3, 1, ... 1, 5, 1, ... 4, 2, 1, ... 7 minpathsum ... 1, 0, 5, 6, 7, ... 8, 9, 0, 4, 2, ... 4, 4, 4, 5, 1, ... 9, 6, 3, 1, 0, ... 8, 4, 3, 2, 7, ... 20 minpathsumNone Traceback most recent call last: ... TypeError: The grid does not contain the appropriate information minpathsum Traceback most recent call last: ... TypeError: The grid does not contain the appropriate information fillrow2, 2, 2, 1, 2, 3 3, 4, 5 | def min_path_sum(grid: list) -> int:
"""
Find the path from top left to bottom right of array of numbers
with the lowest possible sum and return the sum along this path.
>>> min_path_sum([
... [1, 3, 1],
... [1, 5, 1],
... [4, 2, 1],
... ])
7
>>> min_path_sum([
... [1, 0, 5, 6, 7],
... [8, 9, 0, 4, 2],
... [4, 4, 4, 5, 1],
... [9, 6, 3, 1, 0],
... [8, 4, 3, 2, 7],
... ])
20
>>> min_path_sum(None)
Traceback (most recent call last):
...
TypeError: The grid does not contain the appropriate information
>>> min_path_sum([[]])
Traceback (most recent call last):
...
TypeError: The grid does not contain the appropriate information
"""
if not grid or not grid[0]:
raise TypeError("The grid does not contain the appropriate information")
for cell_n in range(1, len(grid[0])):
grid[0][cell_n] += grid[0][cell_n - 1]
row_above = grid[0]
for row_n in range(1, len(grid)):
current_row = grid[row_n]
grid[row_n] = fill_row(current_row, row_above)
row_above = grid[row_n]
return grid[-1][-1]
def fill_row(current_row: list, row_above: list) -> list:
"""
>>> fill_row([2, 2, 2], [1, 2, 3])
[3, 4, 5]
"""
current_row[0] += row_above[0]
for cell_n in range(1, len(current_row)):
current_row[cell_n] += min(current_row[cell_n - 1], row_above[cell_n])
return current_row
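# Hedged wrapper (not part of the original file): min_path_sum() accumulates sums in the
# grid it is given, mutating it in place; callers that need the original values can pass
# a deep copy instead. The wrapper name is hypothetical.
def min_path_sum_preserving(grid: list) -> int:
    from copy import deepcopy

    return min_path_sum(deepcopy(grid))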
if __name__ == "__main__":
import doctest
doctest.testmod()
|
Data structure to store graphs based on adjacency lists Adds a vertex to the graph Adds an edge to the graph For Boruvks's algorithm the weights should be distinct Converts the weights to be distinct Returns string representation of the graph Returna all edges in the graph Returns all vertices in the graph Builds a graph from the given set of vertices and edges Disjoint set Union and Find for Boruvka's algorithm Implementation of Boruvka's algorithm g Graph g Graph.build0, 1, 2, 3, 0, 1, 1, 0, 2, 1,2, 3, 1 g.distinctweight bg Graph.boruvkamstg printbg 1 0 1 2 0 2 0 1 1 0 2 2 3 2 3 2 3 3 | class Graph:
"""
Data structure to store graphs (based on adjacency lists)
"""
def __init__(self):
self.num_vertices = 0
self.num_edges = 0
self.adjacency = {}
def add_vertex(self, vertex):
"""
Adds a vertex to the graph
"""
if vertex not in self.adjacency:
self.adjacency[vertex] = {}
self.num_vertices += 1
def add_edge(self, head, tail, weight):
"""
Adds an edge to the graph
"""
self.add_vertex(head)
self.add_vertex(tail)
if head == tail:
return
self.adjacency[head][tail] = weight
self.adjacency[tail][head] = weight
def distinct_weight(self):
"""
For Boruvka's algorithm the weights should be distinct
Converts the weights to be distinct
"""
edges = self.get_edges()
for edge in edges:
head, tail, weight = edge
edges.remove((tail, head, weight))
for i in range(len(edges)):
edges[i] = list(edges[i])
edges.sort(key=lambda e: e[2])
for i in range(len(edges) - 1):
if edges[i][2] >= edges[i + 1][2]:
edges[i + 1][2] = edges[i][2] + 1
for edge in edges:
head, tail, weight = edge
self.adjacency[head][tail] = weight
self.adjacency[tail][head] = weight
def __str__(self):
"""
Returns string representation of the graph
"""
string = ""
for tail in self.adjacency:
for head in self.adjacency[tail]:
weight = self.adjacency[head][tail]
string += f"{head} -> {tail} == {weight}\n"
return string.rstrip("\n")
def get_edges(self):
"""
Returns all edges in the graph
"""
output = []
for tail in self.adjacency:
for head in self.adjacency[tail]:
output.append((tail, head, self.adjacency[head][tail]))
return output
def get_vertices(self):
"""
Returns all vertices in the graph
"""
return self.adjacency.keys()
@staticmethod
def build(vertices=None, edges=None):
"""
Builds a graph from the given set of vertices and edges
"""
g = Graph()
if vertices is None:
vertices = []
if edges is None:
edges = []
for vertex in vertices:
g.add_vertex(vertex)
for edge in edges:
g.add_edge(*edge)
return g
class UnionFind:
"""
Disjoint set Union and Find for Boruvka's algorithm
"""
def __init__(self):
self.parent = {}
self.rank = {}
def __len__(self):
return len(self.parent)
def make_set(self, item):
if item in self.parent:
return self.find(item)
self.parent[item] = item
self.rank[item] = 0
return item
def find(self, item):
if item not in self.parent:
return self.make_set(item)
if item != self.parent[item]:
self.parent[item] = self.find(self.parent[item])
return self.parent[item]
def union(self, item1, item2):
root1 = self.find(item1)
root2 = self.find(item2)
if root1 == root2:
return root1
if self.rank[root1] > self.rank[root2]:
self.parent[root2] = root1
return root1
if self.rank[root1] < self.rank[root2]:
self.parent[root1] = root2
return root2
if self.rank[root1] == self.rank[root2]:
self.rank[root1] += 1
self.parent[root2] = root1
return root1
return None
@staticmethod
def boruvka_mst(graph):
"""
Implementation of Boruvka's algorithm
>>> g = Graph()
>>> g = Graph.build([0, 1, 2, 3], [[0, 1, 1], [0, 2, 1],[2, 3, 1]])
>>> g.distinct_weight()
>>> bg = Graph.boruvka_mst(g)
>>> print(bg)
1 -> 0 == 1
2 -> 0 == 2
0 -> 1 == 1
0 -> 2 == 2
3 -> 2 == 3
2 -> 3 == 3
"""
num_components = graph.num_vertices
union_find = Graph.UnionFind()
mst_edges = []
while num_components > 1:
cheap_edge = {}
for vertex in graph.get_vertices():
cheap_edge[vertex] = -1
edges = graph.get_edges()
for edge in edges:
head, tail, weight = edge
edges.remove((tail, head, weight))
for edge in edges:
head, tail, weight = edge
set1 = union_find.find(head)
set2 = union_find.find(tail)
if set1 != set2:
if cheap_edge[set1] == -1 or cheap_edge[set1][2] > weight:
cheap_edge[set1] = [head, tail, weight]
if cheap_edge[set2] == -1 or cheap_edge[set2][2] > weight:
cheap_edge[set2] = [head, tail, weight]
for vertex in cheap_edge:
if cheap_edge[vertex] != -1:
head, tail, weight = cheap_edge[vertex]
if union_find.find(head) != union_find.find(tail):
union_find.union(head, tail)
mst_edges.append(cheap_edge[vertex])
num_components = num_components - 1
mst = Graph.build(edges=mst_edges)
return mst
|
kruskal4, 0, 1, 3, 1, 2, 5, 2, 3, 1 2, 3, 1, 0, 1, 3, 1, 2, 5 kruskal4, 0, 1, 3, 1, 2, 5, 2, 3, 1, 0, 2, 1, 0, 3, 2 2, 3, 1, 0, 2, 1, 0, 1, 3 kruskal4, 0, 1, 3, 1, 2, 5, 2, 3, 1, 0, 2, 1, 0, 3, 2, ... 2, 1, 1 2, 3, 1, 0, 2, 1, 2, 1, 1 | def kruskal(
num_nodes: int, edges: list[tuple[int, int, int]]
) -> list[tuple[int, int, int]]:
"""
>>> kruskal(4, [(0, 1, 3), (1, 2, 5), (2, 3, 1)])
[(2, 3, 1), (0, 1, 3), (1, 2, 5)]
>>> kruskal(4, [(0, 1, 3), (1, 2, 5), (2, 3, 1), (0, 2, 1), (0, 3, 2)])
[(2, 3, 1), (0, 2, 1), (0, 1, 3)]
>>> kruskal(4, [(0, 1, 3), (1, 2, 5), (2, 3, 1), (0, 2, 1), (0, 3, 2),
... (2, 1, 1)])
[(2, 3, 1), (0, 2, 1), (2, 1, 1)]
"""
edges = sorted(edges, key=lambda edge: edge[2])
parent = list(range(num_nodes))
def find_parent(i):
if i != parent[i]:
parent[i] = find_parent(parent[i])
return parent[i]
minimum_spanning_tree_cost = 0
minimum_spanning_tree = []
for edge in edges:
parent_a = find_parent(edge[0])
parent_b = find_parent(edge[1])
if parent_a != parent_b:
minimum_spanning_tree_cost += edge[2]
minimum_spanning_tree.append(edge)
parent[parent_a] = parent_b
return minimum_spanning_tree
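# Hedged helper (not part of the original file): kruskal() tracks
# minimum_spanning_tree_cost internally but only returns the edge list; the total
# weight can be recovered from the result. The helper name is hypothetical.
def spanning_tree_cost(mst: list[tuple[int, int, int]]) -> int:
    """
    >>> spanning_tree_cost(kruskal(4, [(0, 1, 3), (1, 2, 5), (2, 3, 1)]))
    9
    """
    return sum(weight for _, _, weight in mst)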
if __name__ == "__main__": # pragma: no cover
num_nodes, num_edges = list(map(int, input().strip().split()))
edges = []
for _ in range(num_edges):
node1, node2, cost = (int(x) for x in input().strip().split())
edges.append((node1, node2, cost))
kruskal(num_nodes, edges)
|
Disjoint Set Node to store the parent and rank Disjoint Set DataStructure map from node name to the node object create a new set with x as its member find the set x belongs to with pathcompression helper function for union operation merge 2 disjoint sets connections: map from the node to the neighbouring nodes with weights add a node ONLY if its not present in the graph add an edge with the given weight Kruskal's Algorithm to generate a Minimum Spanning Tree MST of a graph Details: https:en.wikipedia.orgwikiKruskal27salgorithm Example: g1 GraphUndirectedWeightedint g1.addedge1, 2, 1 g1.addedge2, 3, 2 g1.addedge3, 4, 1 g1.addedge3, 5, 100 Removed in MST g1.addedge4, 5, 5 assert 5 in g1.connections3 mst g1.kruskal assert 5 not in mst.connections3 g2 GraphUndirectedWeightedstr g2.addedge'A', 'B', 1 g2.addedge'B', 'C', 2 g2.addedge'C', 'D', 1 g2.addedge'C', 'E', 100 Removed in MST g2.addedge'D', 'E', 5 assert 'E' in g2.connectionsC mst g2.kruskal assert 'E' not in mst.connections'C' getting the edges in ascending order of weights creating the disjoint set MST generation | from __future__ import annotations
from typing import Generic, TypeVar
T = TypeVar("T")
class DisjointSetTreeNode(Generic[T]):
# Disjoint Set Node to store the parent and rank
def __init__(self, data: T) -> None:
self.data = data
self.parent = self
self.rank = 0
class DisjointSetTree(Generic[T]):
# Disjoint Set DataStructure
def __init__(self) -> None:
# map from node name to the node object
self.map: dict[T, DisjointSetTreeNode[T]] = {}
def make_set(self, data: T) -> None:
# create a new set with x as its member
self.map[data] = DisjointSetTreeNode(data)
def find_set(self, data: T) -> DisjointSetTreeNode[T]:
# find the set x belongs to (with path-compression)
elem_ref = self.map[data]
if elem_ref != elem_ref.parent:
elem_ref.parent = self.find_set(elem_ref.parent.data)
return elem_ref.parent
def link(
self, node1: DisjointSetTreeNode[T], node2: DisjointSetTreeNode[T]
) -> None:
# helper function for union operation
if node1.rank > node2.rank:
node2.parent = node1
else:
node1.parent = node2
if node1.rank == node2.rank:
node2.rank += 1
def union(self, data1: T, data2: T) -> None:
# merge 2 disjoint sets
self.link(self.find_set(data1), self.find_set(data2))
class GraphUndirectedWeighted(Generic[T]):
def __init__(self) -> None:
# connections: map from the node to the neighbouring nodes (with weights)
self.connections: dict[T, dict[T, int]] = {}
def add_node(self, node: T) -> None:
# add a node ONLY if its not present in the graph
if node not in self.connections:
self.connections[node] = {}
def add_edge(self, node1: T, node2: T, weight: int) -> None:
# add an edge with the given weight
self.add_node(node1)
self.add_node(node2)
self.connections[node1][node2] = weight
self.connections[node2][node1] = weight
def kruskal(self) -> GraphUndirectedWeighted[T]:
# Kruskal's Algorithm to generate a Minimum Spanning Tree (MST) of a graph
"""
Details: https://en.wikipedia.org/wiki/Kruskal%27s_algorithm
Example:
>>> g1 = GraphUndirectedWeighted[int]()
>>> g1.add_edge(1, 2, 1)
>>> g1.add_edge(2, 3, 2)
>>> g1.add_edge(3, 4, 1)
>>> g1.add_edge(3, 5, 100) # Removed in MST
>>> g1.add_edge(4, 5, 5)
>>> assert 5 in g1.connections[3]
>>> mst = g1.kruskal()
>>> assert 5 not in mst.connections[3]
>>> g2 = GraphUndirectedWeighted[str]()
>>> g2.add_edge('A', 'B', 1)
>>> g2.add_edge('B', 'C', 2)
>>> g2.add_edge('C', 'D', 1)
>>> g2.add_edge('C', 'E', 100) # Removed in MST
>>> g2.add_edge('D', 'E', 5)
>>> assert 'E' in g2.connections["C"]
>>> mst = g2.kruskal()
>>> assert 'E' not in mst.connections['C']
"""
# getting the edges in ascending order of weights
edges = []
seen = set()
for start in self.connections:
for end in self.connections[start]:
if (start, end) not in seen:
seen.add((end, start))
edges.append((start, end, self.connections[start][end]))
edges.sort(key=lambda x: x[2])
# creating the disjoint set
disjoint_set = DisjointSetTree[T]()
for node in self.connections:
disjoint_set.make_set(node)
# MST generation
num_edges = 0
index = 0
graph = GraphUndirectedWeighted[T]()
while num_edges < len(self.connections) - 1:
u, v, w = edges[index]
index += 1
parent_u = disjoint_set.find_set(u)
parent_v = disjoint_set.find_set(v)
if parent_u != parent_v:
num_edges += 1
graph.add_edge(u, v, w)
disjoint_set.union(u, v)
return graph
|
Update function if value of any node in minheap decreases adjacencylist 0: 1, 1, 3, 3, ... 1: 0, 1, 2, 6, 3, 5, 4, 1, ... 2: 1, 6, 4, 5, 5, 2, ... 3: 0, 3, 1, 5, 4, 1, ... 4: 1, 1, 2, 5, 3, 1, 5, 4, ... 5: 2, 2, 4, 4 prismsalgorithmadjacencylist 0, 1, 1, 4, 4, 3, 4, 5, 5, 2 Minimum Distance of explored vertex with neighboring vertex of partial tree formed in graph Prims Algorithm | import sys
from collections import defaultdict
class Heap:
def __init__(self):
self.node_position = []
def get_position(self, vertex):
return self.node_position[vertex]
def set_position(self, vertex, pos):
self.node_position[vertex] = pos
def top_to_bottom(self, heap, start, size, positions):
if start > size // 2 - 1:
return
else:
if 2 * start + 2 >= size:
smallest_child = 2 * start + 1
else:
if heap[2 * start + 1] < heap[2 * start + 2]:
smallest_child = 2 * start + 1
else:
smallest_child = 2 * start + 2
if heap[smallest_child] < heap[start]:
temp, temp1 = heap[smallest_child], positions[smallest_child]
heap[smallest_child], positions[smallest_child] = (
heap[start],
positions[start],
)
heap[start], positions[start] = temp, temp1
temp = self.get_position(positions[smallest_child])
self.set_position(
positions[smallest_child], self.get_position(positions[start])
)
self.set_position(positions[start], temp)
self.top_to_bottom(heap, smallest_child, size, positions)
# Update function if value of any node in min-heap decreases
def bottom_to_top(self, val, index, heap, position):
temp = position[index]
while index != 0:
parent = int((index - 2) / 2) if index % 2 == 0 else int((index - 1) / 2)
if val < heap[parent]:
heap[index] = heap[parent]
position[index] = position[parent]
self.set_position(position[parent], index)
else:
heap[index] = val
position[index] = temp
self.set_position(temp, index)
break
index = parent
else:
heap[0] = val
position[0] = temp
self.set_position(temp, 0)
def heapify(self, heap, positions):
start = len(heap) // 2 - 1
for i in range(start, -1, -1):
self.top_to_bottom(heap, i, len(heap), positions)
def delete_minimum(self, heap, positions):
temp = positions[0]
heap[0] = sys.maxsize
self.top_to_bottom(heap, 0, len(heap), positions)
return temp
def prisms_algorithm(adjacency_list):
"""
>>> adjacency_list = {0: [[1, 1], [3, 3]],
... 1: [[0, 1], [2, 6], [3, 5], [4, 1]],
... 2: [[1, 6], [4, 5], [5, 2]],
... 3: [[0, 3], [1, 5], [4, 1]],
... 4: [[1, 1], [2, 5], [3, 1], [5, 4]],
... 5: [[2, 2], [4, 4]]}
>>> prisms_algorithm(adjacency_list)
[(0, 1), (1, 4), (4, 3), (4, 5), (5, 2)]
"""
heap = Heap()
visited = [0] * len(adjacency_list)
nbr_tv = [-1] * len(adjacency_list) # Neighboring Tree Vertex of selected vertex
# Minimum Distance of explored vertex with neighboring vertex of partial tree
# formed in graph
distance_tv = [] # Heap of Distance of vertices from their neighboring vertex
positions = []
for vertex in range(len(adjacency_list)):
distance_tv.append(sys.maxsize)
positions.append(vertex)
heap.node_position.append(vertex)
tree_edges = []
visited[0] = 1
distance_tv[0] = sys.maxsize
for neighbor, distance in adjacency_list[0]:
nbr_tv[neighbor] = 0
distance_tv[neighbor] = distance
heap.heapify(distance_tv, positions)
for _ in range(1, len(adjacency_list)):
vertex = heap.delete_minimum(distance_tv, positions)
if visited[vertex] == 0:
tree_edges.append((nbr_tv[vertex], vertex))
visited[vertex] = 1
for neighbor, distance in adjacency_list[vertex]:
if (
visited[neighbor] == 0
and distance < distance_tv[heap.get_position(neighbor)]
):
distance_tv[heap.get_position(neighbor)] = distance
heap.bottom_to_top(
distance, heap.get_position(neighbor), distance_tv, positions
)
nbr_tv[neighbor] = vertex
return tree_edges
if __name__ == "__main__": # pragma: no cover
# < --------- Prims Algorithm --------- >
edges_number = int(input("Enter number of edges: ").strip())
adjacency_list = defaultdict(list)
for _ in range(edges_number):
edge = [int(x) for x in input().strip().split()]
adjacency_list[edge[0]].append([edge[1], edge[2]])
adjacency_list[edge[1]].append([edge[0], edge[2]])
print(prisms_algorithm(adjacency_list))
|
Prim's also known as Jarnk's algorithm is a greedy algorithm that finds a minimum spanning tree for a weighted undirected graph. This means it finds a subset of the edges that forms a tree that includes every vertex, where the total weight of all the edges in the tree is minimized. The algorithm operates by building this tree one vertex at a time, from an arbitrary starting vertex, at each step adding the cheapest possible connection from the tree to another vertex. heap helper function get the position of the parent of the current node getparentposition1 0 getparentposition2 0 heap helper function get the position of the left child of the current node getchildleftposition0 1 heap helper function get the position of the right child of the current node getchildrightposition0 2 Minimum Priority Queue Class Functions: isempty: function to check if the priority queue is empty push: function to add an element with given priority to the queue extractmin: function to remove and return the element with lowest weight highest priority updatekey: function to update the weight of the given key bubbleup: helper function to place a node at the proper position upward movement bubbledown: helper function to place a node at the proper position downward movement swapnodes: helper function to swap the nodes at the given positions queue MinPriorityQueue queue.push1, 1000 queue.push2, 100 queue.push3, 4000 queue.push4, 3000 queue.extractmin 2 queue.updatekey4, 50 queue.extractmin 4 queue.extractmin 1 queue.extractmin 3 Check if the priority queue is empty Add an element with given priority to the queue Remove and return the element with lowest weight highest priority Update the weight of the given key Place a node at the proper position upward movement to be used internally only Place a node at the proper position downward movement to be used internally only Swap the nodes at the given positions Graph Undirected Weighted Class Functions: addnode: function to add a node in the graph addedge: function to add an edge between 2 nodes in the graph Add a node in the graph if it is not in the graph Add an edge between 2 nodes in the graph graph GraphUndirectedWeighted graph.addedgea, b, 3 graph.addedgeb, c, 10 graph.addedgec, d, 5 graph.addedgea, c, 15 graph.addedgeb, d, 100 dist, parent primsalgograph absdista distb 3 absdistd distb 15 absdista distc 13 prim's algorithm for minimum spanning tree initialization running prim's algorithm | from __future__ import annotations
from sys import maxsize
from typing import Generic, TypeVar
T = TypeVar("T")
def get_parent_position(position: int) -> int:
"""
heap helper function get the position of the parent of the current node
>>> get_parent_position(1)
0
>>> get_parent_position(2)
0
"""
return (position - 1) // 2
def get_child_left_position(position: int) -> int:
"""
heap helper function get the position of the left child of the current node
>>> get_child_left_position(0)
1
"""
return (2 * position) + 1
def get_child_right_position(position: int) -> int:
"""
heap helper function get the position of the right child of the current node
>>> get_child_right_position(0)
2
"""
return (2 * position) + 2
class MinPriorityQueue(Generic[T]):
"""
Minimum Priority Queue Class
Functions:
is_empty: function to check if the priority queue is empty
push: function to add an element with given priority to the queue
extract_min: function to remove and return the element with lowest weight (highest
priority)
update_key: function to update the weight of the given key
_bubble_up: helper function to place a node at the proper position (upward
movement)
_bubble_down: helper function to place a node at the proper position (downward
movement)
_swap_nodes: helper function to swap the nodes at the given positions
>>> queue = MinPriorityQueue()
>>> queue.push(1, 1000)
>>> queue.push(2, 100)
>>> queue.push(3, 4000)
>>> queue.push(4, 3000)
>>> queue.extract_min()
2
>>> queue.update_key(4, 50)
>>> queue.extract_min()
4
>>> queue.extract_min()
1
>>> queue.extract_min()
3
"""
def __init__(self) -> None:
self.heap: list[tuple[T, int]] = []
self.position_map: dict[T, int] = {}
self.elements: int = 0
def __len__(self) -> int:
return self.elements
def __repr__(self) -> str:
return str(self.heap)
def is_empty(self) -> bool:
# Check if the priority queue is empty
return self.elements == 0
def push(self, elem: T, weight: int) -> None:
# Add an element with given priority to the queue
self.heap.append((elem, weight))
self.position_map[elem] = self.elements
self.elements += 1
self._bubble_up(elem)
def extract_min(self) -> T:
# Remove and return the element with lowest weight (highest priority)
if self.elements > 1:
self._swap_nodes(0, self.elements - 1)
elem, _ = self.heap.pop()
del self.position_map[elem]
self.elements -= 1
if self.elements > 0:
bubble_down_elem, _ = self.heap[0]
self._bubble_down(bubble_down_elem)
return elem
def update_key(self, elem: T, weight: int) -> None:
# Update the weight of the given key
position = self.position_map[elem]
self.heap[position] = (elem, weight)
if position > 0:
parent_position = get_parent_position(position)
_, parent_weight = self.heap[parent_position]
if parent_weight > weight:
self._bubble_up(elem)
else:
self._bubble_down(elem)
else:
self._bubble_down(elem)
def _bubble_up(self, elem: T) -> None:
# Place a node at the proper position (upward movement) [to be used internally
# only]
curr_pos = self.position_map[elem]
if curr_pos == 0:
return None
parent_position = get_parent_position(curr_pos)
_, weight = self.heap[curr_pos]
_, parent_weight = self.heap[parent_position]
if parent_weight > weight:
self._swap_nodes(parent_position, curr_pos)
return self._bubble_up(elem)
return None
def _bubble_down(self, elem: T) -> None:
# Place a node at the proper position (downward movement) [to be used
# internally only]
curr_pos = self.position_map[elem]
_, weight = self.heap[curr_pos]
child_left_position = get_child_left_position(curr_pos)
child_right_position = get_child_right_position(curr_pos)
if child_left_position < self.elements and child_right_position < self.elements:
_, child_left_weight = self.heap[child_left_position]
_, child_right_weight = self.heap[child_right_position]
if child_right_weight < child_left_weight and child_right_weight < weight:
self._swap_nodes(child_right_position, curr_pos)
return self._bubble_down(elem)
if child_left_position < self.elements:
_, child_left_weight = self.heap[child_left_position]
if child_left_weight < weight:
self._swap_nodes(child_left_position, curr_pos)
return self._bubble_down(elem)
else:
return None
if child_right_position < self.elements:
_, child_right_weight = self.heap[child_right_position]
if child_right_weight < weight:
self._swap_nodes(child_right_position, curr_pos)
return self._bubble_down(elem)
return None
def _swap_nodes(self, node1_pos: int, node2_pos: int) -> None:
# Swap the nodes at the given positions
node1_elem = self.heap[node1_pos][0]
node2_elem = self.heap[node2_pos][0]
self.heap[node1_pos], self.heap[node2_pos] = (
self.heap[node2_pos],
self.heap[node1_pos],
)
self.position_map[node1_elem] = node2_pos
self.position_map[node2_elem] = node1_pos
class GraphUndirectedWeighted(Generic[T]):
"""
Graph Undirected Weighted Class
Functions:
add_node: function to add a node in the graph
add_edge: function to add an edge between 2 nodes in the graph
"""
def __init__(self) -> None:
self.connections: dict[T, dict[T, int]] = {}
self.nodes: int = 0
def __repr__(self) -> str:
return str(self.connections)
def __len__(self) -> int:
return self.nodes
def add_node(self, node: T) -> None:
# Add a node in the graph if it is not in the graph
if node not in self.connections:
self.connections[node] = {}
self.nodes += 1
def add_edge(self, node1: T, node2: T, weight: int) -> None:
# Add an edge between 2 nodes in the graph
self.add_node(node1)
self.add_node(node2)
self.connections[node1][node2] = weight
self.connections[node2][node1] = weight
def prims_algo(
graph: GraphUndirectedWeighted[T],
) -> tuple[dict[T, int], dict[T, T | None]]:
"""
>>> graph = GraphUndirectedWeighted()
>>> graph.add_edge("a", "b", 3)
>>> graph.add_edge("b", "c", 10)
>>> graph.add_edge("c", "d", 5)
>>> graph.add_edge("a", "c", 15)
>>> graph.add_edge("b", "d", 100)
>>> dist, parent = prims_algo(graph)
>>> abs(dist["a"] - dist["b"])
3
>>> abs(dist["d"] - dist["b"])
15
>>> abs(dist["a"] - dist["c"])
13
"""
# prim's algorithm for minimum spanning tree
dist: dict[T, int] = {node: maxsize for node in graph.connections}
parent: dict[T, T | None] = {node: None for node in graph.connections}
priority_queue: MinPriorityQueue[T] = MinPriorityQueue()
for node, weight in dist.items():
priority_queue.push(node, weight)
if priority_queue.is_empty():
return dist, parent
# initialization
node = priority_queue.extract_min()
dist[node] = 0
for neighbour in graph.connections[node]:
if dist[neighbour] > dist[node] + graph.connections[node][neighbour]:
dist[neighbour] = dist[node] + graph.connections[node][neighbour]
priority_queue.update_key(neighbour, dist[neighbour])
parent[neighbour] = node
# running prim's algorithm
while not priority_queue.is_empty():
node = priority_queue.extract_min()
for neighbour in graph.connections[node]:
if dist[neighbour] > dist[node] + graph.connections[node][neighbour]:
dist[neighbour] = dist[node] + graph.connections[node][neighbour]
priority_queue.update_key(neighbour, dist[neighbour])
parent[neighbour] = node
return dist, parent
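# A small helper sketched as an illustration (the name _mst_edges_from_parent is our
# own, not part of the original module): the parent map returned by prims_algo
# already encodes the MST, one edge per non-root vertex.
def _mst_edges_from_parent(parent: dict) -> list[tuple]:
    """
    >>> graph = GraphUndirectedWeighted()
    >>> graph.add_edge("a", "b", 3)
    >>> graph.add_edge("b", "c", 10)
    >>> graph.add_edge("c", "d", 5)
    >>> _, parent = prims_algo(graph)
    >>> sorted(_mst_edges_from_parent(parent))
    [('a', 'b'), ('b', 'c'), ('c', 'd')]
    """
    return [
        (mst_parent, child)
        for child, mst_parent in parent.items()
        if mst_parent is not None
    ]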
|
update printupdate, item euclidean distance integer division by time variable manhattan distance printx prints, s printj, j printneighbour, neighbours L block hyper parameters start and end destination printopenlist0.minkey, openlisti.minkey | import heapq
import sys
import numpy as np
TPos = tuple[int, int]
class PriorityQueue:
def __init__(self):
self.elements = []
self.set = set()
def minkey(self):
if not self.empty():
return self.elements[0][0]
else:
return float("inf")
def empty(self):
return len(self.elements) == 0
def put(self, item, priority):
if item not in self.set:
heapq.heappush(self.elements, (priority, item))
self.set.add(item)
else:
# update
# print("update", item)
temp = []
(pri, x) = heapq.heappop(self.elements)
while x != item:
temp.append((pri, x))
(pri, x) = heapq.heappop(self.elements)
temp.append((priority, item))
for pro, xxx in temp:
heapq.heappush(self.elements, (pro, xxx))
def remove_element(self, item):
if item in self.set:
self.set.remove(item)
temp = []
(pro, x) = heapq.heappop(self.elements)
while x != item:
temp.append((pro, x))
(pro, x) = heapq.heappop(self.elements)
for prito, yyy in temp:
heapq.heappush(self.elements, (prito, yyy))
def top_show(self):
return self.elements[0][1]
def get(self):
(priority, item) = heapq.heappop(self.elements)
self.set.remove(item)
return (priority, item)
def consistent_heuristic(p: TPos, goal: TPos):
# euclidean distance
a = np.array(p)
b = np.array(goal)
return np.linalg.norm(a - b)
def heuristic_2(p: TPos, goal: TPos):
# integer division by time variable
return consistent_heuristic(p, goal) // t
def heuristic_1(p: TPos, goal: TPos):
# manhattan distance
return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
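# Quick illustrative check (our own addition, not part of the original script): the
# two stateless heuristics evaluated on the classic 3-4-5 example.
def _heuristic_demo() -> None:
    """
    >>> heuristic_1((0, 0), (3, 4))                     # Manhattan distance
    7
    >>> float(consistent_heuristic((0, 0), (3, 4)))     # Euclidean distance
    5.0
    """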
def key(start: TPos, i: int, goal: TPos, g_function: dict[TPos, float]):
ans = g_function[start] + W1 * heuristics[i](start, goal)
return ans
def do_something(back_pointer, goal, start):
grid = np.chararray((n, n))
for i in range(n):
for j in range(n):
grid[i][j] = "*"
for i in range(n):
for j in range(n):
if (j, (n - 1) - i) in blocks:
grid[i][j] = "#"
grid[0][(n - 1)] = "-"
x = back_pointer[goal]
while x != start:
(x_c, y_c) = x
# print(x)
grid[(n - 1) - y_c][x_c] = "-"
x = back_pointer[x]
grid[(n - 1)][0] = "-"
for i in range(n):
for j in range(n):
if (i, j) == (0, n - 1):
print(grid[i][j], end=" ")
print("<-- End position", end=" ")
else:
print(grid[i][j], end=" ")
print()
print("^")
print("Start position")
print()
print("# is an obstacle")
print("- is the path taken by algorithm")
print("PATH TAKEN BY THE ALGORITHM IS:-")
x = back_pointer[goal]
while x != start:
print(x, end=" ")
x = back_pointer[x]
print(x)
sys.exit()
def valid(p: TPos):
if p[0] < 0 or p[0] > n - 1:
return False
if p[1] < 0 or p[1] > n - 1:
return False
return True
def expand_state(
s,
j,
visited,
g_function,
close_list_anchor,
close_list_inad,
open_list,
back_pointer,
):
for itera in range(n_heuristic):
open_list[itera].remove_element(s)
# print("s", s)
# print("j", j)
(x, y) = s
left = (x - 1, y)
right = (x + 1, y)
up = (x, y + 1)
down = (x, y - 1)
for neighbours in [left, right, up, down]:
if neighbours not in blocks:
if valid(neighbours) and neighbours not in visited:
# print("neighbour", neighbours)
visited.add(neighbours)
back_pointer[neighbours] = -1
g_function[neighbours] = float("inf")
if valid(neighbours) and g_function[neighbours] > g_function[s] + 1:
g_function[neighbours] = g_function[s] + 1
back_pointer[neighbours] = s
if neighbours not in close_list_anchor:
open_list[0].put(neighbours, key(neighbours, 0, goal, g_function))
if neighbours not in close_list_inad:
for var in range(1, n_heuristic):
if key(neighbours, var, goal, g_function) <= W2 * key(
neighbours, 0, goal, g_function
):
open_list[j].put(
neighbours, key(neighbours, var, goal, g_function)
)
def make_common_ground():
some_list = []
for x in range(1, 5):
for y in range(1, 6):
some_list.append((x, y))
for x in range(15, 20):
some_list.append((x, 17))
for x in range(10, 19):
for y in range(1, 15):
some_list.append((x, y))
# L block
for x in range(1, 4):
for y in range(12, 19):
some_list.append((x, y))
for x in range(3, 13):
for y in range(16, 19):
some_list.append((x, y))
return some_list
heuristics = {0: consistent_heuristic, 1: heuristic_1, 2: heuristic_2}
blocks_blk = [
(0, 1),
(1, 1),
(2, 1),
(3, 1),
(4, 1),
(5, 1),
(6, 1),
(7, 1),
(8, 1),
(9, 1),
(10, 1),
(11, 1),
(12, 1),
(13, 1),
(14, 1),
(15, 1),
(16, 1),
(17, 1),
(18, 1),
(19, 1),
]
blocks_all = make_common_ground()
blocks = blocks_blk
# hyper parameters
W1 = 1
W2 = 1
n = 20
n_heuristic = 3 # one consistent and two other inconsistent
# start and end destination
start = (0, 0)
goal = (n - 1, n - 1)
t = 1
def multi_a_star(start: TPos, goal: TPos, n_heuristic: int):
g_function = {start: 0, goal: float("inf")}
back_pointer = {start: -1, goal: -1}
open_list = []
visited = set()
for i in range(n_heuristic):
open_list.append(PriorityQueue())
open_list[i].put(start, key(start, i, goal, g_function))
    close_list_anchor: list[TPos] = []
    close_list_inad: list[TPos] = []
while open_list[0].minkey() < float("inf"):
for i in range(1, n_heuristic):
# print(open_list[0].minkey(), open_list[i].minkey())
if open_list[i].minkey() <= W2 * open_list[0].minkey():
global t
t += 1
if g_function[goal] <= open_list[i].minkey():
if g_function[goal] < float("inf"):
do_something(back_pointer, goal, start)
else:
                    get_s = open_list[i].top_show()
visited.add(get_s)
expand_state(
get_s,
i,
visited,
g_function,
close_list_anchor,
close_list_inad,
open_list,
back_pointer,
)
close_list_inad.append(get_s)
else:
if g_function[goal] <= open_list[0].minkey():
if g_function[goal] < float("inf"):
do_something(back_pointer, goal, start)
else:
get_s = open_list[0].top_show()
visited.add(get_s)
expand_state(
get_s,
0,
visited,
g_function,
close_list_anchor,
close_list_inad,
open_list,
back_pointer,
)
close_list_anchor.append(get_s)
print("No path found to goal")
print()
for i in range(n - 1, -1, -1):
for j in range(n):
if (j, i) in blocks:
print("#", end=" ")
elif (j, i) in back_pointer:
if (j, i) == (n - 1, n - 1):
print("*", end=" ")
else:
print("-", end=" ")
else:
print("*", end=" ")
if (j, i) == (n - 1, n - 1):
print("<-- End position", end=" ")
print()
print("^")
print("Start position")
print()
print("# is an obstacle")
print("- is the path taken by algorithm")
if __name__ == "__main__":
multi_a_star(start, goal, n_heuristic)
|
Author: https:github.combhushanborole The input graph for the algorithm is: A B C A 0 1 1 B 0 0 1 C 1 0 0 | """
The input graph for the algorithm is:
A B C
A 0 1 1
B 0 0 1
C 1 0 0
"""
graph = [[0, 1, 1], [0, 0, 1], [1, 0, 0]]
class Node:
def __init__(self, name):
self.name = name
self.inbound = []
self.outbound = []
def add_inbound(self, node):
self.inbound.append(node)
def add_outbound(self, node):
self.outbound.append(node)
def __repr__(self):
return f"<node={self.name} inbound={self.inbound} outbound={self.outbound}>"
def page_rank(nodes, limit=3, d=0.85):
ranks = {}
for node in nodes:
ranks[node.name] = 1
outbounds = {}
for node in nodes:
outbounds[node.name] = len(node.outbound)
for i in range(limit):
print(f"======= Iteration {i + 1} =======")
for _, node in enumerate(nodes):
ranks[node.name] = (1 - d) + d * sum(
ranks[ib] / outbounds[ib] for ib in node.inbound
)
print(ranks)
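# Worked example (added remark, our own arithmetic): the update above is
#   PR(u) = (1 - d) + d * sum(PR(v) / outdegree(v) for each v linking to u)
# and ranks are overwritten in place, so later nodes in the same iteration already
# see updated values. Reading the hard-coded matrix as A->B, A->C, B->C, C->A, with
# d = 0.85 and all ranks starting at 1, the first iteration gives
#   PR(A) = 0.15 + 0.85 * (1 / 1)           = 1.0
#   PR(B) = 0.15 + 0.85 * (1.0 / 2)         = 0.575
#   PR(C) = 0.15 + 0.85 * (1.0 / 2 + 0.575) = 1.06375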
def main():
names = list(input("Enter Names of the Nodes: ").split())
nodes = [Node(name) for name in names]
for ri, row in enumerate(graph):
for ci, col in enumerate(row):
if col == 1:
nodes[ci].add_inbound(names[ri])
nodes[ri].add_outbound(names[ci])
print("======= Nodes =======")
for node in nodes:
print(node)
page_rank(nodes)
if __name__ == "__main__":
main()
|
Prim's Algorithm. Determines the minimum spanning treeMST of a graph using the Prim's Algorithm. Details: https:en.wikipedia.orgwikiPrim27salgorithm Class Vertex. def initself, id: self.id strid self.key None self.pi None self.neighbors self.edges vertex:distance def ltself, other: Return the vertex id. return self.id def addneighborself, vertex: Destination vertex and weight. self.edgesvertex.id weight def connectgraph, a, b, edge: add the neighbors: grapha 1.addneighborgraphb 1 graphb 1.addneighborgrapha 1 add the edges: grapha 1.addedgegraphb 1, edge graphb 1.addedgegrapha 1, edge def primgraph: list, root: Vertex list: a for u in graph: u.key math.inf u.pi None root.key 0 q graph: while q: u minq q.removeu for v in u.neighbors: if v in q and u.edgesv.id v.key: v.pi u v.key u.edgesv.id for i in range1, lengraph: a.appendintgraphi.id 1, intgraphi.pi.id 1 return a def primheapgraph: list, root: Vertex Iteratortuple: for u in graph: u.key math.inf u.pi None root.key 0 h listgraph hq.heapifyh while h: u hq.heappoph for v in u.neighbors: if v in h and u.edgesv.id v.key: v.pi u v.key u.edgesv.id hq.heapifyh for i in range1, lengraph: yield intgraphi.id 1, intgraphi.pi.id 1 def testvector None: Creates a list to store x vertices. if name main: import doctest doctest.testmod | import heapq as hq
import math
from collections.abc import Iterator
class Vertex:
"""Class Vertex."""
def __init__(self, id_):
"""
Arguments:
id - input an id to identify the vertex
Attributes:
neighbors - a list of the vertices it is linked to
        edges - a dict that maps a neighbor's id to the weight of the connecting edge
"""
self.id = str(id_)
self.key = None
self.pi = None
self.neighbors = []
self.edges = {} # {vertex:distance}
def __lt__(self, other):
"""Comparison rule to < operator."""
return self.key < other.key
def __repr__(self):
"""Return the vertex id."""
return self.id
def add_neighbor(self, vertex):
"""Add a pointer to a vertex at neighbor's list."""
self.neighbors.append(vertex)
def add_edge(self, vertex, weight):
"""Destination vertex and weight."""
self.edges[vertex.id] = weight
def connect(graph, a, b, edge):
# add the neighbors:
graph[a - 1].add_neighbor(graph[b - 1])
graph[b - 1].add_neighbor(graph[a - 1])
# add the edges:
graph[a - 1].add_edge(graph[b - 1], edge)
graph[b - 1].add_edge(graph[a - 1], edge)
def prim(graph: list, root: Vertex) -> list:
"""Prim's Algorithm.
Runtime:
O(mn) with `m` edges and `n` vertices
Return:
List with the edges of a Minimum Spanning Tree
Usage:
prim(graph, graph[0])
"""
a = []
for u in graph:
u.key = math.inf
u.pi = None
root.key = 0
q = graph[:]
while q:
u = min(q)
q.remove(u)
for v in u.neighbors:
if (v in q) and (u.edges[v.id] < v.key):
v.pi = u
v.key = u.edges[v.id]
for i in range(1, len(graph)):
a.append((int(graph[i].id) + 1, int(graph[i].pi.id) + 1))
return a
def prim_heap(graph: list, root: Vertex) -> Iterator[tuple]:
"""Prim's Algorithm with min heap.
Runtime:
O((m + n)log n) with `m` edges and `n` vertices
Yield:
Edges of a Minimum Spanning Tree
Usage:
prim(graph, graph[0])
"""
for u in graph:
u.key = math.inf
u.pi = None
root.key = 0
h = list(graph)
hq.heapify(h)
while h:
u = hq.heappop(h)
for v in u.neighbors:
if (v in h) and (u.edges[v.id] < v.key):
v.pi = u
v.key = u.edges[v.id]
hq.heapify(h)
for i in range(1, len(graph)):
yield (int(graph[i].id) + 1, int(graph[i].pi.id) + 1)
def test_vector() -> None:
"""
# Creates a list to store x vertices.
>>> x = 5
>>> G = [Vertex(n) for n in range(x)]
>>> connect(G, 1, 2, 15)
>>> connect(G, 1, 3, 12)
>>> connect(G, 2, 4, 13)
>>> connect(G, 2, 5, 5)
>>> connect(G, 3, 2, 6)
>>> connect(G, 3, 4, 6)
>>> connect(G, 0, 0, 0) # Generate the minimum spanning tree:
>>> G_heap = G[:]
>>> MST = prim(G, G[0])
>>> MST_heap = prim_heap(G, G[0])
>>> for i in MST:
... print(i)
(2, 3)
(3, 1)
(4, 3)
(5, 2)
>>> for i in MST_heap:
... print(i)
(2, 3)
(3, 1)
(4, 3)
(5, 2)
"""
if __name__ == "__main__":
import doctest
doctest.testmod()
|
Author: Manuel Di Lullo https:github.commanueldilullo Description: Random graphs generator. Uses graphs represented with an adjacency list. URL: https:en.wikipedia.orgwikiRandomgraph Generate a random graph input: verticesnumber number of vertices, probability probability that a generic edge u,v exists, directed if True: graph will be a directed graph, otherwise it will be an undirected graph examples: random.seed1 randomgraph4, 0.5 0: 1, 1: 0, 2, 3, 2: 1, 3, 3: 1, 2 random.seed1 randomgraph4, 0.5, True 0: 1, 1: 2, 3, 2: 3, 3: if probability is greater or equal than 1, then generate a complete graph if probability is lower or equal than 0, then return a graph without edges for each couple of nodes, add an edge from u to v if the number randomly generated is greater than probability probability if the graph is undirected, add an edge in from j to i, either Generate a complete graph with verticesnumber vertices. input: verticesnumber number of vertices, directed False if the graph is undirected, True otherwise example: completegraph3 0: 1, 2, 1: 0, 2, 2: 0, 1 | import random
def random_graph(
vertices_number: int, probability: float, directed: bool = False
) -> dict:
"""
Generate a random graph
@input: vertices_number (number of vertices),
probability (probability that a generic edge (u,v) exists),
directed (if True: graph will be a directed graph,
otherwise it will be an undirected graph)
@examples:
>>> random.seed(1)
>>> random_graph(4, 0.5)
{0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2]}
>>> random.seed(1)
>>> random_graph(4, 0.5, True)
{0: [1], 1: [2, 3], 2: [3], 3: []}
"""
graph: dict = {i: [] for i in range(vertices_number)}
# if probability is greater or equal than 1, then generate a complete graph
if probability >= 1:
return complete_graph(vertices_number)
# if probability is lower or equal than 0, then return a graph without edges
if probability <= 0:
return graph
    # for each pair of nodes (i, j) with i < j, add the edge i -> j
    # if the randomly generated number is below the given probability
for i in range(vertices_number):
for j in range(i + 1, vertices_number):
if random.random() < probability:
graph[i].append(j)
if not directed:
                    # if the graph is undirected, also add the edge from j to i
graph[j].append(i)
return graph
def complete_graph(vertices_number: int) -> dict:
"""
Generate a complete graph with vertices_number vertices.
    @input: vertices_number (number of vertices)
@example:
>>> complete_graph(3)
{0: [1, 2], 1: [0, 2], 2: [0, 1]}
"""
return {
i: [j for j in range(vertices_number) if i != j] for i in range(vertices_number)
}
if __name__ == "__main__":
import doctest
doctest.testmod()
|
n no of nodes, m no of edges input graph data edges | from __future__ import annotations
def dfs(u):
global graph, reversed_graph, scc, component, visit, stack
if visit[u]:
return
visit[u] = True
for v in graph[u]:
dfs(v)
stack.append(u)
def dfs2(u):
global graph, reversed_graph, scc, component, visit, stack
if visit[u]:
return
visit[u] = True
component.append(u)
for v in reversed_graph[u]:
dfs2(v)
def kosaraju():
global graph, reversed_graph, scc, component, visit, stack
for i in range(n):
dfs(i)
visit = [False] * n
for i in stack[::-1]:
if visit[i]:
continue
component = []
dfs2(i)
scc.append(component)
return scc
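# Worked example (our own illustration, not part of the original script): for the
# input
#   4 4
#   0 1
#   1 2
#   2 0
#   3 2
# the first DFS pass leaves the stack [2, 1, 0, 3]; processing it in reverse on the
# reversed graph yields [[3], [0, 2, 1]], i.e. vertex 3 is its own component and the
# cycle 0 -> 1 -> 2 -> 0 forms a single strongly connected component.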
if __name__ == "__main__":
# n - no of nodes, m - no of edges
n, m = list(map(int, input().strip().split()))
graph: list[list[int]] = [[] for _ in range(n)] # graph
reversed_graph: list[list[int]] = [[] for i in range(n)] # reversed graph
# input graph data (edges)
for _ in range(m):
u, v = list(map(int, input().strip().split()))
graph[u].append(v)
reversed_graph[v].append(u)
stack: list[int] = []
visit: list[bool] = [False] * n
    scc: list[list[int]] = []
component: list[int] = []
print(kosaraju())
|
https:en.wikipedia.orgwikiStronglyconnectedcomponent Finding strongly connected components in directed graph Use depth first search to sort graph At this time graph is the same as input topologysorttestgraph1, 0, 5 False 1, 2, 4, 3, 0 topologysorttestgraph2, 0, 6 False 2, 1, 5, 4, 3, 0 Use depth first search to find strongliy connected vertices. Now graph is reversed findcomponents0: 1, 1: 2, 2: 0, 0, 5 False 0, 1, 2 findcomponents0: 2, 1: 0, 2: 0, 1, 0, 6 False 0, 2, 1 This function takes graph as a parameter and then returns the list of strongly connected components stronglyconnectedcomponentstestgraph1 0, 1, 2, 3, 4 stronglyconnectedcomponentstestgraph2 0, 2, 1, 3, 5, 4 | test_graph_1 = {0: [2, 3], 1: [0], 2: [1], 3: [4], 4: []}
test_graph_2 = {0: [1, 2, 3], 1: [2], 2: [0], 3: [4], 4: [5], 5: [3]}
def topology_sort(
graph: dict[int, list[int]], vert: int, visited: list[bool]
) -> list[int]:
"""
Use depth first search to sort graph
At this time graph is the same as input
>>> topology_sort(test_graph_1, 0, 5 * [False])
[1, 2, 4, 3, 0]
>>> topology_sort(test_graph_2, 0, 6 * [False])
[2, 1, 5, 4, 3, 0]
"""
visited[vert] = True
order = []
for neighbour in graph[vert]:
if not visited[neighbour]:
order += topology_sort(graph, neighbour, visited)
order.append(vert)
return order
def find_components(
reversed_graph: dict[int, list[int]], vert: int, visited: list[bool]
) -> list[int]:
"""
    Use depth first search to find strongly connected
    vertices. Now the graph is reversed
>>> find_components({0: [1], 1: [2], 2: [0]}, 0, 5 * [False])
[0, 1, 2]
>>> find_components({0: [2], 1: [0], 2: [0, 1]}, 0, 6 * [False])
[0, 2, 1]
"""
visited[vert] = True
component = [vert]
for neighbour in reversed_graph[vert]:
if not visited[neighbour]:
component += find_components(reversed_graph, neighbour, visited)
return component
def strongly_connected_components(graph: dict[int, list[int]]) -> list[list[int]]:
"""
This function takes graph as a parameter
and then returns the list of strongly connected components
>>> strongly_connected_components(test_graph_1)
[[0, 1, 2], [3], [4]]
>>> strongly_connected_components(test_graph_2)
[[0, 2, 1], [3, 5, 4]]
"""
visited = len(graph) * [False]
reversed_graph: dict[int, list[int]] = {vert: [] for vert in range(len(graph))}
for vert, neighbours in graph.items():
for neighbour in neighbours:
reversed_graph[neighbour].append(vert)
order = []
for i, was_visited in enumerate(visited):
if not was_visited:
order += topology_sort(graph, i, visited)
components_list = []
visited = len(graph) * [False]
for i in range(len(graph)):
vert = order[len(graph) - i - 1]
if not visited[vert]:
component = find_components(reversed_graph, vert, visited)
components_list.append(component)
return components_list
|
Tarjan's algo for finding strongly connected components in a directed graph Uses two main attributes of each node to track reachability, the index of that node within a componentindex, and the lowest index reachable from that nodelowlink. We then perform a dfs of the each component making sure to update these parameters for each node and saving the nodes we visit on the way. If ever we find that the lowest reachable node from a current node is equal to the index of the current node then it must be the root of a strongly connected component and so we save it and it's equireachable vertices as a strongly connected component. Complexity: strongconnect is called at most once for each node and has a complexity of OE as it is DFS. Therefore this has complexity OV E for a graph G V, E tarjan2, 3, 4, 2, 3, 4, 0, 1, 3, 0, 1, 2, 1 4, 3, 1, 2, 0 tarjan, , , 0, 1, 2, 3 a 0, 1, 2, 3, 4, 5, 4 b 1, 0, 3, 2, 5, 4, 0 n 7 sortedtarjancreategraphn, listzipa, b sorted ... tarjancreategraphn, listzipa::1, b::1 True a 0, 1, 2, 3, 4, 5, 6 b 0, 1, 2, 3, 4, 5, 6 sortedtarjancreategraphn, listzipa, b 0, 1, 2, 3, 4, 5, 6 n 7 source 0, 0, 1, 2, 3, 3, 4, 4, 6 target 1, 3, 2, 0, 1, 4, 5, 6, 5 edges listzipsource, target creategraphn, edges 1, 3, 2, 0, 1, 4, 5, 6, , 5 Test | from collections import deque
def tarjan(g: list[list[int]]) -> list[list[int]]:
"""
Tarjan's algo for finding strongly connected components in a directed graph
Uses two main attributes of each node to track reachability, the index of that node
within a component(index), and the lowest index reachable from that node(lowlink).
    We then perform a DFS of each component, making sure to update these parameters
for each node and saving the nodes we visit on the way.
If ever we find that the lowest reachable node from a current node is equal to the
index of the current node then it must be the root of a strongly connected
    component, so we save it and its equireachable vertices as a strongly
connected component.
Complexity: strong_connect() is called at most once for each node and has a
complexity of O(|E|) as it is DFS.
Therefore this has complexity O(|V| + |E|) for a graph G = (V, E)
>>> tarjan([[2, 3, 4], [2, 3, 4], [0, 1, 3], [0, 1, 2], [1]])
[[4, 3, 1, 2, 0]]
>>> tarjan([[], [], [], []])
[[0], [1], [2], [3]]
>>> a = [0, 1, 2, 3, 4, 5, 4]
>>> b = [1, 0, 3, 2, 5, 4, 0]
>>> n = 7
>>> sorted(tarjan(create_graph(n, list(zip(a, b))))) == sorted(
... tarjan(create_graph(n, list(zip(a[::-1], b[::-1])))))
True
>>> a = [0, 1, 2, 3, 4, 5, 6]
>>> b = [0, 1, 2, 3, 4, 5, 6]
>>> sorted(tarjan(create_graph(n, list(zip(a, b)))))
[[0], [1], [2], [3], [4], [5], [6]]
"""
n = len(g)
stack: deque[int] = deque()
on_stack = [False for _ in range(n)]
index_of = [-1 for _ in range(n)]
lowlink_of = index_of[:]
def strong_connect(v: int, index: int, components: list[list[int]]) -> int:
index_of[v] = index # the number when this node is seen
lowlink_of[v] = index # lowest rank node reachable from here
index += 1
stack.append(v)
on_stack[v] = True
for w in g[v]:
if index_of[w] == -1:
index = strong_connect(w, index, components)
lowlink_of[v] = (
lowlink_of[w] if lowlink_of[w] < lowlink_of[v] else lowlink_of[v]
)
elif on_stack[w]:
lowlink_of[v] = (
lowlink_of[w] if lowlink_of[w] < lowlink_of[v] else lowlink_of[v]
)
if lowlink_of[v] == index_of[v]:
component = []
w = stack.pop()
on_stack[w] = False
component.append(w)
while w != v:
w = stack.pop()
on_stack[w] = False
component.append(w)
components.append(component)
return index
components: list[list[int]] = []
for v in range(n):
if index_of[v] == -1:
strong_connect(v, 0, components)
return components
def create_graph(n: int, edges: list[tuple[int, int]]) -> list[list[int]]:
"""
>>> n = 7
>>> source = [0, 0, 1, 2, 3, 3, 4, 4, 6]
>>> target = [1, 3, 2, 0, 1, 4, 5, 6, 5]
>>> edges = list(zip(source, target))
>>> create_graph(n, edges)
[[1, 3], [2], [0], [1, 4], [5, 6], [], [5]]
"""
g: list[list[int]] = [[] for _ in range(n)]
for u, v in edges:
g[u].append(v)
return g
if __name__ == "__main__":
# Test
n_vertices = 7
source = [0, 0, 1, 2, 3, 3, 4, 4, 6]
target = [1, 3, 2, 0, 1, 4, 5, 6, 5]
edges = list(zip(source, target))
g = create_graph(n_vertices, edges)
assert [[5], [6], [4], [3, 2, 1, 0]] == tarjan(g)
|
Given a list of stock prices calculate the maximum profit that can be made from a single buy and sell of one share of stock. We only allowed to complete one buy transaction and one sell transaction but must buy before we sell. Example : prices 7, 1, 5, 3, 6, 4 maxprofit will return 5 which is by buying at price 1 and selling at price 6. This problem can be solved using the concept of GREEDY ALGORITHM. We iterate over the price array once, keeping track of the lowest price point buy and the maximum profit we can get at each point. The greedy choice at each point is to either buy at the current price if it's less than our current buying price, or sell at the current price if the profit is more than our current maximum profit. maxprofit7, 1, 5, 3, 6, 4 5 maxprofit7, 6, 4, 3, 1 0 | def max_profit(prices: list[int]) -> int:
"""
>>> max_profit([7, 1, 5, 3, 6, 4])
5
>>> max_profit([7, 6, 4, 3, 1])
0
"""
if not prices:
return 0
min_price = prices[0]
max_profit: int = 0
for price in prices:
min_price = min(price, min_price)
max_profit = max(price - min_price, max_profit)
return max_profit
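# Worked trace (added remark): for prices [7, 1, 5, 3, 6, 4] the (min_price,
# max_profit) pair evolves as (7, 0), (1, 0), (1, 4), (1, 4), (1, 5), (1, 5),
# so the greedy single pass ends with 5: buy at 1, sell at 6.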
if __name__ == "__main__":
import doctest
doctest.testmod()
print(max_profit([7, 1, 5, 3, 6, 4]))
|
https:en.wikipedia.orgwikiSetcoverproblem Return the valuetoweight ratio for the item. Returns: float: The valuetoweight ratio for the item. Examples: Item10, 65.ratio 6.5 Item20, 100.ratio 5.0 Item30, 120.ratio 4.0 Solve the Fractional Cover Problem. Args: items: A list of items, where each item has weight and value attributes. capacity: The maximum weight capacity of the knapsack. Returns: The maximum value that can be obtained by selecting fractions of items to cover the knapsack's capacity. Raises: ValueError: If capacity is negative. Examples: fractionalcoverItem10, 60, Item20, 100, Item30, 120, capacity50 240.0 fractionalcoverItem20, 100, Item30, 120, Item10, 60, capacity25 135.0 fractionalcoverItem10, 60, Item20, 100, Item30, 120, capacity60 280.0 fractionalcoveritemsItem5, 30, Item10, 60, Item15, 90, capacity30 180.0 fractionalcoveritems, capacity50 0.0 fractionalcoveritemsItem10, 60, capacity5 30.0 fractionalcoveritemsItem10, 60, capacity1 6.0 fractionalcoveritemsItem10, 60, capacity0 0.0 fractionalcoveritemsItem10, 60, capacity1 Traceback most recent call last: ... ValueError: Capacity cannot be negative Sort the items by their valuetoweight ratio in descending order | # https://en.wikipedia.org/wiki/Set_cover_problem
from dataclasses import dataclass
from operator import attrgetter
@dataclass
class Item:
weight: int
value: int
@property
def ratio(self) -> float:
"""
Return the value-to-weight ratio for the item.
Returns:
float: The value-to-weight ratio for the item.
Examples:
>>> Item(10, 65).ratio
6.5
>>> Item(20, 100).ratio
5.0
>>> Item(30, 120).ratio
4.0
"""
return self.value / self.weight
def fractional_cover(items: list[Item], capacity: int) -> float:
"""
Solve the Fractional Cover Problem.
Args:
items: A list of items, where each item has weight and value attributes.
capacity: The maximum weight capacity of the knapsack.
Returns:
The maximum value that can be obtained by selecting fractions of items to cover
the knapsack's capacity.
Raises:
ValueError: If capacity is negative.
Examples:
>>> fractional_cover((Item(10, 60), Item(20, 100), Item(30, 120)), capacity=50)
240.0
>>> fractional_cover([Item(20, 100), Item(30, 120), Item(10, 60)], capacity=25)
135.0
>>> fractional_cover([Item(10, 60), Item(20, 100), Item(30, 120)], capacity=60)
280.0
>>> fractional_cover(items=[Item(5, 30), Item(10, 60), Item(15, 90)], capacity=30)
180.0
>>> fractional_cover(items=[], capacity=50)
0.0
>>> fractional_cover(items=[Item(10, 60)], capacity=5)
30.0
>>> fractional_cover(items=[Item(10, 60)], capacity=1)
6.0
>>> fractional_cover(items=[Item(10, 60)], capacity=0)
0.0
>>> fractional_cover(items=[Item(10, 60)], capacity=-1)
Traceback (most recent call last):
...
ValueError: Capacity cannot be negative
"""
if capacity < 0:
raise ValueError("Capacity cannot be negative")
total_value = 0.0
remaining_capacity = capacity
# Sort the items by their value-to-weight ratio in descending order
for item in sorted(items, key=attrgetter("ratio"), reverse=True):
if remaining_capacity == 0:
break
weight_taken = min(item.weight, remaining_capacity)
total_value += weight_taken * item.ratio
remaining_capacity -= weight_taken
return total_value
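# Worked example (added remark): for items (10, 60), (20, 100), (30, 120) and
# capacity 50, the value-to-weight ratios are 6.0, 5.0 and 4.0, so the greedy pass
# takes the first two items whole (weight 30, value 160) and 20/30 of the third
# (value 80), giving the 240.0 of the first doctest.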
if __name__ == "__main__":
import doctest
if result := doctest.testmod().failed:
print(f"{result} test(s) failed")
else:
print("All tests passed")
|
fracknapsack60, 100, 120, 10, 20, 30, 50, 3 240.0 fracknapsack10, 40, 30, 50, 5, 4, 6, 3, 10, 4 105.0 fracknapsack10, 40, 30, 50, 5, 4, 6, 3, 8, 4 95.0 fracknapsack10, 40, 30, 50, 5, 4, 6, 8, 4 60.0 fracknapsack10, 40, 30, 5, 4, 6, 3, 8, 4 60.0 fracknapsack10, 40, 30, 50, 5, 4, 6, 3, 0, 4 0 fracknapsack10, 40, 30, 50, 5, 4, 6, 3, 8, 0 95.0 fracknapsack10, 40, 30, 50, 5, 4, 6, 3, 8, 4 0 fracknapsack10, 40, 30, 50, 5, 4, 6, 3, 8, 4 95.0 fracknapsack10, 40, 30, 50, 5, 4, 6, 3, 800, 4 130 fracknapsack10, 40, 30, 50, 5, 4, 6, 3, 8, 400 95.0 fracknapsackABCD, 5, 4, 6, 3, 8, 400 Traceback most recent call last: ... TypeError: unsupported operand types for : 'str' and 'int' | from bisect import bisect
from itertools import accumulate
def frac_knapsack(vl, wt, w, n):
"""
>>> frac_knapsack([60, 100, 120], [10, 20, 30], 50, 3)
240.0
>>> frac_knapsack([10, 40, 30, 50], [5, 4, 6, 3], 10, 4)
105.0
>>> frac_knapsack([10, 40, 30, 50], [5, 4, 6, 3], 8, 4)
95.0
>>> frac_knapsack([10, 40, 30, 50], [5, 4, 6], 8, 4)
60.0
>>> frac_knapsack([10, 40, 30], [5, 4, 6, 3], 8, 4)
60.0
>>> frac_knapsack([10, 40, 30, 50], [5, 4, 6, 3], 0, 4)
0
>>> frac_knapsack([10, 40, 30, 50], [5, 4, 6, 3], 8, 0)
95.0
>>> frac_knapsack([10, 40, 30, 50], [5, 4, 6, 3], -8, 4)
0
>>> frac_knapsack([10, 40, 30, 50], [5, 4, 6, 3], 8, -4)
95.0
>>> frac_knapsack([10, 40, 30, 50], [5, 4, 6, 3], 800, 4)
130
>>> frac_knapsack([10, 40, 30, 50], [5, 4, 6, 3], 8, 400)
95.0
>>> frac_knapsack("ABCD", [5, 4, 6, 3], 8, 400)
Traceback (most recent call last):
...
TypeError: unsupported operand type(s) for /: 'str' and 'int'
"""
r = sorted(zip(vl, wt), key=lambda x: x[0] / x[1], reverse=True)
vl, wt = [i[0] for i in r], [i[1] for i in r]
acc = list(accumulate(wt))
k = bisect(acc, w)
    # The first k items fit entirely; if the list is not exhausted, add a fraction
    # of item k to fill the remaining capacity.
    if k == 0:
        return 0
    if k == n:
        return sum(vl[:k])
    return sum(vl[:k]) + (w - acc[k - 1]) * vl[k] / wt[k]
if __name__ == "__main__":
import doctest
doctest.testmod()
|
https:en.wikipedia.orgwikiContinuousknapsackproblem https:www.guru99.comfractionalknapsackproblemgreedy.html https:medium.comwalkinthecodegreedyalgorithmfractionalknapsackproblem9aba1daecc93 value 1, 3, 5, 7, 9 weight 0.9, 0.7, 0.5, 0.3, 0.1 fractionalknapsackvalue, weight, 5 25, 1, 1, 1, 1, 1 fractionalknapsackvalue, weight, 15 25, 1, 1, 1, 1, 1 fractionalknapsackvalue, weight, 25 25, 1, 1, 1, 1, 1 fractionalknapsackvalue, weight, 26 25, 1, 1, 1, 1, 1 fractionalknapsackvalue, weight, 1 90.0, 0, 0, 0, 0, 10.0 fractionalknapsack1, 3, 5, 7, weight, 30 16, 1, 1, 1, 1 fractionalknapsackvalue, 0.9, 0.7, 0.5, 0.3, 0.1, 30 25, 1, 1, 1, 1, 1 fractionalknapsack, , 30 0, | # https://en.wikipedia.org/wiki/Continuous_knapsack_problem
# https://www.guru99.com/fractional-knapsack-problem-greedy.html
# https://medium.com/walkinthecode/greedy-algorithm-fractional-knapsack-problem-9aba1daecc93
from __future__ import annotations
def fractional_knapsack(
value: list[int], weight: list[int], capacity: int
) -> tuple[float, list[float]]:
"""
>>> value = [1, 3, 5, 7, 9]
>>> weight = [0.9, 0.7, 0.5, 0.3, 0.1]
>>> fractional_knapsack(value, weight, 5)
(25, [1, 1, 1, 1, 1])
>>> fractional_knapsack(value, weight, 15)
(25, [1, 1, 1, 1, 1])
>>> fractional_knapsack(value, weight, 25)
(25, [1, 1, 1, 1, 1])
>>> fractional_knapsack(value, weight, 26)
(25, [1, 1, 1, 1, 1])
>>> fractional_knapsack(value, weight, -1)
(-90.0, [0, 0, 0, 0, -10.0])
>>> fractional_knapsack([1, 3, 5, 7], weight, 30)
(16, [1, 1, 1, 1])
>>> fractional_knapsack(value, [0.9, 0.7, 0.5, 0.3, 0.1], 30)
(25, [1, 1, 1, 1, 1])
>>> fractional_knapsack([], [], 30)
(0, [])
"""
index = list(range(len(value)))
ratio = [v / w for v, w in zip(value, weight)]
index.sort(key=lambda i: ratio[i], reverse=True)
max_value: float = 0
fractions: list[float] = [0] * len(value)
for i in index:
if weight[i] <= capacity:
fractions[i] = 1
max_value += value[i]
capacity -= weight[i]
else:
fractions[i] = capacity / weight[i]
max_value += value[i] * capacity / weight[i]
break
return max_value, fractions
if __name__ == "__main__":
import doctest
doctest.testmod()
|
Task: There are n gas stations along a circular route, where the amount of gas at the ith station is gasquantitiesi. You have a car with an unlimited gas tank and it costs costsi of gas to travel from the ith station to its next i 1th station. You begin the journey with an empty tank at one of the gas stations. Given two integer arrays gasquantities and costs, return the starting gas station's index if you can travel around the circuit once in the clockwise direction otherwise, return 1. If there exists a solution, it is guaranteed to be unique Reference: https:leetcode.comproblemsgasstationdescription Implementation notes: First, check whether the total gas is enough to complete the journey. If not, return 1. However, if there is enough gas, it is guaranteed that there is a valid starting index to reach the end of the journey. Greedily calculate the net gain gasquantity cost at each station. If the net gain ever goes below 0 while iterating through the stations, start checking from the next station. This function returns a tuple of gas stations. Args: gasquantities: Amount of gas available at each station costs: The cost of gas required to move from one station to the next Returns: A tuple of gas stations gasstations getgasstations1, 2, 3, 4, 5, 3, 4, 5, 1, 2 lengasstations 5 gasstations0 GasStationgasquantity1, cost3 gasstations1 GasStationgasquantity5, cost2 This function returns the index from which to start the journey in order to reach the end. Args: gasquantities list: Amount of gas available at each station cost list: The cost of gas required to move from one station to the next Returns: start int: start index needed to complete the journey Examples: cancompletejourneygetgasstations1, 2, 3, 4, 5, 3, 4, 5, 1, 2 3 cancompletejourneygetgasstations2, 3, 4, 3, 4, 3 1 | from dataclasses import dataclass
@dataclass
class GasStation:
gas_quantity: int
cost: int
def get_gas_stations(
gas_quantities: list[int], costs: list[int]
) -> tuple[GasStation, ...]:
"""
This function returns a tuple of gas stations.
Args:
gas_quantities: Amount of gas available at each station
costs: The cost of gas required to move from one station to the next
Returns:
A tuple of gas stations
>>> gas_stations = get_gas_stations([1, 2, 3, 4, 5], [3, 4, 5, 1, 2])
>>> len(gas_stations)
5
>>> gas_stations[0]
GasStation(gas_quantity=1, cost=3)
>>> gas_stations[-1]
GasStation(gas_quantity=5, cost=2)
"""
return tuple(
GasStation(quantity, cost) for quantity, cost in zip(gas_quantities, costs)
)
def can_complete_journey(gas_stations: tuple[GasStation, ...]) -> int:
"""
This function returns the index from which to start the journey
in order to reach the end.
Args:
        gas_stations [tuple]: Tuple of GasStation objects built from the gas
            quantities and the costs
Returns:
start [int]: start index needed to complete the journey
Examples:
>>> can_complete_journey(get_gas_stations([1, 2, 3, 4, 5], [3, 4, 5, 1, 2]))
3
>>> can_complete_journey(get_gas_stations([2, 3, 4], [3, 4, 3]))
-1
"""
total_gas = sum(gas_station.gas_quantity for gas_station in gas_stations)
total_cost = sum(gas_station.cost for gas_station in gas_stations)
if total_gas < total_cost:
return -1
start = 0
net = 0
for i, gas_station in enumerate(gas_stations):
net += gas_station.gas_quantity - gas_station.cost
if net < 0:
start = i + 1
net = 0
return start
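# Worked trace (added remark): for gas [1, 2, 3, 4, 5] and costs [3, 4, 5, 1, 2] the
# net gain at stations 0, 1 and 2 is -2 each time, so `start` is pushed forward to
# 3; from there the net stays non-negative (+3, then +6), and since total gas (15)
# is not less than total cost (15) the answer 3 is valid.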
if __name__ == "__main__":
import doctest
doctest.testmod()
|
Test cases: Do you want to enter your denominations ? YN :N Enter the change you want to make in Indian Currency: 987 Following is minimal change for 987 : 500 100 100 100 100 50 20 10 5 2 Do you want to enter your denominations ? YN :Y Enter number of denomination:10 1 5 10 20 50 100 200 500 1000 2000 Enter the change you want to make: 18745 Following is minimal change for 18745 : 2000 2000 2000 2000 2000 2000 2000 2000 2000 500 200 20 20 5 Do you want to enter your denominations ? YN :N Enter the change you want to make: 0 The total value cannot be zero or negative. Do you want to enter your denominations ? YN :N Enter the change you want to make: 98 The total value cannot be zero or negative. Do you want to enter your denominations ? YN :Y Enter number of denomination:5 1 5 100 500 1000 Enter the change you want to make: 456 Following is minimal change for 456 : 100 100 100 100 5 5 5 5 5 5 5 5 5 5 5 1 Find the minimum change from the given denominations and value findminimumchange1, 5, 10, 20, 50, 100, 200, 500, 1000,2000, 18745 2000, 2000, 2000, 2000, 2000, 2000, 2000, 2000, 2000, 500, 200, 20, 20, 5 findminimumchange1, 2, 5, 10, 20, 50, 100, 500, 2000, 987 500, 100, 100, 100, 100, 50, 20, 10, 5, 2 findminimumchange1, 2, 5, 10, 20, 50, 100, 500, 2000, 0 findminimumchange1, 2, 5, 10, 20, 50, 100, 500, 2000, 98 findminimumchange1, 5, 100, 500, 1000, 456 100, 100, 100, 100, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 1 Initialize Result Traverse through all denomination Find denominations Driver Code All denominations of Indian Currency if user does not enter Print result | def find_minimum_change(denominations: list[int], value: str) -> list[int]:
"""
Find the minimum change from the given denominations and value
>>> find_minimum_change([1, 5, 10, 20, 50, 100, 200, 500, 1000,2000], 18745)
[2000, 2000, 2000, 2000, 2000, 2000, 2000, 2000, 2000, 500, 200, 20, 20, 5]
>>> find_minimum_change([1, 2, 5, 10, 20, 50, 100, 500, 2000], 987)
[500, 100, 100, 100, 100, 50, 20, 10, 5, 2]
>>> find_minimum_change([1, 2, 5, 10, 20, 50, 100, 500, 2000], 0)
[]
>>> find_minimum_change([1, 2, 5, 10, 20, 50, 100, 500, 2000], -98)
[]
>>> find_minimum_change([1, 5, 100, 500, 1000], 456)
[100, 100, 100, 100, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 1]
"""
total_value = int(value)
# Initialize Result
answer = []
# Traverse through all denomination
for denomination in reversed(denominations):
# Find denominations
while int(total_value) >= int(denomination):
total_value -= int(denomination)
answer.append(denomination) # Append the "answers" array
return answer
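# Caveat (added remark): this greedy choice is only guaranteed to be optimal for
# canonical coin systems such as the Indian denominations used here; e.g. for
# denominations [1, 3, 4] and value 6 it returns [4, 1, 1] although [3, 3] uses
# fewer coins.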
# Driver Code
if __name__ == "__main__":
denominations = []
value = "0"
if (
input("Do you want to enter your denominations ? (yY/n): ").strip().lower()
== "y"
):
n = int(input("Enter the number of denominations you want to add: ").strip())
for i in range(n):
denominations.append(int(input(f"Denomination {i}: ").strip()))
value = input("Enter the change you want to make in Indian Currency: ").strip()
else:
# All denominations of Indian Currency if user does not enter
denominations = [1, 2, 5, 10, 20, 50, 100, 500, 2000]
value = input("Enter the change you want to make: ").strip()
if int(value) == 0 or int(value) < 0:
print("The total value cannot be zero or negative.")
else:
print(f"Following is minimal change for {value}: ")
answer = find_minimum_change(denominations, value)
# Print result
for i in range(len(answer)):
print(answer[i], end=" ")
|
Calculate the minimum waiting time using a greedy algorithm. reference: https:www.youtube.comwatch?vSf3eiO12eJs For doctests run following command: python m doctest v minimumwaitingtime.py The minimumwaitingtime function uses a greedy algorithm to calculate the minimum time for queries to complete. It sorts the list in nondecreasing order, calculates the waiting time for each query by multiplying its position in the list with the sum of all remaining query times, and returns the total waiting time. A doctest ensures that the function produces the correct output. This function takes a list of query times and returns the minimum waiting time for all queries to be completed. Args: queries: A list of queries measured in picoseconds Returns: totalwaitingtime: Minimum waiting time measured in picoseconds Examples: minimumwaitingtime3, 2, 1, 2, 6 17 minimumwaitingtime3, 2, 1 4 minimumwaitingtime1, 2, 3, 4 10 minimumwaitingtime5, 5, 5, 5 30 minimumwaitingtime 0 | def minimum_waiting_time(queries: list[int]) -> int:
"""
This function takes a list of query times and returns the minimum waiting time
for all queries to be completed.
Args:
queries: A list of queries measured in picoseconds
Returns:
total_waiting_time: Minimum waiting time measured in picoseconds
Examples:
>>> minimum_waiting_time([3, 2, 1, 2, 6])
17
>>> minimum_waiting_time([3, 2, 1])
4
>>> minimum_waiting_time([1, 2, 3, 4])
10
>>> minimum_waiting_time([5, 5, 5, 5])
30
>>> minimum_waiting_time([])
0
"""
n = len(queries)
if n in (0, 1):
return 0
return sum(query * (n - i - 1) for i, query in enumerate(sorted(queries)))
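# Worked example (added remark): for [3, 2, 1, 2, 6] the sorted order is
# [1, 2, 2, 3, 6]; the i-th query is waited on by the n - i - 1 queries behind it,
# so the total is 1*4 + 2*3 + 2*2 + 3*1 + 6*0 = 17, matching the first doctest.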
if __name__ == "__main__":
import doctest
doctest.testmod()
|
This is a pure Python implementation of the greedymergesort algorithm reference: https:www.geeksforgeeks.orgoptimalfilemergepatterns For doctests run following command: python3 m doctest v greedymergesort.py Objective Merge a set of sorted files of different length into a single sorted file. We need to find an optimal solution, where the resultant file will be generated in minimum time. Approach If the number of sorted files are given, there are many ways to merge them into a single sorted file. This merge can be performed pair wise. To merge a mrecord file and a nrecord file requires possibly mn record moves the optimal choice being, merge the two smallest files together at each step greedy approach. Function to merge all the files with optimum cost Args: files list: A list of sizes of different files to be merged Returns: optimalmergecost int: Optimal cost to merge all those files Examples: optimalmergepattern2, 3, 4 14 optimalmergepattern5, 10, 20, 30, 30 205 optimalmergepattern8, 8, 8, 8, 8 96 Consider two files with minimum cost to be merged | def optimal_merge_pattern(files: list) -> float:
"""Function to merge all the files with optimum cost
Args:
files [list]: A list of sizes of different files to be merged
Returns:
optimal_merge_cost [int]: Optimal cost to merge all those files
Examples:
>>> optimal_merge_pattern([2, 3, 4])
14
>>> optimal_merge_pattern([5, 10, 20, 30, 30])
205
>>> optimal_merge_pattern([8, 8, 8, 8, 8])
96
"""
optimal_merge_cost = 0
while len(files) > 1:
temp = 0
# Consider two files with minimum cost to be merged
for _ in range(2):
min_index = files.index(min(files))
temp += files[min_index]
files.pop(min_index)
files.append(temp)
optimal_merge_cost += temp
return optimal_merge_cost
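# Worked example (added remark): for [2, 3, 4] the two smallest files (2 and 3) are
# merged first at cost 5, then 5 and 4 are merged at cost 9, for a total of 14.
# Note that the input list is consumed in place while the merges are simulated.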
if __name__ == "__main__":
import doctest
doctest.testmod()
|
smallestrange function takes a list of sorted integer lists and finds the smallest range that includes at least one number from each list, using a min heap for efficiency. Find the smallest range from each list in nums. Uses min heap for efficiency. The range includes at least one number from each list. Args: nums: List of k sorted integer lists. Returns: list: Smallest range as a twoelement list. Examples: smallestrange4, 10, 15, 24, 26, 0, 9, 12, 20, 5, 18, 22, 30 20, 24 smallestrange1, 2, 3, 1, 2, 3, 1, 2, 3 1, 1 smallestrange1, 2, 3, 1, 2, 3, 1, 2, 3 1, 1 smallestrange3, 2, 1, 0, 0, 0, 1, 2, 3 1, 1 smallestrange1, 2, 3, 4, 5, 6, 7, 8, 9 3, 7 smallestrange0, 0, 0, 0, 0, 0, 0, 0, 0 0, 0 smallestrange, , Traceback most recent call last: ... IndexError: list index out of range Initialize smallestrange with large integer values | from heapq import heappop, heappush
from sys import maxsize
def smallest_range(nums: list[list[int]]) -> list[int]:
"""
Find the smallest range from each list in nums.
Uses min heap for efficiency. The range includes at least one number from each list.
Args:
nums: List of k sorted integer lists.
Returns:
list: Smallest range as a two-element list.
Examples:
>>> smallest_range([[4, 10, 15, 24, 26], [0, 9, 12, 20], [5, 18, 22, 30]])
[20, 24]
>>> smallest_range([[1, 2, 3], [1, 2, 3], [1, 2, 3]])
[1, 1]
>>> smallest_range(((1, 2, 3), (1, 2, 3), (1, 2, 3)))
[1, 1]
>>> smallest_range(((-3, -2, -1), (0, 0, 0), (1, 2, 3)))
[-1, 1]
>>> smallest_range([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
[3, 7]
>>> smallest_range([[0, 0, 0], [0, 0, 0], [0, 0, 0]])
[0, 0]
>>> smallest_range([[], [], []])
Traceback (most recent call last):
...
IndexError: list index out of range
"""
min_heap: list[tuple[int, int, int]] = []
current_max = -maxsize - 1
for i, items in enumerate(nums):
heappush(min_heap, (items[0], i, 0))
current_max = max(current_max, items[0])
# Initialize smallest_range with large integer values
smallest_range = [-maxsize - 1, maxsize]
while min_heap:
current_min, list_index, element_index = heappop(min_heap)
if current_max - current_min < smallest_range[1] - smallest_range[0]:
smallest_range = [current_min, current_max]
if element_index == len(nums[list_index]) - 1:
break
next_element = nums[list_index][element_index + 1]
heappush(min_heap, (next_element, list_index, element_index + 1))
current_max = max(current_max, next_element)
return smallest_range
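# Invariant (added remark): the heap always holds exactly one element from every
# list, so [current_min, current_max] is always a feasible range; popping the
# minimum and pushing its successor is the only move that can possibly shrink it,
# and the loop stops once some list is exhausted.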
if __name__ == "__main__":
from doctest import testmod
testmod()
print(f"{smallest_range([[1, 2, 3], [1, 2, 3], [1, 2, 3]])}") # Output: [1, 1]
|
Adler32 is a checksum algorithm which was invented by Mark Adler in 1995. Compared to a cyclic redundancy check of the same length, it trades reliability for speed preferring the latter. Adler32 is more reliable than Fletcher16, and slightly less reliable than Fletcher32.2 source: https:en.wikipedia.orgwikiAdler32 Function implements adler32 hash. Iterates and evaluates a new value for each character adler32'Algorithms' 363791387 adler32'go adler em all' 708642122 | MOD_ADLER = 65521
def adler32(plain_text: str) -> int:
"""
Function implements adler-32 hash.
Iterates and evaluates a new value for each character
>>> adler32('Algorithms')
363791387
>>> adler32('go adler em all')
708642122
"""
a = 1
b = 0
for plain_chr in plain_text:
a = (a + ord(plain_chr)) % MOD_ADLER
b = (b + a) % MOD_ADLER
return (b << 16) | a
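# Cross-check (added remark): for ASCII input this pure-Python version should agree
# with the C implementation in the standard library, e.g.
# zlib.adler32(b"Algorithms") == adler32("Algorithms") == 363791387.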
|
example of simple chaos machine Chaos Machine K, t, m K 0.33, 0.44, 0.55, 0.44, 0.33 t 3 m 5 Buffer Space with Parameters Space bufferspace: listfloat paramsspace: listfloat Machine Time machinetime 0 def pushseed: global bufferspace, paramsspace, machinetime, K, m, t Choosing Dynamical Systems All for key, value in enumeratebufferspace: Evolution Parameter e floatseed value Control Theory: Orbit Change value bufferspacekey 1 m e 1 Control Theory: Trajectory Change r paramsspacekey e 1 3 Modification Transition Function Jumps bufferspacekey roundfloatr value 1 value, 10 paramsspacekey r Saving to Parameters Space Logistic Map assert maxbufferspace 1 assert maxparamsspace 4 Machine Time machinetime 1 def pull: global bufferspace, paramsspace, machinetime, K, m, t PRNG Xorshift by George Marsaglia def xorshiftx, y: x y 13 y x 17 x y 5 return x Choosing Dynamical Systems Increment key machinetime m Evolution Time Length for in ranget: Variables Position Parameters r paramsspacekey value bufferspacekey Modification Transition Function Flow bufferspacekey roundfloatr value 1 value, 10 paramsspacekey machinetime 0.01 r 1.01 1 3 Choosing Chaotic Data x intbufferspacekey 2 m 1010 y intbufferspacekey 2 m 1010 Machine Time machinetime 1 return xorshiftx, y 0xFFFFFFFF def reset: global bufferspace, paramsspace, machinetime, K, m, t bufferspace K paramsspace 0 m machinetime 0 if name main: Initialization reset Pushing Data Input import random message random.samplerange0xFFFFFFFF, 100 for chunk in message: pushchunk for controlling inp Pulling Data Output while inp in e, E: printfformatpull, '04x' printbufferspace printparamsspace inp inputeexit? .strip | # Chaos Machine (K, t, m)
K = [0.33, 0.44, 0.55, 0.44, 0.33]
t = 3
m = 5
# Buffer Space (with Parameters Space)
buffer_space: list[float] = []
params_space: list[float] = []
# Machine Time
machine_time = 0
def push(seed):
global buffer_space, params_space, machine_time, K, m, t
# Choosing Dynamical Systems (All)
for key, value in enumerate(buffer_space):
# Evolution Parameter
e = float(seed / value)
# Control Theory: Orbit Change
value = (buffer_space[(key + 1) % m] + e) % 1
# Control Theory: Trajectory Change
r = (params_space[key] + e) % 1 + 3
# Modification (Transition Function) - Jumps
buffer_space[key] = round(float(r * value * (1 - value)), 10)
params_space[key] = r # Saving to Parameters Space
# Logistic Map
assert max(buffer_space) < 1
assert max(params_space) < 4
# Machine Time
machine_time += 1
def pull():
global buffer_space, params_space, machine_time, K, m, t
# PRNG (Xorshift by George Marsaglia)
def xorshift(x, y):
x ^= y >> 13
y ^= x << 17
x ^= y >> 5
return x
# Choosing Dynamical Systems (Increment)
key = machine_time % m
# Evolution (Time Length)
for _ in range(t):
# Variables (Position + Parameters)
r = params_space[key]
value = buffer_space[key]
# Modification (Transition Function) - Flow
buffer_space[key] = round(float(r * value * (1 - value)), 10)
params_space[key] = (machine_time * 0.01 + r * 1.01) % 1 + 3
# Choosing Chaotic Data
x = int(buffer_space[(key + 2) % m] * (10**10))
y = int(buffer_space[(key - 2) % m] * (10**10))
# Machine Time
machine_time += 1
return xorshift(x, y) % 0xFFFFFFFF
def reset():
global buffer_space, params_space, machine_time, K, m, t
    buffer_space = list(K)  # copy, so the seed constants in K are not mutated by push()
params_space = [0] * m
machine_time = 0
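# The __main__ block below is interactive, so a small helper (an illustrative
# addition, not part of the original module) makes it easier to exercise the
# machine deterministically, e.g. from a test; the seed values are arbitrary.
def demo(seeds=(0xDEADBEEF, 0xCAFEBABE, 0x0BADF00D), count=5):
    """Reset the machine, push the given seeds and return `count` outputs."""
    reset()
    for seed in seeds:
        push(seed)
    return [pull() for _ in range(count)]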
if __name__ == "__main__":
# Initialization
reset()
# Pushing Data (Input)
import random
message = random.sample(range(0xFFFFFFFF), 100)
for chunk in message:
push(chunk)
# for controlling
inp = ""
# Pulling Data (Output)
    while inp not in ("e", "E"):
print(f"{format(pull(), '#04x')}")
print(buffer_space)
print(params_space)
inp = input("(e)exit? ").strip()
|
This algorithm (k=33) was first reported by Dan Bernstein many years ago in comp.lang.c. Another version of this algorithm (now favored by Bernstein) uses xor: hash(i) = hash(i - 1) * 33 ^ str[i]; First Magic constant 33: It has never been adequately explained. It's magic because it works better than many other constants, prime or not. Second Magic Constant 5381: 1. odd number 2. prime number 3. deficient number 4. 001010100000101 (binary) source: http://www.cse.yorku.ca/~oz/hash.html Implementation of djb2 hash algorithm that is popular because of its magic constants. >>> djb2('Algorithms') 3782405311 >>> djb2('scramble bits') 1609059040 | def djb2(s: str) -> int:
"""
Implementation of djb2 hash algorithm that
    is popular because of its magic constants.
>>> djb2('Algorithms')
3782405311
>>> djb2('scramble bits')
1609059040
"""
hash_value = 5381
for x in s:
hash_value = ((hash_value << 5) + hash_value) + ord(x)
return hash_value & 0xFFFFFFFF
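# A minimal main guard (an illustrative addition): it runs the doctests and
# confirms that the `(hash_value << 5) + hash_value` step above is simply the
# strength-reduced form of multiplying by the magic constant 33.
if __name__ == "__main__":
    import doctest
    doctest.testmod()
    h = 5381
    assert (h << 5) + h == h * 33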
|
Implementation of ElfHash Algorithm, a variant of PJW hash function. >>> elf_hash('lorem ipsum') 253956621 | def elf_hash(data: str) -> int:
"""
Implementation of ElfHash Algorithm, a variant of PJW hash function.
>>> elf_hash('lorem ipsum')
253956621
"""
hash_ = x = 0
for letter in data:
hash_ = (hash_ << 4) + ord(letter)
x = hash_ & 0xF0000000
if x != 0:
hash_ ^= x >> 24
hash_ &= ~x
return hash_
if __name__ == "__main__":
import doctest
doctest.testmod()
|
The Fletcher checksum is an algorithm for computing a position-dependent checksum devised by John G. Fletcher (1934-2012) at Lawrence Livermore Labs in the late 1970s.[1] The objective of the Fletcher checksum was to provide error-detection properties approaching those of a cyclic redundancy check but with the lower computational effort associated with summation techniques. Source: https://en.wikipedia.org/wiki/Fletcher%27s_checksum Loop through every character in the data and add to two sums. >>> fletcher16('hello world') 6752 >>> fletcher16('onethousandfourhundredthirtyfour') 28347 >>> fletcher16('The quick brown fox jumps over the lazy dog.') 5655 | def fletcher16(text: str) -> int:
"""
Loop through every character in the data and add to two sums.
>>> fletcher16('hello world')
6752
>>> fletcher16('onethousandfourhundredthirtyfour')
28347
>>> fletcher16('The quick brown fox jumps over the lazy dog.')
5655
"""
data = bytes(text, "ascii")
sum1 = 0
sum2 = 0
for character in data:
sum1 = (sum1 + character) % 255
sum2 = (sum1 + sum2) % 255
return (sum2 << 8) | sum1
if __name__ == "__main__":
import doctest
doctest.testmod()
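    # Because sum2 accumulates sum1 at every position, the checksum is
    # position-dependent: reordering bytes changes the result even though a
    # plain byte sum would not (illustrative check, not an original doctest).
    assert sum(b"abc") == sum(b"cba")
    assert fletcher16("abc") != fletcher16("cba")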
|
Author: Joo Gustavo A. Amorim Gabriel Kunz Author email: joaogustavoamorimgmail.com and gabrielkunzuergs.edu.br Coding date: apr 2019 Black: True This code implement the Hamming code: https:en.wikipedia.orgwikiHammingcode In telecommunication, Hamming codes are a family of linear errorcorrecting codes. Hamming codes can detect up to twobit errors or correct onebit errors without detection of uncorrected errors. By contrast, the simple parity code cannot correct errors, and can detect only an odd number of bits in error. Hamming codes are perfect codes, that is, they achieve the highest possible rate for codes with their block length and minimum distance of three. the implemented code consists of: a function responsible for encoding the message emitterConverter return the encoded message a function responsible for decoding the message receptorConverter return the decoded message and a ack of data integrity how to use: to be used you must declare how many parity bits sizePari you want to include in the message. it is desired for test purposes to select a bit to be set as an error. This serves to check whether the code is working correctly. Lastly, the variable of the messageword that must be desired to be encoded text. how this work: declaration of variables sizePari, be, text converts the messageword text to binary using the texttobits function encodes the message using the rules of hamming encoding decodes the message using the rules of hamming encoding print the original message, the encoded message and the decoded message forces an error in the coded text variable decodes the message that was forced the error print the original message, the encoded message, the bit changed message and the decoded message Imports Functions of binary conversion texttobitsmsg '011011010111001101100111' textfrombits'011011010111001101100111' 'msg' Functions of hamming code :param sizepar: how many parity bits the message must have :param data: information bits :return: message to be transmitted by unreliable medium bits of information merged with parity bits emitterconverter4, 101010111111 '1', '1', '1', '1', '0', '1', '0', '0', '1', '0', '1', '1', '1', '1', '1', '1' emitterconverter5, 101010111111 Traceback most recent call last: ... 
ValueError: size of parity don't match with size of data sorted information data for the size of the output data data position template parity parity bit counter counter position of data bits Performs a template of bit positions who should be given, and who should be parity Sorts the data to the new output size Calculates parity Bit counter one for a given parity counter to control the loop reading Mount the message receptorconverter4, 1111010010111111 '1', '0', '1', '0', '1', '0', '1', '1', '1', '1', '1', '1', True data position template parity Parity bit counter Counter p data bit reading list of parity received Performs a template of bit positions who should be given, and who should be parity Sorts the data to the new output size calculates the parity with the data sorted information data for the size of the output data Data position feedback parity Parity bit counter Counter p data bit reading Performs a template position of bits who should be given, and who should be parity Sorts the data to the new output size Calculates parity Bit counter one for a certain parity Counter to control loop reading Mount the message Example how to use number of parity bits sizePari 4 location of the bit that will be forced an error be 2 Messageword to be encoded and decoded with hamming text inputEnter the word to be read: text Message01 Convert the message to binary binaryText texttobitstext Prints the binary of the string printText input in binary is ' binaryText ' total transmitted bits totalBits lenbinaryText sizePari printSize of data is strtotalBits printn Message exchange printData to send binaryText dataOut emitterConvertersizePari, binaryText printData converted .joindataOut dataReceiv, ack receptorConvertersizePari, dataOut print Data receive .joindataReceiv tt Data integrity: strack printn Force error printData to send binaryText dataOut emitterConvertersizePari, binaryText printData converted .joindataOut forces error dataOutbe 1 dataOutbe 0 0 dataOutbe 1 printData after transmission .joindataOut dataReceiv, ack receptorConvertersizePari, dataOut print Data receive .joindataReceiv tt Data integrity: strack | # Author: João Gustavo A. Amorim & Gabriel Kunz
# Author email: joaogustavoamorim@gmail.com and gabriel-kunz@uergs.edu.br
# Coding date: apr 2019
# Black: True
"""
* This code implement the Hamming code:
https://en.wikipedia.org/wiki/Hamming_code - In telecommunication,
Hamming codes are a family of linear error-correcting codes. Hamming
codes can detect up to two-bit errors or correct one-bit errors
without detection of uncorrected errors. By contrast, the simple
parity code cannot correct errors, and can detect only an odd number
of bits in error. Hamming codes are perfect codes, that is, they
achieve the highest possible rate for codes with their block length
and minimum distance of three.
* the implemented code consists of:
* a function responsible for encoding the message (emitterConverter)
* return the encoded message
* a function responsible for decoding the message (receptorConverter)
* return the decoded message and a ack of data integrity
* how to use:
to be used you must declare how many parity bits (sizePari)
you want to include in the message.
it is desired (for test purposes) to select a bit to be set
as an error. This serves to check whether the code is working correctly.
Lastly, the variable of the message/word that must be desired to be
encoded (text).
* how this work:
declaration of variables (sizePari, be, text)
converts the message/word (text) to binary using the
text_to_bits function
encodes the message using the rules of hamming encoding
decodes the message using the rules of hamming encoding
print the original message, the encoded message and the
decoded message
forces an error in the coded text variable
decodes the message that was forced the error
print the original message, the encoded message, the bit changed
message and the decoded message
"""
# Imports
import numpy as np
# Functions of binary conversion--------------------------------------
def text_to_bits(text, encoding="utf-8", errors="surrogatepass"):
"""
>>> text_to_bits("msg")
'011011010111001101100111'
"""
bits = bin(int.from_bytes(text.encode(encoding, errors), "big"))[2:]
return bits.zfill(8 * ((len(bits) + 7) // 8))
def text_from_bits(bits, encoding="utf-8", errors="surrogatepass"):
"""
>>> text_from_bits('011011010111001101100111')
'msg'
"""
n = int(bits, 2)
return n.to_bytes((n.bit_length() + 7) // 8, "big").decode(encoding, errors) or "\0"
# Functions of hamming code-------------------------------------------
def emitter_converter(size_par, data):
"""
:param size_par: how many parity bits the message must have
:param data: information bits
:return: message to be transmitted by unreliable medium
- bits of information merged with parity bits
>>> emitter_converter(4, "101010111111")
['1', '1', '1', '1', '0', '1', '0', '0', '1', '0', '1', '1', '1', '1', '1', '1']
>>> emitter_converter(5, "101010111111")
Traceback (most recent call last):
...
ValueError: size of parity don't match with size of data
"""
if size_par + len(data) <= 2**size_par - (len(data) - 1):
raise ValueError("size of parity don't match with size of data")
data_out = []
parity = []
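    # Binary representation of every 1-based position in the encoded word;
    # parity bit number p covers exactly the positions whose p-th lowest bit is 1.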
bin_pos = [bin(x)[2:] for x in range(1, size_par + len(data) + 1)]
# sorted information data for the size of the output data
data_ord = []
# data position template + parity
data_out_gab = []
# parity bit counter
qtd_bp = 0
# counter position of data bits
cont_data = 0
for x in range(1, size_par + len(data) + 1):
# Performs a template of bit positions - who should be given,
# and who should be parity
if qtd_bp < size_par:
if (np.log(x) / np.log(2)).is_integer():
data_out_gab.append("P")
qtd_bp = qtd_bp + 1
else:
data_out_gab.append("D")
else:
data_out_gab.append("D")
# Sorts the data to the new output size
if data_out_gab[-1] == "D":
data_ord.append(data[cont_data])
cont_data += 1
else:
data_ord.append(None)
# Calculates parity
qtd_bp = 0 # parity bit counter
for bp in range(1, size_par + 1):
# Bit counter one for a given parity
cont_bo = 0
# counter to control the loop reading
cont_loop = 0
for x in data_ord:
if x is not None:
try:
aux = (bin_pos[cont_loop])[-1 * (bp)]
except IndexError:
aux = "0"
if aux == "1" and x == "1":
cont_bo += 1
cont_loop += 1
parity.append(cont_bo % 2)
qtd_bp += 1
# Mount the message
cont_bp = 0 # parity bit counter
for x in range(size_par + len(data)):
if data_ord[x] is None:
data_out.append(str(parity[cont_bp]))
cont_bp += 1
else:
data_out.append(data_ord[x])
return data_out
def receptor_converter(size_par, data):
"""
>>> receptor_converter(4, "1111010010111111")
(['1', '0', '1', '0', '1', '0', '1', '1', '1', '1', '1', '1'], True)
"""
# data position template + parity
data_out_gab = []
# Parity bit counter
qtd_bp = 0
# Counter p data bit reading
cont_data = 0
# list of parity received
parity_received = []
data_output = []
for x in range(1, len(data) + 1):
# Performs a template of bit positions - who should be given,
# and who should be parity
if qtd_bp < size_par and (np.log(x) / np.log(2)).is_integer():
data_out_gab.append("P")
qtd_bp = qtd_bp + 1
else:
data_out_gab.append("D")
# Sorts the data to the new output size
if data_out_gab[-1] == "D":
data_output.append(data[cont_data])
else:
parity_received.append(data[cont_data])
cont_data += 1
# -----------calculates the parity with the data
data_out = []
parity = []
bin_pos = [bin(x)[2:] for x in range(1, size_par + len(data_output) + 1)]
# sorted information data for the size of the output data
data_ord = []
# Data position feedback + parity
data_out_gab = []
# Parity bit counter
qtd_bp = 0
# Counter p data bit reading
cont_data = 0
for x in range(1, size_par + len(data_output) + 1):
# Performs a template position of bits - who should be given,
# and who should be parity
if qtd_bp < size_par and (np.log(x) / np.log(2)).is_integer():
data_out_gab.append("P")
qtd_bp = qtd_bp + 1
else:
data_out_gab.append("D")
# Sorts the data to the new output size
if data_out_gab[-1] == "D":
data_ord.append(data_output[cont_data])
cont_data += 1
else:
data_ord.append(None)
# Calculates parity
qtd_bp = 0 # parity bit counter
for bp in range(1, size_par + 1):
# Bit counter one for a certain parity
cont_bo = 0
# Counter to control loop reading
cont_loop = 0
for x in data_ord:
if x is not None:
try:
aux = (bin_pos[cont_loop])[-1 * (bp)]
except IndexError:
aux = "0"
if aux == "1" and x == "1":
cont_bo += 1
cont_loop += 1
parity.append(str(cont_bo % 2))
qtd_bp += 1
# Mount the message
cont_bp = 0 # Parity bit counter
for x in range(size_par + len(data_output)):
if data_ord[x] is None:
data_out.append(str(parity[cont_bp]))
cont_bp += 1
else:
data_out.append(data_ord[x])
ack = parity_received == parity
return data_output, ack
# ---------------------------------------------------------------------
"""
# Example how to use
# number of parity bits
sizePari = 4
# position of the bit that will be flipped to force an error
be = 2
# Message/word to be encoded and decoded with hamming
# text = input("Enter the word to be read: ")
text = "Message01"
# Convert the message to binary
binaryText = text_to_bits(text)
# Prints the binary of the string
print("Text input in binary is '" + binaryText + "'")
# total transmitted bits
totalBits = len(binaryText) + sizePari
print("Size of data is " + str(totalBits))
print("\n --Message exchange--")
print("Data to send ------------> " + binaryText)
dataOut = emitter_converter(sizePari, binaryText)
print("Data converted ----------> " + "".join(dataOut))
dataReceiv, ack = receptor_converter(sizePari, dataOut)
print(
"Data receive ------------> "
+ "".join(dataReceiv)
+ "\t\t -- Data integrity: "
+ str(ack)
)
print("\n --Force error--")
print("Data to send ------------> " + binaryText)
dataOut = emitter_converter(sizePari, binaryText)
print("Data converted ----------> " + "".join(dataOut))
# forces error
dataOut[-be] = "1" * (dataOut[-be] == "0") + "0" * (dataOut[-be] == "1")
print("Data after transmission -> " + "".join(dataOut))
dataReceiv, ack = receptor_converter(sizePari, dataOut)
print(
"Data receive ------------> "
+ "".join(dataReceiv)
+ "\t\t -- Data integrity: "
+ str(ack)
)
"""
|
Luhn Algorithm from future import annotations def isluhnstring: str bool: checkdigit: int vector: liststr liststring vector, checkdigit vector:1, intvector1 vector: listint intdigit for digit in vector vector.reverse for i, digit in enumeratevector: if i 1 0: doubled: int digit 2 if doubled 9: doubled 9 checkdigit doubled else: checkdigit digit return checkdigit 10 0 if name main: import doctest doctest.testmod assert isluhn79927398713 assert not isluhn79927398714 | from __future__ import annotations
def is_luhn(string: str) -> bool:
"""
Perform Luhn validation on an input string
Algorithm:
* Double every other digit starting from 2nd last digit.
* Subtract 9 if number is greater than 9.
    * Sum the resulting digits together with the untouched digits.
    * The number is valid if the total is divisible by 10.
>>> test_cases = (79927398710, 79927398711, 79927398712, 79927398713,
... 79927398714, 79927398715, 79927398716, 79927398717, 79927398718,
... 79927398719)
>>> [is_luhn(str(test_case)) for test_case in test_cases]
[False, False, False, True, False, False, False, False, False, False]
"""
check_digit: int
_vector: list[str] = list(string)
__vector, check_digit = _vector[:-1], int(_vector[-1])
vector: list[int] = [int(digit) for digit in __vector]
vector.reverse()
for i, digit in enumerate(vector):
if i & 1 == 0:
doubled: int = digit * 2
if doubled > 9:
doubled -= 9
check_digit += doubled
else:
check_digit += digit
return check_digit % 10 == 0
if __name__ == "__main__":
import doctest
doctest.testmod()
assert is_luhn("79927398713")
assert not is_luhn("79927398714")
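    # The check digit of a partial number can be recovered by trying all ten
    # candidates (illustrative sketch): exactly one digit satisfies the Luhn check.
    partial = "7992739871"
    assert [d for d in range(10) if is_luhn(partial + str(d))] == [3]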
|
The MD5 algorithm is a hash function that's commonly used as a checksum to detect data corruption. The algorithm works by processing a given message in blocks of 512 bits, padding the message as needed. It uses the blocks to operate a 128bit state and performs a total of 64 such operations. Note that all values are littleendian, so inputs are converted as needed. Although MD5 was used as a cryptographic hash function in the past, it's since been cracked, so it shouldn't be used for security purposes. For more info, see https:en.wikipedia.orgwikiMD5 Converts the given string to littleendian in groups of 8 chars. Arguments: string32 string 32char string Raises: ValueError input is not 32 char Returns: 32char littleendian string tolittleendianb'1234567890abcdfghijklmnopqrstuvw' b'pqrstuvwhijklmno90abcdfg12345678' tolittleendianb'1234567890' Traceback most recent call last: ... ValueError: Input must be of length 32 Converts the given nonnegative integer to hex string. Example: Suppose the input is the following: i 1234 The input is 0x000004d2 in hex, so the littleendian hex string is d2040000. Arguments: i int integer Raises: ValueError input is negative Returns: 8char littleendian hex string reformathex1234 b'd2040000' reformathex666 b'9a020000' reformathex0 b'00000000' reformathex1234567890 b'd2029649' reformathex1234567890987654321 b'b11c6cb1' reformathex1 Traceback most recent call last: ... ValueError: Input must be nonnegative Preprocesses the message string: Convert message to bit string Pad bit string to a multiple of 512 chars: Append a 1 Append 0's until length 448 mod 512 Append length of original message 64 chars Example: Suppose the input is the following: message a The message bit string is 01100001, which is 8 bits long. Thus, the bit string needs 439 bits of padding so that bitstring 1 padding 448 mod 512. The message length is 000010000...0 in 64bit littleendian binary. The combined bit string is then 512 bits long. Arguments: message string message string Returns: processed bit string padded to a multiple of 512 chars preprocessba b01100001 b1 ... b0 439 b00001000 b0 56 True preprocessb b1 b0 447 b0 64 True Pad bitstring to a multiple of 512 chars Splits bit string into blocks of 512 chars and yields each block as a list of 32bit words Example: Suppose the input is the following: bitstring 000000000...0 0x00 32 bits, padded to the right 000000010...0 0x01 32 bits, padded to the right 000000100...0 0x02 32 bits, padded to the right 000000110...0 0x03 32 bits, padded to the right ... 000011110...0 0x0a 32 bits, padded to the right Then lenbitstring 512, so there'll be 1 block. The block is split into 32bit words, and each word is converted to little endian. The first word is interpreted as 0 in decimal, the second word is interpreted as 1 in decimal, etc. Thus, blockwords 0, 1, 2, 3, ..., 15. Arguments: bitstring string bit string with multiple of 512 as length Raises: ValueError length of bit string isn't multiple of 512 Yields: a list of 16 32bit words teststring .joinformatn 24, 032b for n in range16 ... .encodeutf8 listgetblockwordsteststring 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 listgetblockwordsteststring 4 listrange16 4 True listgetblockwordsb1 512 4294967295 16 True listgetblockwordsb listgetblockwordsb1111 Traceback most recent call last: ... ValueError: Input must have length that's a multiple of 512 Perform bitwise NOT on given int. 
Arguments: i int given int Raises: ValueError input is negative Returns: Result of bitwise NOT on i not3234 4294967261 not321234 4294966061 not324294966061 1234 not320 4294967295 not321 4294967294 not321 Traceback most recent call last: ... ValueError: Input must be nonnegative Add two numbers as 32bit ints. Arguments: a int first given int b int second given int Returns: a b as an unsigned 32bit int sum321, 1 2 sum322, 3 5 sum320, 0 0 sum321, 1 4294967294 sum324294967295, 1 0 Rotate the bits of a given int left by a given amount. Arguments: i int given int shift int shift amount Raises: ValueError either given int or shift is negative Returns: i rotated to the left by shift bits leftrotate321234, 1 2468 leftrotate321111, 4 17776 leftrotate322147483648, 1 1 leftrotate322147483648, 3 4 leftrotate324294967295, 4 4294967295 leftrotate321234, 0 1234 leftrotate320, 0 0 leftrotate321, 0 Traceback most recent call last: ... ValueError: Input must be nonnegative leftrotate320, 1 Traceback most recent call last: ... ValueError: Shift must be nonnegative Returns the 32char MD5 hash of a given message. Reference: https:en.wikipedia.orgwikiMD5Algorithm Arguments: message string message Returns: 32char MD5 hash string md5meb b'd41d8cd98f00b204e9800998ecf8427e' md5mebThe quick brown fox jumps over the lazy dog b'9e107d9d372bb6826bd81d3542a419d6' md5mebThe quick brown fox jumps over the lazy dog. b'e4d909c290d0fb1ca068ffaddf22cbd0' import hashlib from string import asciiletters msgs b, asciiletters.encodeutf8, .encodeutf8, ... bThe quick brown fox jumps over the lazy dog. allmd5memsg hashlib.md5msg.hexdigest.encodeutf8 for msg in msgs True Convert to bit string, add padding and append message length Starting states Process bit string in chunks, each with 16 32char words Hash current chunk f b c not32b d Alternate definition for f f d b not32d c Alternate definition for f Add hashed chunk to running total | from collections.abc import Generator
from math import sin
def to_little_endian(string_32: bytes) -> bytes:
"""
Converts the given string to little-endian in groups of 8 chars.
Arguments:
string_32 {[string]} -- [32-char string]
Raises:
ValueError -- [input is not 32 char]
Returns:
32-char little-endian string
>>> to_little_endian(b'1234567890abcdfghijklmnopqrstuvw')
b'pqrstuvwhijklmno90abcdfg12345678'
>>> to_little_endian(b'1234567890')
Traceback (most recent call last):
...
ValueError: Input must be of length 32
"""
if len(string_32) != 32:
raise ValueError("Input must be of length 32")
little_endian = b""
for i in [3, 2, 1, 0]:
little_endian += string_32[8 * i : 8 * i + 8]
return little_endian
def reformat_hex(i: int) -> bytes:
"""
Converts the given non-negative integer to hex string.
Example: Suppose the input is the following:
i = 1234
The input is 0x000004d2 in hex, so the little-endian hex string is
"d2040000".
Arguments:
i {[int]} -- [integer]
Raises:
ValueError -- [input is negative]
Returns:
8-char little-endian hex string
>>> reformat_hex(1234)
b'd2040000'
>>> reformat_hex(666)
b'9a020000'
>>> reformat_hex(0)
b'00000000'
>>> reformat_hex(1234567890)
b'd2029649'
>>> reformat_hex(1234567890987654321)
b'b11c6cb1'
>>> reformat_hex(-1)
Traceback (most recent call last):
...
ValueError: Input must be non-negative
"""
if i < 0:
raise ValueError("Input must be non-negative")
hex_rep = format(i, "08x")[-8:]
little_endian_hex = b""
for i in [3, 2, 1, 0]:
little_endian_hex += hex_rep[2 * i : 2 * i + 2].encode("utf-8")
return little_endian_hex
def preprocess(message: bytes) -> bytes:
"""
Preprocesses the message string:
- Convert message to bit string
- Pad bit string to a multiple of 512 chars:
- Append a 1
- Append 0's until length = 448 (mod 512)
- Append length of original message (64 chars)
Example: Suppose the input is the following:
message = "a"
The message bit string is "01100001", which is 8 bits long. Thus, the
bit string needs 439 bits of padding so that
(bit_string + "1" + padding) = 448 (mod 512).
The message length is "000010000...0" in 64-bit little-endian binary.
The combined bit string is then 512 bits long.
Arguments:
message {[string]} -- [message string]
Returns:
processed bit string padded to a multiple of 512 chars
>>> preprocess(b"a") == (b"01100001" + b"1" +
... (b"0" * 439) + b"00001000" + (b"0" * 56))
True
>>> preprocess(b"") == b"1" + (b"0" * 447) + (b"0" * 64)
True
"""
bit_string = b""
for char in message:
bit_string += format(char, "08b").encode("utf-8")
start_len = format(len(bit_string), "064b").encode("utf-8")
# Pad bit_string to a multiple of 512 chars
bit_string += b"1"
while len(bit_string) % 512 != 448:
bit_string += b"0"
bit_string += to_little_endian(start_len[32:]) + to_little_endian(start_len[:32])
return bit_string
def get_block_words(bit_string: bytes) -> Generator[list[int], None, None]:
"""
Splits bit string into blocks of 512 chars and yields each block as a list
of 32-bit words
Example: Suppose the input is the following:
bit_string =
"000000000...0" + # 0x00 (32 bits, padded to the right)
"000000010...0" + # 0x01 (32 bits, padded to the right)
"000000100...0" + # 0x02 (32 bits, padded to the right)
"000000110...0" + # 0x03 (32 bits, padded to the right)
...
"000011110...0" # 0x0a (32 bits, padded to the right)
Then len(bit_string) == 512, so there'll be 1 block. The block is split
into 32-bit words, and each word is converted to little endian. The
first word is interpreted as 0 in decimal, the second word is
interpreted as 1 in decimal, etc.
Thus, block_words == [[0, 1, 2, 3, ..., 15]].
Arguments:
bit_string {[string]} -- [bit string with multiple of 512 as length]
Raises:
ValueError -- [length of bit string isn't multiple of 512]
Yields:
a list of 16 32-bit words
>>> test_string = ("".join(format(n << 24, "032b") for n in range(16))
... .encode("utf-8"))
>>> list(get_block_words(test_string))
[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]]
>>> list(get_block_words(test_string * 4)) == [list(range(16))] * 4
True
>>> list(get_block_words(b"1" * 512)) == [[4294967295] * 16]
True
>>> list(get_block_words(b""))
[]
>>> list(get_block_words(b"1111"))
Traceback (most recent call last):
...
ValueError: Input must have length that's a multiple of 512
"""
if len(bit_string) % 512 != 0:
raise ValueError("Input must have length that's a multiple of 512")
for pos in range(0, len(bit_string), 512):
block = bit_string[pos : pos + 512]
block_words = []
for i in range(0, 512, 32):
block_words.append(int(to_little_endian(block[i : i + 32]), 2))
yield block_words
def not_32(i: int) -> int:
"""
Perform bitwise NOT on given int.
Arguments:
i {[int]} -- [given int]
Raises:
ValueError -- [input is negative]
Returns:
Result of bitwise NOT on i
>>> not_32(34)
4294967261
>>> not_32(1234)
4294966061
>>> not_32(4294966061)
1234
>>> not_32(0)
4294967295
>>> not_32(1)
4294967294
>>> not_32(-1)
Traceback (most recent call last):
...
ValueError: Input must be non-negative
"""
if i < 0:
raise ValueError("Input must be non-negative")
i_str = format(i, "032b")
new_str = ""
for c in i_str:
new_str += "1" if c == "0" else "0"
return int(new_str, 2)
def sum_32(a: int, b: int) -> int:
"""
Add two numbers as 32-bit ints.
Arguments:
a {[int]} -- [first given int]
b {[int]} -- [second given int]
Returns:
(a + b) as an unsigned 32-bit int
>>> sum_32(1, 1)
2
>>> sum_32(2, 3)
5
>>> sum_32(0, 0)
0
>>> sum_32(-1, -1)
4294967294
>>> sum_32(4294967295, 1)
0
"""
return (a + b) % 2**32
def left_rotate_32(i: int, shift: int) -> int:
"""
Rotate the bits of a given int left by a given amount.
Arguments:
i {[int]} -- [given int]
shift {[int]} -- [shift amount]
Raises:
ValueError -- [either given int or shift is negative]
Returns:
`i` rotated to the left by `shift` bits
>>> left_rotate_32(1234, 1)
2468
>>> left_rotate_32(1111, 4)
17776
>>> left_rotate_32(2147483648, 1)
1
>>> left_rotate_32(2147483648, 3)
4
>>> left_rotate_32(4294967295, 4)
4294967295
>>> left_rotate_32(1234, 0)
1234
>>> left_rotate_32(0, 0)
0
>>> left_rotate_32(-1, 0)
Traceback (most recent call last):
...
ValueError: Input must be non-negative
>>> left_rotate_32(0, -1)
Traceback (most recent call last):
...
ValueError: Shift must be non-negative
"""
if i < 0:
raise ValueError("Input must be non-negative")
if shift < 0:
raise ValueError("Shift must be non-negative")
return ((i << shift) ^ (i >> (32 - shift))) % 2**32
def md5_me(message: bytes) -> bytes:
"""
Returns the 32-char MD5 hash of a given message.
Reference: https://en.wikipedia.org/wiki/MD5#Algorithm
Arguments:
message {[string]} -- [message]
Returns:
32-char MD5 hash string
>>> md5_me(b"")
b'd41d8cd98f00b204e9800998ecf8427e'
>>> md5_me(b"The quick brown fox jumps over the lazy dog")
b'9e107d9d372bb6826bd81d3542a419d6'
>>> md5_me(b"The quick brown fox jumps over the lazy dog.")
b'e4d909c290d0fb1ca068ffaddf22cbd0'
>>> import hashlib
>>> from string import ascii_letters
>>> msgs = [b"", ascii_letters.encode("utf-8"), "Üñîçø∂é".encode("utf-8"),
... b"The quick brown fox jumps over the lazy dog."]
>>> all(md5_me(msg) == hashlib.md5(msg).hexdigest().encode("utf-8") for msg in msgs)
True
"""
# Convert to bit string, add padding and append message length
bit_string = preprocess(message)
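    # Per-step additive constants: T[i] = floor(2**32 * abs(sin(i + 1))), with the
    # angle taken in radians, as specified in RFC 1321.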
added_consts = [int(2**32 * abs(sin(i + 1))) for i in range(64)]
# Starting states
a0 = 0x67452301
b0 = 0xEFCDAB89
c0 = 0x98BADCFE
d0 = 0x10325476
    shift_amounts = [
        7, 12, 17, 22, 7, 12, 17, 22, 7, 12, 17, 22, 7, 12, 17, 22,
        5, 9, 14, 20, 5, 9, 14, 20, 5, 9, 14, 20, 5, 9, 14, 20,
        4, 11, 16, 23, 4, 11, 16, 23, 4, 11, 16, 23, 4, 11, 16, 23,
        6, 10, 15, 21, 6, 10, 15, 21, 6, 10, 15, 21, 6, 10, 15, 21,
    ]
# Process bit string in chunks, each with 16 32-char words
for block_words in get_block_words(bit_string):
a = a0
b = b0
c = c0
d = d0
# Hash current chunk
for i in range(64):
if i <= 15:
# f = (b & c) | (not_32(b) & d) # Alternate definition for f
f = d ^ (b & (c ^ d))
g = i
elif i <= 31:
# f = (d & b) | (not_32(d) & c) # Alternate definition for f
f = c ^ (d & (b ^ c))
g = (5 * i + 1) % 16
elif i <= 47:
f = b ^ c ^ d
g = (3 * i + 5) % 16
else:
f = c ^ (b | not_32(d))
g = (7 * i) % 16
f = (f + a + added_consts[i] + block_words[g]) % 2**32
a = d
d = c
c = b
b = sum_32(b, left_rotate_32(f, shift_amounts[i]))
# Add hashed chunk to running total
a0 = sum_32(a0, a)
b0 = sum_32(b0, b)
c0 = sum_32(c0, c)
d0 = sum_32(d0, d)
digest = reformat_hex(a0) + reformat_hex(b0) + reformat_hex(c0) + reformat_hex(d0)
return digest
if __name__ == "__main__":
import doctest
doctest.testmod()
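    # Quick self-check against the standard library (illustrative, in addition to
    # the doctests): both digests should agree on any byte string.
    import hashlib
    sample = b"hello world"
    assert md5_me(sample) == hashlib.md5(sample).hexdigest().encode("utf-8")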
|
This algorithm was created for sdbm (a public-domain reimplementation of ndbm) database library. It was found to do well in scrambling bits, causing better distribution of the keys and fewer splits. It also happens to be a good general hashing function with good distribution. The actual function (pseudo code) is: for i in i..len(str): hash(i) = hash(i - 1) * 65599 + str[i]; What is included below is the faster version used in gawk (there is even a faster, duff-device version). The magic constant 65599 was picked out of thin air while experimenting with different constants. It turns out to be a prime. This is one of the algorithms used in berkeley db (see sleepycat) and elsewhere. source: http://www.cse.yorku.ca/~oz/hash.html Function implements sdbm hash, easy to use, great for bits scrambling. Iterates over each character in the given string and applies function to each of them. >>> sdbm('Algorithms') 1462174910723540325254304520539387479031000036 >>> sdbm('scramble bits') 730247649148944819640658295400555317318720608290373040936089 | def sdbm(plain_text: str) -> int:
"""
Function implements sdbm hash, easy to use, great for bits scrambling.
iterates over each character in the given string and applies function to each of
them.
>>> sdbm('Algorithms')
1462174910723540325254304520539387479031000036
>>> sdbm('scramble bits')
730247649148944819640658295400555317318720608290373040936089
"""
hash_value = 0
for plain_chr in plain_text:
hash_value = (
ord(plain_chr) + (hash_value << 6) + (hash_value << 16) - hash_value
)
return hash_value
|