"""
Implementation of the SHA1 hash function and gives utilities to find hash of
string or hash of text from a file. Also contains a test to verify that the
generated hash matches what is returned by the hashlib library.

Usage: python sha1.py --string "Hello World!!"
       python sha1.py --file "hello_world.txt"
When run without any arguments, it prints the hash of the string
"Hello World!! Welcome to Cryptography"

SHA1 hash or SHA1 sum of a string is a cryptographic function, which means it
is easy to calculate forwards but extremely difficult to calculate backwards.
What this means is, you can easily calculate the hash of a string, but it is
extremely difficult to know the original string if you have its hash. This
property is useful for communicating securely, sending encrypted messages and
is very useful in payment systems, blockchain and cryptocurrency etc.

The algorithm as described in the reference:
First we start with a message. The message is padded and the length of the
message is added to the end. It is then split into blocks of 512 bits or 64
bytes. The blocks are then processed one at a time. Each block must be expanded
and compressed. The value after each compression is added to a 160-bit buffer
called the current hash state. After the last block is processed, the current
hash state is returned as the final hash.

Reference: https://deadhacker.com/2006/02/21/sha-1-illustrated/
"""
import argparse
import hashlib # hashlib is only used inside the Test class
import struct
class SHA1Hash:
"""
Class to contain the entire pipeline for SHA1 hashing algorithm
>>> SHA1Hash(bytes('Allan', 'utf-8')).final_hash()
'872af2d8ac3d8695387e7c804bf0e02c18df9e6e'
"""
def __init__(self, data):
"""
        Initializes the variables data and h. h is a list of five 8-digit hexadecimal
numbers corresponding to
(1732584193, 4023233417, 2562383102, 271733878, 3285377520)
respectively. We will start with this as a message digest. 0x is how you write
hexadecimal numbers in Python
"""
self.data = data
self.h = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0]
@staticmethod
def rotate(n, b):
"""
Static method to be used inside other methods. Left rotates n by b.
>>> SHA1Hash('').rotate(12,2)
48
"""
return ((n << b) | (n >> (32 - b))) & 0xFFFFFFFF
def padding(self):
"""
        Pads the input message so that the length of padded_data is a multiple of
        64 bytes (512 bits)
"""
padding = b"\x80" + b"\x00" * (63 - (len(self.data) + 8) % 64)
padded_data = self.data + padding + struct.pack(">Q", 8 * len(self.data))
return padded_data
def split_blocks(self):
"""
Returns a list of bytestrings each of length 64
"""
return [
self.padded_data[i : i + 64] for i in range(0, len(self.padded_data), 64)
]
# @staticmethod
def expand_block(self, block):
"""
Takes a bytestring-block of length 64, unpacks it to a list of integers and
returns a list of 80 integers after some bit operations
"""
w = list(struct.unpack(">16L", block)) + [0] * 64
for i in range(16, 80):
w[i] = self.rotate((w[i - 3] ^ w[i - 8] ^ w[i - 14] ^ w[i - 16]), 1)
return w
def final_hash(self):
"""
Calls all the other methods to process the input. Pads the data, then splits
into blocks and then does a series of operations for each block (including
expansion).
For each block, the variable h that was initialized is copied to a,b,c,d,e
and these 5 variables a,b,c,d,e undergo several changes. After all the blocks
        are processed, these 5 variables are pairwise added to h, i.e. a to h[0], b to h[1]
and so on. This h becomes our final hash which is returned.
"""
self.padded_data = self.padding()
self.blocks = self.split_blocks()
for block in self.blocks:
expanded_block = self.expand_block(block)
a, b, c, d, e = self.h
for i in range(80):
if 0 <= i < 20:
f = (b & c) | ((~b) & d)
k = 0x5A827999
elif 20 <= i < 40:
f = b ^ c ^ d
k = 0x6ED9EBA1
elif 40 <= i < 60:
f = (b & c) | (b & d) | (c & d)
k = 0x8F1BBCDC
elif 60 <= i < 80:
f = b ^ c ^ d
k = 0xCA62C1D6
a, b, c, d, e = (
self.rotate(a, 5) + f + e + k + expanded_block[i] & 0xFFFFFFFF,
a,
self.rotate(b, 30),
c,
d,
)
self.h = (
self.h[0] + a & 0xFFFFFFFF,
self.h[1] + b & 0xFFFFFFFF,
self.h[2] + c & 0xFFFFFFFF,
self.h[3] + d & 0xFFFFFFFF,
self.h[4] + e & 0xFFFFFFFF,
)
return ("{:08x}" * 5).format(*self.h)
def test_sha1_hash():
msg = b"Test String"
assert SHA1Hash(msg).final_hash() == hashlib.sha1(msg).hexdigest() # noqa: S324
def main():
"""
Provides option 'string' or 'file' to take input and prints the calculated SHA1
hash. unittest.main() has been commented out because we probably don't want to run
the test each time.
"""
# unittest.main()
parser = argparse.ArgumentParser(description="Process some strings or files")
parser.add_argument(
"--string",
dest="input_string",
default="Hello World!! Welcome to Cryptography",
help="Hash the string",
)
parser.add_argument("--file", dest="input_file", help="Hash contents of a file")
args = parser.parse_args()
input_string = args.input_string
# In any case hash input should be a bytestring
if args.input_file:
with open(args.input_file, "rb") as f:
hash_input = f.read()
else:
hash_input = bytes(input_string, "utf-8")
print(SHA1Hash(hash_input).final_hash())
if __name__ == "__main__":
main()
import doctest
doctest.testmod()
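# --- Added illustration (a minimal sketch, not part of the original module) ---
# The padding layout described in SHA1Hash.padding above: a 0x80 marker byte,
# zero bytes, then the 8-byte big-endian bit length, so the total length is
# always a multiple of 64 bytes.
import struct as _struct

def _sha1_pad(data: bytes) -> bytes:
    padding = b"\x80" + b"\x00" * (63 - (len(data) + 8) % 64)
    return data + padding + _struct.pack(">Q", 8 * len(data))

assert len(_sha1_pad(b"abc")) == 64
assert len(_sha1_pad(b"a" * 55)) == 64  # 55 data + 1 marker + 8 length bytes
assert len(_sha1_pad(b"a" * 56)) == 128  # length field spills into a second block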
# Author: M. Yathurshan
# Black Formatter: True
"""
Implementation of SHA256 Hash function in a Python class and provides utilities
to find hash of string or hash of text from a file.
Usage: python sha256.py --string "Hello World!!"
python sha256.py --file "hello_world.txt"
When run without any arguments,
it prints the hash of the string "Hello World!! Welcome to Cryptography"
References:
https://qvault.io/cryptography/how-sha-2-works-step-by-step-sha-256/
https://en.wikipedia.org/wiki/SHA-2
"""
import argparse
import struct
import unittest
class SHA256:
"""
    Class to contain the entire pipeline for the SHA-256 hashing algorithm
>>> SHA256(b'Python').hash
'18885f27b5af9012df19e496460f9294d5ab76128824c6f993787004f6d9a7db'
>>> SHA256(b'hello world').hash
'b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9'
"""
def __init__(self, data: bytes) -> None:
self.data = data
# Initialize hash values
self.hashes = [
0x6A09E667,
0xBB67AE85,
0x3C6EF372,
0xA54FF53A,
0x510E527F,
0x9B05688C,
0x1F83D9AB,
0x5BE0CD19,
]
# Initialize round constants
self.round_constants = [
0x428A2F98,
0x71374491,
0xB5C0FBCF,
0xE9B5DBA5,
0x3956C25B,
0x59F111F1,
0x923F82A4,
0xAB1C5ED5,
0xD807AA98,
0x12835B01,
0x243185BE,
0x550C7DC3,
0x72BE5D74,
0x80DEB1FE,
0x9BDC06A7,
0xC19BF174,
0xE49B69C1,
0xEFBE4786,
0x0FC19DC6,
0x240CA1CC,
0x2DE92C6F,
0x4A7484AA,
0x5CB0A9DC,
0x76F988DA,
0x983E5152,
0xA831C66D,
0xB00327C8,
0xBF597FC7,
0xC6E00BF3,
0xD5A79147,
0x06CA6351,
0x14292967,
0x27B70A85,
0x2E1B2138,
0x4D2C6DFC,
0x53380D13,
0x650A7354,
0x766A0ABB,
0x81C2C92E,
0x92722C85,
0xA2BFE8A1,
0xA81A664B,
0xC24B8B70,
0xC76C51A3,
0xD192E819,
0xD6990624,
0xF40E3585,
0x106AA070,
0x19A4C116,
0x1E376C08,
0x2748774C,
0x34B0BCB5,
0x391C0CB3,
0x4ED8AA4A,
0x5B9CCA4F,
0x682E6FF3,
0x748F82EE,
0x78A5636F,
0x84C87814,
0x8CC70208,
0x90BEFFFA,
0xA4506CEB,
0xBEF9A3F7,
0xC67178F2,
]
self.preprocessed_data = self.preprocessing(self.data)
self.final_hash()
@staticmethod
def preprocessing(data: bytes) -> bytes:
padding = b"\x80" + (b"\x00" * (63 - (len(data) + 8) % 64))
big_endian_integer = struct.pack(">Q", (len(data) * 8))
return data + padding + big_endian_integer
def final_hash(self) -> None:
# Convert into blocks of 64 bytes
self.blocks = [
self.preprocessed_data[x : x + 64]
for x in range(0, len(self.preprocessed_data), 64)
]
for block in self.blocks:
# Convert the given block into a list of 4 byte integers
words = list(struct.unpack(">16L", block))
# add 48 0-ed integers
words += [0] * 48
a, b, c, d, e, f, g, h = self.hashes
for index in range(64):
if index > 15:
# modify the zero-ed indexes at the end of the array
s0 = (
self.ror(words[index - 15], 7)
^ self.ror(words[index - 15], 18)
^ (words[index - 15] >> 3)
)
s1 = (
self.ror(words[index - 2], 17)
^ self.ror(words[index - 2], 19)
^ (words[index - 2] >> 10)
)
words[index] = (
words[index - 16] + s0 + words[index - 7] + s1
) % 0x100000000
# Compression
s1 = self.ror(e, 6) ^ self.ror(e, 11) ^ self.ror(e, 25)
ch = (e & f) ^ ((~e & (0xFFFFFFFF)) & g)
temp1 = (
h + s1 + ch + self.round_constants[index] + words[index]
) % 0x100000000
s0 = self.ror(a, 2) ^ self.ror(a, 13) ^ self.ror(a, 22)
maj = (a & b) ^ (a & c) ^ (b & c)
temp2 = (s0 + maj) % 0x100000000
h, g, f, e, d, c, b, a = (
g,
f,
e,
((d + temp1) % 0x100000000),
c,
b,
a,
((temp1 + temp2) % 0x100000000),
)
mutated_hash_values = [a, b, c, d, e, f, g, h]
# Modify final values
self.hashes = [
((element + mutated_hash_values[index]) % 0x100000000)
for index, element in enumerate(self.hashes)
]
self.hash = "".join([hex(value)[2:].zfill(8) for value in self.hashes])
def ror(self, value: int, rotations: int) -> int:
"""
Right rotate a given unsigned number by a certain amount of rotations
"""
return 0xFFFFFFFF & (value << (32 - rotations)) | (value >> rotations)
class SHA256HashTest(unittest.TestCase):
"""
Test class for the SHA256 class. Inherits the TestCase class from unittest
"""
def test_match_hashes(self) -> None:
import hashlib
msg = bytes("Test String", "utf-8")
assert SHA256(msg).hash == hashlib.sha256(msg).hexdigest()
def main() -> None:
"""
Provides option 'string' or 'file' to take input
and prints the calculated SHA-256 hash
"""
# unittest.main()
import doctest
doctest.testmod()
parser = argparse.ArgumentParser()
parser.add_argument(
"-s",
"--string",
dest="input_string",
default="Hello World!! Welcome to Cryptography",
help="Hash the string",
)
parser.add_argument(
"-f", "--file", dest="input_file", help="Hash contents of a file"
)
args = parser.parse_args()
input_string = args.input_string
# hash input should be a bytestring
if args.input_file:
with open(args.input_file, "rb") as f:
hash_input = f.read()
else:
hash_input = bytes(input_string, "utf-8")
print(SHA256(hash_input).hash)
if __name__ == "__main__":
main()
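# --- Added sanity check (an assumed example, not in the original) ---
# The 32-bit right-rotation used in SHA256.ror can be checked against the
# identity that rotating right by n and then by 32 - n reproduces the input.
def _ror32(value: int, rotations: int) -> int:
    return (0xFFFFFFFF & (value << (32 - rotations))) | (value >> rotations)

assert _ror32(1, 1) == 0x80000000
assert _ror32(_ror32(0x12345678, 7), 32 - 7) == 0x12345678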
# To get an insight into Greedy Algorithm through the Knapsack problem
"""
A shopkeeper has bags of wheat that each have different weights and different profits.
eg.
profit 5 8 7 1 12 3 4
weight 2 7 1 6 4 2 5
max_weight 100
Constraints:
max_weight > 0
profit[i] >= 0
weight[i] >= 0
Calculate the maximum profit that the shopkeeper can make given the maximum
weight that can be carried.
"""
def calc_profit(profit: list, weight: list, max_weight: int) -> int:
"""
Function description is as follows-
:param profit: Take a list of profits
    :param weight: Take a list of weights of bags corresponding to the profits
:param max_weight: Maximum weight that could be carried
:return: Maximum expected gain
>>> calc_profit([1, 2, 3], [3, 4, 5], 15)
6
>>> calc_profit([10, 9 , 8], [3 ,4 , 5], 25)
27
"""
if len(profit) != len(weight):
raise ValueError("The length of profit and weight must be same.")
if max_weight <= 0:
raise ValueError("max_weight must greater than zero.")
if any(p < 0 for p in profit):
raise ValueError("Profit can not be negative.")
if any(w < 0 for w in weight):
raise ValueError("Weight can not be negative.")
# List created to store profit gained for the 1kg in case of each weight
# respectively. Calculate and append profit/weight for each element.
profit_by_weight = [p / w for p, w in zip(profit, weight)]
# Creating a copy of the list and sorting profit/weight in ascending order
sorted_profit_by_weight = sorted(profit_by_weight)
# declaring useful variables
length = len(sorted_profit_by_weight)
limit = 0
gain = 0
i = 0
    # loop until the total weight reaches the max limit (e.g. 15 kg) and until i < length
while limit <= max_weight and i < length:
# flag value for encountered greatest element in sorted_profit_by_weight
biggest_profit_by_weight = sorted_profit_by_weight[length - i - 1]
"""
Calculate the index of the biggest_profit_by_weight in profit_by_weight list.
This will give the index of the first encountered element which is same as of
biggest_profit_by_weight. There may be one or more values same as that of
biggest_profit_by_weight but index always encounter the very first element
only. To curb this alter the values in profit_by_weight once they are used
here it is done to -1 because neither profit nor weight can be in negative.
"""
index = profit_by_weight.index(biggest_profit_by_weight)
profit_by_weight[index] = -1
        # check whether the whole bag fits into the remaining capacity
if max_weight - limit >= weight[index]:
limit += weight[index]
            # Add the full profit for this bag; the fraction taken is
            # weight[index] / weight[index] == 1
gain += 1 * profit[index]
else:
            # The whole bag does not fit, so take only the remaining capacity
            # and add the proportional profit:
            # (remaining weight / weight[index]) * profit[index]
gain += (max_weight - limit) / weight[index] * profit[index]
break
i += 1
return gain
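# --- Added walk-through (values assumed for illustration, not in original) ---
# The greedy strategy always takes the bag with the highest remaining
# profit/weight ratio, splitting the final bag if it does not fit entirely.
def _demo_calc_profit() -> None:
    # All three bags fit (total weight 12 <= 25), so gain = 10 + 9 + 8 = 27.
    assert calc_profit([10, 9, 8], [3, 4, 5], 25) == 27
    # Only 4 kg of capacity remain for the 10 kg bag, so 4/10 of its profit
    # is taken: 8 + (4 / 10) * 5 == 10.0.
    assert calc_profit([5, 8], [10, 4], 8) == 10.0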
if __name__ == "__main__":
print(
"Input profits, weights, and then max_weight (all positive ints) separated by "
"spaces."
)
profit = [int(x) for x in input("Input profits separated by spaces: ").split()]
weight = [int(x) for x in input("Input weights separated by spaces: ").split()]
max_weight = int(input("Max weight allowed: "))
# Function Call
    print(calc_profit(profit, weight, max_weight))
"""
A naive recursive implementation of 0-1 Knapsack Problem
https://en.wikipedia.org/wiki/Knapsack_problem
"""
from __future__ import annotations
def knapsack(capacity: int, weights: list[int], values: list[int], counter: int) -> int:
"""
Returns the maximum value that can be put in a knapsack of a capacity cap,
whereby each weight w has a specific value val.
>>> cap = 50
>>> val = [60, 100, 120]
>>> w = [10, 20, 30]
>>> c = len(val)
>>> knapsack(cap, w, val, c)
220
    The result is 220 because the items with values 100 and 120 together weigh
    50, which is exactly the capacity limit.
"""
# Base Case
if counter == 0 or capacity == 0:
return 0
# If weight of the nth item is more than Knapsack of capacity,
# then this item cannot be included in the optimal solution,
# else return the maximum of two cases:
# (1) nth item included
# (2) not included
if weights[counter - 1] > capacity:
return knapsack(capacity, weights, values, counter - 1)
else:
left_capacity = capacity - weights[counter - 1]
new_value_included = values[counter - 1] + knapsack(
left_capacity, weights, values, counter - 1
)
without_new_value = knapsack(capacity, weights, values, counter - 1)
return max(new_value_included, without_new_value)
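# --- Added sketch (an assumed extension, not part of the original) ---
# The same include/exclude recursion memoized with functools.lru_cache, which
# collapses the exponential call tree to O(capacity * counter) distinct states.
from functools import lru_cache

def knapsack_memoized(
    capacity: int, weights: tuple[int, ...], values: tuple[int, ...]
) -> int:
    @lru_cache(maxsize=None)
    def best(cap: int, counter: int) -> int:
        if counter == 0 or cap == 0:
            return 0
        without_item = best(cap, counter - 1)
        if weights[counter - 1] > cap:
            return without_item
        with_item = values[counter - 1] + best(cap - weights[counter - 1], counter - 1)
        return max(without_item, with_item)

    return best(capacity, len(values))

assert knapsack_memoized(50, (10, 20, 30), (60, 100, 120)) == 220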
if __name__ == "__main__":
import doctest
doctest.testmod()
# To get an insight into naive recursive way to solve the Knapsack problem
"""
A shopkeeper has bags of wheat that each have different weights and different profits.
eg.
no_of_items 4
profit 5 4 8 6
weight 1 2 4 5
max_weight 5
Constraints:
max_weight > 0
profit[i] >= 0
weight[i] >= 0
Calculate the maximum profit that the shopkeeper can make given the maximum
weight that can be carried.
"""
def knapsack(
weights: list, values: list, number_of_items: int, max_weight: int, index: int
) -> int:
"""
Function description is as follows-
:param weights: Take a list of weights
:param values: Take a list of profits corresponding to the weights
:param number_of_items: number of items available to pick from
:param max_weight: Maximum weight that could be carried
:param index: the element we are looking at
:return: Maximum expected gain
>>> knapsack([1, 2, 4, 5], [5, 4, 8, 6], 4, 5, 0)
13
>>> knapsack([3 ,4 , 5], [10, 9 , 8], 3, 25, 0)
27
"""
if index == number_of_items:
return 0
    ans1 = knapsack(weights, values, number_of_items, max_weight, index + 1)
    ans2 = 0
if weights[index] <= max_weight:
ans2 = values[index] + knapsack(
weights, values, number_of_items, max_weight - weights[index], index + 1
)
return max(ans1, ans2)
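# --- Added worked example (values taken from the doctest above) ---
# With weights [1, 2, 4, 5], values [5, 4, 8, 6] and max_weight 5, the best
# choice is items 0 and 2 (weight 1 + 4 = 5, value 5 + 8 = 13).
def _demo_knapsack() -> None:
    assert knapsack([1, 2, 4, 5], [5, 4, 8, 6], 4, 5, 0) == 13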
if __name__ == "__main__":
import doctest
doctest.testmod()
import unittest
import pytest
from knapsack import greedy_knapsack as kp
class TestClass(unittest.TestCase):
"""
Test cases for knapsack
"""
def test_sorted(self):
"""
kp.calc_profit takes the required argument (profit, weight, max_weight)
and returns whether the answer matches to the expected ones
"""
profit = [10, 20, 30, 40, 50, 60]
weight = [2, 4, 6, 8, 10, 12]
max_weight = 100
assert kp.calc_profit(profit, weight, max_weight) == 210
    def test_negative_max_weight(self):
        """
        Raises ValueError for any negative max_weight value
        :return: ValueError
        """
        profit = [10, 20, 30, 40, 50, 60]
        weight = [2, 4, 6, 8, 10, 12]
        with pytest.raises(ValueError, match="max_weight must be greater than zero."):
            kp.calc_profit(profit, weight, max_weight=-15)
    def test_negative_profit_value(self):
        """
        Raises ValueError for any negative profit value in the list
        :return: ValueError
        """
        profit = [10, -20, 30, 40, 50, 60]
        weight = [2, 4, 6, 8, 10, 12]
        with pytest.raises(ValueError, match="Profit can not be negative."):
            kp.calc_profit(profit, weight, max_weight=15)
    def test_negative_weight_value(self):
        """
        Raises ValueError for any negative weight value in the list
        :return: ValueError
        """
        profit = [10, 20, 30, 40, 50, 60]
        weight = [2, -4, 6, -8, 10, 12]
        with pytest.raises(ValueError, match="Weight can not be negative."):
            kp.calc_profit(profit, weight, max_weight=15)
    def test_null_max_weight(self):
        """
        Raises ValueError for a zero max_weight value
        :return: ValueError
        """
        profit = [10, 20, 30, 40, 50, 60]
        weight = [2, 4, 6, 8, 10, 12]
        with pytest.raises(ValueError, match="max_weight must be greater than zero."):
            kp.calc_profit(profit, weight, max_weight=0)
    def test_unequal_list_length(self):
        """
        Raises ValueError if the lengths of the profit and weight lists differ
        :return: ValueError
        """
        profit = [10, 20, 30, 40, 50]
        weight = [2, 4, 6, 8, 10, 12]
        with pytest.raises(
            ValueError, match="The length of profit and weight must be same."
        ):
            kp.calc_profit(profit, weight, max_weight=100)
if __name__ == "__main__":
unittest.main()
"""
Created on Fri Oct 16 09:31:07 2020

@author: Dr. Tobias Schröder
@license: MIT-license

This file contains the test-suite for the knapsack problem.
"""
import unittest
from knapsack import knapsack as k
class Test(unittest.TestCase):
def test_base_case(self):
"""
test for the base case
"""
cap = 0
val = [0]
w = [0]
c = len(val)
assert k.knapsack(cap, w, val, c) == 0
val = [60]
w = [10]
c = len(val)
assert k.knapsack(cap, w, val, c) == 0
def test_easy_case(self):
"""
        test for an easy case
"""
cap = 3
val = [1, 2, 3]
w = [3, 2, 1]
c = len(val)
assert k.knapsack(cap, w, val, c) == 5
def test_knapsack(self):
"""
test for the knapsack
"""
cap = 50
val = [60, 100, 120]
w = [10, 20, 30]
c = len(val)
assert k.knapsack(cap, w, val, c) == 220
if __name__ == "__main__":
unittest.main()
"""
Gaussian elimination method for solving a system of linear equations.
Gaussian elimination - https://en.wikipedia.org/wiki/Gaussian_elimination
"""
import numpy as np
from numpy import float64
from numpy.typing import NDArray
def retroactive_resolution(
coefficients: NDArray[float64], vector: NDArray[float64]
) -> NDArray[float64]:
"""
    This function performs a retroactive linear system resolution (back
    substitution) for an upper triangular matrix
Examples:
2x1 + 2x2 - 1x3 = 5 2x1 + 2x2 = -1
0x1 - 2x2 - 1x3 = -7 0x1 - 2x2 = -1
0x1 + 0x2 + 5x3 = 15
>>> gaussian_elimination([[2, 2, -1], [0, -2, -1], [0, 0, 5]], [[5], [-7], [15]])
array([[2.],
[2.],
[3.]])
>>> gaussian_elimination([[2, 2], [0, -2]], [[-1], [-1]])
array([[-1. ],
[ 0.5]])
"""
rows, columns = np.shape(coefficients)
x: NDArray[float64] = np.zeros((rows, 1), dtype=float)
for row in reversed(range(rows)):
total = np.dot(coefficients[row, row + 1 :], x[row + 1 :])
x[row, 0] = (vector[row][0] - total[0]) / coefficients[row, row]
return x
def gaussian_elimination(
coefficients: NDArray[float64], vector: NDArray[float64]
) -> NDArray[float64]:
"""
    This function performs the Gaussian elimination method
Examples:
1x1 - 4x2 - 2x3 = -2 1x1 + 2x2 = 5
5x1 + 2x2 - 2x3 = -3 5x1 + 2x2 = 5
1x1 - 1x2 + 0x3 = 4
>>> gaussian_elimination([[1, -4, -2], [5, 2, -2], [1, -1, 0]], [[-2], [-3], [4]])
array([[ 2.3 ],
[-1.7 ],
[ 5.55]])
>>> gaussian_elimination([[1, 2], [5, 2]], [[5], [5]])
array([[0. ],
[2.5]])
"""
    # coefficients must be a square matrix, so we need to check that first
rows, columns = np.shape(coefficients)
if rows != columns:
return np.array((), dtype=float)
# augmented matrix
augmented_mat: NDArray[float64] = np.concatenate((coefficients, vector), axis=1)
augmented_mat = augmented_mat.astype("float64")
# scale the matrix leaving it triangular
for row in range(rows - 1):
pivot = augmented_mat[row, row]
for col in range(row + 1, columns):
factor = augmented_mat[col, row] / pivot
augmented_mat[col, :] -= factor * augmented_mat[row, :]
x = retroactive_resolution(
augmented_mat[:, 0:columns], augmented_mat[:, columns : columns + 1]
)
return x
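# --- Added usage sketch (assumed example values, not from the original) ---
# Cross-check the elimination result against a hand-solved 2x2 system:
# 3x + 2y = 7 and x + 4y = 9 have the unique solution x = 1, y = 2.
def _demo_gaussian_elimination() -> None:
    coefficients = np.array([[3.0, 2.0], [1.0, 4.0]])
    vector = np.array([[7.0], [9.0]])
    assert np.allclose(gaussian_elimination(coefficients, vector), [[1.0], [2.0]])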
if __name__ == "__main__":
import doctest
doctest.testmod()
"""
Jacobi Iteration Method - https://en.wikipedia.org/wiki/Jacobi_method
"""
from __future__ import annotations
import numpy as np
from numpy import float64
from numpy.typing import NDArray
# Method to find solution of system of linear equations
def jacobi_iteration_method(
coefficient_matrix: NDArray[float64],
constant_matrix: NDArray[float64],
init_val: list[float],
iterations: int,
) -> list[float]:
"""
Jacobi Iteration Method:
An iterative algorithm to determine the solutions of strictly diagonally dominant
system of linear equations
4x1 + x2 + x3 = 2
x1 + 5x2 + 2x3 = -6
x1 + 2x2 + 4x3 = -4
x_init = [0.5, -0.5 , -0.5]
Examples:
>>> coefficient = np.array([[4, 1, 1], [1, 5, 2], [1, 2, 4]])
>>> constant = np.array([[2], [-6], [-4]])
>>> init_val = [0.5, -0.5, -0.5]
>>> iterations = 3
>>> jacobi_iteration_method(coefficient, constant, init_val, iterations)
[0.909375, -1.14375, -0.7484375]
>>> coefficient = np.array([[4, 1, 1], [1, 5, 2]])
>>> constant = np.array([[2], [-6], [-4]])
>>> init_val = [0.5, -0.5, -0.5]
>>> iterations = 3
>>> jacobi_iteration_method(coefficient, constant, init_val, iterations)
Traceback (most recent call last):
...
ValueError: Coefficient matrix dimensions must be nxn but received 2x3
>>> coefficient = np.array([[4, 1, 1], [1, 5, 2], [1, 2, 4]])
>>> constant = np.array([[2], [-6]])
>>> init_val = [0.5, -0.5, -0.5]
>>> iterations = 3
>>> jacobi_iteration_method(
... coefficient, constant, init_val, iterations
... ) # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
ValueError: Coefficient and constant matrices dimensions must be nxn and nx1 but
received 3x3 and 2x1
>>> coefficient = np.array([[4, 1, 1], [1, 5, 2], [1, 2, 4]])
>>> constant = np.array([[2], [-6], [-4]])
>>> init_val = [0.5, -0.5]
>>> iterations = 3
>>> jacobi_iteration_method(
... coefficient, constant, init_val, iterations
... ) # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
ValueError: Number of initial values must be equal to number of rows in coefficient
matrix but received 2 and 3
>>> coefficient = np.array([[4, 1, 1], [1, 5, 2], [1, 2, 4]])
>>> constant = np.array([[2], [-6], [-4]])
>>> init_val = [0.5, -0.5, -0.5]
>>> iterations = 0
>>> jacobi_iteration_method(coefficient, constant, init_val, iterations)
Traceback (most recent call last):
...
ValueError: Iterations must be at least 1
"""
rows1, cols1 = coefficient_matrix.shape
rows2, cols2 = constant_matrix.shape
if rows1 != cols1:
msg = f"Coefficient matrix dimensions must be nxn but received {rows1}x{cols1}"
raise ValueError(msg)
if cols2 != 1:
msg = f"Constant matrix must be nx1 but received {rows2}x{cols2}"
raise ValueError(msg)
if rows1 != rows2:
msg = (
"Coefficient and constant matrices dimensions must be nxn and nx1 but "
f"received {rows1}x{cols1} and {rows2}x{cols2}"
)
raise ValueError(msg)
if len(init_val) != rows1:
msg = (
"Number of initial values must be equal to number of rows in coefficient "
f"matrix but received {len(init_val)} and {rows1}"
)
raise ValueError(msg)
if iterations <= 0:
raise ValueError("Iterations must be at least 1")
table: NDArray[float64] = np.concatenate(
(coefficient_matrix, constant_matrix), axis=1
)
rows, cols = table.shape
strictly_diagonally_dominant(table)
"""
# Iterates the whole matrix for given number of times
for _ in range(iterations):
new_val = []
for row in range(rows):
temp = 0
for col in range(cols):
if col == row:
denom = table[row][col]
elif col == cols - 1:
val = table[row][col]
else:
temp += (-1) * table[row][col] * init_val[col]
temp = (temp + val) / denom
new_val.append(temp)
init_val = new_val
"""
# denominator - a list of values along the diagonal
denominator = np.diag(coefficient_matrix)
# val_last - values of the last column of the table array
val_last = table[:, -1]
# masks - boolean mask of all strings without diagonal
# elements array coefficient_matrix
masks = ~np.eye(coefficient_matrix.shape[0], dtype=bool)
# no_diagonals - coefficient_matrix array values without diagonal elements
no_diagonals = coefficient_matrix[masks].reshape(-1, rows - 1)
# Here we get 'i_col' - these are the column numbers, for each row
# without diagonal elements, except for the last column.
    i_row, i_col = np.where(masks)
    # 'i_col' is converted to a two-dimensional array 'ind', which is used to
    # make selections from 'init_val' (the 'arr' array; see below)
    ind = i_col.reshape(-1, rows - 1)
# Iterates the whole matrix for given number of times
for _ in range(iterations):
arr = np.take(init_val, ind)
sum_product_rows = np.sum((-1) * no_diagonals * arr, axis=1)
new_val = (sum_product_rows + val_last) / denominator
init_val = new_val
return new_val.tolist()
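# --- Added demonstration (an assumed check, not part of the original) ---
# With enough iterations the Jacobi iterates converge to the solution that
# np.linalg.solve produces for the strictly diagonally dominant system above.
def _demo_jacobi_convergence() -> None:
    coefficient = np.array([[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [1.0, 2.0, 4.0]])
    constant = np.array([[2.0], [-6.0], [-4.0]])
    exact = np.linalg.solve(coefficient, constant).flatten()
    approx = jacobi_iteration_method(coefficient, constant, [0.5, -0.5, -0.5], 100)
    assert np.allclose(approx, exact, atol=1e-8)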
# Checks if the given matrix is strictly diagonally dominant
def strictly_diagonally_dominant(table: NDArray[float64]) -> bool:
"""
>>> table = np.array([[4, 1, 1, 2], [1, 5, 2, -6], [1, 2, 4, -4]])
>>> strictly_diagonally_dominant(table)
True
>>> table = np.array([[4, 1, 1, 2], [1, 5, 2, -6], [1, 2, 3, -4]])
>>> strictly_diagonally_dominant(table)
Traceback (most recent call last):
...
ValueError: Coefficient matrix is not strictly diagonally dominant
"""
rows, cols = table.shape
is_diagonally_dominant = True
for i in range(rows):
        total = 0
        for j in range(cols - 1):
            if i == j:
                continue
            total += abs(table[i][j])
        if abs(table[i][i]) <= total:
raise ValueError("Coefficient matrix is not strictly diagonally dominant")
return is_diagonally_dominant
# Test Cases
if __name__ == "__main__":
import doctest
doctest.testmod()
"""
Lower-upper (LU) decomposition factors a matrix as a product of a lower
triangular matrix and an upper triangular matrix. A square matrix has an LU
decomposition under the following conditions:

    - If the matrix is invertible, then it has an LU decomposition if and only
      if all of its leading principal minors are non-zero (see
      https://en.wikipedia.org/wiki/Minor_(linear_algebra) for an explanation
      of leading principal minors of a matrix).
    - If the matrix is singular (i.e., not invertible) and it has a rank of k
      (i.e., it has k linearly independent columns), then it has an LU
      decomposition if its first k leading principal minors are non-zero.

This algorithm will simply attempt to perform LU decomposition on any square
matrix and raise an error if no such decomposition exists.

Reference: https://en.wikipedia.org/wiki/LU_decomposition
"""
from __future__ import annotations
import numpy as np
def lower_upper_decomposition(table: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
"""
Perform LU decomposition on a given matrix and raises an error if the matrix
isn't square or if no such decomposition exists
>>> matrix = np.array([[2, -2, 1], [0, 1, 2], [5, 3, 1]])
>>> lower_mat, upper_mat = lower_upper_decomposition(matrix)
>>> lower_mat
array([[1. , 0. , 0. ],
[0. , 1. , 0. ],
[2.5, 8. , 1. ]])
>>> upper_mat
array([[ 2. , -2. , 1. ],
[ 0. , 1. , 2. ],
[ 0. , 0. , -17.5]])
>>> matrix = np.array([[4, 3], [6, 3]])
>>> lower_mat, upper_mat = lower_upper_decomposition(matrix)
>>> lower_mat
array([[1. , 0. ],
[1.5, 1. ]])
>>> upper_mat
array([[ 4. , 3. ],
[ 0. , -1.5]])
# Matrix is not square
>>> matrix = np.array([[2, -2, 1], [0, 1, 2]])
>>> lower_mat, upper_mat = lower_upper_decomposition(matrix)
Traceback (most recent call last):
...
ValueError: 'table' has to be of square shaped array but got a 2x3 array:
[[ 2 -2 1]
[ 0 1 2]]
# Matrix is invertible, but its first leading principal minor is 0
>>> matrix = np.array([[0, 1], [1, 0]])
>>> lower_mat, upper_mat = lower_upper_decomposition(matrix)
Traceback (most recent call last):
...
ArithmeticError: No LU decomposition exists
# Matrix is singular, but its first leading principal minor is 1
>>> matrix = np.array([[1, 0], [1, 0]])
>>> lower_mat, upper_mat = lower_upper_decomposition(matrix)
>>> lower_mat
array([[1., 0.],
[1., 1.]])
>>> upper_mat
array([[1., 0.],
[0., 0.]])
# Matrix is singular, but its first leading principal minor is 0
>>> matrix = np.array([[0, 1], [0, 1]])
>>> lower_mat, upper_mat = lower_upper_decomposition(matrix)
Traceback (most recent call last):
...
ArithmeticError: No LU decomposition exists
"""
# Ensure that table is a square array
rows, columns = np.shape(table)
if rows != columns:
msg = (
"'table' has to be of square shaped array but got a "
f"{rows}x{columns} array:\n{table}"
)
raise ValueError(msg)
lower = np.zeros((rows, columns))
upper = np.zeros((rows, columns))
# in 'total', the necessary data is extracted through slices
# and the sum of the products is obtained.
for i in range(columns):
for j in range(i):
total = np.sum(lower[i, :i] * upper[:i, j])
if upper[j][j] == 0:
raise ArithmeticError("No LU decomposition exists")
lower[i][j] = (table[i][j] - total) / upper[j][j]
lower[i][i] = 1
for j in range(i, columns):
total = np.sum(lower[i, :i] * upper[:i, j])
upper[i][j] = table[i][j] - total
return lower, upper
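# --- Added round-trip check (illustrative, values assumed) ---
# Any valid factorization must satisfy lower @ upper == table up to
# floating-point error.
def _demo_lu_roundtrip() -> None:
    matrix = np.array([[2.0, -2.0, 1.0], [0.0, 1.0, 2.0], [5.0, 3.0, 1.0]])
    lower, upper = lower_upper_decomposition(matrix)
    assert np.allclose(lower @ upper, matrix)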
if __name__ == "__main__":
import doctest
doctest.testmod()
"""
Resources:
- https://en.wikipedia.org/wiki/Conjugate_gradient_method
- https://en.wikipedia.org/wiki/Definite_symmetric_matrix
"""
from typing import Any
import numpy as np
def _is_matrix_spd(matrix: np.ndarray) -> bool:
"""
Returns True if input matrix is symmetric positive definite.
Returns False otherwise.
For a matrix to be SPD, all eigenvalues must be positive.
>>> import numpy as np
>>> matrix = np.array([
... [4.12401784, -5.01453636, -0.63865857],
... [-5.01453636, 12.33347422, -3.40493586],
... [-0.63865857, -3.40493586, 5.78591885]])
>>> _is_matrix_spd(matrix)
True
>>> matrix = np.array([
... [0.34634879, 1.96165514, 2.18277744],
... [0.74074469, -1.19648894, -1.34223498],
... [-0.7687067 , 0.06018373, -1.16315631]])
>>> _is_matrix_spd(matrix)
False
"""
# Ensure matrix is square.
assert np.shape(matrix)[0] == np.shape(matrix)[1]
# If matrix not symmetric, exit right away.
    if not np.allclose(matrix, matrix.T):
return False
    # Get eigenvalues and eigenvectors for a symmetric matrix.
eigen_values, _ = np.linalg.eigh(matrix)
# Check sign of all eigenvalues.
    # np.all returns a value of type np.bool_
return bool(np.all(eigen_values > 0))
def _create_spd_matrix(dimension: int) -> Any:
"""
Returns a symmetric positive definite matrix given a dimension.
Input:
dimension gives the square matrix dimension.
Output:
    spd_matrix is a dimension x dimension symmetric positive definite (SPD) matrix.
>>> import numpy as np
>>> dimension = 3
>>> spd_matrix = _create_spd_matrix(dimension)
>>> _is_matrix_spd(spd_matrix)
True
"""
random_matrix = np.random.randn(dimension, dimension)
spd_matrix = np.dot(random_matrix, random_matrix.T)
assert _is_matrix_spd(spd_matrix)
return spd_matrix
def conjugate_gradient(
spd_matrix: np.ndarray,
load_vector: np.ndarray,
max_iterations: int = 1000,
tol: float = 1e-8,
) -> Any:
"""
Returns solution to the linear system np.dot(spd_matrix, x) = b.
Input:
spd_matrix is an NxN Symmetric Positive Definite (SPD) matrix.
load_vector is an Nx1 vector.
Output:
x is an Nx1 vector that is the solution vector.
>>> import numpy as np
>>> spd_matrix = np.array([
... [8.73256573, -5.02034289, -2.68709226],
... [-5.02034289, 3.78188322, 0.91980451],
... [-2.68709226, 0.91980451, 1.94746467]])
>>> b = np.array([
... [-5.80872761],
... [ 3.23807431],
... [ 1.95381422]])
>>> conjugate_gradient(spd_matrix, b)
array([[-0.63114139],
[-0.01561498],
[ 0.13979294]])
"""
# Ensure proper dimensionality.
assert np.shape(spd_matrix)[0] == np.shape(spd_matrix)[1]
assert np.shape(load_vector)[0] == np.shape(spd_matrix)[0]
assert _is_matrix_spd(spd_matrix)
# Initialize solution guess, residual, search direction.
x0 = np.zeros((np.shape(load_vector)[0], 1))
r0 = np.copy(load_vector)
p0 = np.copy(r0)
# Set initial errors in solution guess and residual.
error_residual = 1e9
error_x_solution = 1e9
error = 1e9
    # Initialize the iteration counter.
iterations = 0
while error > tol:
# Save this value so we only calculate the matrix-vector product once.
w = np.dot(spd_matrix, p0)
# The main algorithm.
# Update search direction magnitude.
alpha = np.dot(r0.T, r0) / np.dot(p0.T, w)
# Update solution guess.
x = x0 + alpha * p0
# Calculate new residual.
r = r0 - alpha * w
# Calculate new Krylov subspace scale.
beta = np.dot(r.T, r) / np.dot(r0.T, r0)
        # Calculate new A-conjugate search direction.
p = r + beta * p0
# Calculate errors.
error_residual = np.linalg.norm(r - r0)
error_x_solution = np.linalg.norm(x - x0)
error = np.maximum(error_residual, error_x_solution)
# Update variables.
x0 = np.copy(x)
r0 = np.copy(r)
p0 = np.copy(p)
# Update number of iterations.
iterations += 1
if iterations > max_iterations:
break
return x
def test_conjugate_gradient() -> None:
"""
>>> test_conjugate_gradient() # self running tests
"""
# Create linear system with SPD matrix and known solution x_true.
dimension = 3
spd_matrix = _create_spd_matrix(dimension)
x_true = np.random.randn(dimension, 1)
b = np.dot(spd_matrix, x_true)
# Numpy solution.
x_numpy = np.linalg.solve(spd_matrix, b)
# Our implementation.
x_conjugate_gradient = conjugate_gradient(spd_matrix, b)
# Ensure both solutions are close to x_true (and therefore one another).
assert np.linalg.norm(x_numpy - x_true) <= 1e-6
assert np.linalg.norm(x_conjugate_gradient - x_true) <= 1e-6
if __name__ == "__main__":
import doctest
doctest.testmod()
test_conjugate_gradient()
import numpy as np
matrix = np.array(
[
[5.0, -5.0, -3.0, 4.0, -11.0],
[1.0, -4.0, 6.0, -4.0, -10.0],
[-2.0, -5.0, 4.0, -5.0, -12.0],
[-3.0, -3.0, 5.0, -5.0, 8.0],
],
dtype=float,
)
def solve_linear_system(matrix: np.ndarray) -> np.ndarray:
"""
Solve a linear system of equations using Gaussian elimination with partial pivoting
Args:
- matrix: Coefficient matrix with the last column representing the constants.
Returns:
- Solution vector.
Raises:
- ValueError: If the matrix is not correct (i.e., singular).
https://courses.engr.illinois.edu/cs357/su2013/lect.htm Lecture 7
Example:
>>> A = np.array([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]], dtype=float)
>>> B = np.array([8, -11, -3], dtype=float)
>>> solution = solve_linear_system(np.column_stack((A, B)))
>>> np.allclose(solution, np.array([2., 3., -1.]))
True
>>> solve_linear_system(np.array([[0, 0], [0, 0]], dtype=float))
array([nan, nan])
"""
ab = np.copy(matrix)
num_of_rows = ab.shape[0]
num_of_columns = ab.shape[1] - 1
x_lst: list[float] = []
    # Forward elimination with partial pivoting: for each column, swap up the
    # row with the largest lead element, then subtract multiples of the pivot
    # row to zero out the entries below it, leaving an upper triangular matrix.
    for column_num in range(num_of_rows):
        for i in range(column_num, num_of_columns):
            if abs(ab[i][column_num]) > abs(ab[column_num][column_num]):
                ab[[column_num, i]] = ab[[i, column_num]]
                if ab[column_num, column_num] == 0.0:
                    raise ValueError("Matrix is not correct")
        if column_num != 0:
            for i in range(column_num, num_of_rows):
                ab[i, :] -= (
                    ab[i, column_num - 1]
                    / ab[column_num - 1, column_num - 1]
                    * ab[column_num - 1, :]
                )
# Find x vector (Back Substitution)
for column_num in range(num_of_rows - 1, -1, -1):
x = ab[column_num, -1] / ab[column_num, column_num]
x_lst.insert(0, x)
for i in range(column_num - 1, -1, -1):
ab[i, -1] -= ab[i, column_num] * x
# Return the solution vector
return np.asarray(x_lst)
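# --- Added cross-check (illustrative, not in the original) ---
# The partial-pivoting solver should agree with numpy's reference solver on
# the module-level example matrix defined above.
def _demo_solve_linear_system() -> None:
    a, b = matrix[:, :-1], matrix[:, -1]
    assert np.allclose(solve_linear_system(matrix), np.linalg.solve(a, b))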
if __name__ == "__main__":
from doctest import testmod
from pathlib import Path
testmod()
file_path = Path(__file__).parent / "matrix.txt"
try:
matrix = np.loadtxt(file_path)
except FileNotFoundError:
print(f"Error: {file_path} not found. Using default matrix instead.")
# Example usage:
print(f"Matrix:\n{matrix}")
print(f"{solve_linear_system(matrix) = }")
"""
Created on Mon Feb 26 14:29:11 2018

@author: Christian Bender
@license: MIT-license

This module contains some useful classes and functions for dealing with linear
algebra in python.

Overview:
- class Vector
- function zero_vector(dimension)
- function unit_basis_vector(dimension, pos)
- function axpy(scalar, vector1, vector2)
- function random_vector(N, a, b)
- class Matrix
- function square_zero_matrix(N)
- function random_matrix(W, H, a, b)
"""
from __future__ import annotations
import math
import random
from collections.abc import Collection
from typing import overload
class Vector:
"""
This class represents a vector of arbitrary size.
You need to give the vector components.
Overview of the methods:
__init__(components: Collection[float] | None): init the vector
__len__(): gets the size of the vector (number of components)
__str__(): returns a string representation
__add__(other: Vector): vector addition
__sub__(other: Vector): vector subtraction
__mul__(other: float): scalar multiplication
__mul__(other: Vector): dot product
copy(): copies this vector and returns it
component(i): gets the i-th component (0-indexed)
change_component(pos: int, value: float): changes specified component
euclidean_length(): returns the euclidean length of the vector
angle(other: Vector, deg: bool): returns the angle between two vectors
TODO: compare-operator
"""
def __init__(self, components: Collection[float] | None = None) -> None:
"""
        input: components or nothing
        simple constructor to initialize the vector
"""
if components is None:
components = []
self.__components = list(components)
def __len__(self) -> int:
"""
returns the size of the vector
"""
return len(self.__components)
def __str__(self) -> str:
"""
returns a string representation of the vector
"""
return "(" + ",".join(map(str, self.__components)) + ")"
def __add__(self, other: Vector) -> Vector:
"""
input: other vector
assumes: other vector has the same size
returns a new vector that represents the sum.
"""
size = len(self)
if size == len(other):
result = [self.__components[i] + other.component(i) for i in range(size)]
return Vector(result)
else:
raise Exception("must have the same size")
def __sub__(self, other: Vector) -> Vector:
"""
input: other vector
assumes: other vector has the same size
returns a new vector that represents the difference.
"""
size = len(self)
if size == len(other):
result = [self.__components[i] - other.component(i) for i in range(size)]
return Vector(result)
else: # error case
raise Exception("must have the same size")
@overload
def __mul__(self, other: float) -> Vector:
...
@overload
def __mul__(self, other: Vector) -> float:
...
def __mul__(self, other: float | Vector) -> float | Vector:
"""
mul implements the scalar multiplication
and the dot-product
"""
if isinstance(other, (float, int)):
ans = [c * other for c in self.__components]
return Vector(ans)
elif isinstance(other, Vector) and len(self) == len(other):
size = len(self)
prods = [self.__components[i] * other.component(i) for i in range(size)]
return sum(prods)
else: # error case
raise Exception("invalid operand!")
def copy(self) -> Vector:
"""
copies this vector and returns it.
"""
return Vector(self.__components)
def component(self, i: int) -> float:
"""
input: index (0-indexed)
output: the i-th component of the vector.
"""
if isinstance(i, int) and -len(self.__components) <= i < len(self.__components):
return self.__components[i]
else:
raise Exception("index out of range")
def change_component(self, pos: int, value: float) -> None:
"""
input: an index (pos) and a value
changes the specified component (pos) with the
'value'
"""
# precondition
assert -len(self.__components) <= pos < len(self.__components)
self.__components[pos] = value
def euclidean_length(self) -> float:
"""
returns the euclidean length of the vector
>>> Vector([2, 3, 4]).euclidean_length()
5.385164807134504
>>> Vector([1]).euclidean_length()
1.0
>>> Vector([0, -1, -2, -3, 4, 5, 6]).euclidean_length()
9.539392014169456
>>> Vector([]).euclidean_length()
Traceback (most recent call last):
...
Exception: Vector is empty
"""
if len(self.__components) == 0:
raise Exception("Vector is empty")
squares = [c**2 for c in self.__components]
return math.sqrt(sum(squares))
def angle(self, other: Vector, deg: bool = False) -> float:
"""
find angle between two Vector (self, Vector)
>>> Vector([3, 4, -1]).angle(Vector([2, -1, 1]))
1.4906464636572374
>>> Vector([3, 4, -1]).angle(Vector([2, -1, 1]), deg = True)
85.40775111366095
>>> Vector([3, 4, -1]).angle(Vector([2, -1]))
Traceback (most recent call last):
...
Exception: invalid operand!
"""
num = self * other
den = self.euclidean_length() * other.euclidean_length()
if deg:
return math.degrees(math.acos(num / den))
else:
return math.acos(num / den)
def zero_vector(dimension: int) -> Vector:
"""
returns a zero-vector of size 'dimension'
"""
# precondition
assert isinstance(dimension, int)
return Vector([0] * dimension)
def unit_basis_vector(dimension: int, pos: int) -> Vector:
"""
returns a unit basis vector with a One
at index 'pos' (indexing at 0)
"""
# precondition
assert isinstance(dimension, int)
assert isinstance(pos, int)
ans = [0] * dimension
ans[pos] = 1
return Vector(ans)
def axpy(scalar: float, x: Vector, y: Vector) -> Vector:
"""
input: a 'scalar' and two vectors 'x' and 'y'
output: a vector
computes the axpy operation
"""
# precondition
assert isinstance(x, Vector)
assert isinstance(y, Vector)
assert isinstance(scalar, (int, float))
return x * scalar + y
def random_vector(n: int, a: int, b: int) -> Vector:
"""
input: size (N) of the vector.
random range (a,b)
output: returns a random vector of size N, with
random integer components between 'a' and 'b'.
"""
random.seed(None)
ans = [random.randint(a, b) for _ in range(n)]
return Vector(ans)
class Matrix:
"""
class: Matrix
This class represents an arbitrary matrix.
Overview of the methods:
__init__():
__str__(): returns a string representation
__add__(other: Matrix): matrix addition
__sub__(other: Matrix): matrix subtraction
__mul__(other: float): scalar multiplication
__mul__(other: Vector): vector multiplication
height() : returns height
width() : returns width
component(x: int, y: int): returns specified component
change_component(x: int, y: int, value: float): changes specified component
minor(x: int, y: int): returns minor along (x, y)
cofactor(x: int, y: int): returns cofactor along (x, y)
determinant() : returns determinant
"""
def __init__(self, matrix: list[list[float]], w: int, h: int) -> None:
"""
simple constructor for initializing the matrix with components.
"""
self.__matrix = matrix
self.__width = w
self.__height = h
def __str__(self) -> str:
"""
returns a string representation of this matrix.
"""
ans = ""
for i in range(self.__height):
ans += "|"
for j in range(self.__width):
if j < self.__width - 1:
ans += str(self.__matrix[i][j]) + ","
else:
ans += str(self.__matrix[i][j]) + "|\n"
return ans
def __add__(self, other: Matrix) -> Matrix:
"""
implements matrix addition.
"""
if self.__width == other.width() and self.__height == other.height():
matrix = []
for i in range(self.__height):
row = [
self.__matrix[i][j] + other.component(i, j)
for j in range(self.__width)
]
matrix.append(row)
return Matrix(matrix, self.__width, self.__height)
else:
raise Exception("matrix must have the same dimension!")
def __sub__(self, other: Matrix) -> Matrix:
"""
implements matrix subtraction.
"""
if self.__width == other.width() and self.__height == other.height():
matrix = []
for i in range(self.__height):
row = [
self.__matrix[i][j] - other.component(i, j)
for j in range(self.__width)
]
matrix.append(row)
return Matrix(matrix, self.__width, self.__height)
else:
raise Exception("matrices must have the same dimension!")
@overload
def __mul__(self, other: float) -> Matrix:
...
@overload
def __mul__(self, other: Vector) -> Vector:
...
def __mul__(self, other: float | Vector) -> Vector | Matrix:
"""
implements the matrix-vector multiplication.
implements the matrix-scalar multiplication
"""
if isinstance(other, Vector): # matrix-vector
if len(other) == self.__width:
ans = zero_vector(self.__height)
for i in range(self.__height):
prods = [
self.__matrix[i][j] * other.component(j)
for j in range(self.__width)
]
ans.change_component(i, sum(prods))
return ans
else:
raise Exception(
"vector must have the same size as the "
"number of columns of the matrix!"
)
elif isinstance(other, (int, float)): # matrix-scalar
matrix = [
[self.__matrix[i][j] * other for j in range(self.__width)]
for i in range(self.__height)
]
return Matrix(matrix, self.__width, self.__height)
        else:  # error case
            raise Exception("invalid operand!")
def height(self) -> int:
"""
getter for the height
"""
return self.__height
def width(self) -> int:
"""
getter for the width
"""
return self.__width
def component(self, x: int, y: int) -> float:
"""
returns the specified (x,y) component
"""
if 0 <= x < self.__height and 0 <= y < self.__width:
return self.__matrix[x][y]
else:
raise Exception("change_component: indices out of bounds")
def change_component(self, x: int, y: int, value: float) -> None:
"""
changes the x-y component of this matrix
"""
if 0 <= x < self.__height and 0 <= y < self.__width:
self.__matrix[x][y] = value
else:
raise Exception("change_component: indices out of bounds")
def minor(self, x: int, y: int) -> float:
"""
returns the minor along (x, y)
"""
if self.__height != self.__width:
raise Exception("Matrix is not square")
minor = self.__matrix[:x] + self.__matrix[x + 1 :]
for i in range(len(minor)):
minor[i] = minor[i][:y] + minor[i][y + 1 :]
return Matrix(minor, self.__width - 1, self.__height - 1).determinant()
def cofactor(self, x: int, y: int) -> float:
"""
returns the cofactor (signed minor) along (x, y)
"""
if self.__height != self.__width:
raise Exception("Matrix is not square")
if 0 <= x < self.__height and 0 <= y < self.__width:
return (-1) ** (x + y) * self.minor(x, y)
else:
raise Exception("Indices out of bounds")
def determinant(self) -> float:
"""
returns the determinant of an nxn matrix using Laplace expansion
"""
if self.__height != self.__width:
raise Exception("Matrix is not square")
if self.__height < 1:
raise Exception("Matrix has no element")
elif self.__height == 1:
return self.__matrix[0][0]
elif self.__height == 2:
return (
self.__matrix[0][0] * self.__matrix[1][1]
- self.__matrix[0][1] * self.__matrix[1][0]
)
else:
cofactor_prods = [
self.__matrix[0][y] * self.cofactor(0, y) for y in range(self.__width)
]
return sum(cofactor_prods)
def square_zero_matrix(n: int) -> Matrix:
"""
returns a square zero-matrix of dimension NxN
"""
ans: list[list[float]] = [[0] * n for _ in range(n)]
return Matrix(ans, n, n)
def random_matrix(width: int, height: int, a: int, b: int) -> Matrix:
"""
returns a random matrix WxH with integer components
between 'a' and 'b'
"""
random.seed(None)
matrix: list[list[float]] = [
[random.randint(a, b) for _ in range(width)] for _ in range(height)
]
return Matrix(matrix, width, height)
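# Usage sketch (illustrative only, not part of the library API above): dot
# products return numbers, matrix-vector products return Vectors, and
# determinant() uses Laplace expansion.
def _demo() -> None:
    v = Vector([1, 2, 3])
    w = Vector([4, 5, 6])
    print(v * w)  # dot product: 1*4 + 2*5 + 3*6 = 32
    m = Matrix([[1, 2], [3, 4]], 2, 2)
    print(m.determinant())  # 1*4 - 2*3 = -2
    print(m * Vector([1, 1]))  # matrix-vector product: (3,7)


if __name__ == "__main__":
    _demo()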
|
coordinates is a two dimensional matrix: x, y, x, y, ... number of points you want to use printpointstopolynomial Traceback most recent call last: ... ValueError: The program cannot work out a fitting polynomial. printpointstopolynomial Traceback most recent call last: ... ValueError: The program cannot work out a fitting polynomial. printpointstopolynomial1, 0, 2, 0, 3, 0 fxx20.0x10.0x00.0 printpointstopolynomial1, 1, 2, 1, 3, 1 fxx20.0x10.0x01.0 printpointstopolynomial1, 3, 2, 3, 3, 3 fxx20.0x10.0x03.0 printpointstopolynomial1, 1, 2, 2, 3, 3 fxx20.0x11.0x00.0 printpointstopolynomial1, 1, 2, 4, 3, 9 fxx21.0x10.0x00.0 printpointstopolynomial1, 3, 2, 6, 3, 11 fxx21.0x10.0x02.0 printpointstopolynomial1, 3, 2, 6, 3, 11 fxx21.0x10.0x02.0 printpointstopolynomial1, 5, 2, 2, 3, 9 fxx25.0x118.0x018.0 put the x and x to the power values in a matrix put the y values into a vector manipulating all the values in the matrix manipulating the values in the vector make solutions | def points_to_polynomial(coordinates: list[list[int]]) -> str:
"""
coordinates is a two dimensional matrix: [[x, y], [x, y], ...]
number of points you want to use
>>> print(points_to_polynomial([]))
Traceback (most recent call last):
...
ValueError: The program cannot work out a fitting polynomial.
>>> print(points_to_polynomial([[]]))
Traceback (most recent call last):
...
ValueError: The program cannot work out a fitting polynomial.
>>> print(points_to_polynomial([[1, 0], [2, 0], [3, 0]]))
f(x)=x^2*0.0+x^1*-0.0+x^0*0.0
>>> print(points_to_polynomial([[1, 1], [2, 1], [3, 1]]))
f(x)=x^2*0.0+x^1*-0.0+x^0*1.0
>>> print(points_to_polynomial([[1, 3], [2, 3], [3, 3]]))
f(x)=x^2*0.0+x^1*-0.0+x^0*3.0
>>> print(points_to_polynomial([[1, 1], [2, 2], [3, 3]]))
f(x)=x^2*0.0+x^1*1.0+x^0*0.0
>>> print(points_to_polynomial([[1, 1], [2, 4], [3, 9]]))
f(x)=x^2*1.0+x^1*-0.0+x^0*0.0
>>> print(points_to_polynomial([[1, 3], [2, 6], [3, 11]]))
f(x)=x^2*1.0+x^1*-0.0+x^0*2.0
>>> print(points_to_polynomial([[1, -3], [2, -6], [3, -11]]))
f(x)=x^2*-1.0+x^1*-0.0+x^0*-2.0
>>> print(points_to_polynomial([[1, 5], [2, 2], [3, 9]]))
f(x)=x^2*5.0+x^1*-18.0+x^0*18.0
"""
if len(coordinates) == 0 or not all(len(pair) == 2 for pair in coordinates):
raise ValueError("The program cannot work out a fitting polynomial.")
if len({tuple(pair) for pair in coordinates}) != len(coordinates):
raise ValueError("The program cannot work out a fitting polynomial.")
set_x = {x for x, _ in coordinates}
if len(set_x) == 1:
return f"x={coordinates[0][0]}"
if len(set_x) != len(coordinates):
raise ValueError("The program cannot work out a fitting polynomial.")
x = len(coordinates)
# put the x and x to the power values in a matrix
matrix: list[list[float]] = [
[
coordinates[count_of_line][0] ** (x - (count_in_line + 1))
for count_in_line in range(x)
]
for count_of_line in range(x)
]
# put the y values into a vector
vector: list[float] = [coordinates[count_of_line][1] for count_of_line in range(x)]
for count in range(x):
for number in range(x):
if count == number:
continue
fraction = matrix[number][count] / matrix[count][count]
for counting_columns, item in enumerate(matrix[count]):
# manipulating all the values in the matrix
matrix[number][counting_columns] -= item * fraction
# manipulating the values in the vector
vector[number] -= vector[count] * fraction
# make solutions
solution: list[str] = [
str(vector[count] / matrix[count][count]) for count in range(x)
]
solved = "f(x)="
for count in range(x):
remove_e: list[str] = solution[count].split("E")
if len(remove_e) > 1:
solution[count] = f"{remove_e[0]}*10^{remove_e[1]}"
solved += f"x^{x - (count + 1)}*{solution[count]}"
if count + 1 != x:
solved += "+"
return solved
if __name__ == "__main__":
    # These two inputs raise ValueError (see the doctests above), so guard them:
    for bad_input in ([], [[]]):
        try:
            print(points_to_polynomial(bad_input))
        except ValueError as err:
            print(err)
print(points_to_polynomial([[1, 0], [2, 0], [3, 0]]))
print(points_to_polynomial([[1, 1], [2, 1], [3, 1]]))
print(points_to_polynomial([[1, 3], [2, 3], [3, 3]]))
print(points_to_polynomial([[1, 1], [2, 2], [3, 3]]))
print(points_to_polynomial([[1, 1], [2, 4], [3, 9]]))
print(points_to_polynomial([[1, 3], [2, 6], [3, 11]]))
print(points_to_polynomial([[1, -3], [2, -6], [3, -11]]))
print(points_to_polynomial([[1, 5], [2, 2], [3, 9]]))
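    # Cross-check sketch (assumes numpy is installed): np.polyfit solves the
    # same interpolation problem via least squares and should agree here.
    import numpy as np

    print(np.polyfit([1, 2, 3], [1, 4, 9], 2))  # ~ [1. 0. 0.], i.e. f(x) = x^2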
|
Power Iteration. Find the largest eigenvalue and corresponding eigenvector of matrix inputmatrix given a random vector in the same space. Will work so long as vector has component of largest eigenvector. inputmatrix must be either real or Hermitian. Input inputmatrix: input matrix whose largest eigenvalue we will find. Numpy array. np.shapeinputmatrix N,N. vector: random initial vector in same space as matrix. Numpy array. np.shapevector N, or N,1 Output largesteigenvalue: largest eigenvalue of the matrix inputmatrix. Float. Scalar. largesteigenvector: eigenvector corresponding to largesteigenvalue. Numpy array. np.shapelargesteigenvector N, or N,1. import numpy as np inputmatrix np.array ... 41, 4, 20, ... 4, 26, 30, ... 20, 30, 50 ... vector np.array41,4,20 poweriterationinputmatrix,vector 79.66086378788381, array0.44472726, 0.46209842, 0.76725662 Ensure matrix is square. Ensure proper dimensionality. Ensure inputs are either both complex or both real Ensure complex inputmatrix is Hermitian Set convergence to False. Will define convergence when we exceed maxiterations or when we have small changes from one iteration to next. Multiple matrix by the vector. Normalize the resulting output vector. Find rayleigh quotient faster than usual bc we know vector is normalized already Check convergence. testpoweriteration self running tests Our implementation. Numpy implementation. Get eigenvalues and eigenvectors using builtin numpy eigh eigh used for symmetric or hermetian matrices. Last eigenvalue is the maximum one. Last column in this matrix is eigenvector corresponding to largest eigenvalue. Check our implementation and numpy gives close answers. Take absolute values element wise of each eigenvector. as they are only unique to a minus sign. | import numpy as np
def power_iteration(
input_matrix: np.ndarray,
vector: np.ndarray,
error_tol: float = 1e-12,
max_iterations: int = 100,
) -> tuple[float, np.ndarray]:
"""
Power Iteration.
Find the largest eigenvalue and corresponding eigenvector
of matrix input_matrix given a random vector in the same space.
Will work so long as vector has component of largest eigenvector.
input_matrix must be either real or Hermitian.
Input
input_matrix: input matrix whose largest eigenvalue we will find.
Numpy array. np.shape(input_matrix) == (N,N).
vector: random initial vector in same space as matrix.
Numpy array. np.shape(vector) == (N,) or (N,1)
Output
largest_eigenvalue: largest eigenvalue of the matrix input_matrix.
Float. Scalar.
largest_eigenvector: eigenvector corresponding to largest_eigenvalue.
Numpy array. np.shape(largest_eigenvector) == (N,) or (N,1).
>>> import numpy as np
>>> input_matrix = np.array([
... [41, 4, 20],
... [ 4, 26, 30],
... [20, 30, 50]
... ])
>>> vector = np.array([41,4,20])
>>> power_iteration(input_matrix,vector)
(79.66086378788381, array([0.44472726, 0.46209842, 0.76725662]))
"""
# Ensure matrix is square.
assert np.shape(input_matrix)[0] == np.shape(input_matrix)[1]
# Ensure proper dimensionality.
assert np.shape(input_matrix)[0] == np.shape(vector)[0]
# Ensure inputs are either both complex or both real
assert np.iscomplexobj(input_matrix) == np.iscomplexobj(vector)
is_complex = np.iscomplexobj(input_matrix)
if is_complex:
# Ensure complex input_matrix is Hermitian
assert np.array_equal(input_matrix, input_matrix.conj().T)
# Set convergence to False. Will define convergence when we exceed max_iterations
# or when we have small changes from one iteration to next.
convergence = False
lambda_previous = 0
iterations = 0
error = 1e12
while not convergence:
# Multiple matrix by the vector.
w = np.dot(input_matrix, vector)
# Normalize the resulting output vector.
vector = w / np.linalg.norm(w)
# Find rayleigh quotient
# (faster than usual b/c we know vector is normalized already)
vector_h = vector.conj().T if is_complex else vector.T
lambda_ = np.dot(vector_h, np.dot(input_matrix, vector))
# Check convergence.
error = np.abs(lambda_ - lambda_previous) / lambda_
iterations += 1
if error <= error_tol or iterations >= max_iterations:
convergence = True
lambda_previous = lambda_
if is_complex:
lambda_ = np.real(lambda_)
return lambda_, vector
def test_power_iteration() -> None:
"""
>>> test_power_iteration() # self running tests
"""
real_input_matrix = np.array([[41, 4, 20], [4, 26, 30], [20, 30, 50]])
real_vector = np.array([41, 4, 20])
complex_input_matrix = real_input_matrix.astype(np.complex128)
imag_matrix = np.triu(1j * complex_input_matrix, 1)
complex_input_matrix += imag_matrix
complex_input_matrix += -1 * imag_matrix.T
complex_vector = np.array([41, 4, 20]).astype(np.complex128)
for problem_type in ["real", "complex"]:
if problem_type == "real":
input_matrix = real_input_matrix
vector = real_vector
elif problem_type == "complex":
input_matrix = complex_input_matrix
vector = complex_vector
# Our implementation.
eigen_value, eigen_vector = power_iteration(input_matrix, vector)
# Numpy implementation.
# Get eigenvalues and eigenvectors using built-in numpy
# eigh (eigh used for symmetric or hermetian matrices).
eigen_values, eigen_vectors = np.linalg.eigh(input_matrix)
# Last eigenvalue is the maximum one.
eigen_value_max = eigen_values[-1]
# Last column in this matrix is eigenvector corresponding to largest eigenvalue.
eigen_vector_max = eigen_vectors[:, -1]
# Check our implementation and numpy gives close answers.
assert np.abs(eigen_value - eigen_value_max) <= 1e-6
# Take absolute values element wise of each eigenvector.
# as they are only unique to a minus sign.
assert np.linalg.norm(np.abs(eigen_vector) - np.abs(eigen_vector_max)) <= 1e-6
if __name__ == "__main__":
import doctest
doctest.testmod()
test_power_iteration()
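    # Convergence-rate note (standard theory, a sketch, not from the original
    # file): the error shrinks roughly like |lambda_2 / lambda_1| per step, so
    # matrices with a large spectral gap converge in few iterations.
    gap_matrix = np.array([[41.0, 4.0, 20.0], [4.0, 26.0, 30.0], [20.0, 30.0, 50.0]])
    eigvals = np.linalg.eigvalsh(gap_matrix)
    print(f"convergence factor ~ {abs(eigvals[-2] / eigvals[-1]):.3f}")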
|
Calculate the rank of a matrix. See: https:en.wikipedia.orgwikiRanklinearalgebra Finds the rank of a matrix. Args: matrix: The matrix as a list of lists. Returns: The rank of the matrix. Example: matrix1 1, 2, 3, ... 4, 5, 6, ... 7, 8, 9 rankofmatrixmatrix1 2 matrix2 1, 0, 0, ... 0, 1, 0, ... 0, 0, 0 rankofmatrixmatrix2 2 matrix3 1, 2, 3, 4, ... 5, 6, 7, 8, ... 9, 10, 11, 12 rankofmatrixmatrix3 2 rankofmatrix2,3,1,1, ... 1,1,2,4, ... 3,1,3,2, ... 6,3,0,7 4 rankofmatrix2,1,3,6, ... 3,3,1,2, ... 1,1,1,2 3 rankofmatrix2,1,0, ... 1,3,4, ... 4,1,3 3 rankofmatrix3,2,1, ... 6,4,2 1 rankofmatrix, 0 rankofmatrix1 1 rankofmatrix 0 Check if diagonal element is not zero Eliminate all the elements below the diagonal Find a nonzero diagonal element to swap rows Reduce the row pointer by one to stay on the same row | def rank_of_matrix(matrix: list[list[int | float]]) -> int:
"""
Finds the rank of a matrix.
Args:
matrix: The matrix as a list of lists.
Returns:
The rank of the matrix.
Example:
>>> matrix1 = [[1, 2, 3],
... [4, 5, 6],
... [7, 8, 9]]
>>> rank_of_matrix(matrix1)
2
>>> matrix2 = [[1, 0, 0],
... [0, 1, 0],
... [0, 0, 0]]
>>> rank_of_matrix(matrix2)
2
>>> matrix3 = [[1, 2, 3, 4],
... [5, 6, 7, 8],
... [9, 10, 11, 12]]
>>> rank_of_matrix(matrix3)
2
>>> rank_of_matrix([[2,3,-1,-1],
... [1,-1,-2,4],
... [3,1,3,-2],
... [6,3,0,-7]])
4
>>> rank_of_matrix([[2,1,-3,-6],
... [3,-3,1,2],
... [1,1,1,2]])
3
>>> rank_of_matrix([[2,-1,0],
... [1,3,4],
... [4,1,-3]])
3
>>> rank_of_matrix([[3,2,1],
... [-6,-4,-2]])
1
>>> rank_of_matrix([[],[]])
0
>>> rank_of_matrix([[1]])
1
>>> rank_of_matrix([[]])
0
"""
rows = len(matrix)
columns = len(matrix[0])
rank = min(rows, columns)
    # Iterate with an explicit index so a row can be revisited after a swap
    row = 0
    while row < rank:
        # Check if diagonal element is not zero
        if matrix[row][row] != 0:
            # Eliminate all the elements below the diagonal
            for col in range(row + 1, rows):
                multiplier = matrix[col][row] / matrix[row][row]
                for i in range(row, columns):
                    matrix[col][i] -= multiplier * matrix[row][i]
        else:
            # Find a non-zero diagonal element to swap rows
            reduce = True
            for i in range(row + 1, rows):
                if matrix[i][row] != 0:
                    matrix[row], matrix[i] = matrix[i], matrix[row]
                    reduce = False
                    break
            if reduce:
                rank -= 1
                for i in range(rows):
                    matrix[i][row] = matrix[i][rank]
            # Stay on the same row so the swapped or updated row is re-checked
            row -= 1
        row += 1
return rank
if __name__ == "__main__":
import doctest
doctest.testmod()
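    # SVD-based cross-check (a sketch, assumes numpy is installed): numpy's
    # matrix_rank should agree with the Gaussian-elimination result above.
    import numpy as np

    print(int(np.linalg.matrix_rank([[1, 2, 3], [4, 5, 6], [7, 8, 9]])))  # 2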
|
https:en.wikipedia.orgwikiRayleighquotient Checks if a matrix is Hermitian. import numpy as np A np.array ... 2, 21j, 4, ... 21j, 3, 1j, ... 4, 1j, 1 ishermitianA True A np.array ... 2, 21j, 41j, ... 21j, 3, 1j, ... 4, 1j, 1 ishermitianA False Returns the Rayleigh quotient of a Hermitian matrix A and vector v. import numpy as np A np.array ... 1, 2, 4, ... 2, 3, 1, ... 4, 1, 1 ... v np.array ... 1, ... 2, ... 3 ... rayleighquotientA, v array3. | from typing import Any
import numpy as np
def is_hermitian(matrix: np.ndarray) -> bool:
"""
Checks if a matrix is Hermitian.
>>> import numpy as np
>>> A = np.array([
... [2, 2+1j, 4],
... [2-1j, 3, 1j],
... [4, -1j, 1]])
>>> is_hermitian(A)
True
>>> A = np.array([
... [2, 2+1j, 4+1j],
... [2-1j, 3, 1j],
... [4, -1j, 1]])
>>> is_hermitian(A)
False
"""
return np.array_equal(matrix, matrix.conjugate().T)
def rayleigh_quotient(a: np.ndarray, v: np.ndarray) -> Any:
"""
Returns the Rayleigh quotient of a Hermitian matrix A and
vector v.
>>> import numpy as np
>>> A = np.array([
... [1, 2, 4],
... [2, 3, -1],
... [4, -1, 1]
... ])
>>> v = np.array([
... [1],
... [2],
... [3]
... ])
>>> rayleigh_quotient(A, v)
array([[3.]])
"""
v_star = v.conjugate().T
v_star_dot = v_star.dot(a)
assert isinstance(v_star_dot, np.ndarray)
return (v_star_dot.dot(v)) / (v_star.dot(v))
def tests() -> None:
a = np.array([[2, 2 + 1j, 4], [2 - 1j, 3, 1j], [4, -1j, 1]])
v = np.array([[1], [2], [3]])
assert is_hermitian(a), f"{a} is not hermitian."
print(rayleigh_quotient(a, v))
a = np.array([[1, 2, 4], [2, 3, -1], [4, -1, 1]])
assert is_hermitian(a), f"{a} is not hermitian."
assert rayleigh_quotient(a, v) == float(3)
if __name__ == "__main__":
import doctest
doctest.testmod()
tests()
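    # Bounds check (a sketch of the standard min-max fact, not from the
    # original tests): for Hermitian A, the Rayleigh quotient always lies
    # between the smallest and largest eigenvalues of A.
    a = np.array([[1, 2, 4], [2, 3, -1], [4, -1, 1]])
    v = np.array([[1], [2], [3]])
    eigvals = np.linalg.eigvalsh(a)
    assert eigvals[0] <= rayleigh_quotient(a, v)[0, 0] <= eigvals[-1]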
|
Schur complement of a symmetric matrix X given as a 2x2 block matrix consisting of matrices A, B and C. Matrix A must be quadratic and nonsingular. In case A is singular, a pseudoinverse may be provided using the pseudoinv argument. Link to Wiki: https:en.wikipedia.orgwikiSchurcomplement See also Convex Optimization Boyd and Vandenberghe, A.5.5 import numpy as np a np.array1, 2, 2, 1 b np.array0, 3, 3, 0 c np.array2, 1, 6, 3 schurcomplementa, b, c array 5., 5., 0., 6. | import unittest
import numpy as np
import pytest
def schur_complement(
mat_a: np.ndarray,
mat_b: np.ndarray,
mat_c: np.ndarray,
pseudo_inv: np.ndarray | None = None,
) -> np.ndarray:
"""
Schur complement of a symmetric matrix X given as a 2x2 block matrix
consisting of matrices A, B and C.
Matrix A must be quadratic and non-singular.
In case A is singular, a pseudo-inverse may be provided using
the pseudo_inv argument.
Link to Wiki: https://en.wikipedia.org/wiki/Schur_complement
See also Convex Optimization – Boyd and Vandenberghe, A.5.5
>>> import numpy as np
>>> a = np.array([[1, 2], [2, 1]])
>>> b = np.array([[0, 3], [3, 0]])
>>> c = np.array([[2, 1], [6, 3]])
>>> schur_complement(a, b, c)
array([[ 5., -5.],
[ 0., 6.]])
"""
shape_a = np.shape(mat_a)
shape_b = np.shape(mat_b)
shape_c = np.shape(mat_c)
if shape_a[0] != shape_b[0]:
msg = (
"Expected the same number of rows for A and B. "
f"Instead found A of size {shape_a} and B of size {shape_b}"
)
raise ValueError(msg)
if shape_b[1] != shape_c[1]:
msg = (
"Expected the same number of columns for B and C. "
f"Instead found B of size {shape_b} and C of size {shape_c}"
)
raise ValueError(msg)
a_inv = pseudo_inv
if a_inv is None:
        try:
            a_inv = np.linalg.inv(mat_a)
        except np.linalg.LinAlgError as err:
            raise ValueError(
                "Input matrix A is not invertible. Cannot compute Schur complement."
            ) from err
return mat_c - mat_b.T @ a_inv @ mat_b
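def _pseudo_inverse_demo() -> np.ndarray:
    """
    Sketch (illustrative, not part of the original tests): when A is singular,
    np.linalg.inv fails, but a Moore-Penrose pseudo-inverse can be supplied
    through the pseudo_inv argument instead.
    >>> _pseudo_inverse_demo()
    array([[1.75, 0.75],
           [4.75, 2.75]])
    """
    a = np.array([[1.0, 1.0], [1.0, 1.0]])  # singular: rows are identical
    b = np.array([[1.0, 0.0], [0.0, 1.0]])
    c = np.array([[2.0, 1.0], [5.0, 3.0]])
    return schur_complement(a, b, c, pseudo_inv=np.linalg.pinv(a))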
class TestSchurComplement(unittest.TestCase):
def test_schur_complement(self) -> None:
a = np.array([[1, 2, 1], [2, 1, 2], [3, 2, 4]])
b = np.array([[0, 3], [3, 0], [2, 3]])
c = np.array([[2, 1], [6, 3]])
s = schur_complement(a, b, c)
input_matrix = np.block([[a, b], [b.T, c]])
det_x = np.linalg.det(input_matrix)
det_a = np.linalg.det(a)
det_s = np.linalg.det(s)
        assert np.isclose(det_x, det_a * det_s)
def test_improper_a_b_dimensions(self) -> None:
a = np.array([[1, 2, 1], [2, 1, 2], [3, 2, 4]])
        b = np.array([[0, 3], [3, 0]])  # 2 rows, mismatching the 3 rows of a
c = np.array([[2, 1], [6, 3]])
with pytest.raises(ValueError):
schur_complement(a, b, c)
def test_improper_b_c_dimensions(self) -> None:
a = np.array([[1, 2, 1], [2, 1, 2], [3, 2, 4]])
b = np.array([[0, 3], [3, 0], [2, 3]])
c = np.array([[2, 1, 3], [6, 3, 5]])
with pytest.raises(ValueError):
schur_complement(a, b, c)
if __name__ == "__main__":
import doctest
doctest.testmod()
unittest.main()
|
Created on Mon Feb 26 15:40:07 2018 author: Christian Bender license: MITlicense This file contains the testsuite for the linear algebra library. test for method component test for method toString test for method size test for method euclideanlength test for operator test for operator test for operator test for global function zerovector test for global function unitbasisvector test for global function axpy operation test for method copy test for method changecomponent test for Matrix method str test for Matrix method minor test for Matrix method cofactor test for Matrix method determinant test for Matrix operator test for Matrix method changecomponent test for Matrix method component test for Matrix operator test for Matrix operator test for global function squarezeromatrix | import unittest
import pytest
from .lib import (
Matrix,
Vector,
axpy,
square_zero_matrix,
unit_basis_vector,
zero_vector,
)
class Test(unittest.TestCase):
def test_component(self) -> None:
"""
test for method component()
"""
x = Vector([1, 2, 3])
assert x.component(0) == 1
assert x.component(2) == 3
_ = Vector()
def test_str(self) -> None:
"""
test for method toString()
"""
x = Vector([0, 0, 0, 0, 0, 1])
assert str(x) == "(0,0,0,0,0,1)"
def test_size(self) -> None:
"""
test for method size()
"""
x = Vector([1, 2, 3, 4])
assert len(x) == 4
def test_euclidean_length(self) -> None:
"""
test for method euclidean_length()
"""
x = Vector([1, 2])
y = Vector([1, 2, 3, 4, 5])
z = Vector([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
w = Vector([1, -1, 1, -1, 2, -3, 4, -5])
assert x.euclidean_length() == pytest.approx(2.236, abs=1e-3)
assert y.euclidean_length() == pytest.approx(7.416, abs=1e-3)
assert z.euclidean_length() == 0
assert w.euclidean_length() == pytest.approx(7.616, abs=1e-3)
def test_add(self) -> None:
"""
test for + operator
"""
x = Vector([1, 2, 3])
y = Vector([1, 1, 1])
assert (x + y).component(0) == 2
assert (x + y).component(1) == 3
assert (x + y).component(2) == 4
def test_sub(self) -> None:
"""
test for - operator
"""
x = Vector([1, 2, 3])
y = Vector([1, 1, 1])
assert (x - y).component(0) == 0
assert (x - y).component(1) == 1
assert (x - y).component(2) == 2
def test_mul(self) -> None:
"""
test for * operator
"""
x = Vector([1, 2, 3])
a = Vector([2, -1, 4]) # for test of dot product
b = Vector([1, -2, -1])
assert str(x * 3.0) == "(3.0,6.0,9.0)"
assert a * b == 0
def test_zero_vector(self) -> None:
"""
test for global function zero_vector()
"""
assert str(zero_vector(10)).count("0") == 10
def test_unit_basis_vector(self) -> None:
"""
test for global function unit_basis_vector()
"""
assert str(unit_basis_vector(3, 1)) == "(0,1,0)"
def test_axpy(self) -> None:
"""
test for global function axpy() (operation)
"""
x = Vector([1, 2, 3])
y = Vector([1, 0, 1])
assert str(axpy(2, x, y)) == "(3,4,7)"
def test_copy(self) -> None:
"""
test for method copy()
"""
x = Vector([1, 0, 0, 0, 0, 0])
y = x.copy()
assert str(x) == str(y)
def test_change_component(self) -> None:
"""
test for method change_component()
"""
x = Vector([1, 0, 0])
x.change_component(0, 0)
x.change_component(1, 1)
assert str(x) == "(0,1,0)"
def test_str_matrix(self) -> None:
"""
test for Matrix method str()
"""
a = Matrix([[1, 2, 3], [2, 4, 5], [6, 7, 8]], 3, 3)
assert str(a) == "|1,2,3|\n|2,4,5|\n|6,7,8|\n"
def test_minor(self) -> None:
"""
test for Matrix method minor()
"""
a = Matrix([[1, 2, 3], [2, 4, 5], [6, 7, 8]], 3, 3)
minors = [[-3, -14, -10], [-5, -10, -5], [-2, -1, 0]]
for x in range(a.height()):
for y in range(a.width()):
assert minors[x][y] == a.minor(x, y)
def test_cofactor(self) -> None:
"""
test for Matrix method cofactor()
"""
a = Matrix([[1, 2, 3], [2, 4, 5], [6, 7, 8]], 3, 3)
cofactors = [[-3, 14, -10], [5, -10, 5], [-2, 1, 0]]
for x in range(a.height()):
for y in range(a.width()):
assert cofactors[x][y] == a.cofactor(x, y)
def test_determinant(self) -> None:
"""
test for Matrix method determinant()
"""
a = Matrix([[1, 2, 3], [2, 4, 5], [6, 7, 8]], 3, 3)
assert a.determinant() == -5
def test__mul__matrix(self) -> None:
"""
test for Matrix * operator
"""
a = Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]], 3, 3)
x = Vector([1, 2, 3])
assert str(a * x) == "(14,32,50)"
assert str(a * 2) == "|2,4,6|\n|8,10,12|\n|14,16,18|\n"
def test_change_component_matrix(self) -> None:
"""
test for Matrix method change_component()
"""
a = Matrix([[1, 2, 3], [2, 4, 5], [6, 7, 8]], 3, 3)
a.change_component(0, 2, 5)
assert str(a) == "|1,2,5|\n|2,4,5|\n|6,7,8|\n"
def test_component_matrix(self) -> None:
"""
test for Matrix method component()
"""
a = Matrix([[1, 2, 3], [2, 4, 5], [6, 7, 8]], 3, 3)
        assert a.component(2, 1) == 7
def test__add__matrix(self) -> None:
"""
test for Matrix + operator
"""
a = Matrix([[1, 2, 3], [2, 4, 5], [6, 7, 8]], 3, 3)
b = Matrix([[1, 2, 7], [2, 4, 5], [6, 7, 10]], 3, 3)
assert str(a + b) == "|2,4,10|\n|4,8,10|\n|12,14,18|\n"
def test__sub__matrix(self) -> None:
"""
test for Matrix - operator
"""
a = Matrix([[1, 2, 3], [2, 4, 5], [6, 7, 8]], 3, 3)
b = Matrix([[1, 2, 7], [2, 4, 5], [6, 7, 10]], 3, 3)
assert str(a - b) == "|0,0,-4|\n|0,0,0|\n|0,0,-2|\n"
def test_square_zero_matrix(self) -> None:
"""
test for global function square_zero_matrix()
"""
assert str(square_zero_matrix(5)) == (
"|0,0,0,0,0|\n|0,0,0,0,0|\n|0,0,0,0,0|\n|0,0,0,0,0|\n|0,0,0,0,0|\n"
)
if __name__ == "__main__":
unittest.main()
|
2D Transformations are regularly used in Linear Algebra. I have added the codes for reflection, projection, scaling and rotation 2D matrices. scaling5 5.0, 0.0, 0.0, 5.0 rotation45 0.5253219888177297, 0.8509035245341184, 0.8509035245341184, 0.5253219888177297 projection45 0.27596319193541496, 0.446998331800279, 0.446998331800279, 0.7240368080645851 reflection45 0.05064397763545947, 0.893996663600558, 0.893996663600558, 0.7018070490682369 scaling5 5.0, 0.0, 0.0, 5.0 rotation45 doctest: NORMALIZEWHITESPACE 0.5253219888177297, 0.8509035245341184, 0.8509035245341184, 0.5253219888177297 projection45 doctest: NORMALIZEWHITESPACE 0.27596319193541496, 0.446998331800279, 0.446998331800279, 0.7240368080645851 reflection45 doctest: NORMALIZEWHITESPACE 0.05064397763545947, 0.893996663600558, 0.893996663600558, 0.7018070490682369 | from math import cos, sin
def scaling(scaling_factor: float) -> list[list[float]]:
"""
>>> scaling(5)
[[5.0, 0.0], [0.0, 5.0]]
"""
scaling_factor = float(scaling_factor)
return [[scaling_factor * int(x == y) for x in range(2)] for y in range(2)]
def rotation(angle: float) -> list[list[float]]:
"""
>>> rotation(45) # doctest: +NORMALIZE_WHITESPACE
[[0.5253219888177297, -0.8509035245341184],
[0.8509035245341184, 0.5253219888177297]]
"""
c, s = cos(angle), sin(angle)
return [[c, -s], [s, c]]
def projection(angle: float) -> list[list[float]]:
"""
>>> projection(45) # doctest: +NORMALIZE_WHITESPACE
[[0.27596319193541496, 0.446998331800279],
[0.446998331800279, 0.7240368080645851]]
"""
c, s = cos(angle), sin(angle)
cs = c * s
return [[c * c, cs], [cs, s * s]]
def reflection(angle: float) -> list[list[float]]:
"""
>>> reflection(45) # doctest: +NORMALIZE_WHITESPACE
[[0.05064397763545947, 0.893996663600558],
[0.893996663600558, 0.7018070490682369]]
"""
c, s = cos(angle), sin(angle)
cs = c * s
return [[2 * c - 1, 2 * cs], [2 * cs, 2 * s - 1]]
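def _apply(matrix: list[list[float]], point: list[float]) -> list[float]:
    """
    Helper sketch (illustrative, not part of the original set of matrices):
    applies a 2x2 transformation matrix to a 2D point via a plain
    matrix-vector product.
    >>> _apply(scaling(2), [3.0, 4.0])
    [6.0, 8.0]
    """
    return [
        matrix[0][0] * point[0] + matrix[0][1] * point[1],
        matrix[1][0] * point[0] + matrix[1][1] * point[1],
    ]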
print(f" {scaling(5) = }")
print(f" {rotation(45) = }")
print(f"{projection(45) = }")
print(f"{reflection(45) = }")
|
Python implementation of the simplex algorithm for solving linear programs in tabular form with , , and constraints and each variable x1, x2, ... 0. See https:gist.github.comimengusf9619a568f7da5bc74eaf20169a24d98 for how to convert linear programs to simplex tableaus, and the steps taken in the simplex algorithm. Resources: https:en.wikipedia.orgwikiSimplexalgorithm https:tinyurl.comsimplex4beginners Operate on simplex tableaus Tableaunp.array1,1,0,0,1,1,3,1,0,4,3,1,0,1,4, 2, 2 Traceback most recent call last: ... TypeError: Tableau must have type float64 Tableaunp.array1,1,0,0,1,1,3,1,0,4,3,1,0,1,4., 2, 2 Traceback most recent call last: ... ValueError: RHS must be 0 Tableaunp.array1,1,0,0,1,1,3,1,0,4,3,1,0,1,4., 2, 2 Traceback most recent call last: ... ValueError: number of artificial variables must be a natural number Max iteration number to prevent cycling Check if RHS is negative Number of decision variables x1, x2, x3... 2 if there are or constraints nonstandard, 1 otherwise std Number of slack variables added to make inequalities into equalities Objectives for each stage In two stage simplex, first minimise then maximise Index of current pivot row and column Does objective row only contain nonnegative values? Generate column titles for tableau of specific dimensions Tableaunp.array1,1,0,0,1,1,3,1,0,4,3,1,0,1,4., ... 2, 0.generatecoltitles 'x1', 'x2', 's1', 's2', 'RHS' Tableaunp.array1,1,0,0,1,1,3,1,0,4,3,1,0,1,4., ... 2, 2.generatecoltitles 'x1', 'x2', 'RHS' decision slack Finds the pivot row and column. Tableaunp.array2,1,0,0,0, 3,1,1,0,6, 1,2,0,1,7., ... 2, 0.findpivot 1, 0 Find entries of highest magnitude in objective rows Choice is only valid if below 0 for maximise, and above for minimise Pivot row is chosen as having the lowest quotient when elements of the pivot column divide the righthand side Slice excluding the objective rows RHS Elements of pivot column within slice Array filled with nans If element in pivot column is greater than zero, return quotient or nan otherwise Arg of minimum quotient excluding the nan values. nstages is added to compensate for earlier exclusion of objective columns Pivots on value on the intersection of pivot row and column. Tableaunp.array2,3,0,0,0,1,3,1,0,4,3,1,0,1,4., ... 2, 2.pivot1, 0.tolist ... doctest: NORMALIZEWHITESPACE 0.0, 3.0, 2.0, 0.0, 8.0, 1.0, 3.0, 1.0, 0.0, 4.0, 0.0, 8.0, 3.0, 1.0, 8.0 Avoid changes to original tableau Entry becomes 1 Variable in pivot column becomes basic, ie the only nonzero entry Exits first phase of the twostage method by deleting artificial rows and columns, or completes the algorithm if exiting the standard case. Tableaunp.array ... 3, 3, 1, 1, 0, 0, 4, ... 2, 1, 0, 0, 0, 0, 0., ... 1, 2, 1, 0, 1, 0, 2, ... 2, 1, 0, 1, 0, 1, 2 ... , 2, 2.changestage.tolist ... doctest: NORMALIZEWHITESPACE 2.0, 1.0, 0.0, 0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 2.0, 2.0, 1.0, 0.0, 1.0, 2.0 Objective of original objective row remains Slice containing ids for artificial columns Delete the artificial variable columns Delete the objective row of the first stage Operate on tableau until objective function cannot be improved further. Standard linear program: Max: x1 x2 ST: x1 3x2 4 3x1 x2 4 Tableaunp.array1,1,0,0,0,1,3,1,0,4,3,1,0,1,4., ... 2, 0.runsimplex 'P': 2.0, 'x1': 1.0, 'x2': 1.0 Standard linear program with 3 variables: Max: 3x1 x2 3x3 ST: 2x1 x2 x3 2 x1 2x2 3x3 5 2x1 2x2 x3 6 Tableaunp.array ... 3,1,3,0,0,0,0, ... 2,1,1,1,0,0,2, ... 1,2,3,0,1,0,5, ... 2,2,1,0,0,1,6. ... 
,3,0.runsimplex doctest: ELLIPSIS 'P': 5.4, 'x1': 0.199..., 'x3': 1.6 Optimal tableau input: Tableaunp.array ... 0, 0, 0.25, 0.25, 2, ... 0, 1, 0.375, 0.125, 1, ... 1, 0, 0.125, 0.375, 1 ... , 2, 0.runsimplex 'P': 2.0, 'x1': 1.0, 'x2': 1.0 Nonstandard: constraints Max: 2x1 3x2 x3 ST: x1 x2 x3 40 2x1 x2 x3 10 x2 x3 10 Tableaunp.array ... 2, 0, 0, 0, 1, 1, 0, 0, 20, ... 2, 3, 1, 0, 0, 0, 0, 0, 0, ... 1, 1, 1, 1, 0, 0, 0, 0, 40, ... 2, 1, 1, 0, 1, 0, 1, 0, 10, ... 0, 1, 1, 0, 0, 1, 0, 1, 10. ... , 3, 2.runsimplex 'P': 70.0, 'x1': 10.0, 'x2': 10.0, 'x3': 20.0 Non standard: minimisation and equalities Min: x1 x2 ST: 2x1 x2 12 6x1 5x2 40 Tableaunp.array ... 8, 6, 0, 0, 52, ... 1, 1, 0, 0, 0, ... 2, 1, 1, 0, 12, ... 6, 5, 0, 1, 40., ... , 2, 2.runsimplex 'P': 7.0, 'x1': 5.0, 'x2': 2.0 Pivot on slack variables Max: 8x1 6x2 ST: x1 3x2 33 4x1 2x2 48 2x1 4x2 48 x1 x2 10 x1 2 Tableaunp.array ... 2, 1, 0, 0, 0, 1, 1, 0, 0, 12.0, ... 8, 6, 0, 0, 0, 0, 0, 0, 0, 0.0, ... 1, 3, 1, 0, 0, 0, 0, 0, 0, 33.0, ... 4, 2, 0, 1, 0, 0, 0, 0, 0, 60.0, ... 2, 4, 0, 0, 1, 0, 0, 0, 0, 48.0, ... 1, 1, 0, 0, 0, 1, 0, 1, 0, 10.0, ... 1, 0, 0, 0, 0, 0, 1, 0, 1, 2.0 ... , 2, 2.runsimplex doctest: ELLIPSIS 'P': 132.0, 'x1': 12.000... 'x2': 5.999... Stop simplex algorithm from cycling. Completion of each stage removes an objective. If both stages are complete, then no objectives are left Find the values of each variable at optimal solution If there are no more negative values in objective row Delete artificial variable columns and rows. Update attributes Given the final tableau, add the corresponding values of the basic decision variables to the outputdict Tableaunp.array ... 0,0,0.875,0.375,5, ... 0,1,0.375,0.125,1, ... 1,0,0.125,0.375,1 ... ,2, 0.interprettableau 'P': 5.0, 'x1': 1.0, 'x2': 1.0 P RHS of final tableau Gives indices of nonzero entries in the ith column First entry in the nonzero indices If there is only one nonzero value in column, which is one | from typing import Any
import numpy as np
class Tableau:
"""Operate on simplex tableaus
>>> Tableau(np.array([[-1,-1,0,0,1],[1,3,1,0,4],[3,1,0,1,4]]), 2, 2)
Traceback (most recent call last):
...
TypeError: Tableau must have type float64
>>> Tableau(np.array([[-1,-1,0,0,-1],[1,3,1,0,4],[3,1,0,1,4.]]), 2, 2)
Traceback (most recent call last):
...
ValueError: RHS must be > 0
>>> Tableau(np.array([[-1,-1,0,0,1],[1,3,1,0,4],[3,1,0,1,4.]]), -2, 2)
Traceback (most recent call last):
...
ValueError: number of (artificial) variables must be a natural number
"""
# Max iteration number to prevent cycling
maxiter = 100
def __init__(
self, tableau: np.ndarray, n_vars: int, n_artificial_vars: int
) -> None:
if tableau.dtype != "float64":
raise TypeError("Tableau must have type float64")
# Check if RHS is negative
if not (tableau[:, -1] >= 0).all():
raise ValueError("RHS must be > 0")
if n_vars < 2 or n_artificial_vars < 0:
raise ValueError(
"number of (artificial) variables must be a natural number"
)
self.tableau = tableau
self.n_rows, n_cols = tableau.shape
# Number of decision variables x1, x2, x3...
self.n_vars, self.n_artificial_vars = n_vars, n_artificial_vars
# 2 if there are >= or == constraints (nonstandard), 1 otherwise (std)
self.n_stages = (self.n_artificial_vars > 0) + 1
# Number of slack variables added to make inequalities into equalities
self.n_slack = n_cols - self.n_vars - self.n_artificial_vars - 1
# Objectives for each stage
self.objectives = ["max"]
# In two stage simplex, first minimise then maximise
if self.n_artificial_vars:
self.objectives.append("min")
self.col_titles = self.generate_col_titles()
# Index of current pivot row and column
self.row_idx = None
self.col_idx = None
# Does objective row only contain (non)-negative values?
self.stop_iter = False
def generate_col_titles(self) -> list[str]:
"""Generate column titles for tableau of specific dimensions
>>> Tableau(np.array([[-1,-1,0,0,1],[1,3,1,0,4],[3,1,0,1,4.]]),
... 2, 0).generate_col_titles()
['x1', 'x2', 's1', 's2', 'RHS']
>>> Tableau(np.array([[-1,-1,0,0,1],[1,3,1,0,4],[3,1,0,1,4.]]),
... 2, 2).generate_col_titles()
['x1', 'x2', 'RHS']
"""
args = (self.n_vars, self.n_slack)
# decision | slack
string_starts = ["x", "s"]
titles = []
for i in range(2):
for j in range(args[i]):
titles.append(string_starts[i] + str(j + 1))
titles.append("RHS")
return titles
def find_pivot(self) -> tuple[Any, Any]:
"""Finds the pivot row and column.
>>> Tableau(np.array([[-2,1,0,0,0], [3,1,1,0,6], [1,2,0,1,7.]]),
... 2, 0).find_pivot()
(1, 0)
"""
objective = self.objectives[-1]
# Find entries of highest magnitude in objective rows
sign = (objective == "min") - (objective == "max")
col_idx = np.argmax(sign * self.tableau[0, :-1])
# Choice is only valid if below 0 for maximise, and above for minimise
if sign * self.tableau[0, col_idx] <= 0:
self.stop_iter = True
return 0, 0
# Pivot row is chosen as having the lowest quotient when elements of
# the pivot column divide the right-hand side
# Slice excluding the objective rows
s = slice(self.n_stages, self.n_rows)
# RHS
dividend = self.tableau[s, -1]
# Elements of pivot column within slice
divisor = self.tableau[s, col_idx]
# Array filled with nans
nans = np.full(self.n_rows - self.n_stages, np.nan)
# If element in pivot column is greater than zero, return
# quotient or nan otherwise
quotients = np.divide(dividend, divisor, out=nans, where=divisor > 0)
# Arg of minimum quotient excluding the nan values. n_stages is added
# to compensate for earlier exclusion of objective columns
row_idx = np.nanargmin(quotients) + self.n_stages
return row_idx, col_idx
def pivot(self, row_idx: int, col_idx: int) -> np.ndarray:
"""Pivots on value on the intersection of pivot row and column.
>>> Tableau(np.array([[-2,-3,0,0,0],[1,3,1,0,4],[3,1,0,1,4.]]),
... 2, 2).pivot(1, 0).tolist()
... # doctest: +NORMALIZE_WHITESPACE
[[0.0, 3.0, 2.0, 0.0, 8.0],
[1.0, 3.0, 1.0, 0.0, 4.0],
[0.0, -8.0, -3.0, 1.0, -8.0]]
"""
# Avoid changes to original tableau
piv_row = self.tableau[row_idx].copy()
piv_val = piv_row[col_idx]
# Entry becomes 1
piv_row *= 1 / piv_val
# Variable in pivot column becomes basic, ie the only non-zero entry
for idx, coeff in enumerate(self.tableau[:, col_idx]):
self.tableau[idx] += -coeff * piv_row
self.tableau[row_idx] = piv_row
return self.tableau
def change_stage(self) -> np.ndarray:
"""Exits first phase of the two-stage method by deleting artificial
rows and columns, or completes the algorithm if exiting the standard
case.
>>> Tableau(np.array([
... [3, 3, -1, -1, 0, 0, 4],
... [2, 1, 0, 0, 0, 0, 0.],
... [1, 2, -1, 0, 1, 0, 2],
... [2, 1, 0, -1, 0, 1, 2]
... ]), 2, 2).change_stage().tolist()
... # doctest: +NORMALIZE_WHITESPACE
[[2.0, 1.0, 0.0, 0.0, 0.0],
[1.0, 2.0, -1.0, 0.0, 2.0],
[2.0, 1.0, 0.0, -1.0, 2.0]]
"""
# Objective of original objective row remains
self.objectives.pop()
if not self.objectives:
return self.tableau
# Slice containing ids for artificial columns
s = slice(-self.n_artificial_vars - 1, -1)
# Delete the artificial variable columns
self.tableau = np.delete(self.tableau, s, axis=1)
# Delete the objective row of the first stage
self.tableau = np.delete(self.tableau, 0, axis=0)
self.n_stages = 1
self.n_rows -= 1
self.n_artificial_vars = 0
self.stop_iter = False
return self.tableau
def run_simplex(self) -> dict[Any, Any]:
"""Operate on tableau until objective function cannot be
improved further.
# Standard linear program:
Max: x1 + x2
ST: x1 + 3x2 <= 4
3x1 + x2 <= 4
>>> Tableau(np.array([[-1,-1,0,0,0],[1,3,1,0,4],[3,1,0,1,4.]]),
... 2, 0).run_simplex()
{'P': 2.0, 'x1': 1.0, 'x2': 1.0}
# Standard linear program with 3 variables:
Max: 3x1 + x2 + 3x3
ST: 2x1 + x2 + x3 ≤ 2
x1 + 2x2 + 3x3 ≤ 5
2x1 + 2x2 + x3 ≤ 6
>>> Tableau(np.array([
... [-3,-1,-3,0,0,0,0],
... [2,1,1,1,0,0,2],
... [1,2,3,0,1,0,5],
... [2,2,1,0,0,1,6.]
... ]),3,0).run_simplex() # doctest: +ELLIPSIS
{'P': 5.4, 'x1': 0.199..., 'x3': 1.6}
# Optimal tableau input:
>>> Tableau(np.array([
... [0, 0, 0.25, 0.25, 2],
... [0, 1, 0.375, -0.125, 1],
... [1, 0, -0.125, 0.375, 1]
... ]), 2, 0).run_simplex()
{'P': 2.0, 'x1': 1.0, 'x2': 1.0}
# Non-standard: >= constraints
Max: 2x1 + 3x2 + x3
ST: x1 + x2 + x3 <= 40
2x1 + x2 - x3 >= 10
- x2 + x3 >= 10
>>> Tableau(np.array([
... [2, 0, 0, 0, -1, -1, 0, 0, 20],
... [-2, -3, -1, 0, 0, 0, 0, 0, 0],
... [1, 1, 1, 1, 0, 0, 0, 0, 40],
... [2, 1, -1, 0, -1, 0, 1, 0, 10],
... [0, -1, 1, 0, 0, -1, 0, 1, 10.]
... ]), 3, 2).run_simplex()
{'P': 70.0, 'x1': 10.0, 'x2': 10.0, 'x3': 20.0}
# Non standard: minimisation and equalities
Min: x1 + x2
ST: 2x1 + x2 = 12
6x1 + 5x2 = 40
>>> Tableau(np.array([
... [8, 6, 0, 0, 52],
... [1, 1, 0, 0, 0],
... [2, 1, 1, 0, 12],
... [6, 5, 0, 1, 40.],
... ]), 2, 2).run_simplex()
{'P': 7.0, 'x1': 5.0, 'x2': 2.0}
# Pivot on slack variables
Max: 8x1 + 6x2
ST: x1 + 3x2 <= 33
4x1 + 2x2 <= 48
2x1 + 4x2 <= 48
x1 + x2 >= 10
x1 >= 2
>>> Tableau(np.array([
... [2, 1, 0, 0, 0, -1, -1, 0, 0, 12.0],
... [-8, -6, 0, 0, 0, 0, 0, 0, 0, 0.0],
... [1, 3, 1, 0, 0, 0, 0, 0, 0, 33.0],
... [4, 2, 0, 1, 0, 0, 0, 0, 0, 60.0],
... [2, 4, 0, 0, 1, 0, 0, 0, 0, 48.0],
... [1, 1, 0, 0, 0, -1, 0, 1, 0, 10.0],
... [1, 0, 0, 0, 0, 0, -1, 0, 1, 2.0]
... ]), 2, 2).run_simplex() # doctest: +ELLIPSIS
{'P': 132.0, 'x1': 12.000... 'x2': 5.999...}
"""
# Stop simplex algorithm from cycling.
for _ in range(Tableau.maxiter):
# Completion of each stage removes an objective. If both stages
# are complete, then no objectives are left
if not self.objectives:
# Find the values of each variable at optimal solution
return self.interpret_tableau()
row_idx, col_idx = self.find_pivot()
# If there are no more negative values in objective row
if self.stop_iter:
# Delete artificial variable columns and rows. Update attributes
self.tableau = self.change_stage()
else:
self.tableau = self.pivot(row_idx, col_idx)
return {}
def interpret_tableau(self) -> dict[str, float]:
"""Given the final tableau, add the corresponding values of the basic
decision variables to the `output_dict`
>>> Tableau(np.array([
... [0,0,0.875,0.375,5],
... [0,1,0.375,-0.125,1],
... [1,0,-0.125,0.375,1]
... ]),2, 0).interpret_tableau()
{'P': 5.0, 'x1': 1.0, 'x2': 1.0}
"""
# P = RHS of final tableau
output_dict = {"P": abs(self.tableau[0, -1])}
for i in range(self.n_vars):
# Gives indices of nonzero entries in the ith column
nonzero = np.nonzero(self.tableau[:, i])
n_nonzero = len(nonzero[0])
# First entry in the nonzero indices
nonzero_rowidx = nonzero[0][0]
nonzero_val = self.tableau[nonzero_rowidx, i]
# If there is only one nonzero value in column, which is one
if n_nonzero == 1 and nonzero_val == 1:
rhs_val = self.tableau[nonzero_rowidx, -1]
output_dict[self.col_titles[i]] = rhs_val
return output_dict
if __name__ == "__main__":
import doctest
doctest.testmod()
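    # Construction sketch (mirrors the first doctest of run_simplex): for the
    # standard-form LP  max x1 + x2  s.t.  x1 + 3x2 <= 4,  3x1 + x2 <= 4,
    # row 0 holds the negated objective and each constraint gains a slack column.
    example = np.array(
        [
            [-1.0, -1.0, 0.0, 0.0, 0.0],  # -(x1 + x2) | s1, s2 | RHS
            [1.0, 3.0, 1.0, 0.0, 4.0],  # x1 + 3x2 + s1 = 4
            [3.0, 1.0, 0.0, 1.0, 4.0],  # 3x1 + x2 + s2 = 4
        ]
    )
    print(Tableau(example, n_vars=2, n_artificial_vars=0).run_simplex())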
|
Apriori Algorithm is a Association rule mining technique, also known as market basket analysis, aims to discover interesting relationships or associations among a set of items in a transactional or relational database. For example, Apriori Algorithm states: If a customer buys item A and item B, then they are likely to buy item C. This rule suggests a relationship between items A, B, and C, indicating that customers who purchased A and B are more likely to also purchase item C. WIKI: https:en.wikipedia.orgwikiApriorialgorithm Examples: https:www.kaggle.comcodeearthianaprioriassociationrulesmining Returns a sample transaction dataset. loaddata 'milk', 'milk', 'butter', 'milk', 'bread', 'milk', 'bread', 'chips' Prune candidate itemsets that are not frequent. The goal of pruning is to filter out candidate itemsets that are not frequent. This is done by checking if all the k1 subsets of a candidate itemset are present in the frequent itemsets of the previous iteration valid subsequences of the frequent itemsets from the previous iteration. Prunes candidate itemsets that are not frequent. itemset 'X', 'Y', 'Z' candidates 'X', 'Y', 'X', 'Z', 'Y', 'Z' pruneitemset, candidates, 2 'X', 'Y', 'X', 'Z', 'Y', 'Z' itemset '1', '2', '3', '4' candidates '1', '2', '4' pruneitemset, candidates, 3 Returns a list of frequent itemsets and their support counts. data 'A', 'B', 'C', 'A', 'B', 'A', 'C', 'A', 'D', 'B', 'C' aprioridata, 2 'A', 'B', 1, 'A', 'C', 2, 'B', 'C', 2 data '1', '2', '3', '1', '2', '1', '3', '1', '4', '2', '3' aprioridata, 3 Count itemset support Prune infrequent itemsets Append frequent itemsets as a list to maintain order Apriori algorithm for finding frequent itemsets. Args: data: A list of transactions, where each transaction is a list of items. minsupport: The minimum support threshold for frequent itemsets. Returns: A list of frequent itemsets along with their support counts. userdefined threshold or minimum support level | from itertools import combinations
def load_data() -> list[list[str]]:
"""
Returns a sample transaction dataset.
>>> load_data()
[['milk'], ['milk', 'butter'], ['milk', 'bread'], ['milk', 'bread', 'chips']]
"""
return [["milk"], ["milk", "butter"], ["milk", "bread"], ["milk", "bread", "chips"]]
def prune(itemset: list, candidates: list, length: int) -> list:
"""
Prune candidate itemsets that are not frequent.
The goal of pruning is to filter out candidate itemsets that are not frequent. This
is done by checking if all the (k-1) subsets of a candidate itemset are present in
the frequent itemsets of the previous iteration (valid subsequences of the frequent
itemsets from the previous iteration).
Prunes candidate itemsets that are not frequent.
>>> itemset = ['X', 'Y', 'Z']
>>> candidates = [['X', 'Y'], ['X', 'Z'], ['Y', 'Z']]
>>> prune(itemset, candidates, 2)
[['X', 'Y'], ['X', 'Z'], ['Y', 'Z']]
>>> itemset = ['1', '2', '3', '4']
>>> candidates = ['1', '2', '4']
>>> prune(itemset, candidates, 3)
[]
"""
pruned = []
for candidate in candidates:
is_subsequence = True
for item in candidate:
if item not in itemset or itemset.count(item) < length - 1:
is_subsequence = False
break
if is_subsequence:
pruned.append(candidate)
return pruned
def apriori(data: list[list[str]], min_support: int) -> list[tuple[list[str], int]]:
"""
Returns a list of frequent itemsets and their support counts.
>>> data = [['A', 'B', 'C'], ['A', 'B'], ['A', 'C'], ['A', 'D'], ['B', 'C']]
>>> apriori(data, 2)
[(['A', 'B'], 1), (['A', 'C'], 2), (['B', 'C'], 2)]
>>> data = [['1', '2', '3'], ['1', '2'], ['1', '3'], ['1', '4'], ['2', '3']]
>>> apriori(data, 3)
[]
"""
itemset = [list(transaction) for transaction in data]
frequent_itemsets = []
length = 1
while itemset:
# Count itemset support
counts = [0] * len(itemset)
for transaction in data:
for j, candidate in enumerate(itemset):
if all(item in transaction for item in candidate):
counts[j] += 1
# Prune infrequent itemsets
itemset = [item for i, item in enumerate(itemset) if counts[i] >= min_support]
# Append frequent itemsets (as a list to maintain order)
for i, item in enumerate(itemset):
frequent_itemsets.append((sorted(item), counts[i]))
length += 1
itemset = prune(itemset, list(combinations(itemset, length)), length)
return frequent_itemsets
if __name__ == "__main__":
"""
Apriori algorithm for finding frequent itemsets.
Args:
data: A list of transactions, where each transaction is a list of items.
min_support: The minimum support threshold for frequent itemsets.
Returns:
A list of frequent itemsets along with their support counts.
"""
import doctest
doctest.testmod()
# user-defined threshold or minimum support level
frequent_itemsets = apriori(data=load_data(), min_support=2)
print("\n".join(f"{itemset}: {support}" for itemset, support in frequent_itemsets))
|
The A algorithm combines features of uniformcost search and pure heuristic search to efficiently compute optimal solutions. The A algorithm is a bestfirst search algorithm in which the cost associated with a node is fn gn hn, where gn is the cost of the path from the initial state to node n and hn is the heuristic estimate or the cost or a path from node n to a goal. The A algorithm introduces a heuristic into a regular graphsearching algorithm, essentially planning ahead at each step so a more optimal decision is made. For this reason, A is known as an algorithm with brains. https:en.wikipedia.orgwikiAsearchalgorithm Class cell represents a cell in the world which have the properties: position: represented by tuple of x and y coordinates initially set to 0,0. parent: Contains the parent cell object visited before we arrived at this cell. g, h, f: Parameters used when calling our heuristic function. Overrides equals method because otherwise cell assign will give wrong results. Gridworld class represents the external world here a grid MM matrix. worldsize: create a numpy array with the given worldsize default is 5. Return the neighbours of cell Implementation of a start algorithm. world : Object of the world object. start : Object of the cell as start position. stop : Object of the cell as goal position. p Gridworld start Cell start.position 0,0 goal Cell goal.position 4,4 astarp, start, goal 0, 0, 1, 1, 2, 2, 3, 3, 4, 4 Start position and goal Just for visual reasons. | import numpy as np
class Cell:
"""
Class cell represents a cell in the world which have the properties:
position: represented by tuple of x and y coordinates initially set to (0,0).
parent: Contains the parent cell object visited before we arrived at this cell.
g, h, f: Parameters used when calling our heuristic function.
"""
def __init__(self):
self.position = (0, 0)
self.parent = None
self.g = 0
self.h = 0
self.f = 0
"""
Overrides equals method because otherwise cell assign will give
wrong results.
"""
def __eq__(self, cell):
return self.position == cell.position
def showcell(self):
print(self.position)
class Gridworld:
"""
Gridworld class represents the external world here a grid M*M
matrix.
world_size: create a numpy array with the given world_size default is 5.
"""
def __init__(self, world_size=(5, 5)):
self.w = np.zeros(world_size)
self.world_x_limit = world_size[0]
self.world_y_limit = world_size[1]
def show(self):
print(self.w)
def get_neigbours(self, cell):
"""
Return the neighbours of cell
"""
        neighbour_cord = [
            (-1, -1),
            (-1, 0),
            (-1, 1),
            (0, -1),
            (0, 1),
            (1, -1),
            (1, 0),
            (1, 1),
        ]
        current_x = cell.position[0]
        current_y = cell.position[1]
        neighbours = []
        for n in neighbour_cord:
x = current_x + n[0]
y = current_y + n[1]
if 0 <= x < self.world_x_limit and 0 <= y < self.world_y_limit:
c = Cell()
c.position = (x, y)
c.parent = cell
neighbours.append(c)
return neighbours
def astar(world, start, goal):
"""
    Implementation of the A* search algorithm.
    world : Object of the world object.
    start : Object of the cell as start position.
    goal : Object of the cell as goal position.
>>> p = Gridworld()
>>> start = Cell()
>>> start.position = (0,0)
>>> goal = Cell()
>>> goal.position = (4,4)
>>> astar(p, start, goal)
[(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
"""
_open = []
_closed = []
_open.append(start)
while _open:
min_f = np.argmin([n.f for n in _open])
current = _open[min_f]
_closed.append(_open.pop(min_f))
if current == goal:
break
        for n in world.get_neigbours(current):
            # Skip neighbours that have already been expanded
            if n in _closed:
                continue
            n.g = current.g + 1
            x1, y1 = n.position
            x2, y2 = goal.position
            # Heuristic: squared euclidean distance to the goal
            n.h = (y2 - y1) ** 2 + (x2 - x1) ** 2
            n.f = n.h + n.g
            # Skip if a strictly better copy of this cell is already queued
            if any(c == n and c.f < n.f for c in _open):
                continue
            _open.append(n)
path = []
while current.parent is not None:
path.append(current.position)
current = current.parent
path.append(current.position)
return path[::-1]
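def manhattan_heuristic(p1, p2):
    """
    Sketch of an alternative admissible heuristic (not used by astar above,
    which scores cells with the squared euclidean distance): Manhattan
    distance is the usual choice for 4-connected grids.
    >>> manhattan_heuristic((0, 0), (4, 4))
    8
    """
    return abs(p1[0] - p2[0]) + abs(p1[1] - p2[1])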
if __name__ == "__main__":
world = Gridworld()
# Start position and goal
start = Cell()
start.position = (0, 0)
goal = Cell()
goal.position = (4, 4)
print(f"path from {start.position} to {goal.position}")
s = astar(world, start, goal)
# Just for visual reasons.
for i in s:
world.w[i] = 1
print(world.w)
|
Demonstration of the Automatic Differentiation Reverse mode. Reference: https:en.wikipedia.orgwikiAutomaticdifferentiation Author: Poojan Smart Email: smrtpoojangmail.com Class represents list of supported operations on Variable for gradient calculation. Class represents ndimensional object which is used to wrap numpy array on which operations will be performed and the gradient will be calculated. Examples: Variable5.0 Variable5.0 Variable5.0, 2.9 Variable5. 2.9 Variable5.0, 2.9 Variable1.0, 5.5 Variable6. 8.4 Variable8.0, 10.0 Variable 8. 10. pointers to the operations to which the Variable is input pointer to the operation of which the Variable is output of if tracker is enabled, computation graph will be updated if tracker is enabled, computation graph will be updated if tracker is enabled, computation graph will be updated if tracker is enabled, computation graph will be updated if tracker is enabled, computation graph will be updated if tracker is enabled, computation graph will be updated Class represents operation between single or two Variable objects. Operation objects contains type of operation, pointers to input Variable objects and pointer to resulting Variable from the operation. Class contains methods to compute partial derivatives of Variable based on the computation graph. Examples: with GradientTracker as tracker: ... a Variable2.0, 5.0 ... b Variable1.0, 2.0 ... m Variable1.0, 2.0 ... c a b ... d a b ... e c d tracker.gradiente, a array0.25, 0.04 tracker.gradiente, b array1. , 0.25 tracker.gradiente, m is None True with GradientTracker as tracker: ... a Variable2.0, 5.0 ... b Variable1.0, 2.0 ... c a b tracker.gradientc, a array1., 2. tracker.gradientc, b array2., 5. with GradientTracker as tracker: ... a Variable2.0, 5.0 ... b a 3 tracker.gradientb, a array12., 75. Executes at the creation of class object and returns if object is already created. This class follows singleton design pattern. Adds Operation object to the related Variable objects for creating computational graph for calculating gradients. Args: optype: Operation type params: Input parameters to the operation output: Output variable of the operation Reverse accumulation of partial derivatives to calculate gradients of target variable with respect to source variable. Args: target: target variable for which gradients are calculated. source: source variable with respect to which the gradients are calculated. Returns: Gradient of the source variable with respect to the target variable partial derivatives with respect to target iterating through each operations in the computation graph as per the chain rule, multiplying partial derivatives of variables with respect to the target Compute the derivative of given operationfunction Args: param: variable to be differentiated operation: function performed on the input variable Returns: Derivative of input variable with respect to the output of the operation | from __future__ import annotations
from collections import defaultdict
from enum import Enum
from types import TracebackType
from typing import Any
import numpy as np
from typing_extensions import Self # noqa: UP035
class OpType(Enum):
"""
Class represents list of supported operations on Variable for gradient calculation.
"""
ADD = 0
SUB = 1
MUL = 2
DIV = 3
MATMUL = 4
POWER = 5
NOOP = 6
class Variable:
"""
Class represents n-dimensional object which is used to wrap numpy array on which
operations will be performed and the gradient will be calculated.
Examples:
>>> Variable(5.0)
Variable(5.0)
>>> Variable([5.0, 2.9])
Variable([5. 2.9])
>>> Variable([5.0, 2.9]) + Variable([1.0, 5.5])
Variable([6. 8.4])
>>> Variable([[8.0, 10.0]])
Variable([[ 8. 10.]])
"""
def __init__(self, value: Any) -> None:
self.value = np.array(value)
# pointers to the operations to which the Variable is input
self.param_to: list[Operation] = []
        # pointer to the operation of which the Variable is the output
self.result_of: Operation = Operation(OpType.NOOP)
def __repr__(self) -> str:
return f"Variable({self.value})"
def to_ndarray(self) -> np.ndarray:
return self.value
def __add__(self, other: Variable) -> Variable:
result = Variable(self.value + other.value)
with GradientTracker() as tracker:
# if tracker is enabled, computation graph will be updated
if tracker.enabled:
tracker.append(OpType.ADD, params=[self, other], output=result)
return result
def __sub__(self, other: Variable) -> Variable:
result = Variable(self.value - other.value)
with GradientTracker() as tracker:
# if tracker is enabled, computation graph will be updated
if tracker.enabled:
tracker.append(OpType.SUB, params=[self, other], output=result)
return result
def __mul__(self, other: Variable) -> Variable:
result = Variable(self.value * other.value)
with GradientTracker() as tracker:
# if tracker is enabled, computation graph will be updated
if tracker.enabled:
tracker.append(OpType.MUL, params=[self, other], output=result)
return result
def __truediv__(self, other: Variable) -> Variable:
result = Variable(self.value / other.value)
with GradientTracker() as tracker:
# if tracker is enabled, computation graph will be updated
if tracker.enabled:
tracker.append(OpType.DIV, params=[self, other], output=result)
return result
def __matmul__(self, other: Variable) -> Variable:
result = Variable(self.value @ other.value)
with GradientTracker() as tracker:
# if tracker is enabled, computation graph will be updated
if tracker.enabled:
tracker.append(OpType.MATMUL, params=[self, other], output=result)
return result
def __pow__(self, power: int) -> Variable:
result = Variable(self.value**power)
with GradientTracker() as tracker:
# if tracker is enabled, computation graph will be updated
if tracker.enabled:
tracker.append(
OpType.POWER,
params=[self],
output=result,
other_params={"power": power},
)
return result
def add_param_to(self, param_to: Operation) -> None:
self.param_to.append(param_to)
def add_result_of(self, result_of: Operation) -> None:
self.result_of = result_of
class Operation:
"""
    Class represents an operation between one or two Variable objects.
    Operation objects contain the type of operation, pointers to the input
    Variable objects and a pointer to the resulting Variable of the operation.
"""
def __init__(
self,
op_type: OpType,
other_params: dict | None = None,
) -> None:
self.op_type = op_type
self.other_params = {} if other_params is None else other_params
def add_params(self, params: list[Variable]) -> None:
self.params = params
def add_output(self, output: Variable) -> None:
self.output = output
def __eq__(self, value) -> bool:
return self.op_type == value if isinstance(value, OpType) else False
class GradientTracker:
"""
Class contains methods to compute partial derivatives of Variable
based on the computation graph.
Examples:
>>> with GradientTracker() as tracker:
... a = Variable([2.0, 5.0])
... b = Variable([1.0, 2.0])
... m = Variable([1.0, 2.0])
... c = a + b
... d = a * b
... e = c / d
>>> tracker.gradient(e, a)
array([-0.25, -0.04])
>>> tracker.gradient(e, b)
array([-1. , -0.25])
>>> tracker.gradient(e, m) is None
True
>>> with GradientTracker() as tracker:
... a = Variable([[2.0, 5.0]])
... b = Variable([[1.0], [2.0]])
... c = a @ b
>>> tracker.gradient(c, a)
array([[1., 2.]])
>>> tracker.gradient(c, b)
array([[2.],
[5.]])
>>> with GradientTracker() as tracker:
... a = Variable([[2.0, 5.0]])
... b = a ** 3
>>> tracker.gradient(b, a)
array([[12., 75.]])
"""
instance = None
def __new__(cls) -> Self:
"""
        Executes at the creation of a class object and returns the existing
        object if one has already been created. This class follows the
        singleton design pattern.
"""
if cls.instance is None:
cls.instance = super().__new__(cls)
return cls.instance
def __init__(self) -> None:
self.enabled = False
def __enter__(self) -> Self:
self.enabled = True
return self
def __exit__(
self,
exc_type: type[BaseException] | None,
exc: BaseException | None,
traceback: TracebackType | None,
) -> None:
self.enabled = False
def append(
self,
op_type: OpType,
params: list[Variable],
output: Variable,
other_params: dict | None = None,
) -> None:
"""
Adds Operation object to the related Variable objects for
creating computational graph for calculating gradients.
        Args:
            op_type: Operation type
            params: Input parameters to the operation
            output: Output variable of the operation
            other_params: Optional extra parameters of the operation
"""
operation = Operation(op_type, other_params=other_params)
param_nodes = []
for param in params:
param.add_param_to(operation)
param_nodes.append(param)
output.add_result_of(operation)
operation.add_params(param_nodes)
operation.add_output(output)
def gradient(self, target: Variable, source: Variable) -> np.ndarray | None:
"""
Reverse accumulation of partial derivatives to calculate gradients
of target variable with respect to source variable.
Args:
target: target variable for which gradients are calculated.
source: source variable with respect to which the gradients are
calculated.
Returns:
            Gradient of the target variable with respect to the source variable
"""
# partial derivatives with respect to target
partial_deriv = defaultdict(lambda: 0)
partial_deriv[target] = np.ones_like(target.to_ndarray())
        # iterate through each operation in the computation graph
operation_queue = [target.result_of]
while len(operation_queue) > 0:
operation = operation_queue.pop()
for param in operation.params:
# as per the chain rule, multiplying partial derivatives
# of variables with respect to the target
dparam_doutput = self.derivative(param, operation)
dparam_dtarget = dparam_doutput * partial_deriv[operation.output]
partial_deriv[param] += dparam_dtarget
if param.result_of and param.result_of != OpType.NOOP:
operation_queue.append(param.result_of)
return partial_deriv.get(source)
def derivative(self, param: Variable, operation: Operation) -> np.ndarray:
"""
Compute the derivative of given operation/function
Args:
param: variable to be differentiated
operation: function performed on the input variable
Returns:
            Derivative of the operation's output with respect to the
            input variable
"""
params = operation.params
if operation == OpType.ADD:
return np.ones_like(params[0].to_ndarray(), dtype=np.float64)
if operation == OpType.SUB:
if params[0] == param:
return np.ones_like(params[0].to_ndarray(), dtype=np.float64)
return -np.ones_like(params[1].to_ndarray(), dtype=np.float64)
if operation == OpType.MUL:
return (
params[1].to_ndarray().T
if params[0] == param
else params[0].to_ndarray().T
)
if operation == OpType.DIV:
if params[0] == param:
return 1 / params[1].to_ndarray()
return -params[0].to_ndarray() / (params[1].to_ndarray() ** 2)
if operation == OpType.MATMUL:
return (
params[1].to_ndarray().T
if params[0] == param
else params[0].to_ndarray().T
)
if operation == OpType.POWER:
power = operation.other_params["power"]
return power * (params[0].to_ndarray() ** (power - 1))
err_msg = f"invalid operation type: {operation.op_type}"
raise ValueError(err_msg)
if __name__ == "__main__":
import doctest
doctest.testmod()
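    # Hedged numerical sanity check (not part of the original demo): the
    # tracked gradient of f(a) = a**3 should match a central finite-difference
    # estimate at the same points.
    with GradientTracker() as tracker:
        a = Variable([2.0, 5.0])
        b = a**3
    eps = 1e-5
    vals = np.array([2.0, 5.0])
    numeric = ((vals + eps) ** 3 - (vals - eps) ** 3) / (2 * eps)
    print(f"tracked gradient : {tracker.gradient(b, a)}")
    print(f"finite difference: {numeric}")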
|
Normalization.
Wikipedia: https://en.wikipedia.org/wiki/Normalization
Normalization is the process of converting numerical data to a standard range of values. This range is typically between [0, 1] or [-1, 1]. The equation for normalization is x_norm = (x - x_min) / (x_max - x_min), where x_norm is the normalized value, x is the value, x_min is the minimum value within the column or list of data, and x_max is the maximum value within the column or list of data. Normalization is used to speed up the training of data and put all of the data on a similar scale. This is useful because variance in the range of values of a dataset can heavily impact optimization, particularly Gradient Descent.

Standardization
Wikipedia: https://en.wikipedia.org/wiki/Standardization
Standardization is the process of converting numerical data to a normally distributed range of values. This range will have a mean of 0 and standard deviation of 1. This is also known as z-score normalization. The equation for standardization is x_std = (x - mu) / sigma, where mu is the mean of the column or list of values and sigma is the standard deviation of the column or list of values.

Choosing between Normalization & Standardization is more of an art than a science, but it is often recommended to run experiments with both to see which performs better. Additionally, a few rules of thumb are:
1. gaussian (normal) distributions work better with standardization
2. non-gaussian (non-normal) distributions work better with normalization
3. If a column or list of values has extreme values / outliers, use standardization
| from statistics import mean, stdev
def normalization(data: list, ndigits: int = 3) -> list:
"""
Return a normalized list of values.
@params: data, a list of values to normalize
@returns: a list of normalized values (rounded to ndigits decimal places)
@examples:
>>> normalization([2, 7, 10, 20, 30, 50])
[0.0, 0.104, 0.167, 0.375, 0.583, 1.0]
>>> normalization([5, 10, 15, 20, 25])
[0.0, 0.25, 0.5, 0.75, 1.0]
"""
# variables for calculation
x_min = min(data)
x_max = max(data)
# normalize data
return [round((x - x_min) / (x_max - x_min), ndigits) for x in data]
def standardization(data: list, ndigits: int = 3) -> list:
"""
Return a standardized list of values.
@params: data, a list of values to standardize
@returns: a list of standardized values (rounded to ndigits decimal places)
@examples:
>>> standardization([2, 7, 10, 20, 30, 50])
[-0.999, -0.719, -0.551, 0.009, 0.57, 1.69]
>>> standardization([5, 10, 15, 20, 25])
[-1.265, -0.632, 0.0, 0.632, 1.265]
"""
# variables for calculation
mu = mean(data)
sigma = stdev(data)
# standardize data
return [round((x - mu) / (sigma), ndigits) for x in data]
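
if __name__ == "__main__":
    import doctest

    doctest.testmod()
    # Minimal usage sketch (not part of the original module): rerun the
    # doctest examples from the command line.
    print(normalization([2, 7, 10, 20, 30, 50]))
    print(standardization([2, 7, 10, 20, 30, 50]))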
|
Implementation of a basic regression decision tree.
Input data set: The input data set must be 1-dimensional with continuous labels.
Output: The decision tree maps a real number input to a real number output.
| import numpy as np
class DecisionTree:
def __init__(self, depth=5, min_leaf_size=5):
self.depth = depth
self.decision_boundary = 0
self.left = None
self.right = None
self.min_leaf_size = min_leaf_size
self.prediction = None
def mean_squared_error(self, labels, prediction):
"""
mean_squared_error:
@param labels: a one-dimensional numpy array
@param prediction: a floating point value
return value: mean_squared_error calculates the error if prediction is used to
estimate the labels
>>> tester = DecisionTree()
>>> test_labels = np.array([1,2,3,4,5,6,7,8,9,10])
>>> test_prediction = float(6)
>>> tester.mean_squared_error(test_labels, test_prediction) == (
... TestDecisionTree.helper_mean_squared_error_test(test_labels,
... test_prediction))
True
>>> test_labels = np.array([1,2,3])
>>> test_prediction = float(2)
>>> tester.mean_squared_error(test_labels, test_prediction) == (
... TestDecisionTree.helper_mean_squared_error_test(test_labels,
... test_prediction))
True
"""
if labels.ndim != 1:
print("Error: Input labels must be one dimensional")
return np.mean((labels - prediction) ** 2)
def train(self, x, y):
"""
train:
@param x: a one-dimensional numpy array
@param y: a one-dimensional numpy array.
The contents of y are the labels for the corresponding X values
train() does not have a return value
Examples:
1. Try to train when x & y are of same length & 1 dimensions (No errors)
>>> dt = DecisionTree()
>>> dt.train(np.array([10,20,30,40,50]),np.array([0,0,0,1,1]))
2. Try to train when x is 2 dimensions
>>> dt = DecisionTree()
>>> dt.train(np.array([[1,2,3,4,5],[1,2,3,4,5]]),np.array([0,0,0,1,1]))
Traceback (most recent call last):
...
ValueError: Input data set must be one-dimensional
3. Try to train when x and y are not of the same length
>>> dt = DecisionTree()
>>> dt.train(np.array([1,2,3,4,5]),np.array([[0,0,0,1,1],[0,0,0,1,1]]))
Traceback (most recent call last):
...
ValueError: x and y have different lengths
4. Try to train when x & y are of the same length but different dimensions
>>> dt = DecisionTree()
>>> dt.train(np.array([1,2,3,4,5]),np.array([[1],[2],[3],[4],[5]]))
Traceback (most recent call last):
...
ValueError: Data set labels must be one-dimensional
This section is to check that the inputs conform to our dimensionality
constraints
"""
if x.ndim != 1:
raise ValueError("Input data set must be one-dimensional")
if len(x) != len(y):
raise ValueError("x and y have different lengths")
if y.ndim != 1:
raise ValueError("Data set labels must be one-dimensional")
if len(x) < 2 * self.min_leaf_size:
self.prediction = np.mean(y)
return
if self.depth == 1:
self.prediction = np.mean(y)
return
best_split = 0
        min_error = self.mean_squared_error(y, np.mean(y)) * 2
"""
loop over all possible splits for the decision tree. find the best split.
if no split exists that is less than 2 * error for the entire array
then the data set is not split and the average for the entire array is used as
the predictor
"""
for i in range(len(x)):
if len(x[:i]) < self.min_leaf_size:
continue
elif len(x[i:]) < self.min_leaf_size:
continue
else:
                error_left = self.mean_squared_error(y[:i], np.mean(y[:i]))
                error_right = self.mean_squared_error(y[i:], np.mean(y[i:]))
error = error_left + error_right
if error < min_error:
best_split = i
min_error = error
if best_split != 0:
left_x = x[:best_split]
left_y = y[:best_split]
right_x = x[best_split:]
right_y = y[best_split:]
self.decision_boundary = x[best_split]
self.left = DecisionTree(
depth=self.depth - 1, min_leaf_size=self.min_leaf_size
)
self.right = DecisionTree(
depth=self.depth - 1, min_leaf_size=self.min_leaf_size
)
self.left.train(left_x, left_y)
self.right.train(right_x, right_y)
else:
self.prediction = np.mean(y)
return
def predict(self, x):
"""
predict:
@param x: a floating point value to predict the label of
the prediction function works by recursively calling the predict function
of the appropriate subtrees based on the tree's decision boundary
"""
if self.prediction is not None:
return self.prediction
        elif self.left is not None or self.right is not None:
if x >= self.decision_boundary:
return self.right.predict(x)
else:
return self.left.predict(x)
else:
print("Error: Decision tree not yet trained")
return None
class TestDecisionTree:
"""Decision Tres test class"""
@staticmethod
def helper_mean_squared_error_test(labels, prediction):
"""
helper_mean_squared_error_test:
@param labels: a one dimensional numpy array
@param prediction: a floating point value
return value: helper_mean_squared_error_test calculates the mean squared error
"""
squared_error_sum = float(0)
for label in labels:
squared_error_sum += (label - prediction) ** 2
return float(squared_error_sum / labels.size)
def main():
"""
In this demonstration we're generating a sample data set from the sin function in
numpy. We then train a decision tree on the data set and use the decision tree to
predict the label of 10 different test values. Then the mean squared error over
this test is displayed.
"""
x = np.arange(-1.0, 1.0, 0.005)
y = np.sin(x)
tree = DecisionTree(depth=10, min_leaf_size=10)
tree.train(x, y)
test_cases = (np.random.rand(10) * 2) - 1
predictions = np.array([tree.predict(x) for x in test_cases])
avg_error = np.mean((predictions - test_cases) ** 2)
print("Test values: " + str(test_cases))
print("Predictions: " + str(predictions))
print("Average error: " + str(avg_error))
if __name__ == "__main__":
main()
import doctest
doctest.testmod(name="mean_squarred_error", verbose=True)
|
Copyright (c) 2023 Diego Gasco (diego.gasco99@gmail.com), Diegomangasco on GitHub
Requirements:
- numpy version 1.21
- scipy version 1.3.3
Notes:
- Each column of the features matrix corresponds to a class item
| # Copyright (c) 2023 Diego Gasco (diego.gasco99@gmail.com), Diegomangasco on GitHub
"""
Requirements:
- numpy version 1.21
- scipy version 1.3.3
Notes:
- Each column of the features matrix corresponds to a class item
"""
import logging
import numpy as np
import pytest
from scipy.linalg import eigh
logging.basicConfig(level=logging.INFO, format="%(message)s")
def column_reshape(input_array: np.ndarray) -> np.ndarray:
"""Function to reshape a row Numpy array into a column Numpy array
>>> input_array = np.array([1, 2, 3])
>>> column_reshape(input_array)
array([[1],
[2],
[3]])
"""
return input_array.reshape((input_array.size, 1))
def covariance_within_classes(
features: np.ndarray, labels: np.ndarray, classes: int
) -> np.ndarray:
"""Function to compute the covariance matrix inside each class.
>>> features = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> labels = np.array([0, 1, 0])
>>> covariance_within_classes(features, labels, 2)
array([[0.66666667, 0.66666667, 0.66666667],
[0.66666667, 0.66666667, 0.66666667],
[0.66666667, 0.66666667, 0.66666667]])
"""
covariance_sum = np.nan
for i in range(classes):
data = features[:, labels == i]
data_mean = data.mean(1)
# Centralize the data of class i
centered_data = data - column_reshape(data_mean)
if i > 0:
# If covariance_sum is not None
covariance_sum += np.dot(centered_data, centered_data.T)
else:
# If covariance_sum is np.nan (i.e. first loop)
covariance_sum = np.dot(centered_data, centered_data.T)
return covariance_sum / features.shape[1]
def covariance_between_classes(
features: np.ndarray, labels: np.ndarray, classes: int
) -> np.ndarray:
"""Function to compute the covariance matrix between multiple classes
>>> features = np.array([[9, 2, 3], [4, 3, 6], [1, 8, 9]])
>>> labels = np.array([0, 1, 0])
>>> covariance_between_classes(features, labels, 2)
array([[ 3.55555556, 1.77777778, -2.66666667],
[ 1.77777778, 0.88888889, -1.33333333],
[-2.66666667, -1.33333333, 2. ]])
"""
general_data_mean = features.mean(1)
covariance_sum = np.nan
for i in range(classes):
data = features[:, labels == i]
device_data = data.shape[1]
data_mean = data.mean(1)
if i > 0:
# If covariance_sum is not None
covariance_sum += device_data * np.dot(
column_reshape(data_mean) - column_reshape(general_data_mean),
(column_reshape(data_mean) - column_reshape(general_data_mean)).T,
)
else:
# If covariance_sum is np.nan (i.e. first loop)
covariance_sum = device_data * np.dot(
column_reshape(data_mean) - column_reshape(general_data_mean),
(column_reshape(data_mean) - column_reshape(general_data_mean)).T,
)
return covariance_sum / features.shape[1]
def principal_component_analysis(features: np.ndarray, dimensions: int) -> np.ndarray:
"""
Principal Component Analysis.
For more details, see: https://en.wikipedia.org/wiki/Principal_component_analysis.
Parameters:
* features: the features extracted from the dataset
* dimensions: to filter the projected data for the desired dimension
>>> test_principal_component_analysis()
"""
# Check if the features have been loaded
if features.any():
data_mean = features.mean(1)
# Center the dataset
centered_data = features - np.reshape(data_mean, (data_mean.size, 1))
covariance_matrix = np.dot(centered_data, centered_data.T) / features.shape[1]
_, eigenvectors = np.linalg.eigh(covariance_matrix)
        # Take all the columns in reverse order (-1), then keep only the first
        # `dimensions` columns
filtered_eigenvectors = eigenvectors[:, ::-1][:, 0:dimensions]
# Project the database on the new space
projected_data = np.dot(filtered_eigenvectors.T, features)
logging.info("Principal Component Analysis computed")
return projected_data
else:
logging.basicConfig(level=logging.ERROR, format="%(message)s", force=True)
logging.error("Dataset empty")
raise AssertionError
def linear_discriminant_analysis(
features: np.ndarray, labels: np.ndarray, classes: int, dimensions: int
) -> np.ndarray:
"""
Linear Discriminant Analysis.
For more details, see: https://en.wikipedia.org/wiki/Linear_discriminant_analysis.
Parameters:
* features: the features extracted from the dataset
* labels: the class labels of the features
* classes: the number of classes present in the dataset
* dimensions: to filter the projected data for the desired dimension
>>> test_linear_discriminant_analysis()
"""
# Check if the dimension desired is less than the number of classes
assert classes > dimensions
# Check if features have been already loaded
    if features.any():
_, eigenvectors = eigh(
covariance_between_classes(features, labels, classes),
covariance_within_classes(features, labels, classes),
)
filtered_eigenvectors = eigenvectors[:, ::-1][:, :dimensions]
svd_matrix, _, _ = np.linalg.svd(filtered_eigenvectors)
filtered_svd_matrix = svd_matrix[:, 0:dimensions]
projected_data = np.dot(filtered_svd_matrix.T, features)
logging.info("Linear Discriminant Analysis computed")
return projected_data
else:
logging.basicConfig(level=logging.ERROR, format="%(message)s", force=True)
logging.error("Dataset empty")
raise AssertionError
def test_linear_discriminant_analysis() -> None:
# Create dummy dataset with 2 classes and 3 features
features = np.array([[1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [3, 4, 5, 6, 7]])
labels = np.array([0, 0, 0, 1, 1])
classes = 2
dimensions = 2
# Assert that the function raises an AssertionError if dimensions > classes
with pytest.raises(AssertionError) as error_info: # noqa: PT012
projected_data = linear_discriminant_analysis(
features, labels, classes, dimensions
)
if isinstance(projected_data, np.ndarray):
raise AssertionError(
"Did not raise AssertionError for dimensions > classes"
)
assert error_info.type is AssertionError
def test_principal_component_analysis() -> None:
features = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
dimensions = 2
expected_output = np.array([[6.92820323, 8.66025404, 10.39230485], [3.0, 3.0, 3.0]])
with pytest.raises(AssertionError) as error_info: # noqa: PT012
output = principal_component_analysis(features, dimensions)
if not np.allclose(expected_output, output):
raise AssertionError
assert error_info.type is AssertionError
if __name__ == "__main__":
import doctest
doctest.testmod()
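    # Small usage sketch (assumed toy data, not part of the original module):
    # two well-separated classes with two features each, projected down to one
    # dimension by both methods.
    demo_features = np.array(
        [[1.0, 2.0, 3.0, 6.0, 7.0, 8.0], [2.0, 3.0, 1.0, 7.0, 9.0, 8.0]]
    )
    demo_labels = np.array([0, 0, 0, 1, 1, 1])
    print(principal_component_analysis(demo_features, dimensions=1))
    print(linear_discriminant_analysis(demo_features, demo_labels, 2, 1))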
|
This is code for forecasting, but I modified it and used it as a safety checker of data.
For example: you have an online shop, and for some reason some data are missing (the
amount of data is not what you expected it to be) - then we can use it.
PS:
1. Of course we can use a normal statistical method, but in this case the data is quite
   absurd and there is only a little of it
2. Of course you can use this and modify it for forecasting purposes (for the next 3
   months' sales or something); you can just adjust it for your own purpose
| from warnings import simplefilter
import numpy as np
import pandas as pd
from sklearn.preprocessing import Normalizer
from sklearn.svm import SVR
from statsmodels.tsa.statespace.sarimax import SARIMAX
def linear_regression_prediction(
train_dt: list, train_usr: list, train_mtch: list, test_dt: list, test_mtch: list
) -> float:
"""
First method: linear regression
input : training data (date, total_user, total_event) in list of float
output : list of total user prediction in float
    >>> n = linear_regression_prediction([2,3,4,5], [5,3,4,6], [3,1,2,4], [2,1], [2,2])
    >>> abs(n - 4.0) < 1e-6 # Checking precision because of floating point errors
    True
"""
x = np.array([[1, item, train_mtch[i]] for i, item in enumerate(train_dt)])
y = np.array(train_usr)
beta = np.dot(np.dot(np.linalg.inv(np.dot(x.transpose(), x)), x.transpose()), y)
    return abs(beta[0] + test_dt[0] * beta[1] + test_mtch[0] * beta[2])
def sarimax_predictor(train_user: list, train_match: list, test_match: list) -> float:
"""
    second method: Sarimax
    sarimax is a statistical method which uses previous input
    and learns its pattern to predict future data
input : training data (total_user, with exog data = total_event) in list of float
output : list of total user prediction in float
>>> sarimax_predictor([4,2,6,8], [3,1,2,4], [2])
6.6666671111109626
"""
# Suppress the User Warning raised by SARIMAX due to insufficient observations
simplefilter("ignore", UserWarning)
order = (1, 2, 1)
seasonal_order = (1, 1, 1, 7)
model = SARIMAX(
train_user, exog=train_match, order=order, seasonal_order=seasonal_order
)
model_fit = model.fit(disp=False, maxiter=600, method="nm")
result = model_fit.predict(1, len(test_match), exog=[test_match])
return result[0]
def support_vector_regressor(x_train: list, x_test: list, train_user: list) -> float:
"""
Third method: Support vector regressor
    SVR is quite similar to SVM (support vector machine):
    it uses the same principles as the SVM for classification,
    with only a few minor differences; the main difference is that
    it is better suited for regression
input : training data (date, total_user, total_event) in list of float
where x = list of set (date and total event)
output : list of total user prediction in float
>>> support_vector_regressor([[5,2],[1,5],[6,2]], [[3,2]], [2,1,4])
1.634932078116079
"""
regressor = SVR(kernel="rbf", C=1, gamma=0.1, epsilon=0.1)
regressor.fit(x_train, train_user)
y_pred = regressor.predict(x_test)
return y_pred[0]
def interquartile_range_checker(train_user: list) -> float:
"""
    Optional method: interquartile range
input : list of total user in float
output : low limit of input in float
this method can be used to check whether some data is outlier or not
>>> interquartile_range_checker([1,2,3,4,5,6,7,8,9,10])
2.8
"""
train_user.sort()
q1 = np.percentile(train_user, 25)
q3 = np.percentile(train_user, 75)
iqr = q3 - q1
low_lim = q1 - (iqr * 0.1)
return low_lim
def data_safety_checker(list_vote: list, actual_result: float) -> bool:
"""
Used to review all the votes (list result prediction)
and compare it to the actual result.
input : list of predictions
    output : True if the data is safe, False otherwise
>>> data_safety_checker([2, 3, 4], 5.0)
False
"""
safe = 0
not_safe = 0
if not isinstance(actual_result, float):
raise TypeError("Actual result should be float. Value passed is a list")
for i in list_vote:
if i > actual_result:
            not_safe += 1
else:
if abs(abs(i) - abs(actual_result)) <= 0.1:
safe += 1
else:
not_safe += 1
return safe > not_safe
if __name__ == "__main__":
"""
    data columns = total users in a day, how many online events were held in one day,
    what day of the week it is (Sunday-Saturday)
"""
data_input_df = pd.read_csv("ex_data.csv")
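    # NOTE (assumption): the CSV above must sit next to this script and hold
    # three numeric columns (total users, total events, day of the week);
    # adjust the path or build the DataFrame yourself if the file is missing.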
# start normalization
normalize_df = Normalizer().fit_transform(data_input_df.values)
# split data
total_date = normalize_df[:, 2].tolist()
total_user = normalize_df[:, 0].tolist()
total_match = normalize_df[:, 1].tolist()
# for svr (input variable = total date and total match)
x = normalize_df[:, [1, 2]].tolist()
x_train = x[: len(x) - 1]
x_test = x[len(x) - 1 :]
# for linear regression & sarimax
train_date = total_date[: len(total_date) - 1]
train_user = total_user[: len(total_user) - 1]
train_match = total_match[: len(total_match) - 1]
test_date = total_date[len(total_date) - 1 :]
test_user = total_user[len(total_user) - 1 :]
test_match = total_match[len(total_match) - 1 :]
# voting system with forecasting
res_vote = [
linear_regression_prediction(
train_date, train_user, train_match, test_date, test_match
),
sarimax_predictor(train_user, train_match, test_match),
support_vector_regressor(x_train, x_test, train_user),
]
# check the safety of today's data
not_str = "" if data_safety_checker(res_vote, test_user[0]) else "not "
print(f"Today's data is {not_str}safe.")
|
The Frequent Pattern Growth algorithm (FP-Growth) is a widely used data mining
technique for discovering frequent itemsets in large transaction databases. It
overcomes some of the limitations of traditional methods such as Apriori by
efficiently constructing the FP-Tree.
WIKI: https://athena.ecs.csus.edu/~mei/associationcw/FpGrowth.html
Examples: https://www.javatpoint.com/fp-growth-algorithm-in-data-mining
| from __future__ import annotations
from dataclasses import dataclass, field
@dataclass
class TreeNode:
"""
A node in a Frequent Pattern tree.
Args:
name: The name of this node.
num_occur: The number of occurrences of the node.
parent_node: The parent node.
Example:
>>> parent = TreeNode("Parent", 1, None)
>>> child = TreeNode("Child", 2, parent)
>>> child.name
'Child'
>>> child.count
2
"""
name: str
count: int
parent: TreeNode | None = None
children: dict[str, TreeNode] = field(default_factory=dict)
node_link: TreeNode | None = None
def __repr__(self) -> str:
return f"TreeNode({self.name!r}, {self.count!r}, {self.parent!r})"
def inc(self, num_occur: int) -> None:
self.count += num_occur
def disp(self, ind: int = 1) -> None:
print(f"{' ' * ind} {self.name} {self.count}")
for child in self.children.values():
child.disp(ind + 1)
def create_tree(data_set: list, min_sup: int = 1) -> tuple[TreeNode, dict]:
"""
Create Frequent Pattern tree
Args:
data_set: A list of transactions, where each transaction is a list of items.
min_sup: The minimum support threshold.
Items with support less than this will be pruned. Default is 1.
Returns:
The root of the FP-Tree.
header_table: The header table dictionary with item information.
Example:
>>> data_set = [
... ['A', 'B', 'C'],
... ['A', 'C'],
... ['A', 'B', 'E'],
... ['A', 'B', 'C', 'E'],
... ['B', 'E']
... ]
>>> min_sup = 2
>>> fp_tree, header_table = create_tree(data_set, min_sup)
>>> fp_tree
TreeNode('Null Set', 1, None)
>>> len(header_table)
4
>>> header_table["A"]
[[4, None], TreeNode('A', 4, TreeNode('Null Set', 1, None))]
>>> header_table["E"][1] # doctest: +NORMALIZE_WHITESPACE
TreeNode('E', 1, TreeNode('B', 3, TreeNode('A', 4, TreeNode('Null Set', 1, None))))
>>> sorted(header_table)
['A', 'B', 'C', 'E']
>>> fp_tree.name
'Null Set'
>>> sorted(fp_tree.children)
['A', 'B']
>>> fp_tree.children['A'].name
'A'
>>> sorted(fp_tree.children['A'].children)
['B', 'C']
"""
header_table: dict = {}
for trans in data_set:
for item in trans:
header_table[item] = header_table.get(item, [0, None])
header_table[item][0] += 1
for k in list(header_table):
if header_table[k][0] < min_sup:
del header_table[k]
if not (freq_item_set := set(header_table)):
return TreeNode("Null Set", 1, None), {}
for k in header_table:
header_table[k] = [header_table[k], None]
fp_tree = TreeNode("Null Set", 1, None) # Parent is None for the root node
for tran_set in data_set:
local_d = {
item: header_table[item][0] for item in tran_set if item in freq_item_set
}
if local_d:
sorted_items = sorted(
local_d.items(), key=lambda item_info: item_info[1], reverse=True
)
ordered_items = [item[0] for item in sorted_items]
update_tree(ordered_items, fp_tree, header_table, 1)
return fp_tree, header_table
def update_tree(items: list, in_tree: TreeNode, header_table: dict, count: int) -> None:
"""
Update the FP-Tree with a transaction.
Args:
items: List of items in the transaction.
in_tree: The current node in the FP-Tree.
header_table: The header table dictionary with item information.
count: The count of the transaction.
Example:
>>> data_set = [
... ['A', 'B', 'C'],
... ['A', 'C'],
... ['A', 'B', 'E'],
... ['A', 'B', 'C', 'E'],
... ['B', 'E']
... ]
>>> min_sup = 2
>>> fp_tree, header_table = create_tree(data_set, min_sup)
>>> fp_tree
TreeNode('Null Set', 1, None)
>>> transaction = ['A', 'B', 'E']
>>> update_tree(transaction, fp_tree, header_table, 1)
>>> fp_tree
TreeNode('Null Set', 1, None)
>>> fp_tree.children['A'].children['B'].children['E'].children
{}
>>> fp_tree.children['A'].children['B'].children['E'].count
2
>>> header_table['E'][1].name
'E'
"""
if items[0] in in_tree.children:
in_tree.children[items[0]].inc(count)
else:
in_tree.children[items[0]] = TreeNode(items[0], count, in_tree)
if header_table[items[0]][1] is None:
header_table[items[0]][1] = in_tree.children[items[0]]
else:
update_header(header_table[items[0]][1], in_tree.children[items[0]])
if len(items) > 1:
update_tree(items[1:], in_tree.children[items[0]], header_table, count)
def update_header(node_to_test: TreeNode, target_node: TreeNode) -> TreeNode:
"""
Update the header table with a node link.
Args:
node_to_test: The node to be updated in the header table.
target_node: The node to link to.
Example:
>>> data_set = [
... ['A', 'B', 'C'],
... ['A', 'C'],
... ['A', 'B', 'E'],
... ['A', 'B', 'C', 'E'],
... ['B', 'E']
... ]
>>> min_sup = 2
>>> fp_tree, header_table = create_tree(data_set, min_sup)
>>> fp_tree
TreeNode('Null Set', 1, None)
>>> node1 = TreeNode("A", 3, None)
>>> node2 = TreeNode("B", 4, None)
>>> node1
TreeNode('A', 3, None)
>>> node1 = update_header(node1, node2)
>>> node1
TreeNode('A', 3, None)
>>> node1.node_link
TreeNode('B', 4, None)
>>> node2.node_link is None
True
"""
    while node_to_test.node_link is not None:
        node_to_test = node_to_test.node_link
    # After the loop, node_to_test.node_link is always None, so link directly.
    node_to_test.node_link = target_node
    # Return the updated node
    return node_to_test
def ascend_tree(leaf_node: TreeNode, prefix_path: list[str]) -> None:
"""
Ascend the FP-Tree from a leaf node to its root, adding item names to the prefix
path.
Args:
leaf_node: The leaf node to start ascending from.
prefix_path: A list to store the item as they are ascended.
Example:
>>> data_set = [
... ['A', 'B', 'C'],
... ['A', 'C'],
... ['A', 'B', 'E'],
... ['A', 'B', 'C', 'E'],
... ['B', 'E']
... ]
>>> min_sup = 2
>>> fp_tree, header_table = create_tree(data_set, min_sup)
>>> path = []
>>> ascend_tree(fp_tree.children['A'], path)
>>> path # ascending from a leaf node 'A'
['A']
"""
if leaf_node.parent is not None:
prefix_path.append(leaf_node.name)
ascend_tree(leaf_node.parent, prefix_path)
def find_prefix_path(base_pat: frozenset, tree_node: TreeNode | None) -> dict:
"""
Find the conditional pattern base for a given base pattern.
Args:
base_pat: The base pattern for which to find the conditional pattern base.
tree_node: The node in the FP-Tree.
Example:
>>> data_set = [
... ['A', 'B', 'C'],
... ['A', 'C'],
... ['A', 'B', 'E'],
... ['A', 'B', 'C', 'E'],
... ['B', 'E']
... ]
>>> min_sup = 2
>>> fp_tree, header_table = create_tree(data_set, min_sup)
>>> fp_tree
TreeNode('Null Set', 1, None)
>>> len(header_table)
4
>>> base_pattern = frozenset(['A'])
>>> sorted(find_prefix_path(base_pattern, fp_tree.children['A']))
[]
"""
cond_pats: dict = {}
while tree_node is not None:
prefix_path: list = []
ascend_tree(tree_node, prefix_path)
if len(prefix_path) > 1:
cond_pats[frozenset(prefix_path[1:])] = tree_node.count
tree_node = tree_node.node_link
return cond_pats
def mine_tree(
in_tree: TreeNode,
header_table: dict,
min_sup: int,
pre_fix: set,
freq_item_list: list,
) -> None:
"""
Mine the FP-Tree recursively to discover frequent itemsets.
Args:
in_tree: The FP-Tree to mine.
header_table: The header table dictionary with item information.
min_sup: The minimum support threshold.
pre_fix: A set of items as a prefix for the itemsets being mined.
freq_item_list: A list to store the frequent itemsets.
Example:
>>> data_set = [
... ['A', 'B', 'C'],
... ['A', 'C'],
... ['A', 'B', 'E'],
... ['A', 'B', 'C', 'E'],
... ['B', 'E']
... ]
>>> min_sup = 2
>>> fp_tree, header_table = create_tree(data_set, min_sup)
>>> fp_tree
TreeNode('Null Set', 1, None)
>>> frequent_itemsets = []
>>> mine_tree(fp_tree, header_table, min_sup, set([]), frequent_itemsets)
>>> expe_itm = [{'C'}, {'C', 'A'}, {'E'}, {'A', 'E'}, {'E', 'B'}, {'A'}, {'B'}]
>>> all(expected in frequent_itemsets for expected in expe_itm)
True
"""
sorted_items = sorted(header_table.items(), key=lambda item_info: item_info[1][0])
big_l = [item[0] for item in sorted_items]
for base_pat in big_l:
new_freq_set = pre_fix.copy()
new_freq_set.add(base_pat)
freq_item_list.append(new_freq_set)
cond_patt_bases = find_prefix_path(base_pat, header_table[base_pat][1])
my_cond_tree, my_head = create_tree(list(cond_patt_bases), min_sup)
if my_head is not None:
# Pass header_table[base_pat][1] as node_to_test to update_header
header_table[base_pat][1] = update_header(
header_table[base_pat][1], my_cond_tree
)
mine_tree(my_cond_tree, my_head, min_sup, new_freq_set, freq_item_list)
if __name__ == "__main__":
from doctest import testmod
testmod()
data_set: list[frozenset] = [
frozenset(["bread", "milk", "cheese"]),
frozenset(["bread", "milk"]),
frozenset(["bread", "diapers"]),
frozenset(["bread", "milk", "diapers"]),
frozenset(["milk", "diapers"]),
frozenset(["milk", "cheese"]),
frozenset(["diapers", "cheese"]),
frozenset(["bread", "milk", "cheese", "diapers"]),
]
print(f"{len(data_set) = }")
fp_tree, header_table = create_tree(data_set, min_sup=3)
print(f"{fp_tree = }")
print(f"{len(header_table) = }")
freq_items: list = []
mine_tree(fp_tree, header_table, 3, set(), freq_items)
print(f"{freq_items = }")
|
Initialize a GradientBoostingClassifier.
Parameters:
- n_estimators (int): The number of weak learners to train.
- learning_rate (float): The learning rate for updating the model.
Attributes:
- n_estimators (int): The number of weak learners.
- learning_rate (float): The learning rate.
- models (list): A list to store the trained weak learners.
| import numpy as np
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
class GradientBoostingClassifier:
def __init__(self, n_estimators: int = 100, learning_rate: float = 0.1) -> None:
"""
Initialize a GradientBoostingClassifier.
Parameters:
- n_estimators (int): The number of weak learners to train.
- learning_rate (float): The learning rate for updating the model.
Attributes:
- n_estimators (int): The number of weak learners.
- learning_rate (float): The learning rate.
- models (list): A list to store the trained weak learners.
"""
self.n_estimators = n_estimators
self.learning_rate = learning_rate
self.models: list[tuple[DecisionTreeRegressor, float]] = []
def fit(self, features: np.ndarray, target: np.ndarray) -> None:
"""
Fit the GradientBoostingClassifier to the training data.
Parameters:
- features (np.ndarray): The training features.
- target (np.ndarray): The target values.
Returns:
None
>>> import numpy as np
>>> from sklearn.datasets import load_iris
>>> clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> clf.fit(X, y)
>>> # Check if the model is trained
>>> len(clf.models) == 100
True
"""
for _ in range(self.n_estimators):
# Calculate the pseudo-residuals
residuals = -self.gradient(target, self.predict(features))
# Fit a weak learner (e.g., decision tree) to the residuals
model = DecisionTreeRegressor(max_depth=1)
model.fit(features, residuals)
# Update the model by adding the weak learner with a learning rate
self.models.append((model, self.learning_rate))
def predict(self, features: np.ndarray) -> np.ndarray:
"""
Make predictions on input data.
Parameters:
- features (np.ndarray): The input data for making predictions.
Returns:
- np.ndarray: An array of binary predictions (-1 or 1).
>>> import numpy as np
>>> from sklearn.datasets import load_iris
>>> clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> clf.fit(X, y)
>>> y_pred = clf.predict(X)
>>> # Check if the predictions have the correct shape
>>> y_pred.shape == y.shape
True
"""
# Initialize predictions with zeros
predictions = np.zeros(features.shape[0])
for model, learning_rate in self.models:
predictions += learning_rate * model.predict(features)
return np.sign(predictions) # Convert to binary predictions (-1 or 1)
def gradient(self, target: np.ndarray, y_pred: np.ndarray) -> np.ndarray:
"""
Calculate the negative gradient (pseudo-residuals) for logistic loss.
Parameters:
- target (np.ndarray): The target values.
- y_pred (np.ndarray): The predicted values.
Returns:
- np.ndarray: An array of pseudo-residuals.
>>> import numpy as np
>>> clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)
>>> target = np.array([0, 1, 0, 1])
>>> y_pred = np.array([0.2, 0.8, 0.3, 0.7])
>>> residuals = clf.gradient(target, y_pred)
>>> # Check if residuals have the correct shape
>>> residuals.shape == target.shape
True
"""
return -target / (1 + np.exp(target * y_pred))
if __name__ == "__main__":
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy:.2f}")
|
Implementation of gradient descent algorithm for minimizing cost of a linear
hypothesis function.
| import numpy
# List of input, output pairs
train_data = (
((5, 2, 3), 15),
((6, 5, 9), 25),
((11, 12, 13), 41),
((1, 1, 1), 8),
((11, 12, 13), 41),
)
test_data = (((515, 22, 13), 555), ((61, 35, 49), 150))
parameter_vector = [2, 4, 1, 5]
m = len(train_data)
LEARNING_RATE = 0.009
def _error(example_no, data_set="train"):
"""
:param data_set: train data or test data
:param example_no: example number whose error has to be checked
:return: error in example pointed by example number.
"""
return calculate_hypothesis_value(example_no, data_set) - output(
example_no, data_set
)
def _hypothesis_value(data_input_tuple):
"""
Calculates hypothesis function value for a given input
:param data_input_tuple: Input tuple of a particular example
:return: Value of hypothesis function at that point.
    Note that there is a 'biased input' whose value is fixed as 1.
    It is not explicitly mentioned in the input data, but ML hypothesis functions use
    it. So we have to take care of it separately; the final
    `hyp_val += parameter_vector[0]` statement below takes care of it.
"""
hyp_val = 0
for i in range(len(parameter_vector) - 1):
hyp_val += data_input_tuple[i] * parameter_vector[i + 1]
hyp_val += parameter_vector[0]
return hyp_val
def output(example_no, data_set):
"""
:param data_set: test data or train data
:param example_no: example whose output is to be fetched
:return: output for that example
"""
if data_set == "train":
return train_data[example_no][1]
elif data_set == "test":
return test_data[example_no][1]
return None
def calculate_hypothesis_value(example_no, data_set):
"""
Calculates hypothesis value for a given example
:param data_set: test data or train_data
:param example_no: example whose hypothesis value is to be calculated
:return: hypothesis value for that example
"""
if data_set == "train":
return _hypothesis_value(train_data[example_no][0])
elif data_set == "test":
return _hypothesis_value(test_data[example_no][0])
return None
def summation_of_cost_derivative(index, end=m):
"""
Calculates the sum of cost function derivative
:param index: index wrt derivative is being calculated
:param end: value where summation ends, default is m, number of examples
:return: Returns the summation of cost derivative
Note: If index is -1, this means we are calculating summation wrt to biased
parameter.
"""
summation_value = 0
for i in range(end):
if index == -1:
summation_value += _error(i)
else:
summation_value += _error(i) * train_data[i][0][index]
return summation_value
def get_cost_derivative(index):
"""
:param index: index of the parameter vector wrt to derivative is to be calculated
:return: derivative wrt to that index
Note: If index is -1, this means we are calculating summation wrt to biased
parameter.
"""
cost_derivative_value = summation_of_cost_derivative(index, m) / m
return cost_derivative_value
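# Worked example: with parameter_vector = [2, 4, 1, 5], the hypothesis for the
# first training example (5, 2, 3) is 2 + 4*5 + 1*2 + 5*3 = 39, so its error is
# 39 - 15 = 24.  The cost derivative for the biased parameter (index == -1) is
# then simply the mean of such errors over all m examples.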
def run_gradient_descent():
global parameter_vector
# Tune these values to set a tolerance value for predicted output
absolute_error_limit = 0.000002
relative_error_limit = 0
j = 0
while True:
j += 1
temp_parameter_vector = [0, 0, 0, 0]
for i in range(len(parameter_vector)):
cost_derivative = get_cost_derivative(i - 1)
temp_parameter_vector[i] = (
parameter_vector[i] - LEARNING_RATE * cost_derivative
)
if numpy.allclose(
parameter_vector,
temp_parameter_vector,
atol=absolute_error_limit,
rtol=relative_error_limit,
):
break
parameter_vector = temp_parameter_vector
print(("Number of iterations:", j))
def test_gradient_descent():
for i in range(len(test_data)):
print(("Actual output value:", output(i, "test")))
print(("Hypothesis output:", calculate_hypothesis_value(i, "test")))
if __name__ == "__main__":
run_gradient_descent()
print("\nTesting gradient descent for a linear hypothesis function.\n")
test_gradient_descent()
|
README, Author - Anurag Kumar (mailto:anuragkumarak95@gmail.com)

Requirements:
  - sklearn
  - numpy
  - matplotlib

Python: 3.5

Inputs:
  - X, a 2D numpy array of features
  - k, number of clusters to create
  - initial_centroids, initial centroid values generated by a utility function (see usage below)
  - maxiter, maximum number of iterations to process
  - heterogeneity, an empty list that will be filled with heterogeneity values if passed to the kmeans function

Usage:
  1. Define the 'k' value, the 'X' features array and an empty 'heterogeneity' list.
  2. Create the initial centroids:
        initial_centroids = get_initial_centroids(X, k, seed=0)
     (seed is used for reproducible centroid generation; None for randomness, default=None)
  3. Find centroids and clusters using the kmeans function:
        centroids, cluster_assignment = kmeans(X, k, initial_centroids, maxiter=400,
                                               record_heterogeneity=heterogeneity, verbose=True)
     (verbose controls whether to print logs to the console, default=False)
  4. Plot the loss function and the heterogeneity values for every iteration saved in the heterogeneity list:
        plot_heterogeneity(heterogeneity, k)
  5. Transfer the dataframe into excel format; it must have a feature called 'Clust' with
     the k-means clustering numbers in it.
| import warnings
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.metrics import pairwise_distances
warnings.filterwarnings("ignore")
TAG = "K-MEANS-CLUST/ "
def get_initial_centroids(data, k, seed=None):
"""Randomly choose k data points as initial centroids"""
if seed is not None: # useful for obtaining consistent results
np.random.seed(seed)
n = data.shape[0] # number of data points
# Pick K indices from range [0, N).
rand_indices = np.random.randint(0, n, k)
# Keep centroids as dense format, as many entries will be nonzero due to averaging.
# As long as at least one document in a cluster contains a word,
# it will carry a nonzero weight in the TF-IDF vector of the centroid.
centroids = data[rand_indices, :]
return centroids
def centroid_pairwise_dist(x, centroids):
return pairwise_distances(x, centroids, metric="euclidean")
def assign_clusters(data, centroids):
# Compute distances between each data point and the set of centroids:
# Fill in the blank (RHS only)
distances_from_centroids = centroid_pairwise_dist(data, centroids)
# Compute cluster assignments for each data point:
# Fill in the blank (RHS only)
cluster_assignment = np.argmin(distances_from_centroids, axis=1)
return cluster_assignment
def revise_centroids(data, k, cluster_assignment):
new_centroids = []
for i in range(k):
# Select all data points that belong to cluster i. Fill in the blank (RHS only)
member_data_points = data[cluster_assignment == i]
# Compute the mean of the data points. Fill in the blank (RHS only)
centroid = member_data_points.mean(axis=0)
new_centroids.append(centroid)
new_centroids = np.array(new_centroids)
return new_centroids
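# Quick sanity check: with data = np.array([[0.0, 0.0], [0.0, 2.0], [10.0, 10.0]])
# and cluster_assignment = np.array([0, 0, 1]), revise_centroids(data, 2, ...)
# returns [[0.0, 1.0], [10.0, 10.0]] -- each row is the mean of its member points.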
def compute_heterogeneity(data, k, centroids, cluster_assignment):
heterogeneity = 0.0
for i in range(k):
# Select all data points that belong to cluster i. Fill in the blank (RHS only)
member_data_points = data[cluster_assignment == i, :]
if member_data_points.shape[0] > 0: # check if i-th cluster is non-empty
# Compute distances from centroid to data points (RHS only)
distances = pairwise_distances(
member_data_points, [centroids[i]], metric="euclidean"
)
squared_distances = distances**2
heterogeneity += np.sum(squared_distances)
return heterogeneity
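# Note: heterogeneity is the k-means objective itself -- the within-cluster sum
# of squared distances.  For two points [0, 0] and [0, 2] assigned to centroid
# [0, 1] it is 1 + 1 = 2, and it can only decrease (or stay equal) as the
# iterations proceed.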
def plot_heterogeneity(heterogeneity, k):
plt.figure(figsize=(7, 4))
plt.plot(heterogeneity, linewidth=4)
plt.xlabel("# Iterations")
plt.ylabel("Heterogeneity")
plt.title(f"Heterogeneity of clustering over time, K={k:d}")
plt.rcParams.update({"font.size": 16})
plt.show()
def kmeans(
data, k, initial_centroids, maxiter=500, record_heterogeneity=None, verbose=False
):
"""Runs k-means on given data and initial set of centroids.
maxiter: maximum number of iterations to run.(default=500)
record_heterogeneity: (optional) a list, to store the history of heterogeneity
as function of iterations
if None, do not store the history.
verbose: if True, print how many data points changed their cluster labels in
each iteration"""
centroids = initial_centroids[:]
prev_cluster_assignment = None
for itr in range(maxiter):
if verbose:
print(itr, end="")
# 1. Make cluster assignments using nearest centroids
cluster_assignment = assign_clusters(data, centroids)
# 2. Compute a new centroid for each of the k clusters, averaging all data
# points assigned to that cluster.
centroids = revise_centroids(data, k, cluster_assignment)
# Check for convergence: if none of the assignments changed, stop
if (
prev_cluster_assignment is not None
and (prev_cluster_assignment == cluster_assignment).all()
):
break
# Print number of new assignments
if prev_cluster_assignment is not None:
num_changed = np.sum(prev_cluster_assignment != cluster_assignment)
if verbose:
print(
f" {num_changed:5d} elements changed their cluster assignment."
)
# Record heterogeneity convergence metric
if record_heterogeneity is not None:
# YOUR CODE HERE
score = compute_heterogeneity(data, k, centroids, cluster_assignment)
record_heterogeneity.append(score)
prev_cluster_assignment = cluster_assignment[:]
return centroids, cluster_assignment
# Mock test below
if False:  # Change this to True to run this mock test case.
from sklearn import datasets as ds
dataset = ds.load_iris()
k = 3
heterogeneity = []
initial_centroids = get_initial_centroids(dataset["data"], k, seed=0)
centroids, cluster_assignment = kmeans(
dataset["data"],
k,
initial_centroids,
maxiter=400,
record_heterogeneity=heterogeneity,
verbose=True,
)
plot_heterogeneity(heterogeneity, k)
def report_generator(
predicted: pd.DataFrame, clustering_variables: np.ndarray, fill_missing_report=None
) -> pd.DataFrame:
"""
Generate a clustering report given these two arguments:
predicted - dataframe with predicted cluster column
fill_missing_report - dictionary of rules on how we are going to fill in missing
values for final generated report (not included in modelling);
>>> predicted = pd.DataFrame()
>>> predicted['numbers'] = [1, 2, 3]
>>> predicted['col1'] = [0.5, 2.5, 4.5]
>>> predicted['col2'] = [100, 200, 300]
>>> predicted['col3'] = [10, 20, 30]
>>> predicted['Cluster'] = [1, 1, 2]
>>> report_generator(predicted, ['col1', 'col2'], 0)
Features Type Mark 1 2
0 # of Customers ClusterSize False 2.000000 1.000000
1 % of Customers ClusterProportion False 0.666667 0.333333
2 col1 mean_with_zeros True 1.500000 4.500000
3 col2 mean_with_zeros True 150.000000 300.000000
4 numbers mean_with_zeros False 1.500000 3.000000
.. ... ... ... ... ...
99 dummy 5% False 1.000000 1.000000
100 dummy 95% False 1.000000 1.000000
101 dummy stdev False 0.000000 NaN
102 dummy mode False 1.000000 1.000000
103 dummy median False 1.000000 1.000000
<BLANKLINE>
[104 rows x 5 columns]
"""
# Fill missing values with given rules
if fill_missing_report:
predicted = predicted.fillna(value=fill_missing_report)
predicted["dummy"] = 1
numeric_cols = predicted.select_dtypes(np.number).columns
report = (
predicted.groupby(["Cluster"])[ # construct report dataframe
numeric_cols
] # group by cluster number
.agg(
[
("sum", "sum"),
("mean_with_zeros", lambda x: np.mean(np.nan_to_num(x))),
("mean_without_zeros", lambda x: x.replace(0, np.NaN).mean()),
(
"mean_25-75",
lambda x: np.mean(
np.nan_to_num(
sorted(x)[
round(len(x) * 25 / 100) : round(len(x) * 75 / 100)
]
)
),
),
("mean_with_na", "mean"),
("min", lambda x: x.min()),
("5%", lambda x: x.quantile(0.05)),
("25%", lambda x: x.quantile(0.25)),
("50%", lambda x: x.quantile(0.50)),
("75%", lambda x: x.quantile(0.75)),
("95%", lambda x: x.quantile(0.95)),
("max", lambda x: x.max()),
("count", lambda x: x.count()),
("stdev", lambda x: x.std()),
("mode", lambda x: x.mode()[0]),
("median", lambda x: x.median()),
("# > 0", lambda x: (x > 0).sum()),
]
)
.T.reset_index()
.rename(index=str, columns={"level_0": "Features", "level_1": "Type"})
) # rename columns
# calculate the size of cluster(count of clientID's)
# avoid SettingWithCopyWarning
clustersize = report[
(report["Features"] == "dummy") & (report["Type"] == "count")
].copy()
# rename created predicted cluster to match report column names
clustersize.Type = "ClusterSize"
clustersize.Features = "# of Customers"
# calculating the proportion of cluster
clusterproportion = pd.DataFrame(
clustersize.iloc[:, 2:].to_numpy() / clustersize.iloc[:, 2:].to_numpy().sum()
)
# rename created predicted cluster to match report column names
clusterproportion["Type"] = "% of Customers"
clusterproportion["Features"] = "ClusterProportion"
cols = clusterproportion.columns.tolist()
cols = cols[-2:] + cols[:-2]
clusterproportion = clusterproportion[cols] # rearrange columns to match report
clusterproportion.columns = report.columns
# generating dataframe with count of nan values
a = pd.DataFrame(
abs(
report[report["Type"] == "count"].iloc[:, 2:].to_numpy()
- clustersize.iloc[:, 2:].to_numpy()
)
)
a["Features"] = 0
a["Type"] = "# of nan"
# filling values in order to match report
a.Features = report[report["Type"] == "count"].Features.tolist()
cols = a.columns.tolist()
cols = cols[-2:] + cols[:-2]
a = a[cols] # rearrange columns to match report
a.columns = report.columns # rename columns to match report
# drop count values except for cluster size
report = report.drop(report[report.Type == "count"].index)
# concat report with cluster size and nan values
report = pd.concat([report, a, clustersize, clusterproportion], axis=0)
report["Mark"] = report["Features"].isin(clustering_variables)
cols = report.columns.tolist()
cols = cols[0:2] + cols[-1:] + cols[2:-1]
report = report[cols]
sorter1 = {
"ClusterSize": 9,
"ClusterProportion": 8,
"mean_with_zeros": 7,
"mean_with_na": 6,
"max": 5,
"50%": 4,
"min": 3,
"25%": 2,
"75%": 1,
"# of nan": 0,
"# > 0": -1,
"sum_with_na": -2,
}
report = (
report.assign(
Sorter1=lambda x: x.Type.map(sorter1),
Sorter2=lambda x: list(reversed(range(len(x)))),
)
.sort_values(["Sorter1", "Mark", "Sorter2"], ascending=False)
.drop(["Sorter1", "Sorter2"], axis=1)
)
report.columns.name = ""
report = report.reset_index()
report = report.drop(columns=["index"])
return report
if __name__ == "__main__":
import doctest
doctest.testmod()
|
k-Nearest Neighbours (kNN) is a simple non-parametric supervised learning algorithm used for classification. Given some labelled training data, a given point is classified using its k nearest neighbours according to some distance metric. The most commonly occurring label among the neighbours becomes the label of the given point. In effect, the label of the given point is decided by a majority vote.

This implementation uses the commonly used Euclidean distance metric, but other distance metrics can also be used.

Reference: https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm
| from collections import Counter
from heapq import nsmallest
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
class KNN:
def __init__(
self,
train_data: np.ndarray[float],
train_target: np.ndarray[int],
class_labels: list[str],
) -> None:
"""
Create a kNN classifier using the given training data and class labels
"""
        # Store the samples as a list so the classifier can be queried more than
        # once (a bare zip object would be exhausted after the first call)
        self.data = list(zip(train_data, train_target))
self.labels = class_labels
@staticmethod
def _euclidean_distance(a: np.ndarray[float], b: np.ndarray[float]) -> float:
"""
Calculate the Euclidean distance between two points
>>> KNN._euclidean_distance(np.array([0, 0]), np.array([3, 4]))
5.0
>>> KNN._euclidean_distance(np.array([1, 2, 3]), np.array([1, 8, 11]))
10.0
"""
return np.linalg.norm(a - b)
def classify(self, pred_point: np.ndarray[float], k: int = 5) -> str:
"""
Classify a given point using the kNN algorithm
>>> train_X = np.array(
... [[0, 0], [1, 0], [0, 1], [0.5, 0.5], [3, 3], [2, 3], [3, 2]]
... )
>>> train_y = np.array([0, 0, 0, 0, 1, 1, 1])
>>> classes = ['A', 'B']
>>> knn = KNN(train_X, train_y, classes)
>>> point = np.array([1.2, 1.2])
>>> knn.classify(point)
'A'
"""
# Distances of all points from the point to be classified
distances = (
(self._euclidean_distance(data_point[0], pred_point), data_point[1])
for data_point in self.data
)
# Choosing k points with the shortest distances
votes = (i[1] for i in nsmallest(k, distances))
# Most commonly occurring class is the one into which the point is classified
result = Counter(votes).most_common(1)[0][0]
return self.labels[result]
if __name__ == "__main__":
import doctest
doctest.testmod()
iris = datasets.load_iris()
X = np.array(iris["data"])
y = np.array(iris["target"])
iris_classes = iris["target_names"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
iris_point = np.array([4.4, 3.1, 1.3, 1.4])
classifier = KNN(X_train, y_train, iris_classes)
print(classifier.classify(iris_point, k=3))
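    # Illustrative extension (a minimal sketch, not part of the original module):
    # score the classifier on the held-out split created by train_test_split.
    # This relies on KNN storing its training data as a list, so classify()
    # can be called once per test point.
    correct = sum(
        classifier.classify(test_point) == iris_classes[expected]
        for test_point, expected in zip(X_test, y_test)
    )
    print(f"Accuracy on held-out data: {correct / len(y_test):.2%}")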
|
Linear Discriminant Analysis

Assumptions about data:
  1. The input variables have a gaussian distribution.
  2. The variance calculated for each input variable by class grouping is the same.
  3. The mix of classes in your training set is representative of the problem.

Learning the model:
The LDA model requires the estimation of statistics from the training data:
  1. Mean of each input value for each class.
  2. Probability of an instance belonging to each class.
  3. Covariance of the input data for each class.

Calculate the class means:
    mean(x) = (1 / n) * sum(x_i)  for i = 1 to n

Calculate the class probabilities:
    P(y = 0) = count(y = 0) / (count(y = 0) + count(y = 1))
    P(y = 1) = count(y = 1) / (count(y = 0) + count(y = 1))

Calculate the variance:
We can calculate the variance for the dataset in two steps:
  1. Calculate the squared difference for each input variable from the group mean.
  2. Calculate the mean of the squared differences.
    SquaredDifference = (x - mean_k) ** 2
    Variance = (1 / (count(x) - count(classes))) * sum(SquaredDifference(x_i))

Making predictions:
    discriminant(x) = x * (mean / variance) - (mean ** 2 / (2 * variance)) + ln(probability)
After calculating the discriminant value for each class, the class with the largest
discriminant value is taken as the prediction.

Author: @EverLookNeverSee
| from collections.abc import Callable
from math import log
from os import name, system
from random import gauss, seed
from typing import TypeVar
# Make a training dataset drawn from a gaussian distribution
def gaussian_distribution(mean: float, std_dev: float, instance_count: int) -> list:
"""
Generate gaussian distribution instances based-on given mean and standard deviation
:param mean: mean value of class
:param std_dev: value of standard deviation entered by usr or default value of it
:param instance_count: instance number of class
:return: a list containing generated values based-on given mean, std_dev and
instance_count
>>> gaussian_distribution(5.0, 1.0, 20) # doctest: +NORMALIZE_WHITESPACE
[6.288184753155463, 6.4494456086997705, 5.066335808938262, 4.235456349028368,
3.9078267848958586, 5.031334516831717, 3.977896829989127, 3.56317055489747,
5.199311976483754, 5.133374604658605, 5.546468300338232, 4.086029056264687,
5.005005283626573, 4.935258239627312, 3.494170998739258, 5.537997178661033,
5.320711100998849, 7.3891120432406865, 5.202969177309964, 4.855297691835079]
"""
seed(1)
return [gauss(mean, std_dev) for _ in range(instance_count)]
# Make corresponding Y flags to detecting classes
def y_generator(class_count: int, instance_count: list) -> list:
"""
Generate y values for corresponding classes
:param class_count: Number of classes(data groupings) in dataset
:param instance_count: number of instances in class
:return: corresponding values for data groupings in dataset
>>> y_generator(1, [10])
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
>>> y_generator(2, [5, 10])
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
>>> y_generator(4, [10, 5, 15, 20]) # doctest: +NORMALIZE_WHITESPACE
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
"""
return [k for k in range(class_count) for _ in range(instance_count[k])]
# Calculate the class means
def calculate_mean(instance_count: int, items: list) -> float:
"""
Calculate given class mean
:param instance_count: Number of instances in class
:param items: items that related to specific class(data grouping)
:return: calculated actual mean of considered class
>>> items = gaussian_distribution(5.0, 1.0, 20)
>>> calculate_mean(len(items), items)
5.011267842911003
"""
# the sum of all items divided by number of instances
return sum(items) / instance_count
# Calculate the class probabilities
def calculate_probabilities(instance_count: int, total_count: int) -> float:
"""
Calculate the probability that a given instance will belong to which class
:param instance_count: number of instances in class
:param total_count: the number of all instances
:return: value of probability for considered class
>>> calculate_probabilities(20, 60)
0.3333333333333333
>>> calculate_probabilities(30, 100)
0.3
"""
# number of instances in specific class divided by number of all instances
return instance_count / total_count
# Calculate the variance
def calculate_variance(items: list, means: list, total_count: int) -> float:
"""
Calculate the variance
:param items: a list containing all items(gaussian distribution of all classes)
:param means: a list containing real mean values of each class
:param total_count: the number of all instances
:return: calculated variance for considered dataset
>>> items = gaussian_distribution(5.0, 1.0, 20)
>>> means = [5.011267842911003]
>>> total_count = 20
>>> calculate_variance([items], means, total_count)
0.9618530973487491
"""
squared_diff = [] # An empty list to store all squared differences
# iterate over number of elements in items
for i in range(len(items)):
# for loop iterates over number of elements in inner layer of items
for j in range(len(items[i])):
# appending squared differences to 'squared_diff' list
squared_diff.append((items[i][j] - means[i]) ** 2)
# one divided by (the number of all instances - number of classes) multiplied by
# sum of all squared differences
n_classes = len(means) # Number of classes in dataset
return 1 / (total_count - n_classes) * sum(squared_diff)
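# Worked example: this is the pooled variance used by LDA -- the squared
# deviations of every sample from its *own* class mean, summed over all classes
# and divided by (total_count - n_classes).  E.g. items = [[4.0, 6.0]],
# means = [5.0], total_count = 2 gives (1 + 1) / (2 - 1) = 2.0.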
# Making predictions
def predict_y_values(
x_items: list, means: list, variance: float, probabilities: list
) -> list:
"""This function predicts new indexes(groups for our data)
:param x_items: a list containing all items(gaussian distribution of all classes)
:param means: a list containing real mean values of each class
:param variance: calculated value of variance by calculate_variance function
:param probabilities: a list containing all probabilities of classes
:return: a list containing predicted Y values
>>> x_items = [[6.288184753155463, 6.4494456086997705, 5.066335808938262,
... 4.235456349028368, 3.9078267848958586, 5.031334516831717,
... 3.977896829989127, 3.56317055489747, 5.199311976483754,
... 5.133374604658605, 5.546468300338232, 4.086029056264687,
... 5.005005283626573, 4.935258239627312, 3.494170998739258,
... 5.537997178661033, 5.320711100998849, 7.3891120432406865,
... 5.202969177309964, 4.855297691835079], [11.288184753155463,
... 11.44944560869977, 10.066335808938263, 9.235456349028368,
... 8.907826784895859, 10.031334516831716, 8.977896829989128,
... 8.56317055489747, 10.199311976483754, 10.133374604658606,
... 10.546468300338232, 9.086029056264687, 10.005005283626572,
... 9.935258239627313, 8.494170998739259, 10.537997178661033,
... 10.320711100998848, 12.389112043240686, 10.202969177309964,
... 9.85529769183508], [16.288184753155463, 16.449445608699772,
... 15.066335808938263, 14.235456349028368, 13.907826784895859,
... 15.031334516831716, 13.977896829989128, 13.56317055489747,
... 15.199311976483754, 15.133374604658606, 15.546468300338232,
... 14.086029056264687, 15.005005283626572, 14.935258239627313,
... 13.494170998739259, 15.537997178661033, 15.320711100998848,
... 17.389112043240686, 15.202969177309964, 14.85529769183508]]
>>> means = [5.011267842911003, 10.011267842911003, 15.011267842911002]
>>> variance = 0.9618530973487494
>>> probabilities = [0.3333333333333333, 0.3333333333333333, 0.3333333333333333]
>>> predict_y_values(x_items, means, variance,
... probabilities) # doctest: +NORMALIZE_WHITESPACE
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2]
"""
# An empty list to store generated discriminant values of all items in dataset for
# each class
results = []
# for loop iterates over number of elements in list
for i in range(len(x_items)):
# for loop iterates over number of inner items of each element
for j in range(len(x_items[i])):
temp = [] # to store all discriminant values of each item as a list
# for loop iterates over number of classes we have in our dataset
for k in range(len(x_items)):
# appending values of discriminants for each class to 'temp' list
temp.append(
x_items[i][j] * (means[k] / variance)
- (means[k] ** 2 / (2 * variance))
+ log(probabilities[k])
)
# appending discriminant values of each item to 'results' list
results.append(temp)
return [result.index(max(result)) for result in results]
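# Note: the expression built above is the LDA discriminant
#     delta_k(x) = x * mu_k / sigma**2 - mu_k**2 / (2 * sigma**2) + ln(P(k))
# and each item is assigned to the class k with the largest delta_k(x).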
# Calculating Accuracy
def accuracy(actual_y: list, predicted_y: list) -> float:
"""
Calculate the value of accuracy based-on predictions
:param actual_y:a list containing initial Y values generated by 'y_generator'
function
:param predicted_y: a list containing predicted Y values generated by
'predict_y_values' function
:return: percentage of accuracy
>>> actual_y = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1,
... 1, 1 ,1 ,1 ,1 ,1 ,1]
>>> predicted_y = [0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0,
... 0, 0, 1, 1, 1, 0, 1, 1, 1]
>>> accuracy(actual_y, predicted_y)
50.0
>>> actual_y = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1,
... 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
>>> predicted_y = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1,
... 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
>>> accuracy(actual_y, predicted_y)
100.0
"""
# iterate over one element of each list at a time (zip mode)
# prediction is correct if actual Y value equals to predicted Y value
correct = sum(1 for i, j in zip(actual_y, predicted_y) if i == j)
# percentage of accuracy equals to number of correct predictions divided by number
# of all data and multiplied by 100
return (correct / len(actual_y)) * 100
num = TypeVar("num")
def valid_input(
input_type: Callable[[object], num], # Usually float or int
input_msg: str,
err_msg: str,
condition: Callable[[num], bool] = lambda x: True,
default: str | None = None,
) -> num:
"""
Ask for user value and validate that it fulfill a condition.
:input_type: user input expected type of value
:input_msg: message to show user in the screen
:err_msg: message to show in the screen in case of error
:condition: function that represents the condition that user input is valid.
:default: Default value in case the user does not type anything
:return: user's input
"""
    while True:
        raw_value = input(input_msg).strip() or default
        try:
            user_input = input_type(raw_value)
            if condition(user_input):
                return user_input
            else:
                print(f"{user_input}: {err_msg}")
                continue
        except (ValueError, TypeError):
            # TypeError is caught as well so an empty input with default=None
            # does not crash the prompt loop
            print(f"{raw_value}: Incorrect input type, expected {input_type.__name__!r}")
# Main Function
def main():
"""This function starts execution phase"""
while True:
print(" Linear Discriminant Analysis ".center(50, "*"))
print("*" * 50, "\n")
print("First of all we should specify the number of classes that")
print("we want to generate as training dataset")
# Trying to get number of classes
n_classes = valid_input(
input_type=int,
condition=lambda x: x > 0,
input_msg="Enter the number of classes (Data Groupings): ",
err_msg="Number of classes should be positive!",
)
print("-" * 100)
# Trying to get the value of standard deviation
std_dev = valid_input(
input_type=float,
condition=lambda x: x >= 0,
input_msg=(
"Enter the value of standard deviation"
"(Default value is 1.0 for all classes): "
),
err_msg="Standard deviation should not be negative!",
default="1.0",
)
print("-" * 100)
# Trying to get number of instances in classes and theirs means to generate
# dataset
counts = [] # An empty list to store instance counts of classes in dataset
for i in range(n_classes):
user_count = valid_input(
input_type=int,
condition=lambda x: x > 0,
input_msg=(f"Enter The number of instances for class_{i+1}: "),
err_msg="Number of instances should be positive!",
)
counts.append(user_count)
print("-" * 100)
# An empty list to store values of user-entered means of classes
user_means = []
for a in range(n_classes):
user_mean = valid_input(
input_type=float,
input_msg=(f"Enter the value of mean for class_{a+1}: "),
err_msg="This is an invalid value.",
)
user_means.append(user_mean)
print("-" * 100)
print("Standard deviation: ", std_dev)
# print out the number of instances in classes in separated line
for i, count in enumerate(counts, 1):
print(f"Number of instances in class_{i} is: {count}")
print("-" * 100)
# print out mean values of classes separated line
for i, user_mean in enumerate(user_means, 1):
print(f"Mean of class_{i} is: {user_mean}")
print("-" * 100)
# Generating training dataset drawn from gaussian distribution
x = [
gaussian_distribution(user_means[j], std_dev, counts[j])
for j in range(n_classes)
]
print("Generated Normal Distribution: \n", x)
print("-" * 100)
# Generating Ys to detecting corresponding classes
y = y_generator(n_classes, counts)
print("Generated Corresponding Ys: \n", y)
print("-" * 100)
# Calculating the value of actual mean for each class
actual_means = [calculate_mean(counts[k], x[k]) for k in range(n_classes)]
# for loop iterates over number of elements in 'actual_means' list and print
# out them in separated line
for i, actual_mean in enumerate(actual_means, 1):
print(f"Actual(Real) mean of class_{i} is: {actual_mean}")
print("-" * 100)
# Calculating the value of probabilities for each class
probabilities = [
calculate_probabilities(counts[i], sum(counts)) for i in range(n_classes)
]
# for loop iterates over number of elements in 'probabilities' list and print
# out them in separated line
for i, probability in enumerate(probabilities, 1):
print(f"Probability of class_{i} is: {probability}")
print("-" * 100)
# Calculating the values of variance for each class
variance = calculate_variance(x, actual_means, sum(counts))
print("Variance: ", variance)
print("-" * 100)
# Predicting Y values
# storing predicted Y values in 'pre_indexes' variable
pre_indexes = predict_y_values(x, actual_means, variance, probabilities)
print("-" * 100)
# Calculating Accuracy of the model
print(f"Accuracy: {accuracy(y, pre_indexes)}")
print("-" * 100)
print(" DONE ".center(100, "+"))
if input("Press any key to restart or 'q' for quit: ").strip().lower() == "q":
print("\n" + "GoodBye!".center(100, "-") + "\n")
break
system("cls" if name == "nt" else "clear") # noqa: S605
if __name__ == "__main__":
main()
|
Linear regression is the most basic type of regression commonly used for predictive analysis. The idea is pretty simple: we have a dataset and we have features associated with it. Features should be chosen very cautiously as they determine how much our model will be able to make future predictions. We try to set the weights of these features, over many iterations, so that they best fit our dataset. In this particular code, I used a CSGO dataset (ADR vs Rating). We try to best fit a line through the dataset and estimate the parameters. | import numpy as np
import requests
def collect_dataset():
"""Collect dataset of CSGO
The dataset contains ADR vs Rating of a Player
:return : dataset obtained from the link, as matrix
"""
response = requests.get(
"https://raw.githubusercontent.com/yashLadha/The_Math_of_Intelligence/"
"master/Week1/ADRvsRating.csv"
)
lines = response.text.splitlines()
data = []
for item in lines:
item = item.split(",")
data.append(item)
data.pop(0) # This is for removing the labels from the list
dataset = np.matrix(data)
return dataset
def run_steep_gradient_descent(data_x, data_y, len_data, alpha, theta):
"""Run steep gradient descent and updates the Feature vector accordingly_
:param data_x : contains the dataset
:param data_y : contains the output associated with each data-entry
:param len_data : length of the data_
:param alpha : Learning rate of the model
:param theta : Feature vector (weight's for our model)
;param return : Updated Feature's, using
curr_features - alpha_ * gradient(w.r.t. feature)
"""
n = len_data
prod = np.dot(theta, data_x.transpose())
prod -= data_y.transpose()
sum_grad = np.dot(prod, data_x)
theta = theta - (alpha / n) * sum_grad
return theta
def sum_of_square_error(data_x, data_y, len_data, theta):
"""Return sum of square error for error calculation
:param data_x : contains our dataset
:param data_y : contains the output (result vector)
:param len_data : len of the dataset
:param theta : contains the feature vector
    :return : sum of square error computed from given features
"""
prod = np.dot(theta, data_x.transpose())
prod -= data_y.transpose()
sum_elem = np.sum(np.square(prod))
error = sum_elem / (2 * len_data)
return error
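# Note: the error computed above is the usual half mean squared error,
#     J(theta) = (1 / (2 * m)) * sum((theta . x_i - y_i) ** 2),
# whose gradient drives the update in run_steep_gradient_descent.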
def run_linear_regression(data_x, data_y):
"""Implement Linear regression over the dataset
:param data_x : contains our dataset
:param data_y : contains the output (result vector)
:return : feature for line of best fit (Feature vector)
"""
iterations = 100000
alpha = 0.0001550
no_features = data_x.shape[1]
len_data = data_x.shape[0] - 1
theta = np.zeros((1, no_features))
for i in range(iterations):
theta = run_steep_gradient_descent(data_x, data_y, len_data, alpha, theta)
error = sum_of_square_error(data_x, data_y, len_data, theta)
print(f"At Iteration {i + 1} - Error is {error:.5f}")
return theta
def mean_absolute_error(predicted_y, original_y):
"""Return sum of square error for error calculation
:param predicted_y : contains the output of prediction (result vector)
:param original_y : contains values of expected outcome
:return : mean absolute error computed from given feature's
"""
total = sum(abs(y - predicted_y[i]) for i, y in enumerate(original_y))
return total / len(original_y)
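# Example:
#     mean_absolute_error(predicted_y=[3.0, 2.0], original_y=[3.5, 1.0])
# gives (|3.5 - 3.0| + |1.0 - 2.0|) / 2 = 0.75.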
def main():
"""Driver function"""
data = collect_dataset()
len_data = data.shape[0]
data_x = np.c_[np.ones(len_data), data[:, :-1]].astype(float)
data_y = data[:, -1].astype(float)
theta = run_linear_regression(data_x, data_y)
len_result = theta.shape[1]
print("Resultant Feature vector : ")
for i in range(len_result):
print(f"{theta[0, i]:.5f}")
if __name__ == "__main__":
main()
|
Locally weighted linear regression, also called local regression, is a type of non-parametric linear regression that prioritizes data closest to a given prediction point. The algorithm estimates the vector of model coefficients β using weighted least squares regression:

    β = (XᵀWX)⁻¹ (XᵀWy)

where X is the design matrix, y is the response vector, and W is the diagonal weight matrix.

This implementation calculates wᵢ, the weight of the i-th training sample, using the Gaussian weight:

    wᵢ = exp(-‖xᵢ - x‖² / (2τ²))

where xᵢ is the i-th training sample, x is the prediction point, τ is the "bandwidth", and ‖x‖ is the Euclidean norm (also called the 2-norm or the L² norm).

The bandwidth τ controls how quickly the weight of a training sample decreases as its distance from the prediction point increases. One can think of the Gaussian weight as a bell curve centered around the prediction point: a training sample is weighted lower if it's farther from the center, and τ controls the spread of the bell curve.

Other types of locally weighted regression such as locally estimated scatterplot smoothing (LOESS) typically use different weight functions.

References:
  - https://en.wikipedia.org/wiki/Local_regression
  - https://en.wikipedia.org/wiki/Weighted_least_squares
  - https://cs229.stanford.edu/notes2022fall/main_notes.pdf
| import matplotlib.pyplot as plt
import numpy as np
def weight_matrix(point: np.ndarray, x_train: np.ndarray, tau: float) -> np.ndarray:
"""
Calculate the weight of every point in the training data around a given
prediction point
Args:
point: x-value at which the prediction is being made
x_train: ndarray of x-values for training
tau: bandwidth value, controls how quickly the weight of training values
decreases as the distance from the prediction point increases
Returns:
m x m weight matrix around the prediction point, where m is the size of
the training set
>>> weight_matrix(
... np.array([1., 1.]),
... np.array([[16.99, 10.34], [21.01,23.68], [24.59,25.69]]),
... 0.6
... )
array([[1.43807972e-207, 0.00000000e+000, 0.00000000e+000],
[0.00000000e+000, 0.00000000e+000, 0.00000000e+000],
[0.00000000e+000, 0.00000000e+000, 0.00000000e+000]])
"""
m = len(x_train) # Number of training samples
weights = np.eye(m) # Initialize weights as identity matrix
for j in range(m):
diff = point - x_train[j]
weights[j, j] = np.exp(diff @ diff.T / (-2.0 * tau**2))
return weights
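# Note: each diagonal entry built above is the Gaussian weight
#     w_j = exp(-||x_j - x||**2 / (2 * tau**2)),
# so as tau grows, every weight tends to 1 and the fit approaches ordinary
# (unweighted) least squares.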
def local_weight(
point: np.ndarray, x_train: np.ndarray, y_train: np.ndarray, tau: float
) -> np.ndarray:
"""
Calculate the local weights at a given prediction point using the weight
matrix for that point
Args:
point: x-value at which the prediction is being made
x_train: ndarray of x-values for training
y_train: ndarray of y-values for training
tau: bandwidth value, controls how quickly the weight of training values
decreases as the distance from the prediction point increases
Returns:
ndarray of local weights
>>> local_weight(
... np.array([1., 1.]),
... np.array([[16.99, 10.34], [21.01,23.68], [24.59,25.69]]),
... np.array([[1.01, 1.66, 3.5]]),
... 0.6
... )
array([[0.00873174],
[0.08272556]])
"""
weight_mat = weight_matrix(point, x_train, tau)
weight = np.linalg.inv(x_train.T @ weight_mat @ x_train) @ (
x_train.T @ weight_mat @ y_train.T
)
return weight
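# Note: this evaluates the weighted normal equation
#     beta = (X^T W X)^(-1) (X^T W y)
# per prediction point, so the model is re-fit locally for every x.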
def local_weight_regression(
x_train: np.ndarray, y_train: np.ndarray, tau: float
) -> np.ndarray:
"""
Calculate predictions for each point in the training data
Args:
x_train: ndarray of x-values for training
y_train: ndarray of y-values for training
tau: bandwidth value, controls how quickly the weight of training values
decreases as the distance from the prediction point increases
Returns:
ndarray of predictions
>>> local_weight_regression(
... np.array([[16.99, 10.34], [21.01, 23.68], [24.59, 25.69]]),
... np.array([[1.01, 1.66, 3.5]]),
... 0.6
... )
array([1.07173261, 1.65970737, 3.50160179])
"""
y_pred = np.zeros(len(x_train)) # Initialize array of predictions
for i, item in enumerate(x_train):
y_pred[i] = np.dot(item, local_weight(item, x_train, y_train, tau)).item()
return y_pred
def load_data(
dataset_name: str, x_name: str, y_name: str
) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
"""
Load data from seaborn and split it into x and y points
>>> pass # No doctests, function is for demo purposes only
"""
import seaborn as sns
data = sns.load_dataset(dataset_name)
x_data = np.array(data[x_name])
y_data = np.array(data[y_name])
one = np.ones(len(y_data))
# pairing elements of one and x_data
x_train = np.column_stack((one, x_data))
return x_train, x_data, y_data
def plot_preds(
x_train: np.ndarray,
preds: np.ndarray,
x_data: np.ndarray,
y_data: np.ndarray,
x_name: str,
y_name: str,
) -> None:
"""
Plot predictions and display the graph
>>> pass # No doctests, function is for demo purposes only
"""
x_train_sorted = np.sort(x_train, axis=0)
plt.scatter(x_data, y_data, color="blue")
plt.plot(
x_train_sorted[:, 1],
preds[x_train[:, 1].argsort(0)],
color="yellow",
linewidth=5,
)
plt.title("Local Weighted Regression")
plt.xlabel(x_name)
plt.ylabel(y_name)
plt.show()
if __name__ == "__main__":
import doctest
doctest.testmod()
# Demo with a dataset from the seaborn module
training_data_x, total_bill, tip = load_data("tips", "total_bill", "tip")
predictions = local_weight_regression(training_data_x, tip, 5)
plot_preds(training_data_x, predictions, total_bill, tip, "total_bill", "tip")
|
Logistic Regression from scratch. Implements logistic regression for a classification problem. Helpful resources: Coursera ML course; https://medium.com/@martinpella/logistic-regression-from-scratch-in-python-124c5636b8ac | #!/usr/bin/python
# Logistic Regression from scratch
# In[62]:
# In[63]:
# importing all the required libraries
"""
Implementing logistic regression for classification problem
Helpful resources:
Coursera ML course
https://medium.com/@martinpella/logistic-regression-from-scratch-in-python-124c5636b8ac
"""
import numpy as np
from matplotlib import pyplot as plt
from sklearn import datasets
# get_ipython().run_line_magic('matplotlib', 'inline')
# In[67]:
# sigmoid function or logistic function is used as a hypothesis function in
# classification problems
def sigmoid_function(z: float | np.ndarray) -> float | np.ndarray:
"""
Also known as Logistic Function.
    f(x) = 1 / (1 + e⁻ˣ)
    The sigmoid function approaches a value of 1 as its input 'x' becomes
    increasingly positive, and a value of 0 as it becomes increasingly negative.
Reference: https://en.wikipedia.org/wiki/Sigmoid_function
@param z: input to the function
@returns: returns value in the range 0 to 1
Examples:
>>> sigmoid_function(4)
0.9820137900379085
>>> sigmoid_function(np.array([-3, 3]))
array([0.04742587, 0.95257413])
>>> sigmoid_function(np.array([-3, 3, 1]))
array([0.04742587, 0.95257413, 0.73105858])
>>> sigmoid_function(np.array([-0.01, -2, -1.9]))
array([0.49750002, 0.11920292, 0.13010847])
>>> sigmoid_function(np.array([-1.3, 5.3, 12]))
array([0.21416502, 0.9950332 , 0.99999386])
>>> sigmoid_function(np.array([0.01, 0.02, 4.1]))
array([0.50249998, 0.50499983, 0.9836975 ])
>>> sigmoid_function(np.array([0.8]))
array([0.68997448])
"""
return 1 / (1 + np.exp(-z))
def cost_function(h: np.ndarray, y: np.ndarray) -> float:
"""
Cost function quantifies the error between predicted and expected values.
The cost function used in Logistic Regression is called Log Loss
or Cross Entropy Function.
J(θ) = (1/m) * Σ [ -y * log(hθ(x)) - (1 - y) * log(1 - hθ(x)) ]
Where:
- J(θ) is the cost that we want to minimize during training
- m is the number of training examples
- Σ represents the summation over all training examples
- y is the actual binary label (0 or 1) for a given example
- hθ(x) is the predicted probability that x belongs to the positive class
@param h: the output of sigmoid function. It is the estimated probability
that the input example 'x' belongs to the positive class
@param y: the actual binary label associated with input example 'x'
Examples:
>>> estimations = sigmoid_function(np.array([0.3, -4.3, 8.1]))
>>> cost_function(h=estimations,y=np.array([1, 0, 1]))
0.18937868932131605
>>> estimations = sigmoid_function(np.array([4, 3, 1]))
>>> cost_function(h=estimations,y=np.array([1, 0, 0]))
1.459999655669926
>>> estimations = sigmoid_function(np.array([4, -3, -1]))
>>> cost_function(h=estimations,y=np.array([1,0,0]))
0.1266663223365915
>>> estimations = sigmoid_function(0)
>>> cost_function(h=estimations,y=np.array([1]))
0.6931471805599453
References:
- https://en.wikipedia.org/wiki/Logistic_regression
"""
return (-y * np.log(h) - (1 - y) * np.log(1 - h)).mean()
def log_likelihood(x, y, weights):
    # Log-likelihood of the logistic model: sum(y * s - ln(1 + e^s)), s = x @ weights
    scores = np.dot(x, weights)
    return np.sum(y * scores - np.log(1 + np.exp(scores)))
# here alpha is the learning rate, X is the feature matrix,y is the target matrix
def logistic_reg(alpha, x, y, max_iterations=70000):
theta = np.zeros(x.shape[1])
for iterations in range(max_iterations):
z = np.dot(x, theta)
h = sigmoid_function(z)
gradient = np.dot(x.T, h - y) / y.size
theta = theta - alpha * gradient # updating the weights
z = np.dot(x, theta)
h = sigmoid_function(z)
j = cost_function(h, y)
if iterations % 100 == 0:
print(f"loss: {j} \t") # printing the loss after every 100 iterations
return theta
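# Illustrative sanity sketch (an added example, not part of the original
# notebook): with theta = 0, the hypothesis h = sigmoid(0) = 0.5 for every
# example, so the first gradient X.T @ (h - y) / m is easy to verify by hand.
_toy_x = np.array([[1.0, 0.0], [1.0, 1.0]])  # first column acts as a bias term
_toy_y = np.array([0, 1])
_toy_h = sigmoid_function(np.dot(_toy_x, np.zeros(2)))
assert np.allclose(np.dot(_toy_x.T, _toy_h - _toy_y) / 2, [0.0, -0.25])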
# In[68]:
if __name__ == "__main__":
import doctest
doctest.testmod()
iris = datasets.load_iris()
x = iris.data[:, :2]
y = (iris.target != 0) * 1
alpha = 0.1
theta = logistic_reg(alpha, x, y, max_iterations=70000)
print("theta: ", theta) # printing the theta i.e our weights vector
def predict_prob(x):
return sigmoid_function(
np.dot(x, theta)
) # predicting the value of probability from the logistic regression algorithm
plt.figure(figsize=(10, 6))
plt.scatter(x[y == 0][:, 0], x[y == 0][:, 1], color="b", label="0")
plt.scatter(x[y == 1][:, 0], x[y == 1][:, 1], color="r", label="1")
(x1_min, x1_max) = (x[:, 0].min(), x[:, 0].max())
(x2_min, x2_max) = (x[:, 1].min(), x[:, 1].max())
(xx1, xx2) = np.meshgrid(np.linspace(x1_min, x1_max), np.linspace(x2_min, x2_max))
grid = np.c_[xx1.ravel(), xx2.ravel()]
probs = predict_prob(grid).reshape(xx1.shape)
plt.contour(xx1, xx2, probs, [0.5], linewidths=1, colors="black")
plt.legend()
plt.show()
|
NumPy implementations of common machine learning loss functions, each with doctests and input validation: binary cross-entropy, binary focal cross-entropy (Lin et al., 2018, https://arxiv.org/pdf/1708.02002.pdf), categorical cross-entropy, hinge loss, Huber loss, mean squared error, mean absolute error, mean squared logarithmic error, mean absolute percentage error, and perplexity loss. | import numpy as np
def binary_cross_entropy(
y_true: np.ndarray, y_pred: np.ndarray, epsilon: float = 1e-15
) -> float:
"""
Calculate the mean binary cross-entropy (BCE) loss between true labels and predicted
probabilities.
BCE loss quantifies dissimilarity between true labels (0 or 1) and predicted
probabilities. It's widely used in binary classification tasks.
    BCE = -(1/n) * Σ(y_true * ln(y_pred) + (1 - y_true) * ln(1 - y_pred))
Reference: https://en.wikipedia.org/wiki/Cross_entropy
Parameters:
- y_true: True binary labels (0 or 1)
- y_pred: Predicted probabilities for class 1
- epsilon: Small constant to avoid numerical instability
>>> true_labels = np.array([0, 1, 1, 0, 1])
>>> predicted_probs = np.array([0.2, 0.7, 0.9, 0.3, 0.8])
>>> binary_cross_entropy(true_labels, predicted_probs)
0.2529995012327421
>>> true_labels = np.array([0, 1, 1, 0, 1])
>>> predicted_probs = np.array([0.3, 0.8, 0.9, 0.2])
>>> binary_cross_entropy(true_labels, predicted_probs)
Traceback (most recent call last):
...
ValueError: Input arrays must have the same length.
"""
if len(y_true) != len(y_pred):
raise ValueError("Input arrays must have the same length.")
y_pred = np.clip(y_pred, epsilon, 1 - epsilon) # Clip predictions to avoid log(0)
bce_loss = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
return np.mean(bce_loss)
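# Illustrative check (an added example): for a single example with y_true = 1
# and y_pred = 0.8, the loss is simply -ln(0.8) ~= 0.2231.
assert np.isclose(binary_cross_entropy(np.array([1]), np.array([0.8])), -np.log(0.8))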
def binary_focal_cross_entropy(
y_true: np.ndarray,
y_pred: np.ndarray,
gamma: float = 2.0,
alpha: float = 0.25,
epsilon: float = 1e-15,
) -> float:
"""
Calculate the mean binary focal cross-entropy (BFCE) loss between true labels
and predicted probabilities.
BFCE loss quantifies dissimilarity between true labels (0 or 1) and predicted
probabilities. It's a variation of binary cross-entropy that addresses class
imbalance by focusing on hard examples.
    BFCE = -(1/n) * Σ(alpha * (1 - y_pred)**gamma * y_true * log(y_pred)
           + (1 - alpha) * y_pred**gamma * (1 - y_true) * log(1 - y_pred))
Reference: [Lin et al., 2018](https://arxiv.org/pdf/1708.02002.pdf)
Parameters:
- y_true: True binary labels (0 or 1).
- y_pred: Predicted probabilities for class 1.
- gamma: Focusing parameter for modulating the loss (default: 2.0).
- alpha: Weighting factor for class 1 (default: 0.25).
- epsilon: Small constant to avoid numerical instability.
>>> true_labels = np.array([0, 1, 1, 0, 1])
>>> predicted_probs = np.array([0.2, 0.7, 0.9, 0.3, 0.8])
>>> binary_focal_cross_entropy(true_labels, predicted_probs)
0.008257977659239775
>>> true_labels = np.array([0, 1, 1, 0, 1])
>>> predicted_probs = np.array([0.3, 0.8, 0.9, 0.2])
>>> binary_focal_cross_entropy(true_labels, predicted_probs)
Traceback (most recent call last):
...
ValueError: Input arrays must have the same length.
"""
if len(y_true) != len(y_pred):
raise ValueError("Input arrays must have the same length.")
# Clip predicted probabilities to avoid log(0)
y_pred = np.clip(y_pred, epsilon, 1 - epsilon)
bcfe_loss = -(
alpha * (1 - y_pred) ** gamma * y_true * np.log(y_pred)
+ (1 - alpha) * y_pred**gamma * (1 - y_true) * np.log(1 - y_pred)
)
return np.mean(bcfe_loss)
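# Illustrative property check (an added example): with gamma=0 and alpha=0.5,
# the focal loss reduces exactly to half the plain binary cross-entropy.
_bf_y = np.array([0, 1, 1, 0, 1])
_bf_p = np.array([0.2, 0.7, 0.9, 0.3, 0.8])
assert np.isclose(
    binary_focal_cross_entropy(_bf_y, _bf_p, gamma=0.0, alpha=0.5),
    0.5 * binary_cross_entropy(_bf_y, _bf_p),
)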
def categorical_cross_entropy(
y_true: np.ndarray, y_pred: np.ndarray, epsilon: float = 1e-15
) -> float:
"""
Calculate categorical cross-entropy (CCE) loss between true class labels and
predicted class probabilities.
CCE = -Σ(y_true * ln(y_pred))
Reference: https://en.wikipedia.org/wiki/Cross_entropy
Parameters:
- y_true: True class labels (one-hot encoded)
- y_pred: Predicted class probabilities
- epsilon: Small constant to avoid numerical instability
>>> true_labels = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
>>> pred_probs = np.array([[0.9, 0.1, 0.0], [0.2, 0.7, 0.1], [0.0, 0.1, 0.9]])
>>> categorical_cross_entropy(true_labels, pred_probs)
0.567395975254385
>>> true_labels = np.array([[1, 0], [0, 1]])
>>> pred_probs = np.array([[0.9, 0.1, 0.0], [0.2, 0.7, 0.1]])
>>> categorical_cross_entropy(true_labels, pred_probs)
Traceback (most recent call last):
...
ValueError: Input arrays must have the same shape.
>>> true_labels = np.array([[2, 0, 1], [1, 0, 0]])
>>> pred_probs = np.array([[0.9, 0.1, 0.0], [0.2, 0.7, 0.1]])
>>> categorical_cross_entropy(true_labels, pred_probs)
Traceback (most recent call last):
...
ValueError: y_true must be one-hot encoded.
>>> true_labels = np.array([[1, 0, 1], [1, 0, 0]])
>>> pred_probs = np.array([[0.9, 0.1, 0.0], [0.2, 0.7, 0.1]])
>>> categorical_cross_entropy(true_labels, pred_probs)
Traceback (most recent call last):
...
ValueError: y_true must be one-hot encoded.
>>> true_labels = np.array([[1, 0, 0], [0, 1, 0]])
>>> pred_probs = np.array([[0.9, 0.1, 0.1], [0.2, 0.7, 0.1]])
>>> categorical_cross_entropy(true_labels, pred_probs)
Traceback (most recent call last):
...
ValueError: Predicted probabilities must sum to approximately 1.
"""
if y_true.shape != y_pred.shape:
raise ValueError("Input arrays must have the same shape.")
if np.any((y_true != 0) & (y_true != 1)) or np.any(y_true.sum(axis=1) != 1):
raise ValueError("y_true must be one-hot encoded.")
if not np.all(np.isclose(np.sum(y_pred, axis=1), 1, rtol=epsilon, atol=epsilon)):
raise ValueError("Predicted probabilities must sum to approximately 1.")
y_pred = np.clip(y_pred, epsilon, 1) # Clip predictions to avoid log(0)
return -np.sum(y_true * np.log(y_pred))
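# Illustrative check (an added example): for a single one-hot example, the loss
# is -ln of the probability assigned to the true class.
assert np.isclose(
    categorical_cross_entropy(np.array([[1, 0, 0]]), np.array([[0.9, 0.05, 0.05]])),
    -np.log(0.9),
)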
def hinge_loss(y_true: np.ndarray, y_pred: np.ndarray) -> float:
"""
    Calculate the mean hinge loss between true labels and predicted values,
    as used for training support vector machines (SVMs).
Hinge loss = max(0, 1 - true * pred)
Reference: https://en.wikipedia.org/wiki/Hinge_loss
Args:
- y_true: actual values (ground truth) encoded as -1 or 1
- y_pred: predicted values
>>> true_labels = np.array([-1, 1, 1, -1, 1])
>>> pred = np.array([-4, -0.3, 0.7, 5, 10])
>>> hinge_loss(true_labels, pred)
1.52
>>> true_labels = np.array([-1, 1, 1, -1, 1, 1])
>>> pred = np.array([-4, -0.3, 0.7, 5, 10])
>>> hinge_loss(true_labels, pred)
Traceback (most recent call last):
...
ValueError: Length of predicted and actual array must be same.
>>> true_labels = np.array([-1, 1, 10, -1, 1])
>>> pred = np.array([-4, -0.3, 0.7, 5, 10])
>>> hinge_loss(true_labels, pred)
Traceback (most recent call last):
...
ValueError: y_true can have values -1 or 1 only.
"""
if len(y_true) != len(y_pred):
raise ValueError("Length of predicted and actual array must be same.")
if np.any((y_true != -1) & (y_true != 1)):
raise ValueError("y_true can have values -1 or 1 only.")
hinge_losses = np.maximum(0, 1.0 - (y_true * y_pred))
return np.mean(hinge_losses)
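# Illustrative check (an added example): examples classified correctly with a
# margin of at least 1 contribute zero hinge loss.
assert hinge_loss(np.array([1, -1]), np.array([2.0, -3.0])) == 0.0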
def huber_loss(y_true: np.ndarray, y_pred: np.ndarray, delta: float) -> float:
"""
Calculate the mean Huber loss between the given ground truth and predicted values.
The Huber loss describes the penalty incurred by an estimation procedure, and it
serves as a measure of accuracy for regression models.
Huber loss =
0.5 * (y_true - y_pred)^2 if |y_true - y_pred| <= delta
delta * |y_true - y_pred| - 0.5 * delta^2 otherwise
Reference: https://en.wikipedia.org/wiki/Huber_loss
Parameters:
- y_true: The true values (ground truth)
- y_pred: The predicted values
>>> true_values = np.array([0.9, 10.0, 2.0, 1.0, 5.2])
>>> predicted_values = np.array([0.8, 2.1, 2.9, 4.2, 5.2])
>>> np.isclose(huber_loss(true_values, predicted_values, 1.0), 2.102)
True
>>> true_labels = np.array([11.0, 21.0, 3.32, 4.0, 5.0])
>>> predicted_probs = np.array([8.3, 20.8, 2.9, 11.2, 5.0])
>>> np.isclose(huber_loss(true_labels, predicted_probs, 1.0), 1.80164)
True
>>> true_labels = np.array([11.0, 21.0, 3.32, 4.0])
>>> predicted_probs = np.array([8.3, 20.8, 2.9, 11.2, 5.0])
>>> huber_loss(true_labels, predicted_probs, 1.0)
Traceback (most recent call last):
...
ValueError: Input arrays must have the same length.
"""
if len(y_true) != len(y_pred):
raise ValueError("Input arrays must have the same length.")
huber_mse = 0.5 * (y_true - y_pred) ** 2
huber_mae = delta * (np.abs(y_true - y_pred) - 0.5 * delta)
return np.where(np.abs(y_true - y_pred) <= delta, huber_mse, huber_mae).mean()
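# Illustrative continuity check (an added example): the two Huber branches meet
# at |y_true - y_pred| == delta, where both equal 0.5 * delta**2.
assert np.isclose(huber_loss(np.array([2.0]), np.array([0.0]), delta=2.0), 2.0)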
def mean_squared_error(y_true: np.ndarray, y_pred: np.ndarray) -> float:
"""
Calculate the mean squared error (MSE) between ground truth and predicted values.
MSE measures the squared difference between true values and predicted values, and it
serves as a measure of accuracy for regression models.
MSE = (1/n) * Σ(y_true - y_pred)^2
Reference: https://en.wikipedia.org/wiki/Mean_squared_error
Parameters:
- y_true: The true values (ground truth)
- y_pred: The predicted values
>>> true_values = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
>>> predicted_values = np.array([0.8, 2.1, 2.9, 4.2, 5.2])
>>> np.isclose(mean_squared_error(true_values, predicted_values), 0.028)
True
>>> true_labels = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
>>> predicted_probs = np.array([0.3, 0.8, 0.9, 0.2])
>>> mean_squared_error(true_labels, predicted_probs)
Traceback (most recent call last):
...
ValueError: Input arrays must have the same length.
"""
if len(y_true) != len(y_pred):
raise ValueError("Input arrays must have the same length.")
squared_errors = (y_true - y_pred) ** 2
return np.mean(squared_errors)
def mean_absolute_error(y_true: np.ndarray, y_pred: np.ndarray) -> float:
"""
Calculates the Mean Absolute Error (MAE) between ground truth (observed)
and predicted values.
MAE measures the absolute difference between true values and predicted values.
Equation:
MAE = (1/n) * Σ(abs(y_true - y_pred))
Reference: https://en.wikipedia.org/wiki/Mean_absolute_error
Parameters:
- y_true: The true values (ground truth)
- y_pred: The predicted values
>>> true_values = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
>>> predicted_values = np.array([0.8, 2.1, 2.9, 4.2, 5.2])
>>> np.isclose(mean_absolute_error(true_values, predicted_values), 0.16)
True
>>> true_values = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
>>> predicted_values = np.array([0.8, 2.1, 2.9, 4.2, 5.2])
>>> np.isclose(mean_absolute_error(true_values, predicted_values), 2.16)
False
>>> true_labels = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
>>> predicted_probs = np.array([0.3, 0.8, 0.9, 5.2])
>>> mean_absolute_error(true_labels, predicted_probs)
Traceback (most recent call last):
...
ValueError: Input arrays must have the same length.
"""
if len(y_true) != len(y_pred):
raise ValueError("Input arrays must have the same length.")
return np.mean(abs(y_true - y_pred))
def mean_squared_logarithmic_error(y_true: np.ndarray, y_pred: np.ndarray) -> float:
"""
Calculate the mean squared logarithmic error (MSLE) between ground truth and
predicted values.
MSLE measures the squared logarithmic difference between true values and predicted
values for regression models. It's particularly useful for dealing with skewed or
large-value data, and it's often used when the relative differences between
predicted and true values are more important than absolute differences.
MSLE = (1/n) * Σ(log(1 + y_true) - log(1 + y_pred))^2
Reference: https://insideaiml.com/blog/MeanSquared-Logarithmic-Error-Loss-1035
Parameters:
- y_true: The true values (ground truth)
- y_pred: The predicted values
>>> true_values = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
>>> predicted_values = np.array([0.8, 2.1, 2.9, 4.2, 5.2])
>>> mean_squared_logarithmic_error(true_values, predicted_values)
0.0030860877925181344
>>> true_labels = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
>>> predicted_probs = np.array([0.3, 0.8, 0.9, 0.2])
>>> mean_squared_logarithmic_error(true_labels, predicted_probs)
Traceback (most recent call last):
...
ValueError: Input arrays must have the same length.
"""
if len(y_true) != len(y_pred):
raise ValueError("Input arrays must have the same length.")
squared_logarithmic_errors = (np.log1p(y_true) - np.log1p(y_pred)) ** 2
return np.mean(squared_logarithmic_errors)
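# Illustrative property check (an added example): MSLE is asymmetric, punishing
# under-prediction more than over-prediction of the same absolute size.
assert mean_squared_logarithmic_error(
    np.array([10.0]), np.array([5.0])
) > mean_squared_logarithmic_error(np.array([10.0]), np.array([15.0]))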
def mean_absolute_percentage_error(
y_true: np.ndarray, y_pred: np.ndarray, epsilon: float = 1e-15
) -> float:
"""
Calculate the Mean Absolute Percentage Error between y_true and y_pred.
Mean Absolute Percentage Error calculates the average of the absolute
percentage differences between the predicted and true values.
    MAPE = (1/n) * Σ|(y_true[i] - y_pred[i]) / y_true[i]|
Source: https://stephenallwright.com/good-mape-score/
Parameters:
y_true (np.ndarray): Numpy array containing true/target values.
y_pred (np.ndarray): Numpy array containing predicted values.
Returns:
float: The Mean Absolute Percentage error between y_true and y_pred.
Examples:
>>> y_true = np.array([10, 20, 30, 40])
>>> y_pred = np.array([12, 18, 33, 45])
>>> mean_absolute_percentage_error(y_true, y_pred)
0.13125
>>> y_true = np.array([1, 2, 3, 4])
>>> y_pred = np.array([2, 3, 4, 5])
>>> mean_absolute_percentage_error(y_true, y_pred)
0.5208333333333333
>>> y_true = np.array([34, 37, 44, 47, 48, 48, 46, 43, 32, 27, 26, 24])
>>> y_pred = np.array([37, 40, 46, 44, 46, 50, 45, 44, 34, 30, 22, 23])
>>> mean_absolute_percentage_error(y_true, y_pred)
0.064671076436071
"""
if len(y_true) != len(y_pred):
raise ValueError("The length of the two arrays should be the same.")
y_true = np.where(y_true == 0, epsilon, y_true)
absolute_percentage_diff = np.abs((y_true - y_pred) / y_true)
return np.mean(absolute_percentage_diff)
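# Illustrative caveat (an added example): zeros in y_true are replaced by
# epsilon, so even a perfect prediction of 0 contributes a relative error of 1.
assert np.isclose(
    mean_absolute_percentage_error(np.array([0, 1]), np.array([0, 1])), 0.5
)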
def perplexity_loss(
y_true: np.ndarray, y_pred: np.ndarray, epsilon: float = 1e-7
) -> float:
"""
Calculate the perplexity for the y_true and y_pred.
    Compute the perplexity, which is useful for assessing language model
    accuracy in Natural Language Processing (NLP).
    Perplexity is a measure of how certain the model is in its predictions.
    Perplexity Loss = exp(-(1/N) * Σ ln(p(x)))
Reference:
https://en.wikipedia.org/wiki/Perplexity
Args:
y_true: Actual label encoded sentences of shape (batch_size, sentence_length)
y_pred: Predicted sentences of shape (batch_size, sentence_length, vocab_size)
epsilon: Small floating point number to avoid getting inf for log(0)
Returns:
Perplexity loss between y_true and y_pred.
>>> y_true = np.array([[1, 4], [2, 3]])
>>> y_pred = np.array(
... [[[0.28, 0.19, 0.21 , 0.15, 0.15],
... [0.24, 0.19, 0.09, 0.18, 0.27]],
... [[0.03, 0.26, 0.21, 0.18, 0.30],
... [0.28, 0.10, 0.33, 0.15, 0.12]]]
... )
>>> perplexity_loss(y_true, y_pred)
5.0247347775367945
>>> y_true = np.array([[1, 4], [2, 3]])
>>> y_pred = np.array(
... [[[0.28, 0.19, 0.21 , 0.15, 0.15],
... [0.24, 0.19, 0.09, 0.18, 0.27],
... [0.30, 0.10, 0.20, 0.15, 0.25]],
... [[0.03, 0.26, 0.21, 0.18, 0.30],
... [0.28, 0.10, 0.33, 0.15, 0.12],
... [0.30, 0.10, 0.20, 0.15, 0.25]],]
... )
>>> perplexity_loss(y_true, y_pred)
Traceback (most recent call last):
...
ValueError: Sentence length of y_true and y_pred must be equal.
>>> y_true = np.array([[1, 4], [2, 11]])
>>> y_pred = np.array(
... [[[0.28, 0.19, 0.21 , 0.15, 0.15],
... [0.24, 0.19, 0.09, 0.18, 0.27]],
... [[0.03, 0.26, 0.21, 0.18, 0.30],
... [0.28, 0.10, 0.33, 0.15, 0.12]]]
... )
>>> perplexity_loss(y_true, y_pred)
Traceback (most recent call last):
...
ValueError: Label value must not be greater than vocabulary size.
>>> y_true = np.array([[1, 4]])
>>> y_pred = np.array(
... [[[0.28, 0.19, 0.21 , 0.15, 0.15],
... [0.24, 0.19, 0.09, 0.18, 0.27]],
... [[0.03, 0.26, 0.21, 0.18, 0.30],
... [0.28, 0.10, 0.33, 0.15, 0.12]]]
... )
>>> perplexity_loss(y_true, y_pred)
Traceback (most recent call last):
...
ValueError: Batch size of y_true and y_pred must be equal.
"""
vocab_size = y_pred.shape[2]
if y_true.shape[0] != y_pred.shape[0]:
raise ValueError("Batch size of y_true and y_pred must be equal.")
if y_true.shape[1] != y_pred.shape[1]:
raise ValueError("Sentence length of y_true and y_pred must be equal.")
    if np.max(y_true) >= vocab_size:  # valid labels are 0 .. vocab_size - 1
raise ValueError("Label value must not be greater than vocabulary size.")
# Matrix to select prediction value only for true class
filter_matrix = np.array(
[[list(np.eye(vocab_size)[word]) for word in sentence] for sentence in y_true]
)
# Getting the matrix containing prediction for only true class
true_class_pred = np.sum(y_pred * filter_matrix, axis=2).clip(epsilon, 1)
# Calculating perplexity for each sentence
perp_losses = np.exp(np.negative(np.mean(np.log(true_class_pred), axis=1)))
return np.mean(perp_losses)
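# Illustrative check (an added example): a model that always predicts the
# uniform distribution has perplexity equal to the vocabulary size.
assert np.isclose(perplexity_loss(np.array([[0, 1]]), np.full((1, 2, 5), 0.2)), 5.0)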
if __name__ == "__main__":
import doctest
doctest.testmod()
|
Mel Frequency Cepstral Coefficients (MFCC) Calculation

MFCC is an algorithm widely used in audio and speech processing to represent the short-term power spectrum of a sound signal in a more compact and discriminative way. It is particularly popular in speech and audio processing tasks such as speech recognition and speaker identification.

How Mel Frequency Cepstral Coefficients are calculated:
1. Preprocessing: Load an audio signal and normalize it so that its values fall within a specific range (e.g., between -1 and 1). Frame the audio signal into overlapping, fixed-length segments, typically using a technique like windowing to reduce spectral leakage.
2. Fourier Transform: Apply a Fast Fourier Transform (FFT) to each audio frame to convert it from the time domain to the frequency domain. This results in a representation of the audio frame as a sequence of frequency components.
3. Power Spectrum: Calculate the power spectrum by taking the squared magnitude of each frequency component obtained from the FFT. This step measures the energy distribution across different frequency bands.
4. Mel Filterbank: Apply a set of triangular filterbanks spaced on the mel frequency scale to the power spectrum. These filters mimic the human auditory system's frequency response. Each filterbank sums the power spectrum values within its band.
5. Logarithmic Compression: Take the logarithm (typically base 10) of the filterbank values to compress the dynamic range, mimicking the logarithmic response of the human ear to sound intensity.
6. Discrete Cosine Transform (DCT): Apply the DCT to the log filterbank energies to obtain the MFCC coefficients. This transformation helps decorrelate the filterbank energies and captures the most important features of the audio signal.
7. Feature Extraction: Select a subset of the DCT coefficients to form the feature vector; often the first few coefficients (e.g., 12-13) are used.

References:
- Mel-Frequency Cepstral Coefficients (MFCCs): https://en.wikipedia.org/wiki/Mel-frequency_cepstrum
- Speech and Language Processing by Daniel Jurafsky & James H. Martin: https://web.stanford.edu/~jurafsky/slp3/
- Mel Frequency Cepstral Coefficient (MFCC) tutorial: http://practicalcryptography.com/miscellaneous/machine-learning/guide-mel-frequency-cepstral-coefficients-mfccs/

Author: Amir Lavasani | import logging
import numpy as np
import scipy.fftpack as fft
from scipy.signal import get_window
logging.basicConfig(filename=f"{__file__}.log", level=logging.INFO)
def mfcc(
audio: np.ndarray,
sample_rate: int,
ftt_size: int = 1024,
hop_length: int = 20,
mel_filter_num: int = 10,
dct_filter_num: int = 40,
) -> np.ndarray:
"""
Calculate Mel Frequency Cepstral Coefficients (MFCCs) from an audio signal.
Args:
audio: The input audio signal.
sample_rate: The sample rate of the audio signal (in Hz).
ftt_size: The size of the FFT window (default is 1024).
hop_length: The hop length for frame creation (default is 20ms).
mel_filter_num: The number of Mel filters (default is 10).
dct_filter_num: The number of DCT filters (default is 40).
Returns:
A matrix of MFCCs for the input audio.
Raises:
ValueError: If the input audio is empty.
Example:
>>> sample_rate = 44100 # Sample rate of 44.1 kHz
    >>> duration = 2.0  # Duration of 2 seconds
>>> t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)
>>> audio = 0.5 * np.sin(2 * np.pi * 440.0 * t) # Generate a 440 Hz sine wave
>>> mfccs = mfcc(audio, sample_rate)
>>> mfccs.shape
(40, 101)
"""
logging.info(f"Sample rate: {sample_rate}Hz")
logging.info(f"Audio duration: {len(audio) / sample_rate}s")
logging.info(f"Audio min: {np.min(audio)}")
logging.info(f"Audio max: {np.max(audio)}")
# normalize audio
audio_normalized = normalize(audio)
logging.info(f"Normalized audio min: {np.min(audio_normalized)}")
logging.info(f"Normalized audio max: {np.max(audio_normalized)}")
    # frame audio into overlapping frames
audio_framed = audio_frames(
audio_normalized, sample_rate, ftt_size=ftt_size, hop_length=hop_length
)
logging.info(f"Framed audio shape: {audio_framed.shape}")
logging.info(f"First frame: {audio_framed[0]}")
# convert to frequency domain
# For simplicity we will choose the Hanning window.
window = get_window("hann", ftt_size, fftbins=True)
audio_windowed = audio_framed * window
logging.info(f"Windowed audio shape: {audio_windowed.shape}")
logging.info(f"First frame: {audio_windowed[0]}")
audio_fft = calculate_fft(audio_windowed, ftt_size)
logging.info(f"fft audio shape: {audio_fft.shape}")
logging.info(f"First frame: {audio_fft[0]}")
audio_power = calculate_signal_power(audio_fft)
logging.info(f"power audio shape: {audio_power.shape}")
logging.info(f"First frame: {audio_power[0]}")
filters = mel_spaced_filterbank(sample_rate, mel_filter_num, ftt_size)
logging.info(f"filters shape: {filters.shape}")
audio_filtered = np.dot(filters, np.transpose(audio_power))
audio_log = 10.0 * np.log10(audio_filtered)
logging.info(f"audio_log shape: {audio_log.shape}")
dct_filters = discrete_cosine_transform(dct_filter_num, mel_filter_num)
cepstral_coefficents = np.dot(dct_filters, audio_log)
logging.info(f"cepstral_coefficents shape: {cepstral_coefficents.shape}")
return cepstral_coefficents
def normalize(audio: np.ndarray) -> np.ndarray:
"""
Normalize an audio signal by scaling it to have values between -1 and 1.
Args:
audio: The input audio signal.
Returns:
The normalized audio signal.
Examples:
>>> audio = np.array([1, 2, 3, 4, 5])
>>> normalized_audio = normalize(audio)
>>> np.max(normalized_audio)
1.0
>>> np.min(normalized_audio)
0.2
"""
# Divide the entire audio signal by the maximum absolute value
return audio / np.max(np.abs(audio))
def audio_frames(
audio: np.ndarray,
sample_rate: int,
hop_length: int = 20,
ftt_size: int = 1024,
) -> np.ndarray:
"""
Split an audio signal into overlapping frames.
Args:
audio: The input audio signal.
sample_rate: The sample rate of the audio signal.
hop_length: The length of the hopping (default is 20ms).
ftt_size: The size of the FFT window (default is 1024).
Returns:
An array of overlapping frames.
Examples:
>>> audio = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]*1000)
>>> sample_rate = 8000
>>> frames = audio_frames(audio, sample_rate, hop_length=10, ftt_size=512)
>>> frames.shape
(126, 512)
"""
hop_size = np.round(sample_rate * hop_length / 1000).astype(int)
# Pad the audio signal to handle edge cases
audio = np.pad(audio, int(ftt_size / 2), mode="reflect")
# Calculate the number of frames
frame_count = int((len(audio) - ftt_size) / hop_size) + 1
# Initialize an array to store the frames
frames = np.zeros((frame_count, ftt_size))
# Split the audio signal into frames
for n in range(frame_count):
frames[n] = audio[n * hop_size : n * hop_size + ftt_size]
return frames
def calculate_fft(audio_windowed: np.ndarray, ftt_size: int = 1024) -> np.ndarray:
"""
Calculate the Fast Fourier Transform (FFT) of windowed audio data.
Args:
audio_windowed: The windowed audio signal.
ftt_size: The size of the FFT (default is 1024).
Returns:
The FFT of the audio data.
Examples:
>>> audio_windowed = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
>>> audio_fft = calculate_fft(audio_windowed, ftt_size=4)
>>> np.allclose(audio_fft[0], np.array([6.0+0.j, -1.5+0.8660254j, -1.5-0.8660254j]))
True
"""
# Transpose the audio data to have time in rows and channels in columns
audio_transposed = np.transpose(audio_windowed)
# Initialize an array to store the FFT results
audio_fft = np.empty(
(int(1 + ftt_size // 2), audio_transposed.shape[1]),
dtype=np.complex64,
order="F",
)
# Compute FFT for each channel
for n in range(audio_fft.shape[1]):
audio_fft[:, n] = fft.fft(audio_transposed[:, n], axis=0)[: audio_fft.shape[0]]
# Transpose the FFT results back to the original shape
return np.transpose(audio_fft)
def calculate_signal_power(audio_fft: np.ndarray) -> np.ndarray:
"""
Calculate the power of the audio signal from its FFT.
Args:
audio_fft: The FFT of the audio signal.
Returns:
The power of the audio signal.
Examples:
>>> audio_fft = np.array([1+2j, 2+3j, 3+4j, 4+5j])
>>> power = calculate_signal_power(audio_fft)
>>> np.allclose(power, np.array([5, 13, 25, 41]))
True
"""
# Calculate the power by squaring the absolute values of the FFT coefficients
return np.square(np.abs(audio_fft))
def freq_to_mel(freq: float) -> float:
"""
Convert a frequency in Hertz to the mel scale.
Args:
freq: The frequency in Hertz.
Returns:
The frequency in mel scale.
Examples:
>>> round(freq_to_mel(1000), 2)
999.99
"""
# Use the formula to convert frequency to the mel scale
return 2595.0 * np.log10(1.0 + freq / 700.0)
def mel_to_freq(mels: float) -> float:
"""
Convert a frequency in the mel scale to Hertz.
Args:
mels: The frequency in mel scale.
Returns:
The frequency in Hertz.
Examples:
>>> round(mel_to_freq(999.99), 2)
1000.01
"""
# Use the formula to convert mel scale to frequency
return 700.0 * (10.0 ** (mels / 2595.0) - 1.0)
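# Illustrative round-trip check (an added example): freq_to_mel and mel_to_freq
# are inverses of each other, up to floating-point error.
assert np.isclose(mel_to_freq(freq_to_mel(440.0)), 440.0)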
def mel_spaced_filterbank(
sample_rate: int, mel_filter_num: int = 10, ftt_size: int = 1024
) -> np.ndarray:
"""
Create a Mel-spaced filter bank for audio processing.
Args:
sample_rate: The sample rate of the audio.
mel_filter_num: The number of mel filters (default is 10).
ftt_size: The size of the FFT (default is 1024).
Returns:
Mel-spaced filter bank.
Examples:
>>> round(mel_spaced_filterbank(8000, 10, 1024)[0][1], 10)
0.0004603981
"""
freq_min = 0
freq_high = sample_rate // 2
logging.info(f"Minimum frequency: {freq_min}")
logging.info(f"Maximum frequency: {freq_high}")
# Calculate filter points and mel frequencies
filter_points, mel_freqs = get_filter_points(
sample_rate,
freq_min,
freq_high,
mel_filter_num,
ftt_size,
)
filters = get_filters(filter_points, ftt_size)
# normalize filters
# taken from the librosa library
enorm = 2.0 / (mel_freqs[2 : mel_filter_num + 2] - mel_freqs[:mel_filter_num])
return filters * enorm[:, np.newaxis]
def get_filters(filter_points: np.ndarray, ftt_size: int) -> np.ndarray:
"""
Generate filters for audio processing.
Args:
filter_points: A list of filter points.
ftt_size: The size of the FFT.
Returns:
A matrix of filters.
Examples:
>>> get_filters(np.array([0, 20, 51, 95, 161, 256], dtype=int), 512).shape
(4, 257)
"""
num_filters = len(filter_points) - 2
filters = np.zeros((num_filters, int(ftt_size / 2) + 1))
for n in range(num_filters):
start = filter_points[n]
mid = filter_points[n + 1]
end = filter_points[n + 2]
# Linearly increase values from 0 to 1
filters[n, start:mid] = np.linspace(0, 1, mid - start)
# Linearly decrease values from 1 to 0
filters[n, mid:end] = np.linspace(1, 0, end - mid)
return filters
def get_filter_points(
sample_rate: int,
freq_min: int,
freq_high: int,
mel_filter_num: int = 10,
ftt_size: int = 1024,
) -> tuple[np.ndarray, np.ndarray]:
"""
Calculate the filter points and frequencies for mel frequency filters.
Args:
sample_rate: The sample rate of the audio.
freq_min: The minimum frequency in Hertz.
freq_high: The maximum frequency in Hertz.
mel_filter_num: The number of mel filters (default is 10).
ftt_size: The size of the FFT (default is 1024).
Returns:
Filter points and corresponding frequencies.
Examples:
>>> filter_points = get_filter_points(8000, 0, 4000, mel_filter_num=4, ftt_size=512)
>>> filter_points[0]
array([ 0, 20, 51, 95, 161, 256])
>>> filter_points[1]
array([ 0. , 324.46707094, 799.33254207, 1494.30973963,
2511.42581671, 4000. ])
"""
# Convert minimum and maximum frequencies to mel scale
fmin_mel = freq_to_mel(freq_min)
fmax_mel = freq_to_mel(freq_high)
logging.info(f"MEL min: {fmin_mel}")
logging.info(f"MEL max: {fmax_mel}")
# Generate equally spaced mel frequencies
mels = np.linspace(fmin_mel, fmax_mel, num=mel_filter_num + 2)
# Convert mel frequencies back to Hertz
freqs = mel_to_freq(mels)
# Calculate filter points as integer values
filter_points = np.floor((ftt_size + 1) / sample_rate * freqs).astype(int)
return filter_points, freqs
def discrete_cosine_transform(dct_filter_num: int, filter_num: int) -> np.ndarray:
"""
Compute the Discrete Cosine Transform (DCT) basis matrix.
Args:
dct_filter_num: The number of DCT filters to generate.
filter_num: The number of the fbank filters.
Returns:
The DCT basis matrix.
Examples:
>>> round(discrete_cosine_transform(3, 5)[0][0], 5)
0.44721
"""
basis = np.empty((dct_filter_num, filter_num))
basis[0, :] = 1.0 / np.sqrt(filter_num)
samples = np.arange(1, 2 * filter_num, 2) * np.pi / (2.0 * filter_num)
for i in range(1, dct_filter_num):
basis[i, :] = np.cos(i * samples) * np.sqrt(2.0 / filter_num)
return basis
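# Illustrative property check (an added example): the DCT basis rows are
# orthonormal, so basis @ basis.T is the identity whenever
# dct_filter_num <= filter_num.
_dct_basis = discrete_cosine_transform(3, 5)
assert np.allclose(_dct_basis @ _dct_basis.T, np.eye(3))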
def example(wav_file_path: str = "./path-to-file/sample.wav") -> np.ndarray:
"""
Example function to calculate Mel Frequency Cepstral Coefficients
(MFCCs) from an audio file.
Args:
wav_file_path: The path to the WAV audio file.
Returns:
np.ndarray: The computed MFCCs for the audio.
"""
from scipy.io import wavfile
# Load the audio from the WAV file
sample_rate, audio = wavfile.read(wav_file_path)
# Calculate MFCCs
return mfcc(audio, sample_rate)
if __name__ == "__main__":
import doctest
doctest.testmod()
|
A minimal scikit-learn MLPClassifier demo: train a small multilayer perceptron on four two-feature examples and predict on a test set; wrapper() returns the predictions as a list so the result can be doctested. | from sklearn.neural_network import MLPClassifier
X = [[0.0, 0.0], [1.0, 1.0], [1.0, 0.0], [0.0, 1.0]]
y = [0, 1, 0, 0]
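# Note: the labels implement a logical AND of the two inputs, so only (1, 1)
# maps to class 1.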
clf = MLPClassifier(
solver="lbfgs", alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1
)
clf.fit(X, y)
test = [[0.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
Y = clf.predict(test)
def wrapper(y):
"""
>>> wrapper(Y)
[0, 0, 1]
"""
return list(y)
if __name__ == "__main__":
import doctest
doctest.testmod()
|
Polynomial regression is a type of regression analysis that models the relationship between a predictor x and the response y as an mth-degree polynomial:

y = β₀ + β₁x + β₂x² + ⋯ + βₘxᵐ + ε

By treating x, x², ..., xᵐ as distinct variables, we see that polynomial regression is a special case of multiple linear regression. Therefore, we can use ordinary least squares (OLS) estimation to estimate the vector of model parameters β = (β₀, β₁, β₂, ..., βₘ) for polynomial regression:

β = (XᵀX)⁻¹Xᵀy = X⁺y

where X is the design matrix, y is the response vector, and X⁺ denotes the Moore–Penrose pseudoinverse of X. In the case of polynomial regression, the design matrix is

    |1  x₁  x₁² ⋯ x₁ᵐ|
X = |1  x₂  x₂² ⋯ x₂ᵐ|
    |⋮  ⋮   ⋮  ⋱  ⋮ |
    |1  xₙ  xₙ² ⋯ xₙᵐ|

In OLS estimation, inverting XᵀX to compute X⁺ can be very numerically unstable. This implementation sidesteps the need to invert XᵀX by computing X⁺ using singular value decomposition (SVD):

β = VΣ⁺Uᵀy

where UΣVᵀ is an SVD of X.

References:
- https://en.wikipedia.org/wiki/Polynomial_regression
- https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse
- https://en.wikipedia.org/wiki/Numerical_methods_for_linear_least_squares
- https://en.wikipedia.org/wiki/Singular_value_decomposition | import matplotlib.pyplot as plt
import numpy as np
class PolynomialRegression:
__slots__ = "degree", "params"
def __init__(self, degree: int) -> None:
"""
@raises ValueError: if the polynomial degree is negative
"""
if degree < 0:
raise ValueError("Polynomial degree must be non-negative")
self.degree = degree
self.params = None
@staticmethod
def _design_matrix(data: np.ndarray, degree: int) -> np.ndarray:
"""
Constructs a polynomial regression design matrix for the given input data. For
input data x = (x₁, x₂, ..., xₙ) and polynomial degree m, the design matrix is
the Vandermonde matrix
|1 x₁ x₁² ⋯ x₁ᵐ|
X = |1 x₂ x₂² ⋯ x₂ᵐ|
|⋮ ⋮ ⋮ ⋱ ⋮ |
|1 xₙ xₙ² ⋯ xₙᵐ|
Reference: https://en.wikipedia.org/wiki/Vandermonde_matrix
@param data: the input predictor values x, either for model fitting or for
prediction
@param degree: the polynomial degree m
@returns: the Vandermonde matrix X (see above)
@raises ValueError: if input data is not N x 1
>>> x = np.array([0, 1, 2])
>>> PolynomialRegression._design_matrix(x, degree=0)
array([[1],
[1],
[1]])
>>> PolynomialRegression._design_matrix(x, degree=1)
array([[1, 0],
[1, 1],
[1, 2]])
>>> PolynomialRegression._design_matrix(x, degree=2)
array([[1, 0, 0],
[1, 1, 1],
[1, 2, 4]])
>>> PolynomialRegression._design_matrix(x, degree=3)
array([[1, 0, 0, 0],
[1, 1, 1, 1],
[1, 2, 4, 8]])
>>> PolynomialRegression._design_matrix(np.array([[0, 0], [0 , 0]]), degree=3)
Traceback (most recent call last):
...
ValueError: Data must have dimensions N x 1
"""
rows, *remaining = data.shape
if remaining:
raise ValueError("Data must have dimensions N x 1")
return np.vander(data, N=degree + 1, increasing=True)
def fit(self, x_train: np.ndarray, y_train: np.ndarray) -> None:
"""
Computes the polynomial regression model parameters using ordinary least squares
(OLS) estimation:
β = (XᵀX)⁻¹Xᵀy = X⁺y
where X⁺ denotes the Moore–Penrose pseudoinverse of the design matrix X. This
function computes X⁺ using singular value decomposition (SVD).
References:
- https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse
- https://en.wikipedia.org/wiki/Singular_value_decomposition
- https://en.wikipedia.org/wiki/Multicollinearity
@param x_train: the predictor values x for model fitting
@param y_train: the response values y for model fitting
@raises ArithmeticError: if X isn't full rank, then XᵀX is singular and β
doesn't exist
>>> x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
>>> y = x**3 - 2 * x**2 + 3 * x - 5
>>> poly_reg = PolynomialRegression(degree=3)
>>> poly_reg.fit(x, y)
>>> poly_reg.params
array([-5., 3., -2., 1.])
>>> poly_reg = PolynomialRegression(degree=20)
>>> poly_reg.fit(x, y)
Traceback (most recent call last):
...
ArithmeticError: Design matrix is not full rank, can't compute coefficients
Make sure errors don't grow too large:
>>> coefs = np.array([-250, 50, -2, 36, 20, -12, 10, 2, -1, -15, 1])
>>> y = PolynomialRegression._design_matrix(x, len(coefs) - 1) @ coefs
>>> poly_reg = PolynomialRegression(degree=len(coefs) - 1)
>>> poly_reg.fit(x, y)
>>> np.allclose(poly_reg.params, coefs, atol=10e-3)
True
"""
X = PolynomialRegression._design_matrix(x_train, self.degree) # noqa: N806
_, cols = X.shape
if np.linalg.matrix_rank(X) < cols:
raise ArithmeticError(
"Design matrix is not full rank, can't compute coefficients"
)
# np.linalg.pinv() computes the Moore–Penrose pseudoinverse using SVD
self.params = np.linalg.pinv(X) @ y_train
def predict(self, data: np.ndarray) -> np.ndarray:
"""
Computes the predicted response values y for the given input data by
constructing the design matrix X and evaluating y = Xβ.
@param data: the predictor values x for prediction
@returns: the predicted response values y = Xβ
@raises ArithmeticError: if this function is called before the model
parameters are fit
>>> x = np.array([0, 1, 2, 3, 4])
>>> y = x**3 - 2 * x**2 + 3 * x - 5
>>> poly_reg = PolynomialRegression(degree=3)
>>> poly_reg.fit(x, y)
>>> poly_reg.predict(np.array([-1]))
array([-11.])
>>> poly_reg.predict(np.array([-2]))
array([-27.])
>>> poly_reg.predict(np.array([6]))
array([157.])
>>> PolynomialRegression(degree=3).predict(x)
Traceback (most recent call last):
...
ArithmeticError: Predictor hasn't been fit yet
"""
if self.params is None:
raise ArithmeticError("Predictor hasn't been fit yet")
return PolynomialRegression._design_matrix(data, self.degree) @ self.params
def main() -> None:
"""
Fit a polynomial regression model to predict fuel efficiency using seaborn's mpg
dataset
>>> pass # Placeholder, function is only for demo purposes
"""
import seaborn as sns
mpg_data = sns.load_dataset("mpg")
poly_reg = PolynomialRegression(degree=2)
poly_reg.fit(mpg_data.weight, mpg_data.mpg)
weight_sorted = np.sort(mpg_data.weight)
predictions = poly_reg.predict(weight_sorted)
plt.scatter(mpg_data.weight, mpg_data.mpg, color="gray", alpha=0.5)
plt.plot(weight_sorted, predictions, color="red", linewidth=3)
plt.title("Predicting Fuel Efficiency Using Polynomial Regression")
plt.xlabel("Weight (lbs)")
plt.ylabel("Fuel Efficiency (mpg)")
plt.show()
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
import numpy as np
""" Here I implemented the scoring functions.
MAE, MSE, RMSE, RMSLE are included.
Those are used for calculating differences between
predicted values and actual values.
Metrics are slightly differentiated. Sometimes squared, rooted,
even log is used.
Using log and roots can be perceived as tools for penalizing big
errors. However, using appropriate metrics depends on the situations,
and types of data
"""
# Mean Absolute Error
def mae(predict, actual):
"""
Examples(rounded for precision):
>>> actual = [1,2,3];predict = [1,4,3]
>>> np.around(mae(predict,actual),decimals = 2)
0.67
>>> actual = [1,1,1];predict = [1,1,1]
>>> mae(predict,actual)
0.0
"""
predict = np.array(predict)
actual = np.array(actual)
difference = abs(predict - actual)
score = difference.mean()
return score
# Mean Squared Error
def mse(predict, actual):
"""
Examples(rounded for precision):
>>> actual = [1,2,3];predict = [1,4,3]
>>> np.around(mse(predict,actual),decimals = 2)
1.33
>>> actual = [1,1,1];predict = [1,1,1]
>>> mse(predict,actual)
0.0
"""
predict = np.array(predict)
actual = np.array(actual)
difference = predict - actual
square_diff = np.square(difference)
score = square_diff.mean()
return score
# Root Mean Squared Error
def rmse(predict, actual):
"""
Examples(rounded for precision):
>>> actual = [1,2,3];predict = [1,4,3]
>>> np.around(rmse(predict,actual),decimals = 2)
1.15
>>> actual = [1,1,1];predict = [1,1,1]
>>> rmse(predict,actual)
0.0
"""
predict = np.array(predict)
actual = np.array(actual)
difference = predict - actual
square_diff = np.square(difference)
mean_square_diff = square_diff.mean()
score = np.sqrt(mean_square_diff)
return score
# Root Mean Square Logarithmic Error
def rmsle(predict, actual):
"""
Examples(rounded for precision):
>>> actual = [10,10,30];predict = [10,2,30]
>>> np.around(rmsle(predict,actual),decimals = 2)
0.75
>>> actual = [1,1,1];predict = [1,1,1]
>>> rmsle(predict,actual)
0.0
"""
predict = np.array(predict)
actual = np.array(actual)
log_predict = np.log(predict + 1)
log_actual = np.log(actual + 1)
difference = log_predict - log_actual
square_diff = np.square(difference)
mean_square_diff = square_diff.mean()
score = np.sqrt(mean_square_diff)
return score
# Mean Bias Deviation
def mbd(predict, actual):
"""
    This value is negative if the model underpredicts,
    positive if it overpredicts.
Example(rounded for precision):
Here the model overpredicts
>>> actual = [1,2,3];predict = [2,3,4]
>>> np.around(mbd(predict,actual),decimals = 2)
50.0
Here the model underpredicts
>>> actual = [1,2,3];predict = [0,1,1]
>>> np.around(mbd(predict,actual),decimals = 2)
-66.67
"""
predict = np.array(predict)
actual = np.array(actual)
difference = predict - actual
numerator = np.sum(difference) / len(predict)
    denominator = np.sum(actual) / len(predict)
    # print(numerator, denominator)
    score = float(numerator) / denominator * 100
return score
def manual_accuracy(predict, actual):
return np.mean(np.array(actual) == np.array(predict))
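if __name__ == "__main__":
    # A minimal demonstration (values are illustrative, not from the original
    # module): with a single large outlier, RMSE penalizes the error more
    # heavily than MAE does on the same data.
    actual = [1, 2, 3, 4]
    predict = [1, 2, 3, 10]
    print(f"MAE:  {mae(predict, actual)}")  # 1.5
    print(f"RMSE: {rmse(predict, actual)}")  # 3.0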
"""
https://en.wikipedia.org/wiki/Self-organizing_map
"""
import math
class SelfOrganizingMap:
def get_winner(self, weights: list[list[float]], sample: list[int]) -> int:
"""
Compute the winning vector by Euclidean distance
>>> SelfOrganizingMap().get_winner([[1, 2, 3], [4, 5, 6]], [1, 2, 3])
        0
"""
        d0 = 0.0
        d1 = 0.0
        for i in range(len(sample)):
            d0 += math.pow((sample[i] - weights[0][i]), 2)
            d1 += math.pow((sample[i] - weights[1][i]), 2)
        # the winner is the weight vector with the smaller distance
        return 0 if d0 < d1 else 1
def update(
self, weights: list[list[int | float]], sample: list[int], j: int, alpha: float
) -> list[list[int | float]]:
"""
Update the winning vector.
>>> SelfOrganizingMap().update([[1, 2, 3], [4, 5, 6]], [1, 2, 3], 1, 0.1)
        [[1, 2, 3], [3.7, 4.7, 5.7]]
"""
        for i in range(len(sample)):
weights[j][i] += alpha * (sample[i] - weights[j][i])
return weights
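    # The update implements the standard competitive-learning rule
    # w_j <- w_j + alpha * (x - w_j), pulling the winning weight vector a
    # fraction alpha of the way toward the training sample.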
# Driver code
def main() -> None:
# Training Examples ( m, n )
training_samples = [[1, 1, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 0, 1, 1]]
# weight initialization ( n, C )
weights = [[0.2, 0.6, 0.5, 0.9], [0.8, 0.4, 0.7, 0.3]]
# training
self_organizing_map = SelfOrganizingMap()
epochs = 3
alpha = 0.5
for _ in range(epochs):
for j in range(len(training_samples)):
# training sample
sample = training_samples[j]
# Compute the winning vector
winner = self_organizing_map.get_winner(weights, sample)
# Update the winning vector
weights = self_organizing_map.update(weights, sample, winner, alpha)
# classify test sample
sample = [0, 0, 0, 1]
winner = self_organizing_map.get_winner(weights, sample)
# results
print(f"Clusters that the test sample belongs to : {winner}")
print(f"Weights that have been trained : {weights}")
# running the main() function
if __name__ == "__main__":
main()
"""
Implementation of sequential minimal optimization (SMO) for support vector
machines (SVM).

Sequential minimal optimization (SMO) is an algorithm for solving the quadratic
programming (QP) problem that arises during the training of support vector
machines. It was invented by John Platt in 1998.

Input:
    0: type: numpy.ndarray.
    1: first column of ndarray must be tags of samples, must be 1 or -1.
    2: rows of ndarray represent samples.

Usage:
    Command:
        python3 sequential_minimum_optimization.py
    Code:
        from sequential_minimum_optimization import SmoSVM, Kernel

        kernel = Kernel(kernel='poly', degree=3., coef0=1., gamma=0.5)
        init_alphas = np.zeros(train.shape[0])
        SVM = SmoSVM(train=train, alpha_list=init_alphas, kernel_func=kernel,
                     cost=0.4, b=0.0, tolerance=0.001)
        SVM.fit()
        predict = SVM.predict(test_samples)

Reference:
    https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/smo-book.pdf
    https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-98-14.pdf
"""
import os
import sys
import urllib.request
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.datasets import make_blobs, make_circles
from sklearn.preprocessing import StandardScaler
CANCER_DATASET_URL = (
"https://archive.ics.uci.edu/ml/machine-learning-databases/"
"breast-cancer-wisconsin/wdbc.data"
)
class SmoSVM:
def __init__(
self,
train,
kernel_func,
alpha_list=None,
cost=0.4,
b=0.0,
tolerance=0.001,
auto_norm=True,
):
self._init = True
self._auto_norm = auto_norm
self._c = np.float64(cost)
self._b = np.float64(b)
self._tol = np.float64(tolerance) if tolerance > 0.0001 else np.float64(0.001)
self.tags = train[:, 0]
self.samples = self._norm(train[:, 1:]) if self._auto_norm else train[:, 1:]
self.alphas = alpha_list if alpha_list is not None else np.zeros(train.shape[0])
self.Kernel = kernel_func
self._eps = 0.001
self._all_samples = list(range(self.length))
self._K_matrix = self._calculate_k_matrix()
self._error = np.zeros(self.length)
self._unbound = []
self.choose_alpha = self._choose_alphas()
# Calculate alphas using SMO algorithm
def fit(self):
k = self._k
state = None
while True:
# 1: Find alpha1, alpha2
try:
i1, i2 = self.choose_alpha.send(state)
state = None
except StopIteration:
print("Optimization done!\nEvery sample satisfy the KKT condition!")
break
# 2: calculate new alpha2 and new alpha1
y1, y2 = self.tags[i1], self.tags[i2]
a1, a2 = self.alphas[i1].copy(), self.alphas[i2].copy()
e1, e2 = self._e(i1), self._e(i2)
args = (i1, i2, a1, a2, e1, e2, y1, y2)
a1_new, a2_new = self._get_new_alpha(*args)
if not a1_new and not a2_new:
state = False
continue
self.alphas[i1], self.alphas[i2] = a1_new, a2_new
# 3: update threshold(b)
b1_new = np.float64(
-e1
- y1 * k(i1, i1) * (a1_new - a1)
- y2 * k(i2, i1) * (a2_new - a2)
+ self._b
)
b2_new = np.float64(
-e2
- y2 * k(i2, i2) * (a2_new - a2)
- y1 * k(i1, i2) * (a1_new - a1)
+ self._b
)
if 0.0 < a1_new < self._c:
b = b1_new
if 0.0 < a2_new < self._c:
b = b2_new
if not (np.float64(0) < a2_new < self._c) and not (
np.float64(0) < a1_new < self._c
):
b = (b1_new + b2_new) / 2.0
b_old = self._b
self._b = b
# 4: update error value,here we only calculate those non-bound samples'
# error
self._unbound = [i for i in self._all_samples if self._is_unbound(i)]
for s in self.unbound:
if s in (i1, i2):
continue
self._error[s] += (
y1 * (a1_new - a1) * k(i1, s)
+ y2 * (a2_new - a2) * k(i2, s)
+ (self._b - b_old)
)
            # if i1 or i2 is non-bound, update their error values to zero
if self._is_unbound(i1):
self._error[i1] = 0
if self._is_unbound(i2):
self._error[i2] = 0
# Predict test samples
def predict(self, test_samples, classify=True):
if test_samples.shape[1] > self.samples.shape[1]:
raise ValueError(
"Test samples' feature length does not equal to that of train samples"
)
if self._auto_norm:
test_samples = self._norm(test_samples)
results = []
for test_sample in test_samples:
result = self._predict(test_sample)
if classify:
results.append(1 if result > 0 else -1)
else:
results.append(result)
return np.array(results)
# Check if alpha violate KKT condition
def _check_obey_kkt(self, index):
alphas = self.alphas
tol = self._tol
r = self._e(index) * self.tags[index]
c = self._c
return (r < -tol and alphas[index] < c) or (r > tol and alphas[index] > 0.0)
# Get value calculated from kernel function
def _k(self, i1, i2):
# for test samples,use Kernel function
if isinstance(i2, np.ndarray):
return self.Kernel(self.samples[i1], i2)
# for train samples,Kernel values have been saved in matrix
else:
return self._K_matrix[i1, i2]
# Get sample's error
def _e(self, index):
"""
        Two cases:
        1: sample[index] is non-bound: fetch the error from the list _error
        2: sample[index] is bound: use predicted value minus true value, g(xi) - yi
"""
# get from error data
if self._is_unbound(index):
return self._error[index]
# get by g(xi) - yi
else:
gx = np.dot(self.alphas * self.tags, self._K_matrix[:, index]) + self._b
yi = self.tags[index]
return gx - yi
# Calculate Kernel matrix of all possible i1,i2 ,saving time
def _calculate_k_matrix(self):
k_matrix = np.zeros([self.length, self.length])
for i in self._all_samples:
for j in self._all_samples:
k_matrix[i, j] = np.float64(
self.Kernel(self.samples[i, :], self.samples[j, :])
)
return k_matrix
# Predict test sample's tag
def _predict(self, sample):
k = self._k
predicted_value = (
np.sum(
[
self.alphas[i1] * self.tags[i1] * k(i1, sample)
for i1 in self._all_samples
]
)
+ self._b
)
return predicted_value
# Choose alpha1 and alpha2
def _choose_alphas(self):
locis = yield from self._choose_a1()
if not locis:
return None
return locis
def _choose_a1(self):
"""
        Choose the first alpha; steps:
        1: First loop over all samples
        2: Second loop over all non-bound samples until no non-bound sample
           violates the KKT condition.
        3: Repeat these two processes endlessly, until after the first loop no
           sample violates the KKT condition.
"""
while True:
all_not_obey = True
# all sample
print("scanning all sample!")
for i1 in [i for i in self._all_samples if self._check_obey_kkt(i)]:
all_not_obey = False
yield from self._choose_a2(i1)
# non-bound sample
print("scanning non-bound sample!")
while True:
not_obey = True
for i1 in [
i
for i in self._all_samples
if self._check_obey_kkt(i) and self._is_unbound(i)
]:
not_obey = False
yield from self._choose_a2(i1)
if not_obey:
print("all non-bound samples fit the KKT condition!")
break
if all_not_obey:
print("all samples fit the KKT condition! Optimization done!")
break
return False
def _choose_a2(self, i1):
"""
        Choose the second alpha using a heuristic algorithm; steps:
        1: Choose the alpha2 that gives the maximum step size (|E1 - E2|).
        2: Start at a random point, loop over all non-bound samples till alpha1
           and alpha2 are optimized.
        3: Start at a random point, loop over all samples till alpha1 and
           alpha2 are optimized.
"""
self._unbound = [i for i in self._all_samples if self._is_unbound(i)]
if len(self.unbound) > 0:
tmp_error = self._error.copy().tolist()
tmp_error_dict = {
index: value
for index, value in enumerate(tmp_error)
if self._is_unbound(index)
}
if self._e(i1) >= 0:
i2 = min(tmp_error_dict, key=lambda index: tmp_error_dict[index])
else:
i2 = max(tmp_error_dict, key=lambda index: tmp_error_dict[index])
cmd = yield i1, i2
if cmd is None:
return
for i2 in np.roll(self.unbound, np.random.choice(self.length)):
cmd = yield i1, i2
if cmd is None:
return
for i2 in np.roll(self._all_samples, np.random.choice(self.length)):
cmd = yield i1, i2
if cmd is None:
return
# Get the new alpha2 and new alpha1
def _get_new_alpha(self, i1, i2, a1, a2, e1, e2, y1, y2):
k = self._k
if i1 == i2:
return None, None
# calculate L and H which bound the new alpha2
s = y1 * y2
if s == -1:
l, h = max(0.0, a2 - a1), min(self._c, self._c + a2 - a1)
else:
l, h = max(0.0, a2 + a1 - self._c), min(self._c, a2 + a1)
if l == h:
return None, None
# calculate eta
k11 = k(i1, i1)
k22 = k(i2, i2)
k12 = k(i1, i2)
# select the new alpha2 which could get the minimal objectives
if (eta := k11 + k22 - 2.0 * k12) > 0.0:
a2_new_unc = a2 + (y2 * (e1 - e2)) / eta
# a2_new has a boundary
if a2_new_unc >= h:
a2_new = h
elif a2_new_unc <= l:
a2_new = l
else:
a2_new = a2_new_unc
else:
b = self._b
l1 = a1 + s * (a2 - l)
h1 = a1 + s * (a2 - h)
# way 1
f1 = y1 * (e1 + b) - a1 * k(i1, i1) - s * a2 * k(i1, i2)
f2 = y2 * (e2 + b) - a2 * k(i2, i2) - s * a1 * k(i1, i2)
ol = (
l1 * f1
+ l * f2
+ 1 / 2 * l1**2 * k(i1, i1)
+ 1 / 2 * l**2 * k(i2, i2)
+ s * l * l1 * k(i1, i2)
)
oh = (
h1 * f1
+ h * f2
+ 1 / 2 * h1**2 * k(i1, i1)
+ 1 / 2 * h**2 * k(i2, i2)
+ s * h * h1 * k(i1, i2)
)
"""
# way 2
Use objective function check which alpha2 new could get the minimal
objectives
"""
if ol < (oh - self._eps):
a2_new = l
elif ol > oh + self._eps:
a2_new = h
else:
a2_new = a2
# a1_new has a boundary too
a1_new = a1 + s * (a2 - a2_new)
if a1_new < 0:
a2_new += s * a1_new
a1_new = 0
if a1_new > self._c:
a2_new += s * (a1_new - self._c)
a1_new = self._c
return a1_new, a2_new
# Normalise data using min_max way
def _norm(self, data):
if self._init:
self._min = np.min(data, axis=0)
self._max = np.max(data, axis=0)
self._init = False
return (data - self._min) / (self._max - self._min)
else:
return (data - self._min) / (self._max - self._min)
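    # Example: a feature column [2, 4, 6] is scaled to [0.0, 0.5, 1.0]; the
    # min/max recorded on the first (training) call are reused afterwards, so
    # test data is normalised on the training scale.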
def _is_unbound(self, index):
return bool(0.0 < self.alphas[index] < self._c)
def _is_support(self, index):
return bool(self.alphas[index] > 0)
@property
def unbound(self):
return self._unbound
@property
def support(self):
return [i for i in range(self.length) if self._is_support(i)]
@property
def length(self):
return self.samples.shape[0]
class Kernel:
def __init__(self, kernel, degree=1.0, coef0=0.0, gamma=1.0):
self.degree = np.float64(degree)
self.coef0 = np.float64(coef0)
self.gamma = np.float64(gamma)
self._kernel_name = kernel
self._kernel = self._get_kernel(kernel_name=kernel)
self._check()
def _polynomial(self, v1, v2):
return (self.gamma * np.inner(v1, v2) + self.coef0) ** self.degree
def _linear(self, v1, v2):
return np.inner(v1, v2) + self.coef0
def _rbf(self, v1, v2):
return np.exp(-1 * (self.gamma * np.linalg.norm(v1 - v2) ** 2))
def _check(self):
if self._kernel == self._rbf and self.gamma < 0:
raise ValueError("gamma value must greater than 0")
def _get_kernel(self, kernel_name):
maps = {"linear": self._linear, "poly": self._polynomial, "rbf": self._rbf}
return maps[kernel_name]
def __call__(self, v1, v2):
return self._kernel(v1, v2)
def __repr__(self):
return self._kernel_name
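# A minimal usage sketch of the Kernel class (values chosen for illustration):
# an RBF kernel with gamma=0.5 evaluated on two 2-d points gives
# exp(-0.5 * ||(0, 0) - (1, 1)||^2) = exp(-1) ~= 0.3679:
#     rbf = Kernel(kernel="rbf", gamma=0.5)
#     rbf(np.array([0.0, 0.0]), np.array([1.0, 1.0]))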
def count_time(func):
def call_func(*args, **kwargs):
import time
start_time = time.time()
func(*args, **kwargs)
end_time = time.time()
print(f"smo algorithm cost {end_time - start_time} seconds")
return call_func
@count_time
def test_cancer_data():
print("Hello!\nStart test svm by smo algorithm!")
# 0: download dataset and load into pandas' dataframe
    if not os.path.exists(r"cancer_data.csv"):
request = urllib.request.Request( # noqa: S310
CANCER_DATASET_URL,
headers={"User-Agent": "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)"},
)
response = urllib.request.urlopen(request) # noqa: S310
content = response.read().decode("utf-8")
with open(r"cancel_data.csv", "w") as f:
f.write(content)
    data = pd.read_csv(r"cancer_data.csv", header=None)
# 1: pre-processing data
del data[data.columns.tolist()[0]]
data = data.dropna(axis=0)
data = data.replace({"M": np.float64(1), "B": np.float64(-1)})
samples = np.array(data)[:, :]
# 2: dividing data into train_data data and test_data data
train_data, test_data = samples[:328, :], samples[328:, :]
test_tags, test_samples = test_data[:, 0], test_data[:, 1:]
# 3: choose kernel function,and set initial alphas to zero(optional)
mykernel = Kernel(kernel="rbf", degree=5, coef0=1, gamma=0.5)
al = np.zeros(train_data.shape[0])
# 4: calculating best alphas using SMO algorithm and predict test_data samples
mysvm = SmoSVM(
train=train_data,
kernel_func=mykernel,
alpha_list=al,
cost=0.4,
b=0.0,
tolerance=0.001,
)
mysvm.fit()
predict = mysvm.predict(test_samples)
# 5: check accuracy
score = 0
test_num = test_tags.shape[0]
for i in range(test_tags.shape[0]):
if test_tags[i] == predict[i]:
score += 1
print(f"\nall: {test_num}\nright: {score}\nfalse: {test_num - score}")
print(f"Rough Accuracy: {score / test_tags.shape[0]}")
def test_demonstration():
# change stdout
print("\nStart plot,please wait!!!")
sys.stdout = open(os.devnull, "w")
ax1 = plt.subplot2grid((2, 2), (0, 0))
ax2 = plt.subplot2grid((2, 2), (0, 1))
ax3 = plt.subplot2grid((2, 2), (1, 0))
ax4 = plt.subplot2grid((2, 2), (1, 1))
ax1.set_title("linear svm,cost:0.1")
test_linear_kernel(ax1, cost=0.1)
ax2.set_title("linear svm,cost:500")
test_linear_kernel(ax2, cost=500)
ax3.set_title("rbf kernel svm,cost:0.1")
test_rbf_kernel(ax3, cost=0.1)
ax4.set_title("rbf kernel svm,cost:500")
test_rbf_kernel(ax4, cost=500)
sys.stdout = sys.__stdout__
print("Plot done!!!")
def test_linear_kernel(ax, cost):
train_x, train_y = make_blobs(
n_samples=500, centers=2, n_features=2, random_state=1
)
train_y[train_y == 0] = -1
scaler = StandardScaler()
train_x_scaled = scaler.fit_transform(train_x, train_y)
train_data = np.hstack((train_y.reshape(500, 1), train_x_scaled))
mykernel = Kernel(kernel="linear", degree=5, coef0=1, gamma=0.5)
mysvm = SmoSVM(
train=train_data,
kernel_func=mykernel,
cost=cost,
tolerance=0.001,
auto_norm=False,
)
mysvm.fit()
plot_partition_boundary(mysvm, train_data, ax=ax)
def test_rbf_kernel(ax, cost):
train_x, train_y = make_circles(
n_samples=500, noise=0.1, factor=0.1, random_state=1
)
train_y[train_y == 0] = -1
scaler = StandardScaler()
train_x_scaled = scaler.fit_transform(train_x, train_y)
train_data = np.hstack((train_y.reshape(500, 1), train_x_scaled))
mykernel = Kernel(kernel="rbf", degree=5, coef0=1, gamma=0.5)
mysvm = SmoSVM(
train=train_data,
kernel_func=mykernel,
cost=cost,
tolerance=0.001,
auto_norm=False,
)
mysvm.fit()
plot_partition_boundary(mysvm, train_data, ax=ax)
def plot_partition_boundary(
model, train_data, ax, resolution=100, colors=("b", "k", "r")
):
"""
    We cannot get the optimum w of our kernel svm model, which is different from
    a linear svm. For this reason, we generate randomly distributed points with
    high density, and the predicted values of these points are calculated using
    our trained model. Then we can use these predicted values to draw a contour
    map, and this contour map represents the svm's partition boundary.
"""
train_data_x = train_data[:, 1]
train_data_y = train_data[:, 2]
train_data_tags = train_data[:, 0]
xrange = np.linspace(train_data_x.min(), train_data_x.max(), resolution)
yrange = np.linspace(train_data_y.min(), train_data_y.max(), resolution)
test_samples = np.array([(x, y) for x in xrange for y in yrange]).reshape(
resolution * resolution, 2
)
test_tags = model.predict(test_samples, classify=False)
grid = test_tags.reshape((len(xrange), len(yrange)))
# Plot contour map which represents the partition boundary
ax.contour(
xrange,
yrange,
        grid.T,
levels=(-1, 0, 1),
linestyles=("--", "-", "--"),
linewidths=(1, 1, 1),
colors=colors,
)
# Plot all train samples
ax.scatter(
train_data_x,
train_data_y,
c=train_data_tags,
cmap=plt.cm.Dark2,
lw=0,
alpha=0.5,
)
# Plot support vectors
support = model.support
ax.scatter(
train_data_x[support],
train_data_y[support],
c=train_data_tags[support],
cmap=plt.cm.Dark2,
)
if __name__ == "__main__":
    test_cancer_data()
test_demonstration()
plt.show()
"""
Similarity Search : https://en.wikipedia.org/wiki/Similarity_search
Similarity search is a search algorithm for finding the nearest vector from
vectors, used in natural language processing.
In this algorithm, it calculates distance with euclidean distance and returns
a list containing two data for each vector:
    1. the nearest vector
    2. distance between the vector and the nearest vector (float)
"""
from __future__ import annotations
import math
import numpy as np
from numpy.linalg import norm
def euclidean(input_a: np.ndarray, input_b: np.ndarray) -> float:
"""
Calculates euclidean distance between two data.
:param input_a: ndarray of first vector.
:param input_b: ndarray of second vector.
:return: Euclidean distance of input_a and input_b. By using math.sqrt(),
result will be float.
>>> euclidean(np.array([0]), np.array([1]))
1.0
>>> euclidean(np.array([0, 1]), np.array([1, 1]))
1.0
>>> euclidean(np.array([0, 0, 0]), np.array([0, 0, 1]))
1.0
"""
return math.sqrt(sum(pow(a - b, 2) for a, b in zip(input_a, input_b)))
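# Note: this is equivalent to float(np.linalg.norm(input_a - input_b)); the
# explicit sum keeps the distance formula visible.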
def similarity_search(
dataset: np.ndarray, value_array: np.ndarray
) -> list[list[list[float] | float]]:
"""
:param dataset: Set containing the vectors. Should be ndarray.
:param value_array: vector/vectors we want to know the nearest vector from dataset.
:return: Result will be a list containing
1. the nearest vector
2. distance from the vector
>>> dataset = np.array([[0], [1], [2]])
>>> value_array = np.array([[0]])
>>> similarity_search(dataset, value_array)
[[[0], 0.0]]
>>> dataset = np.array([[0, 0], [1, 1], [2, 2]])
>>> value_array = np.array([[0, 1]])
>>> similarity_search(dataset, value_array)
[[[0, 0], 1.0]]
>>> dataset = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]])
>>> value_array = np.array([[0, 0, 1]])
>>> similarity_search(dataset, value_array)
[[[0, 0, 0], 1.0]]
>>> dataset = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]])
>>> value_array = np.array([[0, 0, 0], [0, 0, 1]])
>>> similarity_search(dataset, value_array)
[[[0, 0, 0], 0.0], [[0, 0, 0], 1.0]]
These are the errors that might occur:
1. If dimensions are different.
For example, dataset has 2d array and value_array has 1d array:
>>> dataset = np.array([[1]])
>>> value_array = np.array([1])
>>> similarity_search(dataset, value_array)
Traceback (most recent call last):
...
ValueError: Wrong input data's dimensions... dataset : 2, value_array : 1
2. If data's shapes are different.
For example, dataset has shape of (3, 2) and value_array has (2, 3).
We are expecting same shapes of two arrays, so it is wrong.
>>> dataset = np.array([[0, 0], [1, 1], [2, 2]])
>>> value_array = np.array([[0, 0, 0], [0, 0, 1]])
>>> similarity_search(dataset, value_array)
Traceback (most recent call last):
...
ValueError: Wrong input data's shape... dataset : 2, value_array : 3
3. If data types are different.
When trying to compare, we are expecting same types so they should be same.
If not, it'll come up with errors.
>>> dataset = np.array([[0, 0], [1, 1], [2, 2]], dtype=np.float32)
>>> value_array = np.array([[0, 0], [0, 1]], dtype=np.int32)
>>> similarity_search(dataset, value_array) # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
TypeError: Input data have different datatype...
dataset : float32, value_array : int32
"""
if dataset.ndim != value_array.ndim:
msg = (
"Wrong input data's dimensions... "
f"dataset : {dataset.ndim}, value_array : {value_array.ndim}"
)
raise ValueError(msg)
try:
if dataset.shape[1] != value_array.shape[1]:
msg = (
"Wrong input data's shape... "
f"dataset : {dataset.shape[1]}, value_array : {value_array.shape[1]}"
)
raise ValueError(msg)
    except IndexError:
        # both arrays are 1-D (no second axis), so there is no per-row length
        # to compare; the earlier ndim check already ensured matching shapes
        pass
if dataset.dtype != value_array.dtype:
msg = (
"Input data have different datatype... "
f"dataset : {dataset.dtype}, value_array : {value_array.dtype}"
)
raise TypeError(msg)
answer = []
for value in value_array:
dist = euclidean(value, dataset[0])
vector = dataset[0].tolist()
for dataset_value in dataset[1:]:
temp_dist = euclidean(value, dataset_value)
if dist > temp_dist:
dist = temp_dist
vector = dataset_value.tolist()
answer.append([vector, dist])
return answer
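# Design note: this is a brute-force scan, O(len(dataset) * d) per query
# vector; for large datasets, tree- or graph-based approximate indexes
# (e.g. KD-trees, HNSW) are the usual way to speed up nearest-vector search.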
def cosine_similarity(input_a: np.ndarray, input_b: np.ndarray) -> float:
"""
Calculates cosine similarity between two data.
:param input_a: ndarray of first vector.
:param input_b: ndarray of second vector.
:return: Cosine similarity of input_a and input_b. By using math.sqrt(),
result will be float.
>>> cosine_similarity(np.array([1]), np.array([1]))
1.0
>>> cosine_similarity(np.array([1, 2]), np.array([6, 32]))
0.9615239476408232
"""
return np.dot(input_a, input_b) / (norm(input_a) * norm(input_b))
if __name__ == "__main__":
import doctest
doctest.testmod()
import numpy as np
from numpy import ndarray
from scipy.optimize import Bounds, LinearConstraint, minimize
def norm_squared(vector: ndarray) -> float:
"""
Return the squared second norm of vector
norm_squared(v) = sum(x * x for x in v)
Args:
vector (ndarray): input vector
Returns:
float: squared second norm of vector
>>> norm_squared([1, 2])
5
>>> norm_squared(np.asarray([1, 2]))
5
>>> norm_squared([0, 0])
0
"""
return np.dot(vector, vector)
class SVC:
"""
Support Vector Classifier
Args:
kernel (str): kernel to use. Default: linear
Possible choices:
- linear
regularization: constraint for soft margin (data not linearly separable)
Default: unbound
>>> SVC(kernel="asdf")
Traceback (most recent call last):
...
ValueError: Unknown kernel: asdf
>>> SVC(kernel="rbf")
Traceback (most recent call last):
...
ValueError: rbf kernel requires gamma
>>> SVC(kernel="rbf", gamma=-1)
Traceback (most recent call last):
...
ValueError: gamma must be > 0
"""
def __init__(
self,
*,
regularization: float = np.inf,
kernel: str = "linear",
gamma: float = 0.0,
) -> None:
self.regularization = regularization
self.gamma = gamma
if kernel == "linear":
self.kernel = self.__linear
elif kernel == "rbf":
if self.gamma == 0:
raise ValueError("rbf kernel requires gamma")
if not isinstance(self.gamma, (float, int)):
raise ValueError("gamma must be float or int")
if not self.gamma > 0:
raise ValueError("gamma must be > 0")
self.kernel = self.__rbf
            # in the future, there could be a default value like in sklearn
            # sklearn: def_gamma = 1 / (n_features * X.var()) (wiki)
            # previously it was 1 / n_features
else:
msg = f"Unknown kernel: {kernel}"
raise ValueError(msg)
# kernels
def __linear(self, vector1: ndarray, vector2: ndarray) -> float:
"""Linear kernel (as if no kernel used at all)"""
return np.dot(vector1, vector2)
def __rbf(self, vector1: ndarray, vector2: ndarray) -> float:
"""
RBF: Radial Basis Function Kernel
Note: for more information see:
https://en.wikipedia.org/wiki/Radial_basis_function_kernel
Args:
vector1 (ndarray): first vector
            vector2 (ndarray): second vector
Returns:
float: exp(-(gamma * norm_squared(vector1 - vector2)))
"""
return np.exp(-(self.gamma * norm_squared(vector1 - vector2)))
def fit(self, observations: list[ndarray], classes: ndarray) -> None:
"""
Fits the SVC with a set of observations.
Args:
observations (list[ndarray]): list of observations
classes (ndarray): classification of each observation (in {1, -1})
"""
self.observations = observations
self.classes = classes
# using Wolfe's Dual to calculate w.
# Primal problem: minimize 1/2*norm_squared(w)
# constraint: yn(w . xn + b) >= 1
#
# With l a vector
# Dual problem: maximize sum_n(ln) -
# 1/2 * sum_n(sum_m(ln*lm*yn*ym*xn . xm))
# constraint: self.C >= ln >= 0
# and sum_n(ln*yn) = 0
# Then we get w using w = sum_n(ln*yn*xn)
# At the end we can get b ~= mean(yn - w . xn)
#
# Since we use kernels, we only need l_star to calculate b
# and to classify observations
(n,) = np.shape(classes)
def to_minimize(candidate: ndarray) -> float:
"""
Opposite of the function to maximize
Args:
candidate (ndarray): candidate array to test
Return:
float: Wolfe's Dual result to minimize
"""
s = 0
(n,) = np.shape(candidate)
for i in range(n):
for j in range(n):
s += (
candidate[i]
* candidate[j]
* classes[i]
* classes[j]
* self.kernel(observations[i], observations[j])
)
return 1 / 2 * s - sum(candidate)
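        # Note: to_minimize re-evaluates every kernel value on each call; for
        # larger datasets one would precompute the n x n Gram matrix once and
        # reuse it inside the objective.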
        ly_constraint = LinearConstraint(classes, 0, 0)
l_bounds = Bounds(0, self.regularization)
l_star = minimize(
            to_minimize, np.ones(n), bounds=l_bounds, constraints=[ly_constraint]
).x
self.optimum = l_star
# calculating mean offset of separation plane to points
s = 0
for i in range(n):
for j in range(n):
s += classes[i] - classes[i] * self.optimum[i] * self.kernel(
observations[i], observations[j]
)
self.offset = s / n
def predict(self, observation: ndarray) -> int:
"""
Get the expected class of an observation
Args:
observation (Vector): observation
Returns:
int {1, -1}: expected class
>>> xs = [
... np.asarray([0, 1]), np.asarray([0, 2]),
... np.asarray([1, 1]), np.asarray([1, 2])
... ]
>>> y = np.asarray([1, 1, -1, -1])
>>> s = SVC()
>>> s.fit(xs, y)
>>> s.predict(np.asarray([0, 1]))
1
>>> s.predict(np.asarray([1, 1]))
-1
>>> s.predict(np.asarray([2, 2]))
-1
"""
s = sum(
self.optimum[n]
* self.classes[n]
* self.kernel(self.observations[n], observation)
for n in range(len(self.classes))
)
return 1 if s + self.offset >= 0 else -1
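# A hedged usage sketch (not a doctest; the expected label is an assumption):
# the four points from predict()'s doctest should also be separable with the
# RBF kernel, gamma being an illustrative choice here.
#     s = SVC(kernel="rbf", gamma=1.0)
#     s.fit(xs, y)
#     s.predict(np.asarray([0, 2]))  # expected: 1 (a training point of class 1)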
if __name__ == "__main__":
import doctest
doctest.testmod()
import string
from math import log10
"""
tf-idf Wikipedia: https://en.wikipedia.org/wiki/Tf%E2%80%93idf
tf-idf and other word frequency algorithms are often used
as a weighting factor in information retrieval and text
mining. 83% of text-based recommender systems use
tf-idf for term weighting. In Layman's terms, tf-idf
is a statistic intended to reflect how important a word
is to a document in a corpus (a collection of documents)
Here I've implemented several word frequency algorithms
that are commonly used in information retrieval: Term Frequency,
Document Frequency, and TF-IDF (Term-Frequency*Inverse-Document-Frequency)
are included.
Term Frequency is a statistical function that
returns a number representing how frequently
an expression occurs in a document. This
indicates how significant a particular term is in
a given document.
Document Frequency is a statistical function that returns
an integer representing the number of documents in a
corpus that a term occurs in (where the max number returned
would be the number of documents in the corpus).
Inverse Document Frequency is mathematically written as
log10(N/df), where N is the number of documents in your
corpus and df is the Document Frequency. If df is 0, a
ZeroDivisionError will be thrown.
Term-Frequency*Inverse-Document-Frequency is a measure
of the originality of a term. It is mathematically written
as tf*log10(N/df). It compares the number of times
a term appears in a document with the number of documents
the term appears in. If df is 0, a ZeroDivisionError will be thrown.
"""
def term_frequency(term: str, document: str) -> int:
"""
Return the number of times a term occurs within
a given document.
@params: term, the term to search a document for, and document,
the document to search within
@returns: an integer representing the number of times a term is
found within the document
@examples:
>>> term_frequency("to", "To be, or not to be")
2
"""
# strip all punctuation and newlines and replace it with ''
document_without_punctuation = document.translate(
str.maketrans("", "", string.punctuation)
).replace("\n", "")
tokenize_document = document_without_punctuation.split(" ") # word tokenization
return len([word for word in tokenize_document if word.lower() == term.lower()])
def document_frequency(term: str, corpus: str) -> tuple[int, int]:
"""
Calculate the number of documents in a corpus that contain a
given term
@params : term, the term to search each document for, and corpus, a collection of
documents. Each document should be separated by a newline.
@returns : the number of documents in the corpus that contain the term you are
searching for and the number of documents in the corpus
@examples :
>>> document_frequency("first", "This is the first document in the corpus.\\nThIs\
is the second document in the corpus.\\nTHIS is \
the third document in the corpus.")
(1, 3)
"""
corpus_without_punctuation = corpus.lower().translate(
str.maketrans("", "", string.punctuation)
) # strip all punctuation and replace it with ''
docs = corpus_without_punctuation.split("\n")
term = term.lower()
return (len([doc for doc in docs if term in doc]), len(docs))
def inverse_document_frequency(df: int, n: int, smoothing: bool = False) -> float:
"""
    Return a float denoting the importance
of a word. This measure of importance is
calculated by log10(N/df), where N is the
number of documents and df is
the Document Frequency.
@params : df, the Document Frequency, N,
the number of documents in the corpus and
smoothing, if True return the idf-smooth
@returns : log10(N/df) or 1+log10(N/1+df)
@examples :
>>> inverse_document_frequency(3, 0)
Traceback (most recent call last):
...
ValueError: log10(0) is undefined.
>>> inverse_document_frequency(1, 3)
0.477
>>> inverse_document_frequency(0, 3)
Traceback (most recent call last):
...
ZeroDivisionError: df must be > 0
>>> inverse_document_frequency(0, 3,True)
1.477
"""
if smoothing:
if n == 0:
raise ValueError("log10(0) is undefined.")
return round(1 + log10(n / (1 + df)), 3)
if df == 0:
raise ZeroDivisionError("df must be > 0")
elif n == 0:
raise ValueError("log10(0) is undefined.")
return round(log10(n / df), 3)
def tf_idf(tf: int, idf: float) -> float:
"""
Combine the term frequency
and inverse document frequency functions to
calculate the originality of a term. This
'originality' is calculated by multiplying
the term frequency and the inverse document
frequency : tf-idf = TF * IDF
@params : tf, the term frequency, and idf, the inverse document
frequency
@examples :
>>> tf_idf(2, 0.477)
0.954
"""
return round(tf * idf, 3)
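if __name__ == "__main__":
    # A short end-to-end demonstration chaining the functions above (the
    # corpus text here is illustrative, not part of the original module).
    corpus = "the first document\nthe second document\nthe third one"
    tf = term_frequency("document", "the first document")  # 1
    df, n = document_frequency("document", corpus)  # (2, 3)
    idf = inverse_document_frequency(df, n)  # round(log10(3 / 2), 3) = 0.176
    print(tf_idf(tf, idf))  # 0.176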
# XGBoost Classifier Example
import numpy as np
from matplotlib import pyplot as plt
from sklearn.datasets import load_iris
from sklearn.metrics import ConfusionMatrixDisplay
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
def data_handling(data: dict) -> tuple:
# Split dataset into features and target
# data is features
"""
>>> data_handling(({'data':'[5.1, 3.5, 1.4, 0.2]','target':([0])}))
('[5.1, 3.5, 1.4, 0.2]', [0])
>>> data_handling(
... {'data': '[4.9, 3.0, 1.4, 0.2], [4.7, 3.2, 1.3, 0.2]', 'target': ([0, 0])}
... )
('[4.9, 3.0, 1.4, 0.2], [4.7, 3.2, 1.3, 0.2]', [0, 0])
"""
return (data["data"], data["target"])
def xgboost(features: np.ndarray, target: np.ndarray) -> XGBClassifier:
"""
# THIS TEST IS BROKEN!! >>> xgboost(np.array([[5.1, 3.6, 1.4, 0.2]]), np.array([0]))
XGBClassifier(base_score=0.5, booster='gbtree', callbacks=None,
colsample_bylevel=1, colsample_bynode=1, colsample_bytree=1,
early_stopping_rounds=None, enable_categorical=False,
eval_metric=None, gamma=0, gpu_id=-1, grow_policy='depthwise',
importance_type=None, interaction_constraints='',
learning_rate=0.300000012, max_bin=256, max_cat_to_onehot=4,
max_delta_step=0, max_depth=6, max_leaves=0, min_child_weight=1,
missing=nan, monotone_constraints='()', n_estimators=100,
n_jobs=0, num_parallel_tree=1, predictor='auto', random_state=0,
reg_alpha=0, reg_lambda=1, ...)
"""
classifier = XGBClassifier()
classifier.fit(features, target)
return classifier
def main() -> None:
"""
>>> main()
Url for the algorithm:
https://xgboost.readthedocs.io/en/stable/
Iris type dataset is used to demonstrate algorithm.
"""
# Load Iris dataset
iris = load_iris()
features, targets = data_handling(iris)
x_train, x_test, y_train, y_test = train_test_split(
features, targets, test_size=0.25
)
names = iris["target_names"]
# Create an XGBoost Classifier from the training data
xgboost_classifier = xgboost(x_train, y_train)
# Display the confusion matrix of the classifier with both training and test sets
ConfusionMatrixDisplay.from_estimator(
xgboost_classifier,
x_test,
y_test,
display_labels=names,
cmap="Blues",
normalize="true",
)
plt.title("Normalized Confusion Matrix - IRIS Dataset")
plt.show()
if __name__ == "__main__":
import doctest
doctest.testmod(verbose=True)
main()
# XGBoost Regressor Example
import numpy as np
from sklearn.datasets import fetch_california_housing
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor
def data_handling(data: dict) -> tuple:
# Split dataset into features and target. Data is features.
"""
>>> data_handling((
... {'data':'[ 8.3252 41. 6.9841269 1.02380952 322. 2.55555556 37.88 -122.23 ]'
... ,'target':([4.526])}))
('[ 8.3252 41. 6.9841269 1.02380952 322. 2.55555556 37.88 -122.23 ]', [4.526])
"""
return (data["data"], data["target"])
def xgboost(
features: np.ndarray, target: np.ndarray, test_features: np.ndarray
) -> np.ndarray:
"""
>>> xgboost(np.array([[ 2.3571 , 52. , 6.00813008, 1.06775068,
... 907. , 2.45799458, 40.58 , -124.26]]),np.array([1.114]),
... np.array([[1.97840000e+00, 3.70000000e+01, 4.98858447e+00, 1.03881279e+00,
... 1.14300000e+03, 2.60958904e+00, 3.67800000e+01, -1.19780000e+02]]))
array([[1.1139996]], dtype=float32)
"""
xgb = XGBRegressor(
verbosity=0, random_state=42, tree_method="exact", base_score=0.5
)
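    # The fixed random_state, tree_method and base_score above are presumably
    # chosen to make the fitted model deterministic, so that the doctest
    # output stays reproducible across runs.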
xgb.fit(features, target)
# Predict target for test data
predictions = xgb.predict(test_features)
predictions = predictions.reshape(len(predictions), 1)
return predictions
def main() -> None:
"""
The URL for this algorithm
https://xgboost.readthedocs.io/en/stable/
California house price dataset is used to demonstrate the algorithm.
Expected error values:
Mean Absolute Error: 0.30957163379906033
Mean Square Error: 0.22611560196662744
"""
# Load California house price dataset
california = fetch_california_housing()
data, target = data_handling(california)
x_train, x_test, y_train, y_test = train_test_split(
data, target, test_size=0.25, random_state=1
)
predictions = xgboost(x_train, y_train, x_test)
# Error printing
print(f"Mean Absolute Error: {mean_absolute_error(y_test, predictions)}")
print(f"Mean Square Error: {mean_squared_error(y_test, predictions)}")
if __name__ == "__main__":
import doctest
doctest.testmod(verbose=True)
main()
def abs_val(num: float) -> float:
"""
Find the absolute value of a number.
>>> abs_val(-5.1)
5.1
>>> abs_val(-5) == abs_val(5)
True
>>> abs_val(0)
0
"""
return -num if num < 0 else num
def abs_min(x: list[int]) -> int:
"""
>>> abs_min([0,5,1,11])
0
>>> abs_min([3,-10,-2])
-2
>>> abs_min([])
Traceback (most recent call last):
...
ValueError: abs_min() arg is an empty sequence
"""
if len(x) == 0:
raise ValueError("abs_min() arg is an empty sequence")
j = x[0]
for i in x:
if abs_val(i) < abs_val(j):
j = i
return j
def abs_max(x: list[int]) -> int:
"""
>>> abs_max([0,5,1,11])
11
>>> abs_max([3,-10,-2])
-10
>>> abs_max([])
Traceback (most recent call last):
...
ValueError: abs_max() arg is an empty sequence
"""
if len(x) == 0:
raise ValueError("abs_max() arg is an empty sequence")
j = x[0]
for i in x:
if abs(i) > abs(j):
j = i
return j
def abs_max_sort(x: list[int]) -> int:
"""
>>> abs_max_sort([0,5,1,11])
11
>>> abs_max_sort([3,-10,-2])
-10
>>> abs_max_sort([])
Traceback (most recent call last):
...
ValueError: abs_max_sort() arg is an empty sequence
"""
if len(x) == 0:
raise ValueError("abs_max_sort() arg is an empty sequence")
return sorted(x, key=abs)[-1]
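# Note: abs_max_sort sorts the whole list, costing O(n log n), while abs_max
# above finds the answer in a single O(n) pass; max(x, key=abs) would do the
# same in one line (ties between n and -n may resolve differently).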
def test_abs_val():
"""
>>> test_abs_val()
"""
assert abs_val(0) == 0
assert abs_val(34) == 34
assert abs_val(-100000000000) == 100000000000
a = [-3, -1, 2, -11]
assert abs_max(a) == -11
assert abs_max_sort(a) == -11
assert abs_min(a) == -1
if __name__ == "__main__":
import doctest
doctest.testmod()
test_abs_val()
print(abs_val(-34)) # --> 34
"""
Illustrate how to add the integer without arithmetic operation
Author: suraj Kumar
Time Complexity: 1
https://en.wikipedia.org/wiki/Bitwise_operation
"""
def add(first: int, second: int) -> int:
"""
Implementation of addition of integer
Examples:
>>> add(3, 5)
8
>>> add(13, 5)
18
>>> add(-7, 2)
-5
>>> add(0, -7)
-7
>>> add(-321, 0)
-321
"""
while second != 0:
c = first & second
first ^= second
second = c << 1
return first
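# Worked trace of add(3, 5): the AND picks out the carry bits, XOR adds
# without carrying, and the shift moves the carries one place left:
#   first=0b0011, second=0b0101 -> carry=0b0001, first=0b0110, second=0b0010
#   first=0b0110, second=0b0010 -> carry=0b0010, first=0b0100, second=0b0100
#   first=0b0100, second=0b0100 -> carry=0b0100, first=0b0000, second=0b1000
#   first=0b1000, second=0b0000 -> no bits overlap, loop ends, result 8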
if __name__ == "__main__":
import doctest
doctest.testmod()
first = int(input("Enter the first number: ").strip())
second = int(input("Enter the second number: ").strip())
print(f"{add(first, second) = }")
def aliquot_sum(input_num: int) -> int:
"""
Finds the aliquot sum of an input integer, where the
aliquot sum of a number n is defined as the sum of all
natural numbers less than n that divide n evenly. For
example, the aliquot sum of 15 is 1 + 3 + 5 = 9. This is
a simple O(n) implementation.
@param input_num: a positive integer whose aliquot sum is to be found
@return: the aliquot sum of input_num, if input_num is positive.
Otherwise, raise a ValueError
Wikipedia Explanation: https://en.wikipedia.org/wiki/Aliquot_sum
>>> aliquot_sum(15)
9
>>> aliquot_sum(6)
6
>>> aliquot_sum(-1)
Traceback (most recent call last):
...
ValueError: Input must be positive
>>> aliquot_sum(0)
Traceback (most recent call last):
...
ValueError: Input must be positive
>>> aliquot_sum(1.6)
Traceback (most recent call last):
...
ValueError: Input must be an integer
>>> aliquot_sum(12)
16
>>> aliquot_sum(1)
0
>>> aliquot_sum(19)
1
"""
if not isinstance(input_num, int):
raise ValueError("Input must be an integer")
if input_num <= 0:
raise ValueError("Input must be positive")
return sum(
divisor for divisor in range(1, input_num // 2 + 1) if input_num % divisor == 0
)
if __name__ == "__main__":
import doctest
doctest.testmod()
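    # Illustrative use (an added note): n is a perfect number when
    # aliquot_sum(n) == n.
    # [n for n in range(2, 10_000) if aliquot_sum(n) == n]  # -> [6, 28, 496, 8128]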
"""
In a multithreaded download, this algorithm could be used to provide
each worker thread with a block of non-overlapping bytes to download.
For example:
    for i in allocation_list:
        requests.get(url, headers={'Range': f'bytes={i}'})
"""
from __future__ import annotations
def allocation_num(number_of_bytes: int, partitions: int) -> list[str]:
"""
Divide a number of bytes into x partitions.
    :param number_of_bytes: the total number of bytes.
    :param partitions: the number of partitions to be allocated.
:return: list of bytes to be assigned to each worker thread
>>> allocation_num(16647, 4)
['1-4161', '4162-8322', '8323-12483', '12484-16647']
>>> allocation_num(50000, 5)
['1-10000', '10001-20000', '20001-30000', '30001-40000', '40001-50000']
>>> allocation_num(888, 999)
Traceback (most recent call last):
...
ValueError: partitions can not > number_of_bytes!
>>> allocation_num(888, -4)
Traceback (most recent call last):
...
ValueError: partitions must be a positive number!
"""
if partitions <= 0:
raise ValueError("partitions must be a positive number!")
if partitions > number_of_bytes:
raise ValueError("partitions can not > number_of_bytes!")
bytes_per_partition = number_of_bytes // partitions
allocation_list = []
for i in range(partitions):
start_bytes = i * bytes_per_partition + 1
end_bytes = (
number_of_bytes if i == partitions - 1 else (i + 1) * bytes_per_partition
)
allocation_list.append(f"{start_bytes}-{end_bytes}")
return allocation_list
if __name__ == "__main__":
import doctest
doctest.testmod()
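    # A minimal multithreaded-download sketch (an illustrative addition, not
    # part of the original module). `url` is a hypothetical placeholder and
    # `requests` is a third-party library; note that HTTP byte ranges are
    # 0-indexed, so a real client would shift these 1-based ranges down by one.
    #
    # import requests
    # from concurrent.futures import ThreadPoolExecutor
    #
    # def download_range(byte_range: str) -> bytes:
    #     return requests.get(url, headers={"Range": f"bytes={byte_range}"}).content
    #
    # with ThreadPoolExecutor(max_workers=4) as pool:
    #     parts = list(pool.map(download_range, allocation_num(16647, 4)))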
from math import pi
def arc_length(angle: int, radius: int) -> float:
"""
>>> arc_length(45, 5)
3.9269908169872414
>>> arc_length(120, 15)
31.415926535897928
>>> arc_length(90, 10)
15.707963267948966
"""
return 2 * pi * radius * (angle / 360)
if __name__ == "__main__":
print(arc_length(90, 10))
"""
Find the area of various geometric shapes.
Wikipedia reference: https://en.wikipedia.org/wiki/Area
"""
from math import pi, sqrt, tan
def surface_area_cube(side_length: float) -> float:
"""
Calculate the Surface Area of a Cube.
>>> surface_area_cube(1)
6
>>> surface_area_cube(1.6)
15.360000000000003
>>> surface_area_cube(0)
0
>>> surface_area_cube(3)
54
>>> surface_area_cube(-1)
Traceback (most recent call last):
...
ValueError: surface_area_cube() only accepts non-negative values
"""
if side_length < 0:
raise ValueError("surface_area_cube() only accepts non-negative values")
return 6 * side_length**2
def surface_area_cuboid(length: float, breadth: float, height: float) -> float:
"""
Calculate the Surface Area of a Cuboid.
>>> surface_area_cuboid(1, 2, 3)
22
>>> surface_area_cuboid(0, 0, 0)
0
>>> surface_area_cuboid(1.6, 2.6, 3.6)
38.56
>>> surface_area_cuboid(-1, 2, 3)
Traceback (most recent call last):
...
ValueError: surface_area_cuboid() only accepts non-negative values
>>> surface_area_cuboid(1, -2, 3)
Traceback (most recent call last):
...
ValueError: surface_area_cuboid() only accepts non-negative values
>>> surface_area_cuboid(1, 2, -3)
Traceback (most recent call last):
...
ValueError: surface_area_cuboid() only accepts non-negative values
"""
if length < 0 or breadth < 0 or height < 0:
raise ValueError("surface_area_cuboid() only accepts non-negative values")
return 2 * ((length * breadth) + (breadth * height) + (length * height))
def surface_area_sphere(radius: float) -> float:
"""
Calculate the Surface Area of a Sphere.
Wikipedia reference: https://en.wikipedia.org/wiki/Sphere
Formula: 4 * pi * r^2
>>> surface_area_sphere(5)
314.1592653589793
>>> surface_area_sphere(1)
12.566370614359172
>>> surface_area_sphere(1.6)
32.169908772759484
>>> surface_area_sphere(0)
0.0
>>> surface_area_sphere(-1)
Traceback (most recent call last):
...
ValueError: surface_area_sphere() only accepts non-negative values
"""
if radius < 0:
raise ValueError("surface_area_sphere() only accepts non-negative values")
return 4 * pi * radius**2
def surface_area_hemisphere(radius: float) -> float:
"""
Calculate the Surface Area of a Hemisphere.
Formula: 3 * pi * r^2
>>> surface_area_hemisphere(5)
235.61944901923448
>>> surface_area_hemisphere(1)
9.42477796076938
>>> surface_area_hemisphere(0)
0.0
>>> surface_area_hemisphere(1.1)
11.40398133253095
>>> surface_area_hemisphere(-1)
Traceback (most recent call last):
...
ValueError: surface_area_hemisphere() only accepts non-negative values
"""
if radius < 0:
raise ValueError("surface_area_hemisphere() only accepts non-negative values")
return 3 * pi * radius**2
def surface_area_cone(radius: float, height: float) -> float:
"""
Calculate the Surface Area of a Cone.
Wikipedia reference: https://en.wikipedia.org/wiki/Cone
Formula: pi * r * (r + (h ** 2 + r ** 2) ** 0.5)
>>> surface_area_cone(10, 24)
1130.9733552923256
>>> surface_area_cone(6, 8)
301.59289474462014
>>> surface_area_cone(1.6, 2.6)
23.387862992395807
>>> surface_area_cone(0, 0)
0.0
>>> surface_area_cone(-1, -2)
Traceback (most recent call last):
...
ValueError: surface_area_cone() only accepts non-negative values
>>> surface_area_cone(1, -2)
Traceback (most recent call last):
...
ValueError: surface_area_cone() only accepts non-negative values
>>> surface_area_cone(-1, 2)
Traceback (most recent call last):
...
ValueError: surface_area_cone() only accepts non-negative values
"""
if radius < 0 or height < 0:
raise ValueError("surface_area_cone() only accepts non-negative values")
return pi * radius * (radius + (height**2 + radius**2) ** 0.5)
def surface_area_conical_frustum(
radius_1: float, radius_2: float, height: float
) -> float:
"""
Calculate the Surface Area of a Conical Frustum.
>>> surface_area_conical_frustum(1, 2, 3)
45.511728065337266
>>> surface_area_conical_frustum(4, 5, 6)
300.7913575056268
>>> surface_area_conical_frustum(0, 0, 0)
0.0
>>> surface_area_conical_frustum(1.6, 2.6, 3.6)
78.57907060751548
>>> surface_area_conical_frustum(-1, 2, 3)
Traceback (most recent call last):
...
ValueError: surface_area_conical_frustum() only accepts non-negative values
>>> surface_area_conical_frustum(1, -2, 3)
Traceback (most recent call last):
...
ValueError: surface_area_conical_frustum() only accepts non-negative values
>>> surface_area_conical_frustum(1, 2, -3)
Traceback (most recent call last):
...
ValueError: surface_area_conical_frustum() only accepts non-negative values
"""
if radius_1 < 0 or radius_2 < 0 or height < 0:
raise ValueError(
"surface_area_conical_frustum() only accepts non-negative values"
)
slant_height = (height**2 + (radius_1 - radius_2) ** 2) ** 0.5
return pi * ((slant_height * (radius_1 + radius_2)) + radius_1**2 + radius_2**2)
def surface_area_cylinder(radius: float, height: float) -> float:
"""
Calculate the Surface Area of a Cylinder.
Wikipedia reference: https://en.wikipedia.org/wiki/Cylinder
Formula: 2 * pi * r * (h + r)
>>> surface_area_cylinder(7, 10)
747.6990515543707
>>> surface_area_cylinder(1.6, 2.6)
42.22300526424682
>>> surface_area_cylinder(0, 0)
0.0
>>> surface_area_cylinder(6, 8)
527.7875658030853
>>> surface_area_cylinder(-1, -2)
Traceback (most recent call last):
...
ValueError: surface_area_cylinder() only accepts non-negative values
>>> surface_area_cylinder(1, -2)
Traceback (most recent call last):
...
ValueError: surface_area_cylinder() only accepts non-negative values
>>> surface_area_cylinder(-1, 2)
Traceback (most recent call last):
...
ValueError: surface_area_cylinder() only accepts non-negative values
"""
if radius < 0 or height < 0:
raise ValueError("surface_area_cylinder() only accepts non-negative values")
return 2 * pi * radius * (height + radius)
def surface_area_torus(torus_radius: float, tube_radius: float) -> float:
"""Calculate the Area of a Torus.
Wikipedia reference: https://en.wikipedia.org/wiki/Torus
:return 4pi^2 * torus_radius * tube_radius
>>> surface_area_torus(1, 1)
39.47841760435743
>>> surface_area_torus(4, 3)
473.7410112522892
>>> surface_area_torus(3, 4)
Traceback (most recent call last):
...
ValueError: surface_area_torus() does not support spindle or self intersecting tori
>>> surface_area_torus(1.6, 1.6)
101.06474906715503
>>> surface_area_torus(0, 0)
0.0
>>> surface_area_torus(-1, 1)
Traceback (most recent call last):
...
ValueError: surface_area_torus() only accepts non-negative values
>>> surface_area_torus(1, -1)
Traceback (most recent call last):
...
ValueError: surface_area_torus() only accepts non-negative values
"""
if torus_radius < 0 or tube_radius < 0:
raise ValueError("surface_area_torus() only accepts non-negative values")
if torus_radius < tube_radius:
raise ValueError(
"surface_area_torus() does not support spindle or self intersecting tori"
)
return 4 * pow(pi, 2) * torus_radius * tube_radius
def area_rectangle(length: float, width: float) -> float:
"""
Calculate the area of a rectangle.
>>> area_rectangle(10, 20)
200
>>> area_rectangle(1.6, 2.6)
4.16
>>> area_rectangle(0, 0)
0
>>> area_rectangle(-1, -2)
Traceback (most recent call last):
...
ValueError: area_rectangle() only accepts non-negative values
>>> area_rectangle(1, -2)
Traceback (most recent call last):
...
ValueError: area_rectangle() only accepts non-negative values
>>> area_rectangle(-1, 2)
Traceback (most recent call last):
...
ValueError: area_rectangle() only accepts non-negative values
"""
if length < 0 or width < 0:
raise ValueError("area_rectangle() only accepts non-negative values")
return length * width
def area_square(side_length: float) -> float:
"""
Calculate the area of a square.
>>> area_square(10)
100
>>> area_square(0)
0
>>> area_square(1.6)
2.5600000000000005
>>> area_square(-1)
Traceback (most recent call last):
...
ValueError: area_square() only accepts non-negative values
"""
if side_length < 0:
raise ValueError("area_square() only accepts non-negative values")
return side_length**2
def area_triangle(base: float, height: float) -> float:
"""
Calculate the area of a triangle given the base and height.
>>> area_triangle(10, 10)
50.0
>>> area_triangle(1.6, 2.6)
2.08
>>> area_triangle(0, 0)
0.0
>>> area_triangle(-1, -2)
Traceback (most recent call last):
...
ValueError: area_triangle() only accepts non-negative values
>>> area_triangle(1, -2)
Traceback (most recent call last):
...
ValueError: area_triangle() only accepts non-negative values
>>> area_triangle(-1, 2)
Traceback (most recent call last):
...
ValueError: area_triangle() only accepts non-negative values
"""
if base < 0 or height < 0:
raise ValueError("area_triangle() only accepts non-negative values")
return (base * height) / 2
def area_triangle_three_sides(side1: float, side2: float, side3: float) -> float:
"""
Calculate area of triangle when the length of 3 sides are known.
This function uses Heron's formula: https://en.wikipedia.org/wiki/Heron%27s_formula
>>> area_triangle_three_sides(5, 12, 13)
30.0
>>> area_triangle_three_sides(10, 11, 12)
51.521233486786784
>>> area_triangle_three_sides(0, 0, 0)
0.0
>>> area_triangle_three_sides(1.6, 2.6, 3.6)
1.8703742940919619
>>> area_triangle_three_sides(-1, -2, -1)
Traceback (most recent call last):
...
ValueError: area_triangle_three_sides() only accepts non-negative values
>>> area_triangle_three_sides(1, -2, 1)
Traceback (most recent call last):
...
ValueError: area_triangle_three_sides() only accepts non-negative values
>>> area_triangle_three_sides(2, 4, 7)
Traceback (most recent call last):
...
ValueError: Given three sides do not form a triangle
>>> area_triangle_three_sides(2, 7, 4)
Traceback (most recent call last):
...
ValueError: Given three sides do not form a triangle
>>> area_triangle_three_sides(7, 2, 4)
Traceback (most recent call last):
...
ValueError: Given three sides do not form a triangle
"""
if side1 < 0 or side2 < 0 or side3 < 0:
raise ValueError("area_triangle_three_sides() only accepts non-negative values")
elif side1 + side2 < side3 or side1 + side3 < side2 or side2 + side3 < side1:
raise ValueError("Given three sides do not form a triangle")
semi_perimeter = (side1 + side2 + side3) / 2
area = sqrt(
semi_perimeter
* (semi_perimeter - side1)
* (semi_perimeter - side2)
* (semi_perimeter - side3)
)
return area
def area_parallelogram(base: float, height: float) -> float:
"""
Calculate the area of a parallelogram.
>>> area_parallelogram(10, 20)
200
>>> area_parallelogram(1.6, 2.6)
4.16
>>> area_parallelogram(0, 0)
0
>>> area_parallelogram(-1, -2)
Traceback (most recent call last):
...
ValueError: area_parallelogram() only accepts non-negative values
>>> area_parallelogram(1, -2)
Traceback (most recent call last):
...
ValueError: area_parallelogram() only accepts non-negative values
>>> area_parallelogram(-1, 2)
Traceback (most recent call last):
...
ValueError: area_parallelogram() only accepts non-negative values
"""
if base < 0 or height < 0:
raise ValueError("area_parallelogram() only accepts non-negative values")
return base * height
def area_trapezium(base1: float, base2: float, height: float) -> float:
"""
Calculate the area of a trapezium.
>>> area_trapezium(10, 20, 30)
450.0
>>> area_trapezium(1.6, 2.6, 3.6)
7.5600000000000005
>>> area_trapezium(0, 0, 0)
0.0
>>> area_trapezium(-1, -2, -3)
Traceback (most recent call last):
...
ValueError: area_trapezium() only accepts non-negative values
>>> area_trapezium(-1, 2, 3)
Traceback (most recent call last):
...
ValueError: area_trapezium() only accepts non-negative values
>>> area_trapezium(1, -2, 3)
Traceback (most recent call last):
...
ValueError: area_trapezium() only accepts non-negative values
>>> area_trapezium(1, 2, -3)
Traceback (most recent call last):
...
ValueError: area_trapezium() only accepts non-negative values
>>> area_trapezium(-1, -2, 3)
Traceback (most recent call last):
...
ValueError: area_trapezium() only accepts non-negative values
>>> area_trapezium(1, -2, -3)
Traceback (most recent call last):
...
ValueError: area_trapezium() only accepts non-negative values
>>> area_trapezium(-1, 2, -3)
Traceback (most recent call last):
...
ValueError: area_trapezium() only accepts non-negative values
"""
if base1 < 0 or base2 < 0 or height < 0:
raise ValueError("area_trapezium() only accepts non-negative values")
return 1 / 2 * (base1 + base2) * height
def area_circle(radius: float) -> float:
"""
Calculate the area of a circle.
>>> area_circle(20)
1256.6370614359173
>>> area_circle(1.6)
8.042477193189871
>>> area_circle(0)
0.0
>>> area_circle(-1)
Traceback (most recent call last):
...
ValueError: area_circle() only accepts non-negative values
"""
if radius < 0:
raise ValueError("area_circle() only accepts non-negative values")
return pi * radius**2
def area_ellipse(radius_x: float, radius_y: float) -> float:
"""
    Calculate the area of an ellipse.
>>> area_ellipse(10, 10)
314.1592653589793
>>> area_ellipse(10, 20)
628.3185307179587
>>> area_ellipse(0, 0)
0.0
>>> area_ellipse(1.6, 2.6)
13.06902543893354
>>> area_ellipse(-10, 20)
Traceback (most recent call last):
...
ValueError: area_ellipse() only accepts non-negative values
>>> area_ellipse(10, -20)
Traceback (most recent call last):
...
ValueError: area_ellipse() only accepts non-negative values
>>> area_ellipse(-10, -20)
Traceback (most recent call last):
...
ValueError: area_ellipse() only accepts non-negative values
"""
if radius_x < 0 or radius_y < 0:
raise ValueError("area_ellipse() only accepts non-negative values")
return pi * radius_x * radius_y
def area_rhombus(diagonal_1: float, diagonal_2: float) -> float:
"""
Calculate the area of a rhombus.
>>> area_rhombus(10, 20)
100.0
>>> area_rhombus(1.6, 2.6)
2.08
>>> area_rhombus(0, 0)
0.0
>>> area_rhombus(-1, -2)
Traceback (most recent call last):
...
ValueError: area_rhombus() only accepts non-negative values
>>> area_rhombus(1, -2)
Traceback (most recent call last):
...
ValueError: area_rhombus() only accepts non-negative values
>>> area_rhombus(-1, 2)
Traceback (most recent call last):
...
ValueError: area_rhombus() only accepts non-negative values
"""
if diagonal_1 < 0 or diagonal_2 < 0:
raise ValueError("area_rhombus() only accepts non-negative values")
return 1 / 2 * diagonal_1 * diagonal_2
def area_reg_polygon(sides: int, length: float) -> float:
"""
Calculate the area of a regular polygon.
Wikipedia reference: https://en.wikipedia.org/wiki/Polygon#Regular_polygons
Formula: (n*s^2*cot(pi/n))/4
>>> area_reg_polygon(3, 10)
43.301270189221945
>>> area_reg_polygon(4, 10)
100.00000000000001
>>> area_reg_polygon(0, 0)
Traceback (most recent call last):
...
ValueError: area_reg_polygon() only accepts integers greater than or equal to \
three as number of sides
>>> area_reg_polygon(-1, -2)
Traceback (most recent call last):
...
ValueError: area_reg_polygon() only accepts integers greater than or equal to \
three as number of sides
>>> area_reg_polygon(5, -2)
Traceback (most recent call last):
...
ValueError: area_reg_polygon() only accepts non-negative values as \
length of a side
>>> area_reg_polygon(-1, 2)
Traceback (most recent call last):
...
ValueError: area_reg_polygon() only accepts integers greater than or equal to \
three as number of sides
"""
if not isinstance(sides, int) or sides < 3:
raise ValueError(
"area_reg_polygon() only accepts integers greater than or \
equal to three as number of sides"
)
elif length < 0:
raise ValueError(
"area_reg_polygon() only accepts non-negative values as \
length of a side"
)
    return (sides * length**2) / (4 * tan(pi / sides))
if __name__ == "__main__":
import doctest
doctest.testmod(verbose=True) # verbose so we can see methods missing tests
print("[DEMO] Areas of various geometric shapes: \n")
print(f"Rectangle: {area_rectangle(10, 20) = }")
print(f"Square: {area_square(10) = }")
print(f"Triangle: {area_triangle(10, 10) = }")
print(f"Triangle: {area_triangle_three_sides(5, 12, 13) = }")
print(f"Parallelogram: {area_parallelogram(10, 20) = }")
print(f"Rhombus: {area_rhombus(10, 20) = }")
print(f"Trapezium: {area_trapezium(10, 20, 30) = }")
print(f"Circle: {area_circle(20) = }")
print(f"Ellipse: {area_ellipse(10, 20) = }")
print("\nSurface Areas of various geometric shapes: \n")
print(f"Cube: {surface_area_cube(20) = }")
print(f"Cuboid: {surface_area_cuboid(10, 20, 30) = }")
print(f"Sphere: {surface_area_sphere(20) = }")
print(f"Hemisphere: {surface_area_hemisphere(20) = }")
print(f"Cone: {surface_area_cone(10, 20) = }")
print(f"Conical Frustum: {surface_area_conical_frustum(10, 20, 30) = }")
print(f"Cylinder: {surface_area_cylinder(10, 20) = }")
print(f"Torus: {surface_area_torus(20, 10) = }")
print(f"Equilateral Triangle: {area_reg_polygon(3, 10) = }")
print(f"Square: {area_reg_polygon(4, 10) = }")
print(f"Reqular Pentagon: {area_reg_polygon(5, 10) = }")
"""
Approximates the area under the curve using the trapezoidal rule.
"""
from __future__ import annotations
from collections.abc import Callable
def trapezoidal_area(
fnc: Callable[[float], float],
x_start: float,
x_end: float,
steps: int = 100,
) -> float:
"""
Treats curve as a collection of linear lines and sums the area of the
trapezium shape they form
:param fnc: a function which defines a curve
:param x_start: left end point to indicate the start of line segment
:param x_end: right end point to indicate end of line segment
:param steps: an accuracy gauge; more steps increases the accuracy
    :return: a float representing the approximate area under the curve
>>> def f(x):
... return 5
>>> f"{trapezoidal_area(f, 12.0, 14.0, 1000):.3f}"
'10.000'
>>> def f(x):
... return 9*x**2
>>> f"{trapezoidal_area(f, -4.0, 0, 10000):.4f}"
'192.0000'
>>> f"{trapezoidal_area(f, -4.0, 4.0, 10000):.4f}"
'384.0000'
"""
x1 = x_start
fx1 = fnc(x_start)
area = 0.0
for _ in range(steps):
# Approximates small segments of curve as linear and solve
# for trapezoidal area
x2 = (x_end - x_start) / steps + x1
fx2 = fnc(x2)
area += abs(fx2 + fx1) * (x2 - x1) / 2
# Increment step
x1 = x2
fx1 = fx2
return area
if __name__ == "__main__":
def f(x):
return x**3 + x**2
print("f(x) = x^3 + x^2")
print("The area between the curve, x = -5, x = 5 and the x axis is:")
i = 10
while i <= 100000:
print(f"with {i} steps: {trapezoidal_area(f, -5, 5, i)}")
i *= 10
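    # Sanity check (added for illustration): for a non-negative integrand the
    # result approaches the exact integral, e.g. the integral of x^2 over
    # [0, 3] is exactly 9.
    # assert abs(trapezoidal_area(lambda x: x**2, 0, 3, 10_000) - 9) < 1e-3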
def average_absolute_deviation(nums: list[int]) -> float:
"""
Return the average absolute deviation of a list of numbers.
Wiki: https://en.wikipedia.org/wiki/Average_absolute_deviation
>>> average_absolute_deviation([0])
0.0
>>> average_absolute_deviation([4, 1, 3, 2])
1.0
>>> average_absolute_deviation([2, 70, 6, 50, 20, 8, 4, 0])
20.0
>>> average_absolute_deviation([-20, 0, 30, 15])
16.25
>>> average_absolute_deviation([])
Traceback (most recent call last):
...
ValueError: List is empty
"""
if not nums: # Makes sure that the list is not empty
raise ValueError("List is empty")
average = sum(nums) / len(nums) # Calculate the average
return sum(abs(x - average) for x in nums) / len(nums)
if __name__ == "__main__":
import doctest
doctest.testmod()
from __future__ import annotations
def mean(nums: list) -> float:
"""
Find mean of a list of numbers.
Wiki: https://en.wikipedia.org/wiki/Mean
>>> mean([3, 6, 9, 12, 15, 18, 21])
12.0
>>> mean([5, 10, 15, 20, 25, 30, 35])
20.0
>>> mean([1, 2, 3, 4, 5, 6, 7, 8])
4.5
>>> mean([])
Traceback (most recent call last):
...
ValueError: List is empty
"""
if not nums:
raise ValueError("List is empty")
return sum(nums) / len(nums)
if __name__ == "__main__":
import doctest
doctest.testmod()
from __future__ import annotations
def median(nums: list) -> int | float:
"""
Find median of a list of numbers.
Wiki: https://en.wikipedia.org/wiki/Median
>>> median([0])
0
>>> median([4, 1, 3, 2])
2.5
>>> median([2, 70, 6, 50, 20, 8, 4])
8
Args:
nums: List of nums
Returns:
Median.
"""
# The sorted function returns list[SupportsRichComparisonT@sorted]
# which does not support `+`
sorted_list: list[int] = sorted(nums)
length = len(sorted_list)
mid_index = length >> 1
return (
(sorted_list[mid_index] + sorted_list[mid_index - 1]) / 2
if length % 2 == 0
else sorted_list[mid_index]
)
def main():
import doctest
doctest.testmod()
if __name__ == "__main__":
main()
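    # Cross-check (added note): the standard library's statistics.median agrees
    # with this implementation, e.g. statistics.median([4, 1, 3, 2]) == 2.5 and
    # statistics.median([2, 70, 6, 50, 20, 8, 4]) == 8.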
from typing import Any
def mode(input_list: list) -> list[Any]:
"""This function returns the mode(Mode as in the measures of
central tendency) of the input data.
The input list may contain any Datastructure or any Datatype.
>>> mode([2, 3, 4, 5, 3, 4, 2, 5, 2, 2, 4, 2, 2, 2])
[2]
>>> mode([3, 4, 5, 3, 4, 2, 5, 2, 2, 4, 4, 2, 2, 2])
[2]
>>> mode([3, 4, 5, 3, 4, 2, 5, 2, 2, 4, 4, 4, 2, 2, 4, 2])
[2, 4]
>>> mode(["x", "y", "y", "z"])
['y']
>>> mode(["x", "x" , "y", "y", "z"])
['x', 'y']
"""
if not input_list:
return []
result = [input_list.count(value) for value in input_list]
y = max(result) # Gets the maximum count in the input list.
# Gets values of modes
return sorted({input_list[i] for i, value in enumerate(result) if value == y})
if __name__ == "__main__":
import doctest
doctest.testmod()
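    # An equivalent single-pass variant (a hedged sketch, not the original API):
    # collections.Counter avoids the O(n^2) cost of calling list.count() per
    # element, but assumes hashable elements, unlike the list.count() version.
    #
    # from collections import Counter
    # def mode_with_counter(input_list: list) -> list:
    #     if not input_list:
    #         return []
    #     counts = Counter(input_list)
    #     top = max(counts.values())
    #     return sorted(value for value, count in counts.items() if count == top)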
def bailey_borwein_plouffe(digit_position: int, precision: int = 1000) -> str:
"""
Implement a popular pi-digit-extraction algorithm known as the
Bailey-Borwein-Plouffe (BBP) formula to calculate the nth hex digit of pi.
Wikipedia page:
https://en.wikipedia.org/wiki/Bailey%E2%80%93Borwein%E2%80%93Plouffe_formula
@param digit_position: a positive integer representing the position of the digit to
extract.
The digit immediately after the decimal point is located at position 1.
@param precision: number of terms in the second summation to calculate.
A higher number reduces the chance of an error but increases the runtime.
@return: a hexadecimal digit representing the digit at the nth position
in pi's decimal expansion.
>>> "".join(bailey_borwein_plouffe(i) for i in range(1, 11))
'243f6a8885'
>>> bailey_borwein_plouffe(5, 10000)
'6'
>>> bailey_borwein_plouffe(-10)
Traceback (most recent call last):
...
ValueError: Digit position must be a positive integer
>>> bailey_borwein_plouffe(0)
Traceback (most recent call last):
...
ValueError: Digit position must be a positive integer
>>> bailey_borwein_plouffe(1.7)
Traceback (most recent call last):
...
ValueError: Digit position must be a positive integer
>>> bailey_borwein_plouffe(2, -10)
Traceback (most recent call last):
...
ValueError: Precision must be a nonnegative integer
>>> bailey_borwein_plouffe(2, 1.6)
Traceback (most recent call last):
...
ValueError: Precision must be a nonnegative integer
"""
if (not isinstance(digit_position, int)) or (digit_position <= 0):
raise ValueError("Digit position must be a positive integer")
elif (not isinstance(precision, int)) or (precision < 0):
raise ValueError("Precision must be a nonnegative integer")
# compute an approximation of (16 ** (n - 1)) * pi whose fractional part is mostly
# accurate
sum_result = (
4 * _subsum(digit_position, 1, precision)
- 2 * _subsum(digit_position, 4, precision)
- _subsum(digit_position, 5, precision)
- _subsum(digit_position, 6, precision)
)
# return the first hex digit of the fractional part of the result
return hex(int((sum_result % 1) * 16))[2:]
def _subsum(
    digit_pos_to_extract: int, denominator_addend: int, precision: int
) -> float:
    """
    Private helper function to implement the summation
    functionality.
    @param digit_pos_to_extract: digit position to extract
    @param denominator_addend: added to denominator of fractions in the formula
    @param precision: same as precision in main function
    @return: floating-point number whose integer part is not important
    """
    # only care about the first digit of the fractional part; full decimal
    # precision is not needed
total = 0.0
for sum_index in range(digit_pos_to_extract + precision):
denominator = 8 * sum_index + denominator_addend
if sum_index < digit_pos_to_extract:
# if the exponential term is an integer and we mod it by the denominator
# before dividing, only the integer part of the sum will change;
# the fractional part will not
exponential_term = pow(
16, digit_pos_to_extract - 1 - sum_index, denominator
)
else:
exponential_term = pow(16, digit_pos_to_extract - 1 - sum_index)
total += exponential_term / denominator
return total
if __name__ == "__main__":
import doctest
doctest.testmod()
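    # Cross-check sketch (added): the first fractional hex digits of math.pi
    # should match the BBP extraction; pi = 3.243f6a8885... in hexadecimal.
    #
    # from math import pi
    # frac, digits = pi - 3, ""
    # for _ in range(10):
    #     frac *= 16
    #     digits += hex(int(frac))[2:]
    #     frac -= int(frac)
    # assert digits == "".join(bailey_borwein_plouffe(i) for i in range(1, 11))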
def decimal_to_negative_base_2(num: int) -> int:
"""
    This function returns the negative base 2 representation
    of the input decimal number.
Args:
int: The decimal number to convert.
Returns:
int: The negative base 2 number.
Examples:
>>> decimal_to_negative_base_2(0)
0
>>> decimal_to_negative_base_2(-19)
111101
>>> decimal_to_negative_base_2(4)
100
>>> decimal_to_negative_base_2(7)
11011
"""
if num == 0:
return 0
ans = ""
while num != 0:
num, rem = divmod(num, -2)
if rem < 0:
rem += 2
num += 1
ans = str(rem) + ans
return int(ans)
if __name__ == "__main__":
import doctest
doctest.testmod()
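    # Round-trip sketch (added; `_from_negative_base_2` is a hypothetical
    # helper): interpreting the digits in base -2 recovers the original value.
    #
    # def _from_negative_base_2(digits: int) -> int:
    #     value, place = 0, 1
    #     for ch in reversed(str(digits)):
    #         value += int(ch) * place
    #         place *= -2
    #     return value
    # assert _from_negative_base_2(decimal_to_negative_base_2(-19)) == -19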
"""
Implementation of Basic Math in Python.
"""
import math
def prime_factors(n: int) -> list:
"""Find Prime Factors.
>>> prime_factors(100)
[2, 2, 5, 5]
>>> prime_factors(0)
Traceback (most recent call last):
...
ValueError: Only positive integers have prime factors
>>> prime_factors(-10)
Traceback (most recent call last):
...
ValueError: Only positive integers have prime factors
"""
if n <= 0:
raise ValueError("Only positive integers have prime factors")
pf = []
while n % 2 == 0:
pf.append(2)
n = int(n / 2)
for i in range(3, int(math.sqrt(n)) + 1, 2):
while n % i == 0:
pf.append(i)
n = int(n / i)
if n > 2:
pf.append(n)
return pf
def number_of_divisors(n: int) -> int:
"""Calculate Number of Divisors of an Integer.
>>> number_of_divisors(100)
9
>>> number_of_divisors(0)
Traceback (most recent call last):
...
ValueError: Only positive numbers are accepted
>>> number_of_divisors(-10)
Traceback (most recent call last):
...
ValueError: Only positive numbers are accepted
"""
if n <= 0:
raise ValueError("Only positive numbers are accepted")
div = 1
temp = 1
while n % 2 == 0:
temp += 1
n = int(n / 2)
div *= temp
for i in range(3, int(math.sqrt(n)) + 1, 2):
temp = 1
while n % i == 0:
temp += 1
n = int(n / i)
div *= temp
if n > 1:
div *= 2
return div
def sum_of_divisors(n: int) -> int:
"""Calculate Sum of Divisors.
>>> sum_of_divisors(100)
217
>>> sum_of_divisors(0)
Traceback (most recent call last):
...
ValueError: Only positive numbers are accepted
>>> sum_of_divisors(-10)
Traceback (most recent call last):
...
ValueError: Only positive numbers are accepted
"""
if n <= 0:
raise ValueError("Only positive numbers are accepted")
s = 1
temp = 1
while n % 2 == 0:
temp += 1
n = int(n / 2)
if temp > 1:
s *= (2**temp - 1) / (2 - 1)
for i in range(3, int(math.sqrt(n)) + 1, 2):
temp = 1
while n % i == 0:
temp += 1
n = int(n / i)
if temp > 1:
s *= (i**temp - 1) / (i - 1)
return int(s)
def euler_phi(n: int) -> int:
"""Calculate Euler's Phi Function.
>>> euler_phi(100)
40
>>> euler_phi(0)
Traceback (most recent call last):
...
ValueError: Only positive numbers are accepted
>>> euler_phi(-10)
Traceback (most recent call last):
...
ValueError: Only positive numbers are accepted
"""
if n <= 0:
raise ValueError("Only positive numbers are accepted")
s = n
for x in set(prime_factors(n)):
s *= (x - 1) / x
return int(s)
if __name__ == "__main__":
import doctest
doctest.testmod()
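    # Brute-force cross-checks (added for illustration):
    # assert sum_of_divisors(100) == sum(d for d in range(1, 101) if 100 % d == 0)  # 217
    # assert euler_phi(100) == sum(1 for k in range(1, 101) if math.gcd(k, 100) == 1)  # 40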
"""
Binary Exponentiation
This is a method to find a^b in O(log b) time complexity and is one of the most
commonly used methods of exponentiation. The method is also useful for modular
exponentiation, when the solution to (a^b) % c is required.
To calculate a^b:
    If b is even, then a^b = (a * a)^(b / 2)
    If b is odd, then a^b = a * (a^(b - 1))
    Repeat until b = 1 or b = 0
For modular exponentiation, we use the fact that
(a * b) % c = ((a % c) * (b % c)) % c
"""
def binary_exp_recursive(base: float, exponent: int) -> float:
"""
Computes a^b recursively, where a is the base and b is the exponent
>>> binary_exp_recursive(3, 5)
243
>>> binary_exp_recursive(11, 13)
34522712143931
>>> binary_exp_recursive(-1, 3)
-1
>>> binary_exp_recursive(0, 5)
0
>>> binary_exp_recursive(3, 1)
3
>>> binary_exp_recursive(3, 0)
1
>>> binary_exp_recursive(1.5, 4)
5.0625
>>> binary_exp_recursive(3, -1)
Traceback (most recent call last):
...
ValueError: Exponent must be a non-negative integer
"""
if exponent < 0:
raise ValueError("Exponent must be a non-negative integer")
if exponent == 0:
return 1
if exponent % 2 == 1:
return binary_exp_recursive(base, exponent - 1) * base
b = binary_exp_recursive(base, exponent // 2)
return b * b
def binary_exp_iterative(base: float, exponent: int) -> float:
"""
Computes a^b iteratively, where a is the base and b is the exponent
>>> binary_exp_iterative(3, 5)
243
>>> binary_exp_iterative(11, 13)
34522712143931
>>> binary_exp_iterative(-1, 3)
-1
>>> binary_exp_iterative(0, 5)
0
>>> binary_exp_iterative(3, 1)
3
>>> binary_exp_iterative(3, 0)
1
>>> binary_exp_iterative(1.5, 4)
5.0625
>>> binary_exp_iterative(3, -1)
Traceback (most recent call last):
...
ValueError: Exponent must be a non-negative integer
"""
if exponent < 0:
raise ValueError("Exponent must be a non-negative integer")
res: int | float = 1
while exponent > 0:
if exponent & 1:
res *= base
base *= base
exponent >>= 1
return res
def binary_exp_mod_recursive(base: float, exponent: int, modulus: int) -> float:
"""
Computes a^b % c recursively, where a is the base, b is the exponent, and c is the
modulus
>>> binary_exp_mod_recursive(3, 4, 5)
1
>>> binary_exp_mod_recursive(11, 13, 7)
4
>>> binary_exp_mod_recursive(1.5, 4, 3)
2.0625
>>> binary_exp_mod_recursive(7, -1, 10)
Traceback (most recent call last):
...
ValueError: Exponent must be a non-negative integer
>>> binary_exp_mod_recursive(7, 13, 0)
Traceback (most recent call last):
...
ValueError: Modulus must be a positive integer
"""
if exponent < 0:
raise ValueError("Exponent must be a non-negative integer")
if modulus <= 0:
raise ValueError("Modulus must be a positive integer")
if exponent == 0:
return 1
if exponent % 2 == 1:
return (binary_exp_mod_recursive(base, exponent - 1, modulus) * base) % modulus
r = binary_exp_mod_recursive(base, exponent // 2, modulus)
return (r * r) % modulus
def binary_exp_mod_iterative(base: float, exponent: int, modulus: int) -> float:
"""
Computes a^b % c iteratively, where a is the base, b is the exponent, and c is the
modulus
>>> binary_exp_mod_iterative(3, 4, 5)
1
>>> binary_exp_mod_iterative(11, 13, 7)
4
>>> binary_exp_mod_iterative(1.5, 4, 3)
2.0625
>>> binary_exp_mod_iterative(7, -1, 10)
Traceback (most recent call last):
...
ValueError: Exponent must be a non-negative integer
>>> binary_exp_mod_iterative(7, 13, 0)
Traceback (most recent call last):
...
ValueError: Modulus must be a positive integer
"""
if exponent < 0:
raise ValueError("Exponent must be a non-negative integer")
if modulus <= 0:
raise ValueError("Modulus must be a positive integer")
res: int | float = 1
while exponent > 0:
if exponent & 1:
res = ((res % modulus) * (base % modulus)) % modulus
base *= base
exponent >>= 1
return res
if __name__ == "__main__":
from timeit import timeit
a = 1269380576
b = 374
c = 34
runs = 100_000
print(
timeit(
f"binary_exp_recursive({a}, {b})",
setup="from __main__ import binary_exp_recursive",
number=runs,
)
)
print(
timeit(
f"binary_exp_iterative({a}, {b})",
setup="from __main__ import binary_exp_iterative",
number=runs,
)
)
print(
timeit(
f"binary_exp_mod_recursive({a}, {b}, {c})",
setup="from __main__ import binary_exp_mod_recursive",
number=runs,
)
)
print(
timeit(
f"binary_exp_mod_iterative({a}, {b}, {c})",
setup="from __main__ import binary_exp_mod_iterative",
number=runs,
)
)
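    # Note (added): Python's built-in pow() implements the same
    # square-and-multiply idea, e.g.
    # pow(11, 13, 7) == binary_exp_mod_iterative(11, 13, 7) == 4.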
"""
Binary Multiplication
This is a method to find a*b in a time complexity of O(log b).
This is one of the most commonly used methods of finding the result of
multiplication. It is also useful in cases where a solution to (a*b) % c is
required, where a, b, c can be numbers over the computer's calculation limits.
Done using iteration; can also be done using recursion.
Let's say you need to calculate a * b:
    RULE 1 : a * b = (a + a) * (b / 2)
        example : 4 * 4 = (4 + 4) * (4 / 2) = 8 * 2
    RULE 2 : if b is odd, then a * b = a + (a * (b - 1)), where (b - 1) is even.
        Once b is even, repeat the process to get a * b.
    Repeat the process until b = 1 or b = 0, because a * 1 = a and a * 0 = 0.
As far as the modulo is concerned, the fact:
    (a + b) % c = ((a % c) + (b % c)) % c
Now apply RULE 1 or 2, whichever is required.
@author chinmoy159
"""
def binary_multiply(a: int, b: int) -> int:
"""
Multiply 'a' and 'b' using bitwise multiplication.
Parameters:
a (int): The first number.
b (int): The second number.
Returns:
int: a * b
Examples:
>>> binary_multiply(2, 3)
6
>>> binary_multiply(5, 0)
0
>>> binary_multiply(3, 4)
12
>>> binary_multiply(10, 5)
50
>>> binary_multiply(0, 5)
0
>>> binary_multiply(2, 1)
2
>>> binary_multiply(1, 10)
10
"""
res = 0
while b > 0:
if b & 1:
res += a
a += a
b >>= 1
return res
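# Worked trace (added for illustration): binary_multiply(10, 5), b = 0b101
#   bit 1 -> res = 10, a = 20, b = 0b10
#   bit 0 -> res = 10, a = 40, b = 0b1
#   bit 1 -> res = 50, a = 80, b = 0  -> returns 50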
def binary_mod_multiply(a: int, b: int, modulus: int) -> int:
"""
Calculate (a * b) % c using binary multiplication and modular arithmetic.
Parameters:
a (int): The first number.
b (int): The second number.
modulus (int): The modulus.
Returns:
int: (a * b) % modulus.
Examples:
>>> binary_mod_multiply(2, 3, 5)
1
>>> binary_mod_multiply(5, 0, 7)
0
>>> binary_mod_multiply(3, 4, 6)
0
>>> binary_mod_multiply(10, 5, 13)
11
>>> binary_mod_multiply(2, 1, 5)
2
>>> binary_mod_multiply(1, 10, 3)
1
"""
res = 0
while b > 0:
if b & 1:
res = ((res % modulus) + (a % modulus)) % modulus
a += a
b >>= 1
return res
if __name__ == "__main__":
import doctest
doctest.testmod()
def binomial_coefficient(n: int, r: int) -> int:
"""
Find binomial coefficient using Pascal's triangle.
Calculate C(n, r) using Pascal's triangle.
:param n: The total number of items.
:param r: The number of items to choose.
:return: The binomial coefficient C(n, r).
>>> binomial_coefficient(10, 5)
252
>>> binomial_coefficient(10, 0)
1
>>> binomial_coefficient(0, 10)
1
>>> binomial_coefficient(10, 10)
1
>>> binomial_coefficient(5, 2)
10
>>> binomial_coefficient(5, 6)
0
>>> binomial_coefficient(3, 5)
0
>>> binomial_coefficient(-2, 3)
Traceback (most recent call last):
...
ValueError: n and r must be non-negative integers
>>> binomial_coefficient(5, -1)
Traceback (most recent call last):
...
ValueError: n and r must be non-negative integers
>>> binomial_coefficient(10.1, 5)
Traceback (most recent call last):
...
TypeError: 'float' object cannot be interpreted as an integer
>>> binomial_coefficient(10, 5.1)
Traceback (most recent call last):
...
TypeError: 'float' object cannot be interpreted as an integer
"""
if n < 0 or r < 0:
raise ValueError("n and r must be non-negative integers")
if 0 in (n, r):
return 1
c = [0 for i in range(r + 1)]
# nc0 = 1
c[0] = 1
for i in range(1, n + 1):
# to compute current row from previous row.
j = min(i, r)
while j > 0:
c[j] += c[j - 1]
j -= 1
return c[r]
if __name__ == "__main__":
from doctest import testmod
testmod()
print(binomial_coefficient(n=10, r=5))
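    # Cross-check (added): math.comb (Python 3.8+) agrees with the Pascal's
    # triangle computation, e.g.
    # math.comb(10, 5) == binomial_coefficient(10, 5) == 252.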
"""
For more information about the Binomial Distribution:
https://en.wikipedia.org/wiki/Binomial_distribution
"""
from math import factorial
def binomial_distribution(successes: int, trials: int, prob: float) -> float:
"""
Return probability of k successes out of n tries, with p probability for one
success
The function uses the factorial function in order to calculate the binomial
coefficient
>>> binomial_distribution(3, 5, 0.7)
0.30870000000000003
    >>> binomial_distribution(2, 4, 0.5)
0.375
"""
if successes > trials:
raise ValueError("""successes must be lower or equal to trials""")
if trials < 0 or successes < 0:
raise ValueError("the function is defined for non-negative integers")
if not isinstance(successes, int) or not isinstance(trials, int):
raise ValueError("the function is defined for non-negative integers")
if not 0 < prob < 1:
raise ValueError("prob has to be in range of 1 - 0")
probability = (prob**successes) * ((1 - prob) ** (trials - successes))
# Calculate the binomial coefficient: n! / k!(n-k)!
coefficient = float(factorial(trials))
coefficient /= factorial(successes) * factorial(trials - successes)
return probability * coefficient
if __name__ == "__main__":
from doctest import testmod
testmod()
print("Probability of 2 successes out of 4 trails")
print("with probability of 0.75 is:", end=" ")
print(binomial_distribution(2, 4, 0.75))
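    # Sanity-check sketch (my own addition): summed over all possible success
    # counts, the binomial probabilities must add up to 1.
    from math import isclose
    assert isclose(sum(binomial_distribution(k, 4, 0.75) for k in range(5)), 1.0)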
|
https:en.wikipedia.orgwikiFloorandceilingfunctions Return the ceiling of x as an Integral. :param x: the number :return: the smallest integer x. import math allceiln math.ceiln for n ... in 1, 1, 0, 0, 1.1, 1.1, 1.0, 1.0, 1000000000 True | def ceil(x: float) -> int:
"""
Return the ceiling of x as an Integral.
:param x: the number
:return: the smallest integer >= x.
>>> import math
>>> all(ceil(n) == math.ceil(n) for n
... in (1, -1, 0, -0, 1.1, -1.1, 1.0, -1.0, 1_000_000_000))
True
"""
return int(x) if x - int(x) <= 0 else int(x) + 1
if __name__ == "__main__":
import doctest
doctest.testmod()
|
This function calculates the Chebyshev distance also known as the Chessboard distance between two ndimensional points represented as lists. https:en.wikipedia.orgwikiChebyshevdistance chebyshevdistance1.0, 1.0, 2.0, 2.0 1.0 chebyshevdistance1.0, 1.0, 9.0, 2.0, 2.0, 5.2 14.2 chebyshevdistance1.0, 2.0, 2.0 Traceback most recent call last: ... ValueError: Both points must have the same dimension. | def chebyshev_distance(point_a: list[float], point_b: list[float]) -> float:
"""
This function calculates the Chebyshev distance (also known as the
Chessboard distance) between two n-dimensional points represented as lists.
https://en.wikipedia.org/wiki/Chebyshev_distance
>>> chebyshev_distance([1.0, 1.0], [2.0, 2.0])
1.0
>>> chebyshev_distance([1.0, 1.0, 9.0], [2.0, 2.0, -5.2])
14.2
>>> chebyshev_distance([1.0], [2.0, 2.0])
Traceback (most recent call last):
...
ValueError: Both points must have the same dimension.
"""
if len(point_a) != len(point_b):
raise ValueError("Both points must have the same dimension.")
return max(abs(a - b) for a, b in zip(point_a, point_b))
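if __name__ == "__main__":
    import doctest
    doctest.testmod()
    # Illustrative check (my own example, not from the doctests above): on a
    # chessboard, the minimum number of king moves between two squares equals
    # the Chebyshev distance between their coordinates.
    assert chebyshev_distance([1.0, 1.0], [4.0, 3.0]) == 3.0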
|
Takes list of possible side lengths and determines whether a twodimensional polygon with such side lengths can exist. Returns a boolean value for the comparison of the largest side length with sum of the rest. Wiki: https:en.wikipedia.orgwikiTriangleinequality checkpolygon6, 10, 5 True checkpolygon3, 7, 13, 2 False checkpolygon1, 4.3, 5.2, 12.2 False nums 3, 7, 13, 2 checkpolygonnums Run function, do not show answer in output nums Check numbers are not reordered 3, 7, 13, 2 checkpolygon Traceback most recent call last: ... ValueError: Monogons and Digons are not polygons in the Euclidean space checkpolygon2, 5, 6 Traceback most recent call last: ... ValueError: All values must be greater than 0 | from __future__ import annotations
def check_polygon(nums: list[float]) -> bool:
"""
Takes list of possible side lengths and determines whether a
two-dimensional polygon with such side lengths can exist.
Returns a boolean value for the < comparison
of the largest side length with sum of the rest.
Wiki: https://en.wikipedia.org/wiki/Triangle_inequality
>>> check_polygon([6, 10, 5])
True
>>> check_polygon([3, 7, 13, 2])
False
>>> check_polygon([1, 4.3, 5.2, 12.2])
False
>>> nums = [3, 7, 13, 2]
>>> _ = check_polygon(nums) # Run function, do not show answer in output
>>> nums # Check numbers are not reordered
[3, 7, 13, 2]
>>> check_polygon([])
Traceback (most recent call last):
...
ValueError: Monogons and Digons are not polygons in the Euclidean space
>>> check_polygon([-2, 5, 6])
Traceback (most recent call last):
...
ValueError: All values must be greater than 0
"""
    if len(nums) < 3:  # a polygon needs at least three sides
raise ValueError("Monogons and Digons are not polygons in the Euclidean space")
if any(i <= 0 for i in nums):
raise ValueError("All values must be greater than 0")
copy_nums = nums.copy()
copy_nums.sort()
return copy_nums[-1] < sum(copy_nums[:-1])
if __name__ == "__main__":
import doctest
doctest.testmod()
|
Chinese Remainder Theorem: GCD Greatest Common Divisor or HCF Highest Common Factor If GCDa,b 1, then for any remainder ra modulo a and any remainder rb modulo b there exists integer n, such that n ra mod a and n ramod b. If n1 and n2 are two such integers, then n1n2mod ab Algorithm : 1. Use extended euclid algorithm to find x,y such that ax by 1 2. Take n raby rbax Extended Euclid extendedeuclid10, 6 1, 2 extendedeuclid7, 5 2, 3 Uses ExtendedEuclid to find inverses chineseremaindertheorem5,1,7,3 31 Explanation : 31 is the smallest number such that i When we divide it by 5, we get remainder 1 ii When we divide it by 7, we get remainder 3 chineseremaindertheorem6,1,4,3 14 SAME SOLUTION USING InvertModulo instead ExtendedEuclid This function find the inverses of a i.e., a1 invertmodulo2, 5 3 invertmodulo8,7 1 Same a above using InvertingModulo chineseremaindertheorem25,1,7,3 31 chineseremaindertheorem26,1,4,3 14 | from __future__ import annotations
# Extended Euclid
def extended_euclid(a: int, b: int) -> tuple[int, int]:
"""
>>> extended_euclid(10, 6)
(-1, 2)
>>> extended_euclid(7, 5)
(-2, 3)
"""
if b == 0:
return (1, 0)
(x, y) = extended_euclid(b, a % b)
k = a // b
return (y, x - k * y)
# Uses ExtendedEuclid to find inverses
def chinese_remainder_theorem(n1: int, r1: int, n2: int, r2: int) -> int:
"""
>>> chinese_remainder_theorem(5,1,7,3)
31
Explanation : 31 is the smallest number such that
(i) When we divide it by 5, we get remainder 1
(ii) When we divide it by 7, we get remainder 3
>>> chinese_remainder_theorem(6,1,4,3)
14
"""
(x, y) = extended_euclid(n1, n2)
m = n1 * n2
n = r2 * x * n1 + r1 * y * n2
return (n % m + m) % m
# ---------- SAME SOLUTION USING invert_modulo INSTEAD OF extended_euclid ----------
# This function finds the modular inverse of a, i.e., a^(-1) (mod n)
def invert_modulo(a: int, n: int) -> int:
"""
>>> invert_modulo(2, 5)
3
>>> invert_modulo(8,7)
1
"""
(b, x) = extended_euclid(a, n)
if b < 0:
b = (b % n + n) % n
return b
# Same as above, using invert_modulo
def chinese_remainder_theorem2(n1: int, r1: int, n2: int, r2: int) -> int:
"""
>>> chinese_remainder_theorem2(5,1,7,3)
31
>>> chinese_remainder_theorem2(6,1,4,3)
14
"""
x, y = invert_modulo(n1, n2), invert_modulo(n2, n1)
m = n1 * n2
n = r2 * x * n1 + r1 * y * n2
return (n % m + m) % m
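# Generalization sketch (my own addition, not part of the original module):
# fold the two-modulus solver over a list of pairwise-coprime moduli.
def chinese_remainder_many(moduli: list[int], remainders: list[int]) -> int:
    """
    >>> chinese_remainder_many([5, 7], [1, 3])
    31
    """
    n1, r1 = moduli[0], remainders[0]
    for n2, r2 in zip(moduli[1:], remainders[1:]):
        r1 = chinese_remainder_theorem(n1, r1, n2, r2)
        n1 *= n2  # the combined modulus grows multiplicatively
    return r1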
if __name__ == "__main__":
from doctest import testmod
testmod(name="chinese_remainder_theorem", verbose=True)
testmod(name="chinese_remainder_theorem2", verbose=True)
testmod(name="invert_modulo", verbose=True)
testmod(name="extended_euclid", verbose=True)
|
The Chudnovsky algorithm is a fast method for calculating the digits of PI, based on Ramanujans PI formulae. https:en.wikipedia.orgwikiChudnovskyalgorithm PI constantterm multinomialterm linearterm exponentialterm where constantterm 426880 sqrt10005 The linearterm and the exponentialterm can be defined iteratively as follows: Lk1 Lk 545140134 where L0 13591409 Xk1 Xk 262537412640768000 where X0 1 The multinomialterm is defined as follows: 6k! 3k! k! 3 where k is the kth iteration. This algorithm correctly calculates around 14 digits of PI per iteration pi10 '3.14159265' pi100 '3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706' pi'hello' Traceback most recent call last: ... TypeError: Undefined for nonintegers pi1 Traceback most recent call last: ... ValueError: Undefined for nonnatural numbers | from decimal import Decimal, getcontext
from math import ceil, factorial
def pi(precision: int) -> str:
"""
The Chudnovsky algorithm is a fast method for calculating the digits of PI,
based on Ramanujan’s PI formulae.
https://en.wikipedia.org/wiki/Chudnovsky_algorithm
PI = constant_term / ((multinomial_term * linear_term) / exponential_term)
where constant_term = 426880 * sqrt(10005)
The linear_term and the exponential_term can be defined iteratively as follows:
L_k+1 = L_k + 545140134 where L_0 = 13591409
X_k+1 = X_k * -262537412640768000 where X_0 = 1
The multinomial_term is defined as follows:
6k! / ((3k)! * (k!) ^ 3)
where k is the k_th iteration.
This algorithm correctly calculates around 14 digits of PI per iteration
>>> pi(10)
'3.14159265'
>>> pi(100)
'3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706'
>>> pi('hello')
Traceback (most recent call last):
...
TypeError: Undefined for non-integers
>>> pi(-1)
Traceback (most recent call last):
...
ValueError: Undefined for non-natural numbers
"""
if not isinstance(precision, int):
raise TypeError("Undefined for non-integers")
elif precision < 1:
raise ValueError("Undefined for non-natural numbers")
getcontext().prec = precision
num_iterations = ceil(precision / 14)
constant_term = 426880 * Decimal(10005).sqrt()
exponential_term = 1
linear_term = 13591409
partial_sum = Decimal(linear_term)
for k in range(1, num_iterations):
multinomial_term = factorial(6 * k) // (factorial(3 * k) * factorial(k) ** 3)
linear_term += 545140134
exponential_term *= -262537412640768000
partial_sum += Decimal(multinomial_term * linear_term) / exponential_term
return str(constant_term / partial_sum)[:-1]
if __name__ == "__main__":
n = 50
print(f"The first {n} digits of pi is: {pi(n)}")
|
The Collatz conjecture is a famous unsolved problem in mathematics. Given a starting positive integer, define the following sequence: If the current term n is even, then the next term is n2. If the current term n is odd, then the next term is 3n 1. The conjecture claims that this sequence will always reach 1 for any starting number. Other names for this problem include the 3n 1 problem, the Ulam conjecture, Kakutani's problem, the Thwaites conjecture, Hasse's algorithm, the Syracuse problem, and the hailstone sequence. Reference: https:en.wikipedia.orgwikiCollatzconjecture Generate the Collatz sequence starting at n. tuplecollatzsequence2.1 Traceback most recent call last: ... Exception: Sequence only defined for positive integers tuplecollatzsequence0 Traceback most recent call last: ... Exception: Sequence only defined for positive integers tuplecollatzsequence4 4, 2, 1 tuplecollatzsequence11 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1 tuplecollatzsequence31 doctest: NORMALIZEWHITESPACE 31, 94, 47, 142, 71, 214, 107, 322, 161, 484, 242, 121, 364, 182, 91, 274, 137, 412, 206, 103, 310, 155, 466, 233, 700, 350, 175, 526, 263, 790, 395, 1186, 593, 1780, 890, 445, 1336, 668, 334, 167, 502, 251, 754, 377, 1132, 566, 283, 850, 425, 1276, 638, 319, 958, 479, 1438, 719, 2158, 1079, 3238, 1619, 4858, 2429, 7288, 3644, 1822, 911, 2734, 1367, 4102, 2051, 6154, 3077, 9232, 4616, 2308, 1154, 577, 1732, 866, 433, 1300, 650, 325, 976, 488, 244, 122, 61, 184, 92, 46, 23, 70, 35, 106, 53, 160, 80, 40, 20, 10, 5, 16, 8, 4, 2, 1 tuplecollatzsequence43 doctest: NORMALIZEWHITESPACE 43, 130, 65, 196, 98, 49, 148, 74, 37, 112, 56, 28, 14, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1 | from __future__ import annotations
from collections.abc import Generator
def collatz_sequence(n: int) -> Generator[int, None, None]:
"""
Generate the Collatz sequence starting at n.
>>> tuple(collatz_sequence(2.1))
Traceback (most recent call last):
...
Exception: Sequence only defined for positive integers
>>> tuple(collatz_sequence(0))
Traceback (most recent call last):
...
Exception: Sequence only defined for positive integers
>>> tuple(collatz_sequence(4))
(4, 2, 1)
>>> tuple(collatz_sequence(11))
(11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1)
>>> tuple(collatz_sequence(31)) # doctest: +NORMALIZE_WHITESPACE
(31, 94, 47, 142, 71, 214, 107, 322, 161, 484, 242, 121, 364, 182, 91, 274, 137,
412, 206, 103, 310, 155, 466, 233, 700, 350, 175, 526, 263, 790, 395, 1186, 593,
1780, 890, 445, 1336, 668, 334, 167, 502, 251, 754, 377, 1132, 566, 283, 850, 425,
1276, 638, 319, 958, 479, 1438, 719, 2158, 1079, 3238, 1619, 4858, 2429, 7288, 3644,
1822, 911, 2734, 1367, 4102, 2051, 6154, 3077, 9232, 4616, 2308, 1154, 577, 1732,
866, 433, 1300, 650, 325, 976, 488, 244, 122, 61, 184, 92, 46, 23, 70, 35, 106, 53,
160, 80, 40, 20, 10, 5, 16, 8, 4, 2, 1)
>>> tuple(collatz_sequence(43)) # doctest: +NORMALIZE_WHITESPACE
(43, 130, 65, 196, 98, 49, 148, 74, 37, 112, 56, 28, 14, 7, 22, 11, 34, 17, 52, 26,
13, 40, 20, 10, 5, 16, 8, 4, 2, 1)
"""
if not isinstance(n, int) or n < 1:
raise Exception("Sequence only defined for positive integers")
yield n
while n != 1:
if n % 2 == 0:
n //= 2
else:
n = 3 * n + 1
yield n
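def collatz_steps(n: int) -> int:
    """
    Illustrative helper (my own addition, not part of the original module):
    the number of steps needed to reach 1, i.e. one less than the length of
    the generated sequence.
    >>> collatz_steps(4)
    2
    """
    return sum(1 for _ in collatz_sequence(n)) - 1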
def main():
n = int(input("Your number: "))
sequence = tuple(collatz_sequence(n))
print(sequence)
print(f"Collatz sequence from {n} took {len(sequence)} steps.")
if __name__ == "__main__":
main()
|
https:en.wikipedia.orgwikiCombination Returns the number of different combinations of k length which can be made from n values, where n k. Examples: combinations10,5 252 combinations6,3 20 combinations20,5 15504 combinations52, 5 2598960 combinations0, 0 1 combinations4, 5 ... Traceback most recent call last: ValueError: Please enter positive integers for n and k where n k If either of the conditions are true, the function is being asked to calculate a factorial of a negative number, which is not possible | def combinations(n: int, k: int) -> int:
"""
Returns the number of different combinations of k length which can
be made from n values, where n >= k.
Examples:
>>> combinations(10,5)
252
>>> combinations(6,3)
20
>>> combinations(20,5)
15504
>>> combinations(52, 5)
2598960
>>> combinations(0, 0)
1
>>> combinations(-4, -5)
    Traceback (most recent call last):
    ...
    ValueError: Please enter positive integers for n and k where n >= k
"""
# If either of the conditions are true, the function is being asked
# to calculate a factorial of a negative number, which is not possible
if n < k or k < 0:
raise ValueError("Please enter positive integers for n and k where n >= k")
res = 1
for i in range(k):
res *= n - i
res //= i + 1
return res
if __name__ == "__main__":
print(
"The number of five-card hands possible from a standard",
f"fifty-two card deck is: {combinations(52, 5)}\n",
)
print(
"If a class of 40 students must be arranged into groups of",
f"4 for group projects, there are {combinations(40, 4)} ways",
"to arrange them.\n",
)
print(
"If 10 teams are competing in a Formula One race, there",
f"are {combinations(10, 3)} ways that first, second and",
"third place can be awarded.",
)
|
Finding the continuous fraction for a rational number using python https:en.wikipedia.orgwikiContinuedfraction :param num: Fraction of the number whose continued fractions to be found. Use Fractionstrnumber for more accurate results due to float inaccuracies. :return: The continued fraction of rational number. It is the all commas in the n 1tuple notation. continuedfractionFraction2 2 continuedfractionFraction3.245 3, 4, 12, 4 continuedfractionFraction2.25 2, 4 continuedfraction1Fraction2.25 0, 2, 4 continuedfractionFraction41593 4, 2, 6, 7 continuedfractionFraction0 0 continuedfractionFraction0.75 0, 1, 3 continuedfractionFraction2.25 2.25 3 0.75 3, 1, 3 | from fractions import Fraction
from math import floor
def continued_fraction(num: Fraction) -> list[int]:
"""
:param num:
Fraction of the number whose continued fractions to be found.
Use Fraction(str(number)) for more accurate results due to
float inaccuracies.
    :return:
        The continued fraction of the rational number,
        i.e. the list of integer terms [a0; a1, a2, ...]
        from the (n + 1)-tuple notation.
>>> continued_fraction(Fraction(2))
[2]
>>> continued_fraction(Fraction("3.245"))
[3, 4, 12, 4]
>>> continued_fraction(Fraction("2.25"))
[2, 4]
>>> continued_fraction(1/Fraction("2.25"))
[0, 2, 4]
>>> continued_fraction(Fraction("415/93"))
[4, 2, 6, 7]
>>> continued_fraction(Fraction(0))
[0]
>>> continued_fraction(Fraction(0.75))
[0, 1, 3]
>>> continued_fraction(Fraction("-2.25")) # -2.25 = -3 + 0.75
[-3, 1, 3]
"""
numerator, denominator = num.as_integer_ratio()
continued_fraction_list: list[int] = []
while True:
integer_part = floor(numerator / denominator)
continued_fraction_list.append(integer_part)
numerator -= integer_part * denominator
if numerator == 0:
break
numerator, denominator = denominator, numerator
return continued_fraction_list
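# Inverse-direction sketch (my own addition, not part of the original module):
# rebuild the rational number from its continued-fraction coefficients.
def from_continued_fraction(cf: list[int]) -> Fraction:
    """
    >>> from_continued_fraction([3, 4, 12, 4])
    Fraction(649, 200)
    """
    result = Fraction(cf[-1])
    for term in reversed(cf[:-1]):
        result = term + 1 / result  # nest the fraction one level deeper
    return result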
if __name__ == "__main__":
import doctest
doctest.testmod()
print("Continued Fraction of 0.84375 is: ", continued_fraction(Fraction("0.84375")))
|
Isolate the Decimal part of a Number https:stackoverflow.comquestions3886402howtogetnumbersafterdecimalpoint Isolates the decimal part of a number. If digitAmount 0 round to that decimal place, else print the entire decimal. decimalisolate1.53, 0 0.53 decimalisolate35.345, 1 0.3 decimalisolate35.345, 2 0.34 decimalisolate35.345, 3 0.345 decimalisolate14.789, 3 0.789 decimalisolate0, 2 0 decimalisolate14.123, 1 0.1 decimalisolate14.123, 2 0.12 decimalisolate14.123, 3 0.123 | def decimal_isolate(number: float, digit_amount: int) -> float:
"""
Isolates the decimal part of a number.
    If digit_amount > 0, round to that decimal place, else return the entire decimal.
>>> decimal_isolate(1.53, 0)
0.53
>>> decimal_isolate(35.345, 1)
0.3
>>> decimal_isolate(35.345, 2)
0.34
>>> decimal_isolate(35.345, 3)
0.345
>>> decimal_isolate(-14.789, 3)
-0.789
>>> decimal_isolate(0, 2)
0
>>> decimal_isolate(-14.123, 1)
-0.1
>>> decimal_isolate(-14.123, 2)
-0.12
>>> decimal_isolate(-14.123, 3)
-0.123
"""
if digit_amount > 0:
return round(number - int(number), digit_amount)
return number - int(number)
if __name__ == "__main__":
print(decimal_isolate(1.53, 0))
print(decimal_isolate(35.345, 1))
print(decimal_isolate(35.345, 2))
print(decimal_isolate(35.345, 3))
print(decimal_isolate(-14.789, 3))
print(decimal_isolate(0, 2))
print(decimal_isolate(-14.123, 1))
print(decimal_isolate(-14.123, 2))
print(decimal_isolate(-14.123, 3))
|
Return a decimal number in its simplest fraction form decimaltofraction2 2, 1 decimaltofraction89. 89, 1 decimaltofraction67 67, 1 decimaltofraction45.0 45, 1 decimaltofraction1.5 3, 2 decimaltofraction6.25 25, 4 decimaltofraction78td Traceback most recent call last: ValueError: Please enter a valid number | def decimal_to_fraction(decimal: float | str) -> tuple[int, int]:
"""
Return a decimal number in its simplest fraction form
>>> decimal_to_fraction(2)
(2, 1)
>>> decimal_to_fraction(89.)
(89, 1)
>>> decimal_to_fraction("67")
(67, 1)
>>> decimal_to_fraction("45.0")
(45, 1)
>>> decimal_to_fraction(1.5)
(3, 2)
>>> decimal_to_fraction("6.25")
(25, 4)
>>> decimal_to_fraction("78td")
Traceback (most recent call last):
ValueError: Please enter a valid number
"""
try:
decimal = float(decimal)
except ValueError:
raise ValueError("Please enter a valid number")
fractional_part = decimal - int(decimal)
if fractional_part == 0:
return int(decimal), 1
else:
number_of_frac_digits = len(str(decimal).split(".")[1])
numerator = int(decimal * (10**number_of_frac_digits))
denominator = 10**number_of_frac_digits
divisor, dividend = denominator, numerator
while True:
remainder = dividend % divisor
if remainder == 0:
break
dividend, divisor = divisor, remainder
        # Integer division keeps the values exact even for long decimals.
        numerator, denominator = numerator // divisor, denominator // divisor
        return numerator, denominator
if __name__ == "__main__":
print(f"{decimal_to_fraction(2) = }")
print(f"{decimal_to_fraction(89.0) = }")
print(f"{decimal_to_fraction('67') = }")
print(f"{decimal_to_fraction('45.0') = }")
print(f"{decimal_to_fraction(1.5) = }")
print(f"{decimal_to_fraction('6.25') = }")
print(f"{decimal_to_fraction('78td') = }")
|
dodecahedron.py A regular dodecahedron is a threedimensional figure made up of 12 pentagon faces having the same equal size. Calculates the surface area of a regular dodecahedron a 3 25 10 5 1 2 1 2 e2 where: a is the area of the dodecahedron e is the length of the edge referenceDodecahedron Study.com https:study.comacademylessondodecahedronvolumesurfaceareaformulas.html :param edge: length of the edge of the dodecahedron :type edge: float :return: the surface area of the dodecahedron as a float Tests: dodecahedronsurfacearea5 516.1432201766901 dodecahedronsurfacearea10 2064.5728807067603 dodecahedronsurfacearea1 Traceback most recent call last: ... ValueError: Length must be a positive. Calculates the volume of a regular dodecahedron v 15 7 5 1 2 4 e3 where: v is the volume of the dodecahedron e is the length of the edge referenceDodecahedron Study.com https:study.comacademylessondodecahedronvolumesurfaceareaformulas.html :param edge: length of the edge of the dodecahedron :type edge: float :return: the volume of the dodecahedron as a float Tests: dodecahedronvolume5 957.8898700780791 dodecahedronvolume10 7663.118960624633 dodecahedronvolume1 Traceback most recent call last: ... ValueError: Length must be a positive. | # dodecahedron.py
"""
A regular dodecahedron is a three-dimensional figure made up of
12 pentagon faces having the same equal size.
"""
def dodecahedron_surface_area(edge: float) -> float:
"""
Calculates the surface area of a regular dodecahedron
a = 3 * ((25 + 10 * (5** (1 / 2))) ** (1 / 2 )) * (e**2)
where:
a --> is the area of the dodecahedron
e --> is the length of the edge
reference-->"Dodecahedron" Study.com
<https://study.com/academy/lesson/dodecahedron-volume-surface-area-formulas.html>
:param edge: length of the edge of the dodecahedron
:type edge: float
:return: the surface area of the dodecahedron as a float
Tests:
>>> dodecahedron_surface_area(5)
516.1432201766901
>>> dodecahedron_surface_area(10)
2064.5728807067603
>>> dodecahedron_surface_area(-1)
Traceback (most recent call last):
...
    ValueError: Length must be positive.
"""
    if not isinstance(edge, (int, float)) or edge <= 0:
        raise ValueError("Length must be positive.")
return 3 * ((25 + 10 * (5 ** (1 / 2))) ** (1 / 2)) * (edge**2)
def dodecahedron_volume(edge: float) -> float:
"""
Calculates the volume of a regular dodecahedron
v = ((15 + (7 * (5** (1 / 2)))) / 4) * (e**3)
where:
v --> is the volume of the dodecahedron
e --> is the length of the edge
reference-->"Dodecahedron" Study.com
<https://study.com/academy/lesson/dodecahedron-volume-surface-area-formulas.html>
:param edge: length of the edge of the dodecahedron
:type edge: float
:return: the volume of the dodecahedron as a float
Tests:
>>> dodecahedron_volume(5)
957.8898700780791
>>> dodecahedron_volume(10)
7663.118960624633
>>> dodecahedron_volume(-1)
Traceback (most recent call last):
...
    ValueError: Length must be positive.
"""
    if not isinstance(edge, (int, float)) or edge <= 0:
        raise ValueError("Length must be positive.")
return ((15 + (7 * (5 ** (1 / 2)))) / 4) * (edge**3)
if __name__ == "__main__":
import doctest
doctest.testmod()
|
Compute double factorial using recursive method. Recursion can be costly for large numbers. To learn about the theory behind this algorithm: https:en.wikipedia.orgwikiDoublefactorial from math import prod alldoublefactorialrecursivei prodrangei, 0, 2 for i in range20 True doublefactorialrecursive0.1 Traceback most recent call last: ... ValueError: doublefactorialrecursive only accepts integral values doublefactorialrecursive1 Traceback most recent call last: ... ValueError: doublefactorialrecursive not defined for negative values Compute double factorial using iterative method. To learn about the theory behind this algorithm: https:en.wikipedia.orgwikiDoublefactorial from math import prod alldoublefactorialiterativei prodrangei, 0, 2 for i in range20 True doublefactorialiterative0.1 Traceback most recent call last: ... ValueError: doublefactorialiterative only accepts integral values doublefactorialiterative1 Traceback most recent call last: ... ValueError: doublefactorialiterative not defined for negative values | def double_factorial_recursive(n: int) -> int:
"""
Compute double factorial using recursive method.
Recursion can be costly for large numbers.
To learn about the theory behind this algorithm:
https://en.wikipedia.org/wiki/Double_factorial
>>> from math import prod
>>> all(double_factorial_recursive(i) == prod(range(i, 0, -2)) for i in range(20))
True
>>> double_factorial_recursive(0.1)
Traceback (most recent call last):
...
ValueError: double_factorial_recursive() only accepts integral values
>>> double_factorial_recursive(-1)
Traceback (most recent call last):
...
ValueError: double_factorial_recursive() not defined for negative values
"""
if not isinstance(n, int):
raise ValueError("double_factorial_recursive() only accepts integral values")
if n < 0:
raise ValueError("double_factorial_recursive() not defined for negative values")
return 1 if n <= 1 else n * double_factorial_recursive(n - 2)
def double_factorial_iterative(num: int) -> int:
"""
Compute double factorial using iterative method.
To learn about the theory behind this algorithm:
https://en.wikipedia.org/wiki/Double_factorial
>>> from math import prod
>>> all(double_factorial_iterative(i) == prod(range(i, 0, -2)) for i in range(20))
True
>>> double_factorial_iterative(0.1)
Traceback (most recent call last):
...
ValueError: double_factorial_iterative() only accepts integral values
>>> double_factorial_iterative(-1)
Traceback (most recent call last):
...
ValueError: double_factorial_iterative() not defined for negative values
"""
if not isinstance(num, int):
raise ValueError("double_factorial_iterative() only accepts integral values")
if num < 0:
raise ValueError("double_factorial_iterative() not defined for negative values")
value = 1
for i in range(num, 0, -2):
value *= i
return value
if __name__ == "__main__":
import doctest
doctest.testmod()
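    # Identity-check sketch (my own addition): for even arguments,
    # (2n)!! == 2**n * n!
    from math import factorial
    assert all(
        double_factorial_iterative(2 * n) == 2**n * factorial(n) for n in range(10)
    )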
|
https:en.wikipedia.orgwikiAutomaticdifferentiationAutomaticdifferentiationusingdualnumbers https:blog.jliszka.org20131024exactnumericnthderivatives.html Note this only works for basic functions, fx where the power of x is positive. differentiatelambda x: x2, 2, 2 2 differentiatelambda x: x2 x4, 9, 2 196830 differentiatelambda y: 0.5 y 3 6, 3.5, 4 7605.0 differentiatelambda y: y 2, 4, 3 0 differentiate8, 8, 8 Traceback most recent call last: ... ValueError: differentiate requires a function as input for func differentiatelambda x: x 2, , 1 Traceback most recent call last: ... ValueError: differentiate requires a float as input for position differentiatelambda x: x2, 3, Traceback most recent call last: ... ValueError: differentiate requires an int as input for order | from math import factorial
"""
https://en.wikipedia.org/wiki/Automatic_differentiation#Automatic_differentiation_using_dual_numbers
https://blog.jliszka.org/2013/10/24/exact-numeric-nth-derivatives.html
Note this only works for basic functions, f(x) where the power of x is positive.
"""
class Dual:
def __init__(self, real, rank):
self.real = real
if isinstance(rank, int):
self.duals = [1] * rank
else:
self.duals = rank
def __repr__(self):
return (
f"{self.real}+"
f"{'+'.join(str(dual)+'E'+str(n+1)for n,dual in enumerate(self.duals))}"
)
def reduce(self):
cur = self.duals.copy()
while cur[-1] == 0:
cur.pop(-1)
return Dual(self.real, cur)
def __add__(self, other):
if not isinstance(other, Dual):
return Dual(self.real + other, self.duals)
s_dual = self.duals.copy()
o_dual = other.duals.copy()
if len(s_dual) > len(o_dual):
            o_dual.extend([0] * (len(s_dual) - len(o_dual)))  # pad missing dual parts with 0
elif len(s_dual) < len(o_dual):
            s_dual.extend([0] * (len(o_dual) - len(s_dual)))  # pad missing dual parts with 0
new_duals = []
for i in range(len(s_dual)):
new_duals.append(s_dual[i] + o_dual[i])
return Dual(self.real + other.real, new_duals)
__radd__ = __add__
def __sub__(self, other):
return self + other * -1
def __mul__(self, other):
if not isinstance(other, Dual):
new_duals = []
for i in self.duals:
new_duals.append(i * other)
return Dual(self.real * other, new_duals)
new_duals = [0] * (len(self.duals) + len(other.duals) + 1)
for i, item in enumerate(self.duals):
for j, jtem in enumerate(other.duals):
new_duals[i + j + 1] += item * jtem
for k in range(len(self.duals)):
new_duals[k] += self.duals[k] * other.real
for index in range(len(other.duals)):
new_duals[index] += other.duals[index] * self.real
return Dual(self.real * other.real, new_duals)
__rmul__ = __mul__
def __truediv__(self, other):
if not isinstance(other, Dual):
new_duals = []
for i in self.duals:
new_duals.append(i / other)
return Dual(self.real / other, new_duals)
raise ValueError
def __floordiv__(self, other):
if not isinstance(other, Dual):
new_duals = []
for i in self.duals:
new_duals.append(i // other)
return Dual(self.real // other, new_duals)
raise ValueError
def __pow__(self, n):
if n < 0 or isinstance(n, float):
raise ValueError("power must be a positive integer")
if n == 0:
return 1
if n == 1:
return self
x = self
for _ in range(n - 1):
x *= self
return x
def differentiate(func, position, order):
"""
>>> differentiate(lambda x: x**2, 2, 2)
2
>>> differentiate(lambda x: x**2 * x**4, 9, 2)
196830
>>> differentiate(lambda y: 0.5 * (y + 3) ** 6, 3.5, 4)
7605.0
>>> differentiate(lambda y: y ** 2, 4, 3)
0
>>> differentiate(8, 8, 8)
Traceback (most recent call last):
...
ValueError: differentiate() requires a function as input for func
>>> differentiate(lambda x: x **2, "", 1)
Traceback (most recent call last):
...
ValueError: differentiate() requires a float as input for position
>>> differentiate(lambda x: x**2, 3, "")
Traceback (most recent call last):
...
ValueError: differentiate() requires an int as input for order
"""
if not callable(func):
raise ValueError("differentiate() requires a function as input for func")
if not isinstance(position, (float, int)):
raise ValueError("differentiate() requires a float as input for position")
if not isinstance(order, int):
raise ValueError("differentiate() requires an int as input for order")
d = Dual(position, 1)
result = func(d)
if order == 0:
return result.real
return result.duals[order - 1] * factorial(order)
if __name__ == "__main__":
import doctest
doctest.testmod()
def f(y):
return y**2 * y**4
print(differentiate(f, 9, 2))
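    # Hand-checked example (my own addition): for g(y) = y**3 the second
    # derivative is 6*y, so g''(4) must be 24.
    assert differentiate(lambda y: y**3, 4, 2) == 24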
|
!usrbinenv python3 Implementation of entropy of information https:en.wikipedia.orgwikiEntropyinformationtheory This method takes path and two dict as argument and than calculates entropy of them. :param dict: :param dict: :return: Prints 1 Entropy of information based on 1 alphabet 2 Entropy of information based on couples of 2 alphabet 3 print Entropy of HX nXn1 Text from random books. Also, random quotes. text Behind Winstons back the voice ... from the telescreen was still ... babbling and the overfulfilment calculateprobtext 4.0 6.0 2.0 text The Ministry of TruthMinitrue, in Newspeak Newspeak was the official ... face in elegant lettering, the three calculateprobtext 4.0 5.0 1.0 text Had repulsive dashwoods suspicion sincerity but advantage now him. ... Remark easily garret nor nay. Civil those mrs enjoy shy fat merry. ... You greatest jointure saw horrible. He private he on be imagine ... suppose. Fertile beloved evident through no service elderly is. Blind ... there if every no so at. Own neglected you preferred way sincerity ... delivered his attempted. To of message cottage windows do besides ... against uncivil. Delightful unreserved impossible few estimating ... men favourable see entreaties. She propriety immediate was improving. ... He or entrance humoured likewise moderate. Much nor game son say ... feel. Fat make met can must form into gate. Me we offending prevailed ... discovery. calculateprobtext 4.0 7.0 3.0 what is our total sum of probabilities. one length string for each alpha we go in our dict and if it is in it we calculate entropy print entropy two len string for each alpha two in size calculate entropy. print second entropy print the difference between them Convert text input into two dicts of counts. The first dictionary stores the frequency of single character strings. The second dictionary stores the frequency of two character strings. first case when we have space at start. text Had repulsive dashwoods suspicion sincerity but advantage now him. Remark easily garret nor nay. Civil those mrs enjoy shy fat merry. You greatest jointure saw horrible. He private he on be imagine suppose. Fertile beloved evident through no service elderly is. Blind there if every no so at. Own neglected you preferred way sincerity delivered his attempted. To of message cottage windows do besides against uncivil. Delightful unreserved impossible few estimating men favourable see entreaties. She propriety immediate was improving. He or entrance humoured likewise moderate. Much nor game son say feel. Fat make met can must form into gate. Me we offending prevailed discovery. calculateprobtext | #!/usr/bin/env python3
"""
Implementation of entropy of information
https://en.wikipedia.org/wiki/Entropy_(information_theory)
"""
from __future__ import annotations
import math
from collections import Counter
from string import ascii_lowercase
def calculate_prob(text: str) -> None:
"""
    This method takes a string of text as its argument
    and then calculates the entropy of it.
    :param text: the text to analyze
    :return: None; prints
    1) Entropy of information based on single characters
    2) Entropy of information based on pairs of characters
    3) the conditional entropy H(X_n | X_(n-1)), i.e. the
       difference between the two
Text from random books. Also, random quotes.
>>> text = ("Behind Winston’s back the voice "
... "from the telescreen was still "
... "babbling and the overfulfilment")
>>> calculate_prob(text)
4.0
6.0
2.0
>>> text = ("The Ministry of Truth—Minitrue, in Newspeak [Newspeak was the official"
... "face in elegant lettering, the three")
>>> calculate_prob(text)
4.0
5.0
1.0
>>> text = ("Had repulsive dashwoods suspicion sincerity but advantage now him. "
... "Remark easily garret nor nay. Civil those mrs enjoy shy fat merry. "
... "You greatest jointure saw horrible. He private he on be imagine "
... "suppose. Fertile beloved evident through no service elderly is. Blind "
... "there if every no so at. Own neglected you preferred way sincerity "
... "delivered his attempted. To of message cottage windows do besides "
... "against uncivil. Delightful unreserved impossible few estimating "
... "men favourable see entreaties. She propriety immediate was improving. "
... "He or entrance humoured likewise moderate. Much nor game son say "
... "feel. Fat make met can must form into gate. Me we offending prevailed "
... "discovery.")
>>> calculate_prob(text)
4.0
7.0
3.0
"""
single_char_strings, two_char_strings = analyze_text(text)
my_alphas = list(" " + ascii_lowercase)
    # all_sum is the total character count, used to turn counts into probabilities.
all_sum = sum(single_char_strings.values())
# one length string
my_fir_sum = 0
# for each alpha we go in our dict and if it is in it we calculate entropy
for ch in my_alphas:
if ch in single_char_strings:
my_str = single_char_strings[ch]
prob = my_str / all_sum
my_fir_sum += prob * math.log2(prob) # entropy formula.
# print entropy
print(f"{round(-1 * my_fir_sum):.1f}")
# two len string
all_sum = sum(two_char_strings.values())
my_sec_sum = 0
# for each alpha (two in size) calculate entropy.
for ch0 in my_alphas:
for ch1 in my_alphas:
sequence = ch0 + ch1
if sequence in two_char_strings:
my_str = two_char_strings[sequence]
prob = int(my_str) / all_sum
my_sec_sum += prob * math.log2(prob)
# print second entropy
print(f"{round(-1 * my_sec_sum):.1f}")
# print the difference between them
print(f"{round((-1 * my_sec_sum) - (-1 * my_fir_sum)):.1f}")
def analyze_text(text: str) -> tuple[dict, dict]:
"""
Convert text input into two dicts of counts.
The first dictionary stores the frequency of single character strings.
The second dictionary stores the frequency of two character strings.
"""
    single_char_strings: Counter[str] = Counter()
    two_char_strings: Counter[str] = Counter()
    # the loop below stops one short, so count the final character here
    single_char_strings[text[-1]] += 1
    # the first character is paired with a leading space
    two_char_strings[" " + text[0]] += 1
for i in range(len(text) - 1):
single_char_strings[text[i]] += 1
two_char_strings[text[i : i + 2]] += 1
return single_char_strings, two_char_strings
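# Small worked example (my own illustration) of what analyze_text() returns:
# analyze_text("abca") counts singles {'a': 2, 'b': 1, 'c': 1} and
# pairs {' a': 1, 'ab': 1, 'bc': 1, 'ca': 1}.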
def main():
import doctest
doctest.testmod()
# text = (
# "Had repulsive dashwoods suspicion sincerity but advantage now him. Remark "
# "easily garret nor nay. Civil those mrs enjoy shy fat merry. You greatest "
# "jointure saw horrible. He private he on be imagine suppose. Fertile "
# "beloved evident through no service elderly is. Blind there if every no so "
# "at. Own neglected you preferred way sincerity delivered his attempted. To "
# "of message cottage windows do besides against uncivil. Delightful "
# "unreserved impossible few estimating men favourable see entreaties. She "
# "propriety immediate was improving. He or entrance humoured likewise "
# "moderate. Much nor game son say feel. Fat make met can must form into "
# "gate. Me we offending prevailed discovery. "
# )
# calculate_prob(text)
if __name__ == "__main__":
main()
|
Calculate the distance between the two endpoints of two vectors. A vector is defined as a list, tuple, or numpy 1D array. euclideandistance0, 0, 2, 2 2.8284271247461903 euclideandistancenp.array0, 0, 0, np.array2, 2, 2 3.4641016151377544 euclideandistancenp.array1, 2, 3, 4, np.array5, 6, 7, 8 8.0 euclideandistance1, 2, 3, 4, 5, 6, 7, 8 8.0 Calculate the distance between the two endpoints of two vectors without numpy. A vector is defined as a list, tuple, or numpy 1D array. euclideandistancenonp0, 0, 2, 2 2.8284271247461903 euclideandistancenonp1, 2, 3, 4, 5, 6, 7, 8 8.0 Benchmarks | from __future__ import annotations
import typing
from collections.abc import Iterable
import numpy as np
Vector = typing.Union[Iterable[float], Iterable[int], np.ndarray] # noqa: UP007
VectorOut = typing.Union[np.float64, int, float] # noqa: UP007
def euclidean_distance(vector_1: Vector, vector_2: Vector) -> VectorOut:
"""
Calculate the distance between the two endpoints of two vectors.
A vector is defined as a list, tuple, or numpy 1D array.
>>> euclidean_distance((0, 0), (2, 2))
2.8284271247461903
>>> euclidean_distance(np.array([0, 0, 0]), np.array([2, 2, 2]))
3.4641016151377544
>>> euclidean_distance(np.array([1, 2, 3, 4]), np.array([5, 6, 7, 8]))
8.0
>>> euclidean_distance([1, 2, 3, 4], [5, 6, 7, 8])
8.0
"""
return np.sqrt(np.sum((np.asarray(vector_1) - np.asarray(vector_2)) ** 2))
def euclidean_distance_no_np(vector_1: Vector, vector_2: Vector) -> VectorOut:
"""
Calculate the distance between the two endpoints of two vectors without numpy.
A vector is defined as a list, tuple, or numpy 1D array.
>>> euclidean_distance_no_np((0, 0), (2, 2))
2.8284271247461903
>>> euclidean_distance_no_np([1, 2, 3, 4], [5, 6, 7, 8])
8.0
"""
return sum((v1 - v2) ** 2 for v1, v2 in zip(vector_1, vector_2)) ** (1 / 2)
if __name__ == "__main__":
def benchmark() -> None:
"""
Benchmarks
"""
from timeit import timeit
print("Without Numpy")
print(
timeit(
"euclidean_distance_no_np([1, 2, 3], [4, 5, 6])",
number=10000,
globals=globals(),
)
)
print("With Numpy")
print(
timeit(
"euclidean_distance([1, 2, 3], [4, 5, 6])",
number=10000,
globals=globals(),
)
)
benchmark()
|
Calculate numeric solution at each step to an ODE using Euler's Method For reference to Euler's method refer to https:en.wikipedia.orgwikiEulermethod. Args: odefunc Callable: The ordinary differential equation as a function of x and y. y0 float: The initial value for y. x0 float: The initial value for x. stepsize float: The increment value for x. xend float: The final value of x to be calculated. Returns: np.ndarray: Solution of y for every step in x. the exact solution is math.expx def fx, y: ... return y y0 1 y expliciteulerf, y0, 0.0, 0.01, 5 y1 144.77277243257308 | from collections.abc import Callable
import numpy as np
def explicit_euler(
ode_func: Callable, y0: float, x0: float, step_size: float, x_end: float
) -> np.ndarray:
"""Calculate numeric solution at each step to an ODE using Euler's Method
For reference to Euler's method refer to https://en.wikipedia.org/wiki/Euler_method.
Args:
ode_func (Callable): The ordinary differential equation
as a function of x and y.
y0 (float): The initial value for y.
x0 (float): The initial value for x.
step_size (float): The increment value for x.
x_end (float): The final value of x to be calculated.
Returns:
np.ndarray: Solution of y for every step in x.
>>> # the exact solution is math.exp(x)
>>> def f(x, y):
... return y
>>> y0 = 1
>>> y = explicit_euler(f, y0, 0.0, 0.01, 5)
>>> y[-1]
144.77277243257308
"""
n = int(np.ceil((x_end - x0) / step_size))
y = np.zeros((n + 1,))
y[0] = y0
x = x0
for k in range(n):
y[k + 1] = y[k] + step_size * ode_func(x, y[k])
x += step_size
return y
if __name__ == "__main__":
import doctest
doctest.testmod()
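    # Order-of-accuracy sketch (my own addition): Euler's method is first
    # order, so halving the step size should roughly halve the global error.
    import math
    exact = math.exp(5)
    error_coarse = abs(explicit_euler(lambda x, y: y, 1.0, 0.0, 0.02, 5)[-1] - exact)
    error_fine = abs(explicit_euler(lambda x, y: y, 1.0, 0.0, 0.01, 5)[-1] - exact)
    assert error_fine < error_coarse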
|
Calculate solution at each step to an ODE using Euler's Modified Method The Euler Method is straightforward to implement, but can't give accurate solutions. So, some changes were proposed to improve accuracy. https:en.wikipedia.orgwikiEulermethod Arguments: odefunc The ode as a function of x and y y0 the initial value for y x0 the initial value for x stepsize the increment value for x xend the end value for x the exact solution is math.expx def f1x, y: ... return 2xy2 y eulermodifiedf1, 1.0, 0.0, 0.2, 1.0 y1 0.503338255442106 import math def f2x, y: ... return 2y x3math.exp2x y eulermodifiedf2, 1.0, 0.0, 0.1, 0.3 y1 0.5525976431951775 | from collections.abc import Callable
import numpy as np
def euler_modified(
ode_func: Callable, y0: float, x0: float, step_size: float, x_end: float
) -> np.ndarray:
"""
Calculate solution at each step to an ODE using Euler's Modified Method
The Euler Method is straightforward to implement, but can't give accurate solutions.
So, some changes were proposed to improve accuracy.
https://en.wikipedia.org/wiki/Euler_method
Arguments:
ode_func -- The ode as a function of x and y
y0 -- the initial value for y
x0 -- the initial value for x
    step_size -- the increment value for x
x_end -- the end value for x
    >>> # the exact solution is 1 / (1 + x**2)
>>> def f1(x, y):
... return -2*x*(y**2)
>>> y = euler_modified(f1, 1.0, 0.0, 0.2, 1.0)
>>> y[-1]
0.503338255442106
>>> import math
>>> def f2(x, y):
... return -2*y + (x**3)*math.exp(-2*x)
>>> y = euler_modified(f2, 1.0, 0.0, 0.1, 0.3)
>>> y[-1]
0.5525976431951775
"""
n = int(np.ceil((x_end - x0) / step_size))
y = np.zeros((n + 1,))
y[0] = y0
x = x0
for k in range(n):
y_get = y[k] + step_size * ode_func(x, y[k])
y[k + 1] = y[k] + (
(step_size / 2) * (ode_func(x, y[k]) + ode_func(x + step_size, y_get))
)
x += step_size
return y
if __name__ == "__main__":
import doctest
doctest.testmod()
|
Eulers Totient function finds the number of relative primes of a number n from 1 to n n 10 totientcalculation totientn for i in range1, n: ... printfi has totientcalculationi relative primes. 1 has 0 relative primes. 2 has 1 relative primes. 3 has 2 relative primes. 4 has 2 relative primes. 5 has 4 relative primes. 6 has 2 relative primes. 7 has 6 relative primes. 8 has 4 relative primes. 9 has 6 relative primes. | # Eulers Totient function finds the number of relative primes of a number n from 1 to n
def totient(n: int) -> list:
"""
>>> n = 10
>>> totient_calculation = totient(n)
>>> for i in range(1, n):
... print(f"{i} has {totient_calculation[i]} relative primes.")
1 has 0 relative primes.
2 has 1 relative primes.
3 has 2 relative primes.
4 has 2 relative primes.
5 has 4 relative primes.
6 has 2 relative primes.
7 has 6 relative primes.
8 has 4 relative primes.
9 has 6 relative primes.
"""
is_prime = [True for i in range(n + 1)]
totients = [i - 1 for i in range(n + 1)]
primes = []
for i in range(2, n + 1):
if is_prime[i]:
primes.append(i)
for j in range(len(primes)):
                if i * primes[j] > n:  # products above n fall outside the sieve
break
is_prime[i * primes[j]] = False
if i % primes[j] == 0:
totients[i * primes[j]] = totients[i] * primes[j]
break
totients[i * primes[j]] = totients[i] * (primes[j] - 1)
return totients
if __name__ == "__main__":
import doctest
doctest.testmod()
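    # Brute-force cross-check sketch (my own addition): phi(k) counts the
    # integers 1 <= m <= k that are coprime to k.
    from math import gcd
    limit = 20
    sieve = totient(limit)
    assert all(
        sieve[k] == sum(1 for m in range(1, k + 1) if gcd(m, k) == 1)
        for k in range(2, limit)
    )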
|
Extended Euclidean Algorithm. Finds 2 numbers a and b such that it satisfies the equation am bn gcdm, n a.k.a Bezout's Identity https:en.wikipedia.orgwikiExtendedEuclideanalgorithm Author: S. Sharma silentcat Date: 20190225T12:08:5306:00 Email: silentcatprotonmail.com Last modified by: pikulet Last modified time: 20201002 Extended Euclidean Algorithm. Finds 2 numbers a and b such that it satisfies the equation am bn gcdm, n a.k.a Bezout's Identity extendedeuclideanalgorithm1, 24 1, 0 extendedeuclideanalgorithm8, 14 2, 1 extendedeuclideanalgorithm240, 46 9, 47 extendedeuclideanalgorithm1, 4 1, 0 extendedeuclideanalgorithm2, 4 1, 0 extendedeuclideanalgorithm0, 4 0, 1 extendedeuclideanalgorithm2, 0 1, 0 base cases sign correction for negative numbers Call Extended Euclidean Algorithm. if lensys.argv 3: print2 integer arguments required return 1 a intsys.argv1 b intsys.argv2 printextendedeuclideanalgorithma, b return 0 if name main: raise SystemExitmain | # @Author: S. Sharma <silentcat>
# @Date: 2019-02-25T12:08:53-06:00
# @Email: silentcat@protonmail.com
# @Last modified by: pikulet
# @Last modified time: 2020-10-02
from __future__ import annotations
import sys
def extended_euclidean_algorithm(a: int, b: int) -> tuple[int, int]:
"""
Extended Euclidean Algorithm.
Finds 2 numbers a and b such that it satisfies
the equation am + bn = gcd(m, n) (a.k.a Bezout's Identity)
>>> extended_euclidean_algorithm(1, 24)
(1, 0)
>>> extended_euclidean_algorithm(8, 14)
(2, -1)
>>> extended_euclidean_algorithm(240, 46)
(-9, 47)
>>> extended_euclidean_algorithm(1, -4)
(1, 0)
>>> extended_euclidean_algorithm(-2, -4)
(-1, 0)
>>> extended_euclidean_algorithm(0, -4)
(0, -1)
>>> extended_euclidean_algorithm(2, 0)
(1, 0)
"""
# base cases
if abs(a) == 1:
return a, 0
elif abs(b) == 1:
return 0, b
old_remainder, remainder = a, b
old_coeff_a, coeff_a = 1, 0
old_coeff_b, coeff_b = 0, 1
while remainder != 0:
quotient = old_remainder // remainder
old_remainder, remainder = remainder, old_remainder - quotient * remainder
old_coeff_a, coeff_a = coeff_a, old_coeff_a - quotient * coeff_a
old_coeff_b, coeff_b = coeff_b, old_coeff_b - quotient * coeff_b
# sign correction for negative numbers
if a < 0:
old_coeff_a = -old_coeff_a
if b < 0:
old_coeff_b = -old_coeff_b
return old_coeff_a, old_coeff_b
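# Verification sketch (my own addition, not part of the original script): the
# returned coefficients (x, y) must satisfy the Bezout identity.
def bezout_identity_holds(a: int, b: int) -> bool:
    """
    >>> bezout_identity_holds(240, 46)
    True
    """
    from math import gcd
    x, y = extended_euclidean_algorithm(a, b)
    return x * a + y * b == gcd(a, b)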
def main():
"""Call Extended Euclidean Algorithm."""
if len(sys.argv) < 3:
print("2 integer arguments required")
return 1
a = int(sys.argv[1])
b = int(sys.argv[2])
print(extended_euclidean_algorithm(a, b))
return 0
if __name__ == "__main__":
raise SystemExit(main())
|
Factorial of a positive integer https:en.wikipedia.orgwikiFactorial Calculate the factorial of specified number n!. import math allfactoriali math.factoriali for i in range20 True factorial0.1 Traceback most recent call last: ... ValueError: factorial only accepts integral values factorial1 Traceback most recent call last: ... ValueError: factorial not defined for negative values factorial1 1 factorial6 720 factorial0 1 Calculate the factorial of a positive integer https:en.wikipedia.orgwikiFactorial import math allfactoriali math.factoriali for i in range20 True factorial0.1 Traceback most recent call last: ... ValueError: factorial only accepts integral values factorial1 Traceback most recent call last: ... ValueError: factorial not defined for negative values | def factorial(number: int) -> int:
"""
Calculate the factorial of specified number (n!).
>>> import math
>>> all(factorial(i) == math.factorial(i) for i in range(20))
True
>>> factorial(0.1)
Traceback (most recent call last):
...
ValueError: factorial() only accepts integral values
>>> factorial(-1)
Traceback (most recent call last):
...
ValueError: factorial() not defined for negative values
>>> factorial(1)
1
>>> factorial(6)
720
>>> factorial(0)
1
"""
if number != int(number):
raise ValueError("factorial() only accepts integral values")
if number < 0:
raise ValueError("factorial() not defined for negative values")
value = 1
for i in range(1, number + 1):
value *= i
return value
def factorial_recursive(n: int) -> int:
"""
Calculate the factorial of a positive integer
https://en.wikipedia.org/wiki/Factorial
>>> import math
    >>> all(factorial_recursive(i) == math.factorial(i) for i in range(20))
    True
    >>> factorial_recursive(0.1)
    Traceback (most recent call last):
    ...
    ValueError: factorial_recursive() only accepts integral values
    >>> factorial_recursive(-1)
    Traceback (most recent call last):
    ...
    ValueError: factorial_recursive() not defined for negative values
    """
    if not isinstance(n, int):
        raise ValueError("factorial_recursive() only accepts integral values")
    if n < 0:
        raise ValueError("factorial_recursive() not defined for negative values")
    return 1 if n in {0, 1} else n * factorial_recursive(n - 1)
if __name__ == "__main__":
import doctest
doctest.testmod()
n = int(input("Enter a positive integer: ").strip() or 0)
print(f"factorial{n} is {factorial(n)}")
|
factorsofanumber1 1 factorsofanumber5 1, 5 factorsofanumber24 1, 2, 3, 4, 6, 8, 12, 24 factorsofanumber24 | from doctest import testmod
from math import sqrt
def factors_of_a_number(num: int) -> list:
"""
>>> factors_of_a_number(1)
[1]
>>> factors_of_a_number(5)
[1, 5]
>>> factors_of_a_number(24)
[1, 2, 3, 4, 6, 8, 12, 24]
>>> factors_of_a_number(-24)
[]
"""
facs: list[int] = []
if num < 1:
return facs
facs.append(1)
if num == 1:
return facs
facs.append(num)
for i in range(2, int(sqrt(num)) + 1):
if num % i == 0: # If i is a factor of num
facs.append(i)
d = num // i # num//i is the other factor of num
if d != i: # If d and i are distinct
facs.append(d) # we have found another factor
facs.sort()
return facs
if __name__ == "__main__":
testmod(name="factors_of_a_number", verbose=True)
|
Fast inverse square root 1sqrtx using the Quake III algorithm. Reference: https:en.wikipedia.orgwikiFastinversesquareroot Accuracy: https:en.wikipedia.orgwikiFastinversesquarerootAccuracy Compute the fast inverse square root of a floatingpoint number using the famous Quake III algorithm. :param float number: Input number for which to calculate the inverse square root. :return float: The fast inverse square root of the input number. Example: fastinversesqrt10 0.3156857923527257 fastinversesqrt4 0.49915357479239103 fastinversesqrt4.1 0.4932849504615651 fastinversesqrt0 Traceback most recent call last: ... ValueError: Input must be a positive number. fastinversesqrt1 Traceback most recent call last: ... ValueError: Input must be a positive number. from math import isclose, sqrt allisclosefastinversesqrti, 1 sqrti, reltol0.00132 ... for i in range50, 60 True https:en.wikipedia.orgwikiFastinversesquarerootAccuracy | import struct
def fast_inverse_sqrt(number: float) -> float:
"""
Compute the fast inverse square root of a floating-point number using the famous
Quake III algorithm.
:param float number: Input number for which to calculate the inverse square root.
:return float: The fast inverse square root of the input number.
Example:
>>> fast_inverse_sqrt(10)
0.3156857923527257
>>> fast_inverse_sqrt(4)
0.49915357479239103
>>> fast_inverse_sqrt(4.1)
0.4932849504615651
>>> fast_inverse_sqrt(0)
Traceback (most recent call last):
...
ValueError: Input must be a positive number.
>>> fast_inverse_sqrt(-1)
Traceback (most recent call last):
...
ValueError: Input must be a positive number.
>>> from math import isclose, sqrt
>>> all(isclose(fast_inverse_sqrt(i), 1 / sqrt(i), rel_tol=0.00132)
... for i in range(50, 60))
True
"""
if number <= 0:
raise ValueError("Input must be a positive number.")
i = struct.unpack(">i", struct.pack(">f", number))[0]
i = 0x5F3759DF - (i >> 1)
y = struct.unpack(">f", struct.pack(">i", i))[0]
return y * (1.5 - 0.5 * number * y * y)
if __name__ == "__main__":
from doctest import testmod
testmod()
# https://en.wikipedia.org/wiki/Fast_inverse_square_root#Accuracy
from math import sqrt
for i in range(5, 101, 5):
print(f"{i:>3}: {(1 / sqrt(i)) - fast_inverse_sqrt(i):.5f}")
|
Python program to show the usage of Fermat's little theorem in a division According to Fermat's little theorem, a b mod p always equals a b p 2 mod p Here we assume that p is a prime number, b divides a, and p doesn't divide b Wikipedia reference: https:en.wikipedia.orgwikiFermat27slittletheorem a prime number using binary exponentiation function, Ologp: using Python operators: | # Python program to show the usage of Fermat's little theorem in a division
# According to Fermat's little theorem, (a / b) mod p always equals
# a * (b ^ (p - 2)) mod p
# Here we assume that p is a prime number, b divides a, and p doesn't divide b
# Wikipedia reference: https://en.wikipedia.org/wiki/Fermat%27s_little_theorem
def binary_exponentiation(a: int, n: int, mod: int) -> int:
if n == 0:
return 1
elif n % 2 == 1:
return (binary_exponentiation(a, n - 1, mod) * a) % mod
else:
        b = binary_exponentiation(a, n // 2, mod)
return (b * b) % mod
# a prime number
p = 701
a = 1000000000
b = 10
# using binary exponentiation function, O(log(p)):
print((a / b) % p == (a * binary_exponentiation(b, p - 2, p)) % p)
# using Python operators:
print((a / b) % p == (a * b ** (p - 2)) % p)
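# Cross-check sketch (my own addition): Python's three-argument pow computes
# the same modular inverse b^(p-2) mod p far more efficiently.
print((a / b) % p == (a * pow(b, p - 2, p)) % p)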
|
Calculates the Fibonacci sequence using iteration, recursion, memoization, and a simplified form of Binet's formula NOTE 1: the iterative, recursive, memoization functions are more accurate than the Binet's formula function because the Binet formula function uses floats NOTE 2: the Binet's formula function is much more limited in the size of inputs that it can handle due to the size limitations of Python floats See benchmark numbers in main for performance comparisons https:en.wikipedia.orgwikiFibonaccinumber for more information Times the execution of a function with parameters Calculates the first n 1indexed Fibonacci numbers using iteration with yield listfibiterativeyield0 0 tuplefibiterativeyield1 0, 1 tuplefibiterativeyield5 0, 1, 1, 2, 3, 5 tuplefibiterativeyield10 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55 tuplefibiterativeyield1 Traceback most recent call last: ... ValueError: n is negative Calculates the first n 0indexed Fibonacci numbers using iteration fibiterative0 0 fibiterative1 0, 1 fibiterative5 0, 1, 1, 2, 3, 5 fibiterative10 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55 fibiterative1 Traceback most recent call last: ... ValueError: n is negative Calculates the first n 0indexed Fibonacci numbers using recursion fibiterative0 0 fibiterative1 0, 1 fibiterative5 0, 1, 1, 2, 3, 5 fibiterative10 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55 fibiterative1 Traceback most recent call last: ... ValueError: n is negative Calculates the ith 0indexed Fibonacci number using recursion fibrecursiveterm0 0 fibrecursiveterm1 1 fibrecursiveterm5 5 fibrecursiveterm10 55 fibrecursiveterm1 Traceback most recent call last: ... Exception: n is negative Calculates the first n 0indexed Fibonacci numbers using recursion fibiterative0 0 fibiterative1 0, 1 fibiterative5 0, 1, 1, 2, 3, 5 fibiterative10 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55 fibiterative1 Traceback most recent call last: ... ValueError: n is negative Calculates the ith 0indexed Fibonacci number using recursion Calculates the first n 0indexed Fibonacci numbers using memoization fibmemoization0 0 fibmemoization1 0, 1 fibmemoization5 0, 1, 1, 2, 3, 5 fibmemoization10 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55 fibiterative1 Traceback most recent call last: ... ValueError: n is negative Cache must be outside recursuive function other it will reset every time it calls itself. Calculates the first n 0indexed Fibonacci numbers using a simplified form of Binet's formula: https:en.m.wikipedia.orgwikiFibonaccinumberComputationbyrounding NOTE 1: this function diverges from fibiterative at around n 71, likely due to compounding floatingpoint arithmetic errors NOTE 2: this function doesn't accept n 1475 because it overflows thereafter due to the size limitations of Python floats fibbinet0 0 fibbinet1 0, 1 fibbinet5 0, 1, 1, 2, 3, 5 fibbinet10 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55 fibbinet1 Traceback most recent call last: ... ValueError: n is negative fibbinet1475 Traceback most recent call last: ... ValueError: n is too large Time on an M1 MacBook Pro Fastest to slowest | import functools
from collections.abc import Iterator
from math import sqrt
from time import time
def time_func(func, *args, **kwargs):
"""
Times the execution of a function with parameters
"""
start = time()
output = func(*args, **kwargs)
end = time()
if int(end - start) > 0:
print(f"{func.__name__} runtime: {(end - start):0.4f} s")
else:
print(f"{func.__name__} runtime: {(end - start) * 1000:0.4f} ms")
return output
def fib_iterative_yield(n: int) -> Iterator[int]:
"""
    Calculates the first n + 1 Fibonacci numbers, F(0) through F(n), using iteration with yield
>>> list(fib_iterative_yield(0))
[0]
>>> tuple(fib_iterative_yield(1))
(0, 1)
>>> tuple(fib_iterative_yield(5))
(0, 1, 1, 2, 3, 5)
>>> tuple(fib_iterative_yield(10))
(0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55)
>>> tuple(fib_iterative_yield(-1))
Traceback (most recent call last):
...
ValueError: n is negative
"""
if n < 0:
raise ValueError("n is negative")
a, b = 0, 1
yield a
for _ in range(n):
yield b
a, b = b, a + b
def fib_iterative(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using iteration
>>> fib_iterative(0)
[0]
>>> fib_iterative(1)
[0, 1]
>>> fib_iterative(5)
[0, 1, 1, 2, 3, 5]
>>> fib_iterative(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_iterative(-1)
Traceback (most recent call last):
...
ValueError: n is negative
"""
if n < 0:
raise ValueError("n is negative")
if n == 0:
return [0]
fib = [0, 1]
for _ in range(n - 1):
fib.append(fib[-1] + fib[-2])
return fib
def fib_recursive(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using recursion
    >>> fib_recursive(0)
    [0]
    >>> fib_recursive(1)
    [0, 1]
    >>> fib_recursive(5)
    [0, 1, 1, 2, 3, 5]
    >>> fib_recursive(10)
    [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
    >>> fib_recursive(-1)
    Traceback (most recent call last):
    ...
    ValueError: n is negative
"""
def fib_recursive_term(i: int) -> int:
"""
Calculates the i-th (0-indexed) Fibonacci number using recursion
>>> fib_recursive_term(0)
0
>>> fib_recursive_term(1)
1
>>> fib_recursive_term(5)
5
>>> fib_recursive_term(10)
55
>>> fib_recursive_term(-1)
Traceback (most recent call last):
...
        ValueError: n is negative
"""
if i < 0:
raise ValueError("n is negative")
if i < 2:
return i
return fib_recursive_term(i - 1) + fib_recursive_term(i - 2)
if n < 0:
raise ValueError("n is negative")
return [fib_recursive_term(i) for i in range(n + 1)]
def fib_recursive_cached(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using recursion
    >>> fib_recursive_cached(0)
    [0]
    >>> fib_recursive_cached(1)
    [0, 1]
    >>> fib_recursive_cached(5)
    [0, 1, 1, 2, 3, 5]
    >>> fib_recursive_cached(10)
    [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
    >>> fib_recursive_cached(-1)
    Traceback (most recent call last):
    ...
    ValueError: n is negative
"""
@functools.cache
def fib_recursive_term(i: int) -> int:
"""
Calculates the i-th (0-indexed) Fibonacci number using recursion
"""
if i < 0:
raise ValueError("n is negative")
if i < 2:
return i
return fib_recursive_term(i - 1) + fib_recursive_term(i - 2)
if n < 0:
raise ValueError("n is negative")
return [fib_recursive_term(i) for i in range(n + 1)]
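# Note (an added aside, not in the original module): functools.cache gives the
# same memoization as the hand-rolled dict in fib_memoization below;
# fib_recursive_term.cache_clear() would reset it between calls if needed.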
def fib_memoization(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using memoization
>>> fib_memoization(0)
[0]
>>> fib_memoization(1)
[0, 1]
>>> fib_memoization(5)
[0, 1, 1, 2, 3, 5]
>>> fib_memoization(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
    >>> fib_memoization(-1)
Traceback (most recent call last):
...
ValueError: n is negative
"""
if n < 0:
raise ValueError("n is negative")
    # The cache must live outside the recursive function;
    # otherwise it would reset every time the function calls itself.
cache: dict[int, int] = {0: 0, 1: 1, 2: 1} # Prefilled cache
def rec_fn_memoized(num: int) -> int:
if num in cache:
return cache[num]
value = rec_fn_memoized(num - 1) + rec_fn_memoized(num - 2)
cache[num] = value
return value
return [rec_fn_memoized(i) for i in range(n + 1)]
def fib_binet(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using a simplified form
of Binet's formula:
https://en.m.wikipedia.org/wiki/Fibonacci_number#Computation_by_rounding
NOTE 1: this function diverges from fib_iterative at around n = 71, likely
due to compounding floating-point arithmetic errors
NOTE 2: this function doesn't accept n >= 1475 because it overflows
thereafter due to the size limitations of Python floats
>>> fib_binet(0)
[0]
>>> fib_binet(1)
[0, 1]
>>> fib_binet(5)
[0, 1, 1, 2, 3, 5]
>>> fib_binet(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_binet(-1)
Traceback (most recent call last):
...
ValueError: n is negative
>>> fib_binet(1475)
Traceback (most recent call last):
...
ValueError: n is too large
"""
if n < 0:
raise ValueError("n is negative")
if n >= 1475:
raise ValueError("n is too large")
sqrt_5 = sqrt(5)
phi = (1 + sqrt_5) / 2
return [round(phi**i / sqrt_5) for i in range(n + 1)]
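# A quick check of NOTE 1 above (an illustrative addition, not part of the
# original benchmark): find the first index where fib_binet's float-based
# values diverge from the exact fib_iterative values; expected near n = 71.
def first_binet_divergence(limit: int = 100) -> int:
    """Return the first index where fib_binet and fib_iterative disagree,
    or -1 if they agree for all indices up to `limit`."""
    exact = fib_iterative(limit)
    approximate = fib_binet(limit)
    return next(
        (i for i, (a, b) in enumerate(zip(exact, approximate)) if a != b), -1
    )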
if __name__ == "__main__":
from doctest import testmod
testmod()
# Time on an M1 MacBook Pro -- Fastest to slowest
num = 30
time_func(fib_iterative_yield, num) # 0.0012 ms
time_func(fib_iterative, num) # 0.0031 ms
time_func(fib_binet, num) # 0.0062 ms
time_func(fib_memoization, num) # 0.0100 ms
time_func(fib_recursive_cached, num) # 0.0153 ms
time_func(fib_recursive, num) # 257.0910 ms
|
Find the maximum value in a list, both iteratively and with a divide-and-conquer recursive algorithm. | from __future__ import annotations
def find_max_iterative(nums: list[int | float]) -> int | float:
"""
>>> for nums in ([3, 2, 1], [-3, -2, -1], [3, -3, 0], [3.0, 3.1, 2.9]):
... find_max_iterative(nums) == max(nums)
True
True
True
True
>>> find_max_iterative([2, 4, 9, 7, 19, 94, 5])
94
>>> find_max_iterative([])
Traceback (most recent call last):
...
ValueError: find_max_iterative() arg is an empty sequence
"""
if len(nums) == 0:
raise ValueError("find_max_iterative() arg is an empty sequence")
max_num = nums[0]
for x in nums:
if x > max_num:
max_num = x
return max_num
# Divide and Conquer algorithm
def find_max_recursive(nums: list[int | float], left: int, right: int) -> int | float:
"""
find max value in list
:param nums: contains elements
:param left: index of first element
:param right: index of last element
:return: max in nums
>>> for nums in ([3, 2, 1], [-3, -2, -1], [3, -3, 0], [3.0, 3.1, 2.9]):
... find_max_recursive(nums, 0, len(nums) - 1) == max(nums)
True
True
True
True
>>> nums = [1, 3, 5, 7, 9, 2, 4, 6, 8, 10]
>>> find_max_recursive(nums, 0, len(nums) - 1) == max(nums)
True
>>> find_max_recursive([], 0, 0)
Traceback (most recent call last):
...
ValueError: find_max_recursive() arg is an empty sequence
>>> find_max_recursive(nums, 0, len(nums)) == max(nums)
Traceback (most recent call last):
...
IndexError: list index out of range
>>> find_max_recursive(nums, -len(nums), -1) == max(nums)
True
>>> find_max_recursive(nums, -len(nums) - 1, -1) == max(nums)
Traceback (most recent call last):
...
IndexError: list index out of range
"""
if len(nums) == 0:
raise ValueError("find_max_recursive() arg is an empty sequence")
if (
left >= len(nums)
or left < -len(nums)
or right >= len(nums)
or right < -len(nums)
):
raise IndexError("list index out of range")
if left == right:
return nums[left]
mid = (left + right) >> 1 # the middle
left_max = find_max_recursive(nums, left, mid) # find max in range[left, mid]
right_max = find_max_recursive(
nums, mid + 1, right
) # find max in range[mid + 1, right]
return left_max if left_max >= right_max else right_max
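# Design note (an added aside): splitting at mid keeps the recursion depth at
# O(log n), so this stays well within Python's default recursion limit even
# for large lists, while still performing n - 1 comparisons in total.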
if __name__ == "__main__":
import doctest
doctest.testmod(verbose=True)
|
Find the minimum value in a list, both iteratively and with a divide-and-conquer recursive algorithm. | from __future__ import annotations
def find_min_iterative(nums: list[int | float]) -> int | float:
"""
Find Minimum Number in a List
:param nums: contains elements
:return: min number in list
>>> for nums in ([3, 2, 1], [-3, -2, -1], [3, -3, 0], [3.0, 3.1, 2.9]):
... find_min_iterative(nums) == min(nums)
True
True
True
True
>>> find_min_iterative([0, 1, 2, 3, 4, 5, -3, 24, -56])
-56
>>> find_min_iterative([])
Traceback (most recent call last):
...
ValueError: find_min_iterative() arg is an empty sequence
"""
if len(nums) == 0:
raise ValueError("find_min_iterative() arg is an empty sequence")
min_num = nums[0]
    for num in nums:
        if num < min_num:
            min_num = num
return min_num
# Divide and Conquer algorithm
def find_min_recursive(nums: list[int | float], left: int, right: int) -> int | float:
"""
find min value in list
:param nums: contains elements
:param left: index of first element
:param right: index of last element
:return: min in nums
>>> for nums in ([3, 2, 1], [-3, -2, -1], [3, -3, 0], [3.0, 3.1, 2.9]):
... find_min_recursive(nums, 0, len(nums) - 1) == min(nums)
True
True
True
True
>>> nums = [1, 3, 5, 7, 9, 2, 4, 6, 8, 10]
>>> find_min_recursive(nums, 0, len(nums) - 1) == min(nums)
True
>>> find_min_recursive([], 0, 0)
Traceback (most recent call last):
...
ValueError: find_min_recursive() arg is an empty sequence
>>> find_min_recursive(nums, 0, len(nums)) == min(nums)
Traceback (most recent call last):
...
IndexError: list index out of range
>>> find_min_recursive(nums, -len(nums), -1) == min(nums)
True
>>> find_min_recursive(nums, -len(nums) - 1, -1) == min(nums)
Traceback (most recent call last):
...
IndexError: list index out of range
"""
if len(nums) == 0:
raise ValueError("find_min_recursive() arg is an empty sequence")
if (
left >= len(nums)
or left < -len(nums)
or right >= len(nums)
or right < -len(nums)
):
raise IndexError("list index out of range")
if left == right:
return nums[left]
mid = (left + right) >> 1 # the middle
left_min = find_min_recursive(nums, left, mid) # find min in range[left, mid]
right_min = find_min_recursive(
nums, mid + 1, right
) # find min in range[mid + 1, right]
return left_min if left_min <= right_min else right_min
if __name__ == "__main__":
import doctest
doctest.testmod(verbose=True)
|
https://en.wikipedia.org/wiki/Floor_and_ceiling_functions Return the floor of x as an Integral: the largest integer <= x. | def floor(x: float) -> int:
"""
Return the floor of x as an Integral.
:param x: the number
:return: the largest integer <= x.
>>> import math
>>> all(floor(n) == math.floor(n) for n
... in (1, -1, 0, -0, 1.1, -1.1, 1.0, -1.0, 1_000_000_000))
True
"""
return int(x) if x - int(x) >= 0 else int(x) - 1
if __name__ == "__main__":
import doctest
doctest.testmod()
|
The Gamma function is a very useful tool in math and physics: it helps with calculating complex integrals in a convenient way. For more info: https://en.wikipedia.org/wiki/Gamma_function In mathematics, the gamma function is one commonly used extension of the factorial function to complex numbers; it is defined for all complex numbers except the non-positive integers. Python's standard-library math.gamma() function overflows around gamma(171.624). | import math
from numpy import inf
from scipy.integrate import quad
def gamma_iterative(num: float) -> float:
"""
Calculates the value of Gamma function of num
where num is either an integer (1, 2, 3..) or a half-integer (0.5, 1.5, 2.5 ...).
>>> gamma_iterative(-1)
Traceback (most recent call last):
...
ValueError: math domain error
>>> gamma_iterative(0)
Traceback (most recent call last):
...
ValueError: math domain error
>>> gamma_iterative(9)
40320.0
>>> from math import gamma as math_gamma
>>> all(.99999999 < gamma_iterative(i) / math_gamma(i) <= 1.000000001
... for i in range(1, 50))
True
>>> gamma_iterative(-1)/math_gamma(-1) <= 1.000000001
Traceback (most recent call last):
...
ValueError: math domain error
>>> gamma_iterative(3.3) - math_gamma(3.3) <= 0.00000001
True
"""
if num <= 0:
raise ValueError("math domain error")
    return quad(integrand, 0, inf, args=(num,))[0]
def integrand(x: float, z: float) -> float:
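    # Integrand of the Gamma integral: gamma(z) = integral from 0 to inf of
    # x**(z - 1) * e**(-x) dx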
return math.pow(x, z - 1) * math.exp(-x)
def gamma_recursive(num: float) -> float:
"""
Calculates the value of Gamma function of num
where num is either an integer (1, 2, 3..) or a half-integer (0.5, 1.5, 2.5 ...).
Implemented using recursion
Examples:
>>> from math import isclose, gamma as math_gamma
>>> gamma_recursive(0.5)
1.7724538509055159
>>> gamma_recursive(1)
1.0
>>> gamma_recursive(2)
1.0
>>> gamma_recursive(3.5)
3.3233509704478426
>>> gamma_recursive(171.5)
9.483367566824795e+307
>>> all(isclose(gamma_recursive(num), math_gamma(num))
... for num in (0.5, 2, 3.5, 171.5))
True
>>> gamma_recursive(0)
Traceback (most recent call last):
...
ValueError: math domain error
>>> gamma_recursive(-1.1)
Traceback (most recent call last):
...
ValueError: math domain error
>>> gamma_recursive(-4)
Traceback (most recent call last):
...
ValueError: math domain error
>>> gamma_recursive(172)
Traceback (most recent call last):
...
OverflowError: math range error
>>> gamma_recursive(1.1)
Traceback (most recent call last):
...
NotImplementedError: num must be an integer or a half-integer
"""
if num <= 0:
raise ValueError("math domain error")
if num > 171.5:
raise OverflowError("math range error")
elif num - int(num) not in (0, 0.5):
raise NotImplementedError("num must be an integer or a half-integer")
elif num == 0.5:
return math.sqrt(math.pi)
else:
return 1.0 if num == 1 else (num - 1) * gamma_recursive(num - 1)
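# Worked example of the recursion above (an added aside): for half-integers
# the recurrence gamma(n) = (n - 1) * gamma(n - 1) bottoms out at
# gamma(0.5) = sqrt(pi), so
# gamma(3.5) = 2.5 * 1.5 * 0.5 * sqrt(pi) = 3.3233509704478426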
if __name__ == "__main__":
from doctest import testmod
testmod()
num = 1.0
while num:
num = float(input("Gamma of: "))
print(f"gamma_iterative({num}) = {gamma_iterative(num)}")
print(f"gamma_recursive({num}) = {gamma_recursive(num)}")
print("\nEnter 0 to exit...")
|
Reference: https://en.wikipedia.org/wiki/Gaussian_function | from numpy import exp, pi, sqrt
def gaussian(x, mu: float = 0.0, sigma: float = 1.0) -> float:
"""
>>> gaussian(1)
0.24197072451914337
>>> gaussian(24)
3.342714441794458e-126
>>> gaussian(1, 4, 2)
0.06475879783294587
>>> gaussian(1, 5, 3)
0.05467002489199788
Supports NumPy Arrays
Use numpy.meshgrid with this to generate gaussian blur on images.
>>> import numpy as np
>>> x = np.arange(15)
>>> gaussian(x)
array([3.98942280e-01, 2.41970725e-01, 5.39909665e-02, 4.43184841e-03,
1.33830226e-04, 1.48671951e-06, 6.07588285e-09, 9.13472041e-12,
5.05227108e-15, 1.02797736e-18, 7.69459863e-23, 2.11881925e-27,
2.14638374e-32, 7.99882776e-38, 1.09660656e-43])
>>> gaussian(15)
5.530709549844416e-50
>>> gaussian([1,2, 'string'])
Traceback (most recent call last):
...
TypeError: unsupported operand type(s) for -: 'list' and 'float'
>>> gaussian('hello world')
Traceback (most recent call last):
...
TypeError: unsupported operand type(s) for -: 'str' and 'float'
>>> gaussian(10**234) # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
OverflowError: (34, 'Result too large')
>>> gaussian(10**-326)
0.3989422804014327
>>> gaussian(2523, mu=234234, sigma=3425)
0.0
"""
return 1 / sqrt(2 * pi * sigma**2) * exp(-((x - mu) ** 2) / (2 * sigma**2))
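# A minimal sketch of the numpy.meshgrid idea mentioned in the docstring (an
# illustrative addition): build a normalized 2D Gaussian kernel of the kind
# used for image blurring. The function name and defaults are assumptions.
def gaussian_kernel_2d(size: int = 5, sigma: float = 1.0):
    import numpy as np

    ax = np.arange(size) - (size - 1) / 2.0  # coordinates centered on 0
    xx, yy = np.meshgrid(ax, ax)
    kernel = gaussian(xx, sigma=sigma) * gaussian(yy, sigma=sigma)
    return kernel / kernel.sum()  # normalize so the weights sum to 1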
if __name__ == "__main__":
import doctest
doctest.testmod()
|
This script demonstrates an implementation of the Gaussian Error Linear Unit (GELU) function, a high-performing neural network activation function: it takes a vector of K real numbers as input and returns x * sigmoid(1.702 * x). https://en.wikipedia.org/wiki/Activation_function#Comparison_of_activation_functions This script is inspired by the corresponding research paper: https://arxiv.org/abs/1606.08415 | import numpy as np
def sigmoid(vector: np.ndarray) -> np.ndarray:
"""
Mathematical function sigmoid takes a vector x of K real numbers as input and
returns 1/ (1 + e^-x).
https://en.wikipedia.org/wiki/Sigmoid_function
>>> sigmoid(np.array([-1.0, 1.0, 2.0]))
array([0.26894142, 0.73105858, 0.88079708])
"""
return 1 / (1 + np.exp(-vector))
def gaussian_error_linear_unit(vector: np.ndarray) -> np.ndarray:
"""
Implements the Gaussian Error Linear Unit (GELU) function
Parameters:
vector (np.ndarray): A numpy array of shape (1, n) consisting of real values
Returns:
gelu_vec (np.ndarray): The input numpy array, after applying gelu
Examples:
>>> gaussian_error_linear_unit(np.array([-1.0, 1.0, 2.0]))
array([-0.15420423, 0.84579577, 1.93565862])
>>> gaussian_error_linear_unit(np.array([-3]))
array([-0.01807131])
"""
return vector * sigmoid(1.702 * vector)
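# Hedged comparison sketch (an addition, not part of the original script): the
# x * sigmoid(1.702 * x) form above is an approximation; the exact GELU uses
# the Gaussian CDF, 0.5 * x * (1 + erf(x / sqrt(2))). Assumes SciPy is
# available.
def exact_gelu(vector: np.ndarray) -> np.ndarray:
    from scipy.special import erf

    return 0.5 * vector * (1 + erf(vector / np.sqrt(2)))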
if __name__ == "__main__":
import doctest
doctest.testmod()
|
A Sophie Germain prime is any prime p where 2p + 1 is also prime. The second number, 2p + 1, is called a safe prime. Examples of Germain primes include 2, 3, 5, 11, 23; their corresponding safe primes are 5, 7, 11, 23, 47. https://en.wikipedia.org/wiki/Safe_and_Sophie_Germain_primes | from maths.prime_check import is_prime
def is_germain_prime(number: int) -> bool:
"""Checks if input number and 2*number + 1 are prime.
>>> is_germain_prime(3)
True
>>> is_germain_prime(11)
True
>>> is_germain_prime(4)
False
>>> is_germain_prime(23)
True
>>> is_germain_prime(13)
False
>>> is_germain_prime(20)
False
>>> is_germain_prime('abc')
Traceback (most recent call last):
...
TypeError: Input value must be a positive integer. Input value: abc
"""
if not isinstance(number, int) or number < 1:
msg = f"Input value must be a positive integer. Input value: {number}"
raise TypeError(msg)
return is_prime(number) and is_prime(2 * number + 1)
def is_safe_prime(number: int) -> bool:
"""Checks if input number and (number - 1)/2 are prime.
    The smallest safe prime is 5; its corresponding Germain prime is 2.
>>> is_safe_prime(5)
True
>>> is_safe_prime(11)
True
>>> is_safe_prime(1)
False
>>> is_safe_prime(2)
False
>>> is_safe_prime(3)
False
>>> is_safe_prime(47)
True
>>> is_safe_prime('abc')
Traceback (most recent call last):
...
TypeError: Input value must be a positive integer. Input value: abc
"""
if not isinstance(number, int) or number < 1:
msg = f"Input value must be a positive integer. Input value: {number}"
raise TypeError(msg)
return (number - 1) % 2 == 0 and is_prime(number) and is_prime((number - 1) // 2)
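# Illustrative addition (not in the original module): recover the example
# pairs from the description, e.g. [(2, 5), (3, 7), (5, 11), (11, 23), (23, 47)].
def germain_safe_pairs(limit: int = 23) -> list[tuple[int, int]]:
    """Return (Germain prime, safe prime) pairs for Germain primes <= limit."""
    return [(p, 2 * p + 1) for p in range(2, limit + 1) if is_germain_prime(p)]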
if __name__ == "__main__":
from doctest import testmod
testmod()
|
Greatest Common Divisor. Wikipedia reference: https://en.wikipedia.org/wiki/Greatest_common_divisor gcd(a, b) = gcd(a, -b) = gcd(-a, b) = gcd(-a, -b) by definition of divisibility. | def greatest_common_divisor(a: int, b: int) -> int:
"""
Calculate Greatest Common Divisor (GCD).
>>> greatest_common_divisor(24, 40)
8
>>> greatest_common_divisor(1, 1)
1
>>> greatest_common_divisor(1, 800)
1
>>> greatest_common_divisor(11, 37)
1
>>> greatest_common_divisor(3, 5)
1
>>> greatest_common_divisor(16, 4)
4
>>> greatest_common_divisor(-3, 9)
3
>>> greatest_common_divisor(9, -3)
3
>>> greatest_common_divisor(3, -9)
3
>>> greatest_common_divisor(-3, -9)
3
"""
return abs(b) if a == 0 else greatest_common_divisor(b % a, a)
def gcd_by_iterative(x: int, y: int) -> int:
"""
Below method is more memory efficient because it does not create additional
stack frames for recursive functions calls (as done in the above method).
>>> gcd_by_iterative(24, 40)
8
>>> greatest_common_divisor(24, 40) == gcd_by_iterative(24, 40)
True
>>> gcd_by_iterative(-3, -9)
3
>>> gcd_by_iterative(3, -9)
3
>>> gcd_by_iterative(1, -800)
1
>>> gcd_by_iterative(11, 37)
1
"""
while y: # --> when y=0 then loop will terminate and return x as final GCD.
x, y = y, x % y
return abs(x)
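# A hedged sanity check (an illustrative addition, not part of the original
# module): both implementations should agree with math.gcd on small inputs.
def _check_against_math_gcd(limit: int = 50) -> bool:
    from math import gcd

    return all(
        greatest_common_divisor(a, b) == gcd_by_iterative(a, b) == gcd(a, b)
        for a in range(1, limit)
        for b in range(1, limit)
    )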
def main():
"""
Call Greatest Common Divisor function.
"""
try:
nums = input("Enter two integers separated by comma (,): ").split(",")
num_1 = int(nums[0])
num_2 = int(nums[1])
print(
f"greatest_common_divisor({num_1}, {num_2}) = "
f"{greatest_common_divisor(num_1, num_2)}"
)
print(f"By iterative gcd({num_1}, {num_2}) = {gcd_by_iterative(num_1, num_2)}")
except (IndexError, UnboundLocalError, ValueError):
print("Wrong input")
if __name__ == "__main__":
main()
|
This theorem (Hardy-Ramanujan) states that the number of distinct prime factors of n will be approximately log(log(n)) for most natural numbers n. | # This theorem states that the number of prime factors of n
# will be approximately log(log(n)) for most natural numbers n
import math
def exact_prime_factor_count(n: int) -> int:
"""
>>> exact_prime_factor_count(51242183)
3
"""
count = 0
if n % 2 == 0:
count += 1
        while n % 2 == 0:
            n //= 2  # integer division avoids float precision loss for large n
    # the n input value must be odd so that
    # we can skip one element (ie i += 2)
    i = 3
    while i * i <= n:  # same as i <= sqrt(n), but exact for large integers
        if n % i == 0:
            count += 1
            while n % i == 0:
                n //= i
        i += 2
# this condition checks the prime
# number n is greater than 2
if n > 2:
count += 1
return count
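# Hedged empirical sketch (an addition, not in the original script): the
# Hardy-Ramanujan theorem concerns typical n, so averaging the distinct
# prime factor count over a range should land near log(log(n)).
def average_prime_factor_count(limit: int = 10_000) -> float:
    """Average exact_prime_factor_count(k) over 2 <= k < limit."""
    return sum(exact_prime_factor_count(k) for k in range(2, limit)) / (limit - 2)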
if __name__ == "__main__":
n = 51242183
print(f"The number of distinct prime factors is/are {exact_prime_factor_count(n)}")
print(f"The value of log(log(n)) is {math.log(math.log(n)):.4f}")
"""
The number of distinct prime factors is/are 3
The value of log(log(n)) is 2.8765
"""
|