From 77b4fa8b3f2070ff708405cca1381b7860e316ab Mon Sep 17 00:00:00 2001
From: Damon Gregory <46330424+SheriffHobo@users.noreply.github.com>
Date: Sun, 12 Feb 2023 07:55:25 -0800
Subject: [PATCH 001/808] fix_ci_badge (#8134)
---
README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index da80c012b0c6..68a6e5e6fbce 100644
--- a/README.md
+++ b/README.md
@@ -22,7 +22,7 @@
-
+
From 126e89d8a3983c1ffc9b3eefa1fbaff0f6fe4ead Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Mon, 13 Feb 2023 22:05:56 +0100
Subject: [PATCH 002/808] [pre-commit.ci] pre-commit autoupdate (#8141)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
updates:
- [github.com/tox-dev/pyproject-fmt: 0.6.0 → 0.8.0](https://github.com/tox-dev/pyproject-fmt/compare/0.6.0...0.8.0)
- [github.com/pre-commit/mirrors-mypy: v0.991 → v1.0.0](https://github.com/pre-commit/mirrors-mypy/compare/v0.991...v1.0.0)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
.pre-commit-config.yaml | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index f8d1a65db27b..a1496984f950 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -27,7 +27,7 @@ repos:
- --profile=black
- repo: https://github.com/tox-dev/pyproject-fmt
- rev: "0.6.0"
+ rev: "0.8.0"
hooks:
- id: pyproject-fmt
@@ -62,7 +62,7 @@ repos:
*flake8-plugins
- repo: https://github.com/pre-commit/mirrors-mypy
- rev: v0.991
+ rev: v1.0.0
hooks:
- id: mypy
args:
From 1bf03889c5e34420001e72b5d26cc0846dcd122a Mon Sep 17 00:00:00 2001
From: Jan Wojciechowski <96974442+yanvoi@users.noreply.github.com>
Date: Sun, 19 Feb 2023 23:14:01 +0100
Subject: [PATCH 003/808] Update bogo_sort.py (#8144)
---
sorts/bogo_sort.py | 2 --
1 file changed, 2 deletions(-)
diff --git a/sorts/bogo_sort.py b/sorts/bogo_sort.py
index b72f2089f3d2..9c133f0d8a55 100644
--- a/sorts/bogo_sort.py
+++ b/sorts/bogo_sort.py
@@ -31,8 +31,6 @@ def bogo_sort(collection):
"""
def is_sorted(collection):
- if len(collection) < 2:
- return True
for i in range(len(collection) - 1):
if collection[i] > collection[i + 1]:
return False
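The guard removed above was redundant: for a 0- or 1-element collection, range(len(collection) - 1) is empty, so the loop body never runs and the helper falls through to return True anyway. A minimal standalone sketch of the post-patch helper, with toy inputs, showing the short-collection behaviour:

    def is_sorted(collection):
        # range(-1) and range(0) are both empty, so 0- and 1-element
        # inputs skip the loop entirely and fall through to True.
        for i in range(len(collection) - 1):
            if collection[i] > collection[i + 1]:
                return False
        return True

    assert is_sorted([]) and is_sorted([7]) and is_sorted([1, 2, 3])
    assert not is_sorted([3, 1, 2])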
From 67676c3b790d9631ea99c89f71dc2bf65e9aa2ca Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Tue, 21 Feb 2023 08:33:44 +0100
Subject: [PATCH 004/808] [pre-commit.ci] pre-commit autoupdate (#8149)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/tox-dev/pyproject-fmt: 0.8.0 → 0.9.1](https://github.com/tox-dev/pyproject-fmt/compare/0.8.0...0.9.1)
- [github.com/pre-commit/mirrors-mypy: v1.0.0 → v1.0.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.0...v1.0.1)
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
.pre-commit-config.yaml | 4 ++--
pyproject.toml | 1 -
2 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index a1496984f950..93064949e194 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -27,7 +27,7 @@ repos:
- --profile=black
- repo: https://github.com/tox-dev/pyproject-fmt
- rev: "0.8.0"
+ rev: "0.9.1"
hooks:
- id: pyproject-fmt
@@ -62,7 +62,7 @@ repos:
*flake8-plugins
- repo: https://github.com/pre-commit/mirrors-mypy
- rev: v1.0.0
+ rev: v1.0.1
hooks:
- id: mypy
args:
diff --git a/pyproject.toml b/pyproject.toml
index 410e7655b2b5..5f9b1aa06c0e 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -8,7 +8,6 @@ addopts = [
"--showlocals",
]
-
[tool.coverage.report]
omit = [".env/*"]
sort = "Cover"
From 1c15cdff70893bc27ced2b390959e1d9cc493628 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Mon, 27 Feb 2023 23:08:40 +0100
Subject: [PATCH 005/808] [pre-commit.ci] pre-commit autoupdate (#8160)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/tox-dev/pyproject-fmt: 0.9.1 → 0.9.2](https://github.com/tox-dev/pyproject-fmt/compare/0.9.1...0.9.2)
* pre-commit: Add ruff
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
.pre-commit-config.yaml | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 93064949e194..9f27f985bb6a 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -27,7 +27,7 @@ repos:
- --profile=black
- repo: https://github.com/tox-dev/pyproject-fmt
- rev: "0.9.1"
+ rev: "0.9.2"
hooks:
- id: pyproject-fmt
@@ -43,6 +43,13 @@ repos:
args:
- --py311-plus
+ - repo: https://github.com/charliermarsh/ruff-pre-commit
+ rev: v0.0.253
+ hooks:
+ - id: ruff
+ args:
+ - --ignore=E741
+
- repo: https://github.com/PyCQA/flake8
rev: 6.0.0
hooks:
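E741 is pycodestyle's "ambiguous variable name" rule (single-letter l, O, I), which ruff also implements; ignoring it fits this repository, where several algorithms use such names, e.g. the DP table l in dynamic_programming/longest_common_subsequence.py touched in the next patch. A two-line hypothetical illustration of what the rule would otherwise flag, assuming default ruff settings:

    l = [[0] * 3 for _ in range(3)]      # flagged: E741 ambiguous variable name 'l'
    table = [[0] * 3 for _ in range(3)]  # unambiguous spelling that ruff accepts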
From 64543faa980b526f79d287a073ebb7554749faf9 Mon Sep 17 00:00:00 2001
From: Christian Clauss
Date: Wed, 1 Mar 2023 17:23:33 +0100
Subject: [PATCH 006/808] Make some ruff fixes (#8154)
* Make some ruff fixes
* Undo manual fix
* Undo manual fix
* Updates from ruff=0.0.251
---
audio_filters/iir_filter.py | 2 +-
backtracking/n_queens_math.py | 6 +++---
backtracking/sum_of_subsets.py | 2 +-
ciphers/bifid.py | 2 +-
ciphers/diffie_hellman.py | 16 ++++++++--------
ciphers/polybius.py | 2 +-
ciphers/xor_cipher.py | 18 ++++++++----------
computer_vision/mosaic_augmentation.py | 2 +-
.../binary_tree/binary_search_tree.py | 2 +-
.../binary_tree/binary_tree_traversals.py | 4 ++--
.../binary_tree/inorder_tree_traversal_2022.py | 2 +-
data_structures/binary_tree/red_black_tree.py | 5 ++---
.../hashing/number_theory/prime_numbers.py | 2 +-
data_structures/heap/binomial_heap.py | 4 ++--
.../linked_list/doubly_linked_list_two.py | 2 +-
.../linked_list/singly_linked_list.py | 1 +
data_structures/linked_list/skip_list.py | 5 +----
.../queue/circular_queue_linked_list.py | 2 +-
.../dilation_operation.py | 2 +-
.../erosion_operation.py | 2 +-
dynamic_programming/all_construct.py | 2 +-
dynamic_programming/fizz_buzz.py | 2 +-
.../longest_common_subsequence.py | 10 ++--------
.../longest_increasing_subsequence.py | 2 +-
graphs/basic_graphs.py | 14 ++++++--------
graphs/check_cycle.py | 9 ++++-----
graphs/connected_components.py | 2 +-
graphs/dijkstra_algorithm.py | 2 +-
.../edmonds_karp_multiple_source_and_sink.py | 5 ++---
graphs/frequent_pattern_graph_miner.py | 6 +++---
graphs/minimum_spanning_tree_boruvka.py | 1 +
graphs/minimum_spanning_tree_prims.py | 5 +----
graphs/minimum_spanning_tree_prims2.py | 16 +++++++---------
hashes/hamming_code.py | 5 ++---
linear_algebra/src/lib.py | 7 ++++---
machine_learning/gradient_descent.py | 2 ++
machine_learning/k_means_clust.py | 4 ++--
.../sequential_minimum_optimization.py | 9 ++++-----
maths/abs.py | 6 +++---
maths/binary_exp_mod.py | 2 +-
maths/jaccard_similarity.py | 1 +
maths/largest_of_very_large_numbers.py | 1 +
maths/radix2_fft.py | 5 +----
.../back_propagation_neural_network.py | 1 +
other/graham_scan.py | 7 +++----
other/nested_brackets.py | 9 ++++-----
physics/hubble_parameter.py | 4 ++--
project_euler/problem_005/sol1.py | 1 +
project_euler/problem_009/sol1.py | 5 ++---
project_euler/problem_014/sol2.py | 5 +----
project_euler/problem_018/solution.py | 10 ++--------
project_euler/problem_019/sol1.py | 2 +-
project_euler/problem_033/sol1.py | 8 +++-----
project_euler/problem_064/sol1.py | 5 ++---
project_euler/problem_067/sol1.py | 10 ++--------
project_euler/problem_109/sol1.py | 2 +-
project_euler/problem_203/sol1.py | 4 ++--
scheduling/shortest_job_first.py | 11 +++++------
scripts/build_directory_md.py | 5 ++---
searches/binary_tree_traversal.py | 1 +
sorts/circle_sort.py | 13 ++++++-------
sorts/counting_sort.py | 2 +-
sorts/msd_radix_sort.py | 2 +-
sorts/quick_sort.py | 2 +-
sorts/recursive_quick_sort.py | 10 +++++-----
sorts/tim_sort.py | 4 ++--
strings/autocomplete_using_trie.py | 5 +----
strings/check_anagrams.py | 5 +----
strings/is_palindrome.py | 5 +----
strings/snake_case_to_camel_pascal_case.py | 2 +-
web_programming/convert_number_to_words.py | 6 +++---
web_programming/instagram_crawler.py | 2 +-
web_programming/open_google_results.py | 5 +----
73 files changed, 151 insertions(+), 203 deletions(-)
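The hunks below are mostly mechanical rewrites of a few recurring patterns (roughly ruff's RUF005 and SIM-family rules): iterable unpacking instead of list concatenation, a single isinstance() call with a tuple of types, key in d instead of key in d.keys(), any()/all() instead of flag-and-loop code, and conditional expressions instead of four-line if/else assignments. A condensed, runnable illustration of those shapes, using hypothetical data rather than lines from the diff:

    middle = [2, 3]
    combined = [1, *middle, 4]           # was: [1] + middle + [4]   (RUF005)
    assert combined == [1, 2, 3, 4]

    x = 3.5
    assert isinstance(x, (int, float))   # was: isinstance(x, int) or isinstance(x, float)

    placevalue = {2: "hundred"}
    counter = 2
    assert counter in placevalue         # was: counter in placevalue.keys()   (SIM118)

    lst = [1, 2, 2, 5]
    assert all(a <= b for a, b in zip(lst, lst[1:]))  # was: loop returning False/True (SIM110)

    j, previous_row = 1, [10, 20]
    number = previous_row[j - 1] if j > 0 else 0      # was: a four-line if/else (SIM108)
    assert number == 10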
diff --git a/audio_filters/iir_filter.py b/audio_filters/iir_filter.py
index aae320365012..bd448175f6f3 100644
--- a/audio_filters/iir_filter.py
+++ b/audio_filters/iir_filter.py
@@ -47,7 +47,7 @@ def set_coefficients(self, a_coeffs: list[float], b_coeffs: list[float]) -> None
>>> filt.set_coefficients(a_coeffs, b_coeffs)
"""
if len(a_coeffs) < self.order:
- a_coeffs = [1.0] + a_coeffs
+ a_coeffs = [1.0, *a_coeffs]
if len(a_coeffs) != self.order + 1:
raise ValueError(
diff --git a/backtracking/n_queens_math.py b/backtracking/n_queens_math.py
index 23bd1590618b..f3b08ab0a05f 100644
--- a/backtracking/n_queens_math.py
+++ b/backtracking/n_queens_math.py
@@ -129,9 +129,9 @@ def depth_first_search(
# If it is False we call dfs function again and we update the inputs
depth_first_search(
- possible_board + [col],
- diagonal_right_collisions + [row - col],
- diagonal_left_collisions + [row + col],
+ [*possible_board, col],
+ [*diagonal_right_collisions, row - col],
+ [*diagonal_left_collisions, row + col],
boards,
n,
)
diff --git a/backtracking/sum_of_subsets.py b/backtracking/sum_of_subsets.py
index 128e290718cd..c5e23321cb0c 100644
--- a/backtracking/sum_of_subsets.py
+++ b/backtracking/sum_of_subsets.py
@@ -44,7 +44,7 @@ def create_state_space_tree(
nums,
max_sum,
index + 1,
- path + [nums[index]],
+ [*path, nums[index]],
result,
remaining_nums_sum - nums[index],
)
diff --git a/ciphers/bifid.py b/ciphers/bifid.py
index c005e051a6ba..a15b381640aa 100644
--- a/ciphers/bifid.py
+++ b/ciphers/bifid.py
@@ -33,7 +33,7 @@ def letter_to_numbers(self, letter: str) -> np.ndarray:
>>> np.array_equal(BifidCipher().letter_to_numbers('u'), [4,5])
True
"""
- index1, index2 = np.where(self.SQUARE == letter)
+ index1, index2 = np.where(letter == self.SQUARE)
indexes = np.concatenate([index1 + 1, index2 + 1])
return indexes
diff --git a/ciphers/diffie_hellman.py b/ciphers/diffie_hellman.py
index 072f4aaaa6da..cd40a6b9c3b3 100644
--- a/ciphers/diffie_hellman.py
+++ b/ciphers/diffie_hellman.py
@@ -228,10 +228,10 @@ def generate_public_key(self) -> str:
def is_valid_public_key(self, key: int) -> bool:
# check if the other public key is valid based on NIST SP800-56
- if 2 <= key and key <= self.prime - 2:
- if pow(key, (self.prime - 1) // 2, self.prime) == 1:
- return True
- return False
+ return (
+ 2 <= key <= self.prime - 2
+ and pow(key, (self.prime - 1) // 2, self.prime) == 1
+ )
def generate_shared_key(self, other_key_str: str) -> str:
other_key = int(other_key_str, base=16)
@@ -243,10 +243,10 @@ def generate_shared_key(self, other_key_str: str) -> str:
@staticmethod
def is_valid_public_key_static(remote_public_key_str: int, prime: int) -> bool:
# check if the other public key is valid based on NIST SP800-56
- if 2 <= remote_public_key_str and remote_public_key_str <= prime - 2:
- if pow(remote_public_key_str, (prime - 1) // 2, prime) == 1:
- return True
- return False
+ return (
+ 2 <= remote_public_key_str <= prime - 2
+ and pow(remote_public_key_str, (prime - 1) // 2, prime) == 1
+ )
@staticmethod
def generate_shared_key_static(
diff --git a/ciphers/polybius.py b/ciphers/polybius.py
index 3539ab70c303..d83badf4ac0a 100644
--- a/ciphers/polybius.py
+++ b/ciphers/polybius.py
@@ -31,7 +31,7 @@ def letter_to_numbers(self, letter: str) -> np.ndarray:
>>> np.array_equal(PolybiusCipher().letter_to_numbers('u'), [4,5])
True
"""
- index1, index2 = np.where(self.SQUARE == letter)
+ index1, index2 = np.where(letter == self.SQUARE)
indexes = np.concatenate([index1 + 1, index2 + 1])
return indexes
diff --git a/ciphers/xor_cipher.py b/ciphers/xor_cipher.py
index 379ef0ef7e50..0f369e38f85f 100644
--- a/ciphers/xor_cipher.py
+++ b/ciphers/xor_cipher.py
@@ -128,11 +128,10 @@ def encrypt_file(self, file: str, key: int = 0) -> bool:
assert isinstance(file, str) and isinstance(key, int)
try:
- with open(file) as fin:
- with open("encrypt.out", "w+") as fout:
- # actual encrypt-process
- for line in fin:
- fout.write(self.encrypt_string(line, key))
+ with open(file) as fin, open("encrypt.out", "w+") as fout:
+ # actual encrypt-process
+ for line in fin:
+ fout.write(self.encrypt_string(line, key))
except OSError:
return False
@@ -152,11 +151,10 @@ def decrypt_file(self, file: str, key: int) -> bool:
assert isinstance(file, str) and isinstance(key, int)
try:
- with open(file) as fin:
- with open("decrypt.out", "w+") as fout:
- # actual encrypt-process
- for line in fin:
- fout.write(self.decrypt_string(line, key))
+ with open(file) as fin, open("decrypt.out", "w+") as fout:
+ # actual encrypt-process
+ for line in fin:
+ fout.write(self.decrypt_string(line, key))
except OSError:
return False
diff --git a/computer_vision/mosaic_augmentation.py b/computer_vision/mosaic_augmentation.py
index e2953749753f..c150126d6bfb 100644
--- a/computer_vision/mosaic_augmentation.py
+++ b/computer_vision/mosaic_augmentation.py
@@ -159,7 +159,7 @@ def update_image_and_anno(
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
# Remove bounding box small than scale of filter
- if 0 < filter_scale:
+ if filter_scale > 0:
new_anno = [
anno
for anno in new_anno
diff --git a/data_structures/binary_tree/binary_search_tree.py b/data_structures/binary_tree/binary_search_tree.py
index fc512944eb50..cd88cc10e697 100644
--- a/data_structures/binary_tree/binary_search_tree.py
+++ b/data_structures/binary_tree/binary_search_tree.py
@@ -60,7 +60,7 @@ def __insert(self, value) -> None:
else: # Tree is not empty
parent_node = self.root # from root
if parent_node is None:
- return None
+ return
while True: # While we don't get to a leaf
if value < parent_node.value: # We go left
if parent_node.left is None:
diff --git a/data_structures/binary_tree/binary_tree_traversals.py b/data_structures/binary_tree/binary_tree_traversals.py
index 24dd1bd8cdc8..71a895e76ce4 100644
--- a/data_structures/binary_tree/binary_tree_traversals.py
+++ b/data_structures/binary_tree/binary_tree_traversals.py
@@ -37,7 +37,7 @@ def preorder(root: Node | None) -> list[int]:
>>> preorder(make_tree())
[1, 2, 4, 5, 3]
"""
- return [root.data] + preorder(root.left) + preorder(root.right) if root else []
+ return [root.data, *preorder(root.left), *preorder(root.right)] if root else []
def postorder(root: Node | None) -> list[int]:
@@ -55,7 +55,7 @@ def inorder(root: Node | None) -> list[int]:
>>> inorder(make_tree())
[4, 2, 5, 1, 3]
"""
- return inorder(root.left) + [root.data] + inorder(root.right) if root else []
+ return [*inorder(root.left), root.data, *inorder(root.right)] if root else []
def height(root: Node | None) -> int:
diff --git a/data_structures/binary_tree/inorder_tree_traversal_2022.py b/data_structures/binary_tree/inorder_tree_traversal_2022.py
index e94ba7013a82..1357527d2953 100644
--- a/data_structures/binary_tree/inorder_tree_traversal_2022.py
+++ b/data_structures/binary_tree/inorder_tree_traversal_2022.py
@@ -50,7 +50,7 @@ def inorder(node: None | BinaryTreeNode) -> list[int]: # if node is None,return
"""
if node:
inorder_array = inorder(node.left_child)
- inorder_array = inorder_array + [node.data]
+ inorder_array = [*inorder_array, node.data]
inorder_array = inorder_array + inorder(node.right_child)
else:
inorder_array = []
diff --git a/data_structures/binary_tree/red_black_tree.py b/data_structures/binary_tree/red_black_tree.py
index a9dbd699c3c1..b50d75d33689 100644
--- a/data_structures/binary_tree/red_black_tree.py
+++ b/data_structures/binary_tree/red_black_tree.py
@@ -319,9 +319,8 @@ def check_coloring(self) -> bool:
"""A helper function to recursively check Property 4 of a
Red-Black Tree. See check_color_properties for more info.
"""
- if self.color == 1:
- if color(self.left) == 1 or color(self.right) == 1:
- return False
+ if self.color == 1 and 1 in (color(self.left), color(self.right)):
+ return False
if self.left and not self.left.check_coloring():
return False
if self.right and not self.right.check_coloring():
diff --git a/data_structures/hashing/number_theory/prime_numbers.py b/data_structures/hashing/number_theory/prime_numbers.py
index b88ab76ecc23..0c25896f9880 100644
--- a/data_structures/hashing/number_theory/prime_numbers.py
+++ b/data_structures/hashing/number_theory/prime_numbers.py
@@ -52,7 +52,7 @@ def next_prime(value, factor=1, **kwargs):
first_value_val = value
while not is_prime(value):
- value += 1 if not ("desc" in kwargs.keys() and kwargs["desc"] is True) else -1
+ value += 1 if not ("desc" in kwargs and kwargs["desc"] is True) else -1
if value == first_value_val:
return next_prime(value + 1, **kwargs)
diff --git a/data_structures/heap/binomial_heap.py b/data_structures/heap/binomial_heap.py
index 2e05c5c80a22..099bd2871023 100644
--- a/data_structures/heap/binomial_heap.py
+++ b/data_structures/heap/binomial_heap.py
@@ -136,12 +136,12 @@ def merge_heaps(self, other):
# Empty heaps corner cases
if other.size == 0:
- return
+ return None
if self.size == 0:
self.size = other.size
self.bottom_root = other.bottom_root
self.min_node = other.min_node
- return
+ return None
# Update size
self.size = self.size + other.size
diff --git a/data_structures/linked_list/doubly_linked_list_two.py b/data_structures/linked_list/doubly_linked_list_two.py
index c19309c9f5a7..e993cc5a20af 100644
--- a/data_structures/linked_list/doubly_linked_list_two.py
+++ b/data_structures/linked_list/doubly_linked_list_two.py
@@ -128,7 +128,7 @@ def insert_at_position(self, position: int, value: int) -> None:
while node:
if current_position == position:
self.insert_before_node(node, new_node)
- return None
+ return
current_position += 1
node = node.next
self.insert_after_node(self.tail, new_node)
diff --git a/data_structures/linked_list/singly_linked_list.py b/data_structures/linked_list/singly_linked_list.py
index 3e52c7e43cf5..bdeb5922ac67 100644
--- a/data_structures/linked_list/singly_linked_list.py
+++ b/data_structures/linked_list/singly_linked_list.py
@@ -107,6 +107,7 @@ def __getitem__(self, index: int) -> Any:
for i, node in enumerate(self):
if i == index:
return node
+ return None
# Used to change the data of a particular node
def __setitem__(self, index: int, data: Any) -> None:
diff --git a/data_structures/linked_list/skip_list.py b/data_structures/linked_list/skip_list.py
index 96b0db7c896b..4413c53e520e 100644
--- a/data_structures/linked_list/skip_list.py
+++ b/data_structures/linked_list/skip_list.py
@@ -388,10 +388,7 @@ def traverse_keys(node):
def test_iter_always_yields_sorted_values():
def is_sorted(lst):
- for item, next_item in zip(lst, lst[1:]):
- if next_item < item:
- return False
- return True
+ return all(next_item >= item for item, next_item in zip(lst, lst[1:]))
skip_list = SkipList()
for i in range(10):
diff --git a/data_structures/queue/circular_queue_linked_list.py b/data_structures/queue/circular_queue_linked_list.py
index e8c2b8bffc06..62042c4bce96 100644
--- a/data_structures/queue/circular_queue_linked_list.py
+++ b/data_structures/queue/circular_queue_linked_list.py
@@ -127,7 +127,7 @@ def dequeue(self) -> Any:
"""
self.check_can_perform_operation()
if self.rear is None or self.front is None:
- return
+ return None
if self.front == self.rear:
data = self.front.data
self.front.data = None
diff --git a/digital_image_processing/morphological_operations/dilation_operation.py b/digital_image_processing/morphological_operations/dilation_operation.py
index 274880b0a50a..c8380737d219 100644
--- a/digital_image_processing/morphological_operations/dilation_operation.py
+++ b/digital_image_processing/morphological_operations/dilation_operation.py
@@ -32,7 +32,7 @@ def gray2binary(gray: np.array) -> np.array:
[False, True, False],
[False, True, False]])
"""
- return (127 < gray) & (gray <= 255)
+ return (gray > 127) & (gray <= 255)
def dilation(image: np.array, kernel: np.array) -> np.array:
diff --git a/digital_image_processing/morphological_operations/erosion_operation.py b/digital_image_processing/morphological_operations/erosion_operation.py
index 4b0a5eee8c03..c2cde2ea6990 100644
--- a/digital_image_processing/morphological_operations/erosion_operation.py
+++ b/digital_image_processing/morphological_operations/erosion_operation.py
@@ -32,7 +32,7 @@ def gray2binary(gray: np.array) -> np.array:
[False, True, False],
[False, True, False]])
"""
- return (127 < gray) & (gray <= 255)
+ return (gray > 127) & (gray <= 255)
def erosion(image: np.array, kernel: np.array) -> np.array:
diff --git a/dynamic_programming/all_construct.py b/dynamic_programming/all_construct.py
index 3839d01e6db0..6e53a702cbb1 100644
--- a/dynamic_programming/all_construct.py
+++ b/dynamic_programming/all_construct.py
@@ -34,7 +34,7 @@ def all_construct(target: str, word_bank: list[str] | None = None) -> list[list[
# slice condition
if target[i : i + len(word)] == word:
new_combinations: list[list[str]] = [
- [word] + way for way in table[i]
+ [word, *way] for way in table[i]
]
# adds the word to every combination the current position holds
# now,push that combination to the table[i+len(word)]
diff --git a/dynamic_programming/fizz_buzz.py b/dynamic_programming/fizz_buzz.py
index e77ab3de7b4b..e29116437a93 100644
--- a/dynamic_programming/fizz_buzz.py
+++ b/dynamic_programming/fizz_buzz.py
@@ -49,7 +49,7 @@ def fizz_buzz(number: int, iterations: int) -> str:
out += "Fizz"
if number % 5 == 0:
out += "Buzz"
- if not number % 3 == 0 and not number % 5 == 0:
+ if 0 not in (number % 3, number % 5):
out += str(number)
# print(out)
diff --git a/dynamic_programming/longest_common_subsequence.py b/dynamic_programming/longest_common_subsequence.py
index 3468fd87da8d..178b4169b213 100644
--- a/dynamic_programming/longest_common_subsequence.py
+++ b/dynamic_programming/longest_common_subsequence.py
@@ -42,20 +42,14 @@ def longest_common_subsequence(x: str, y: str):
for i in range(1, m + 1):
for j in range(1, n + 1):
- if x[i - 1] == y[j - 1]:
- match = 1
- else:
- match = 0
+ match = 1 if x[i - 1] == y[j - 1] else 0
l[i][j] = max(l[i - 1][j], l[i][j - 1], l[i - 1][j - 1] + match)
seq = ""
i, j = m, n
while i > 0 and j > 0:
- if x[i - 1] == y[j - 1]:
- match = 1
- else:
- match = 0
+ match = 1 if x[i - 1] == y[j - 1] else 0
if l[i][j] == l[i - 1][j - 1] + match:
if match == 1:
diff --git a/dynamic_programming/longest_increasing_subsequence.py b/dynamic_programming/longest_increasing_subsequence.py
index 6feed23529f1..d827893763c5 100644
--- a/dynamic_programming/longest_increasing_subsequence.py
+++ b/dynamic_programming/longest_increasing_subsequence.py
@@ -48,7 +48,7 @@ def longest_subsequence(array: list[int]) -> list[int]: # This function is recu
i += 1
temp_array = [element for element in array[1:] if element >= pivot]
- temp_array = [pivot] + longest_subsequence(temp_array)
+ temp_array = [pivot, *longest_subsequence(temp_array)]
if len(temp_array) > len(longest_subseq):
return temp_array
else:
diff --git a/graphs/basic_graphs.py b/graphs/basic_graphs.py
index 298a97bf0e17..065b6185c123 100644
--- a/graphs/basic_graphs.py
+++ b/graphs/basic_graphs.py
@@ -139,10 +139,9 @@ def dijk(g, s):
u = i
known.add(u)
for v in g[u]:
- if v[0] not in known:
- if dist[u] + v[1] < dist.get(v[0], 100000):
- dist[v[0]] = dist[u] + v[1]
- path[v[0]] = u
+ if v[0] not in known and dist[u] + v[1] < dist.get(v[0], 100000):
+ dist[v[0]] = dist[u] + v[1]
+ path[v[0]] = u
for i in dist:
if i != s:
print(dist[i])
@@ -243,10 +242,9 @@ def prim(g, s):
u = i
known.add(u)
for v in g[u]:
- if v[0] not in known:
- if v[1] < dist.get(v[0], 100000):
- dist[v[0]] = v[1]
- path[v[0]] = u
+ if v[0] not in known and v[1] < dist.get(v[0], 100000):
+ dist[v[0]] = v[1]
+ path[v[0]] = u
return dist
diff --git a/graphs/check_cycle.py b/graphs/check_cycle.py
index dcc864988ca5..9fd1cd80f116 100644
--- a/graphs/check_cycle.py
+++ b/graphs/check_cycle.py
@@ -15,11 +15,10 @@ def check_cycle(graph: dict) -> bool:
visited: set[int] = set()
# To detect a back edge, keep track of vertices currently in the recursion stack
rec_stk: set[int] = set()
- for node in graph:
- if node not in visited:
- if depth_first_search(graph, node, visited, rec_stk):
- return True
- return False
+ return any(
+ node not in visited and depth_first_search(graph, node, visited, rec_stk)
+ for node in graph
+ )
def depth_first_search(graph: dict, vertex: int, visited: set, rec_stk: set) -> bool:
diff --git a/graphs/connected_components.py b/graphs/connected_components.py
index 4af7803d74a7..15c7633e13e8 100644
--- a/graphs/connected_components.py
+++ b/graphs/connected_components.py
@@ -27,7 +27,7 @@ def dfs(graph: dict, vert: int, visited: list) -> list:
if not visited[neighbour]:
connected_verts += dfs(graph, neighbour, visited)
- return [vert] + connected_verts
+ return [vert, *connected_verts]
def connected_components(graph: dict) -> list:
diff --git a/graphs/dijkstra_algorithm.py b/graphs/dijkstra_algorithm.py
index 1845dad05db2..452138fe904b 100644
--- a/graphs/dijkstra_algorithm.py
+++ b/graphs/dijkstra_algorithm.py
@@ -112,7 +112,7 @@ def dijkstra(self, src):
self.dist[src] = 0
q = PriorityQueue()
q.insert((0, src)) # (dist from src, node)
- for u in self.adjList.keys():
+ for u in self.adjList:
if u != src:
self.dist[u] = sys.maxsize # Infinity
self.par[u] = -1
diff --git a/graphs/edmonds_karp_multiple_source_and_sink.py b/graphs/edmonds_karp_multiple_source_and_sink.py
index 070d758e63b6..d0610804109f 100644
--- a/graphs/edmonds_karp_multiple_source_and_sink.py
+++ b/graphs/edmonds_karp_multiple_source_and_sink.py
@@ -163,9 +163,8 @@ def relabel(self, vertex_index):
self.graph[vertex_index][to_index]
- self.preflow[vertex_index][to_index]
> 0
- ):
- if min_height is None or self.heights[to_index] < min_height:
- min_height = self.heights[to_index]
+ ) and (min_height is None or self.heights[to_index] < min_height):
+ min_height = self.heights[to_index]
if min_height is not None:
self.heights[vertex_index] = min_height + 1
diff --git a/graphs/frequent_pattern_graph_miner.py b/graphs/frequent_pattern_graph_miner.py
index 87d5605a0bc8..208e57f9b32f 100644
--- a/graphs/frequent_pattern_graph_miner.py
+++ b/graphs/frequent_pattern_graph_miner.py
@@ -130,11 +130,11 @@ def create_edge(nodes, graph, cluster, c1):
"""
create edge between the nodes
"""
- for i in cluster[c1].keys():
+ for i in cluster[c1]:
count = 0
c2 = c1 + 1
while c2 < max(cluster.keys()):
- for j in cluster[c2].keys():
+ for j in cluster[c2]:
"""
creates edge only if the condition satisfies
"""
@@ -185,7 +185,7 @@ def find_freq_subgraph_given_support(s, cluster, graph):
find edges of multiple frequent subgraphs
"""
k = int(s / 100 * (len(cluster) - 1))
- for i in cluster[k].keys():
+ for i in cluster[k]:
my_dfs(graph, tuple(cluster[k][i]), (["Header"],))
diff --git a/graphs/minimum_spanning_tree_boruvka.py b/graphs/minimum_spanning_tree_boruvka.py
index 663d8e26cfad..3c6888037948 100644
--- a/graphs/minimum_spanning_tree_boruvka.py
+++ b/graphs/minimum_spanning_tree_boruvka.py
@@ -144,6 +144,7 @@ def union(self, item1, item2):
self.rank[root1] += 1
self.parent[root2] = root1
return root1
+ return None
@staticmethod
def boruvka_mst(graph):
diff --git a/graphs/minimum_spanning_tree_prims.py b/graphs/minimum_spanning_tree_prims.py
index f577866f0da6..5a08ec57ff4d 100644
--- a/graphs/minimum_spanning_tree_prims.py
+++ b/graphs/minimum_spanning_tree_prims.py
@@ -44,10 +44,7 @@ def bottom_to_top(self, val, index, heap, position):
temp = position[index]
while index != 0:
- if index % 2 == 0:
- parent = int((index - 2) / 2)
- else:
- parent = int((index - 1) / 2)
+ parent = int((index - 2) / 2) if index % 2 == 0 else int((index - 1) / 2)
if val < heap[parent]:
heap[index] = heap[parent]
diff --git a/graphs/minimum_spanning_tree_prims2.py b/graphs/minimum_spanning_tree_prims2.py
index 707be783d087..81f30ef615fe 100644
--- a/graphs/minimum_spanning_tree_prims2.py
+++ b/graphs/minimum_spanning_tree_prims2.py
@@ -135,14 +135,14 @@ def _bubble_up(self, elem: T) -> None:
# only]
curr_pos = self.position_map[elem]
if curr_pos == 0:
- return
+ return None
parent_position = get_parent_position(curr_pos)
_, weight = self.heap[curr_pos]
_, parent_weight = self.heap[parent_position]
if parent_weight > weight:
self._swap_nodes(parent_position, curr_pos)
return self._bubble_up(elem)
- return
+ return None
def _bubble_down(self, elem: T) -> None:
# Place a node at the proper position (downward movement) [to be used
@@ -154,24 +154,22 @@ def _bubble_down(self, elem: T) -> None:
if child_left_position < self.elements and child_right_position < self.elements:
_, child_left_weight = self.heap[child_left_position]
_, child_right_weight = self.heap[child_right_position]
- if child_right_weight < child_left_weight:
- if child_right_weight < weight:
- self._swap_nodes(child_right_position, curr_pos)
- return self._bubble_down(elem)
+ if child_right_weight < child_left_weight and child_right_weight < weight:
+ self._swap_nodes(child_right_position, curr_pos)
+ return self._bubble_down(elem)
if child_left_position < self.elements:
_, child_left_weight = self.heap[child_left_position]
if child_left_weight < weight:
self._swap_nodes(child_left_position, curr_pos)
return self._bubble_down(elem)
else:
- return
+ return None
if child_right_position < self.elements:
_, child_right_weight = self.heap[child_right_position]
if child_right_weight < weight:
self._swap_nodes(child_right_position, curr_pos)
return self._bubble_down(elem)
- else:
- return
+ return None
def _swap_nodes(self, node1_pos: int, node2_pos: int) -> None:
# Swap the nodes at the given positions
diff --git a/hashes/hamming_code.py b/hashes/hamming_code.py
index 481a6750773a..dc93032183e0 100644
--- a/hashes/hamming_code.py
+++ b/hashes/hamming_code.py
@@ -126,9 +126,8 @@ def emitter_converter(size_par, data):
aux = (bin_pos[cont_loop])[-1 * (bp)]
except IndexError:
aux = "0"
- if aux == "1":
- if x == "1":
- cont_bo += 1
+ if aux == "1" and x == "1":
+ cont_bo += 1
cont_loop += 1
parity.append(cont_bo % 2)
diff --git a/linear_algebra/src/lib.py b/linear_algebra/src/lib.py
index ac0398a31a07..e3556e74c3f3 100644
--- a/linear_algebra/src/lib.py
+++ b/linear_algebra/src/lib.py
@@ -108,7 +108,7 @@ def __mul__(self, other: float | Vector) -> float | Vector:
mul implements the scalar multiplication
and the dot-product
"""
- if isinstance(other, float) or isinstance(other, int):
+ if isinstance(other, (float, int)):
ans = [c * other for c in self.__components]
return Vector(ans)
elif isinstance(other, Vector) and len(self) == len(other):
@@ -216,7 +216,7 @@ def axpy(scalar: float, x: Vector, y: Vector) -> Vector:
assert (
isinstance(x, Vector)
and isinstance(y, Vector)
- and (isinstance(scalar, int) or isinstance(scalar, float))
+ and (isinstance(scalar, (int, float)))
)
return x * scalar + y
@@ -337,12 +337,13 @@ def __mul__(self, other: float | Vector) -> Vector | Matrix:
"vector must have the same size as the "
"number of columns of the matrix!"
)
- elif isinstance(other, int) or isinstance(other, float): # matrix-scalar
+ elif isinstance(other, (int, float)): # matrix-scalar
matrix = [
[self.__matrix[i][j] * other for j in range(self.__width)]
for i in range(self.__height)
]
return Matrix(matrix, self.__width, self.__height)
+ return None
def height(self) -> int:
"""
diff --git a/machine_learning/gradient_descent.py b/machine_learning/gradient_descent.py
index 9fa460a07562..5b74dad082e7 100644
--- a/machine_learning/gradient_descent.py
+++ b/machine_learning/gradient_descent.py
@@ -55,6 +55,7 @@ def output(example_no, data_set):
return train_data[example_no][1]
elif data_set == "test":
return test_data[example_no][1]
+ return None
def calculate_hypothesis_value(example_no, data_set):
@@ -68,6 +69,7 @@ def calculate_hypothesis_value(example_no, data_set):
return _hypothesis_value(train_data[example_no][0])
elif data_set == "test":
return _hypothesis_value(test_data[example_no][0])
+ return None
def summation_of_cost_derivative(index, end=m):
diff --git a/machine_learning/k_means_clust.py b/machine_learning/k_means_clust.py
index b6305469ed7d..7c8142aab878 100644
--- a/machine_learning/k_means_clust.py
+++ b/machine_learning/k_means_clust.py
@@ -229,7 +229,7 @@ def report_generator(
"""
# Fill missing values with given rules
if fill_missing_report:
- df.fillna(value=fill_missing_report, inplace=True)
+ df = df.fillna(value=fill_missing_report)
df["dummy"] = 1
numeric_cols = df.select_dtypes(np.number).columns
report = (
@@ -338,7 +338,7 @@ def report_generator(
)
report.columns.name = ""
report = report.reset_index()
- report.drop(columns=["index"], inplace=True)
+ report = report.drop(columns=["index"])
return report
diff --git a/machine_learning/sequential_minimum_optimization.py b/machine_learning/sequential_minimum_optimization.py
index 9c45c351272f..37172c8e9bf6 100644
--- a/machine_learning/sequential_minimum_optimization.py
+++ b/machine_learning/sequential_minimum_optimization.py
@@ -129,7 +129,7 @@ def fit(self):
# error
self._unbound = [i for i in self._all_samples if self._is_unbound(i)]
for s in self.unbound:
- if s == i1 or s == i2:
+ if s in (i1, i2):
continue
self._error[s] += (
y1 * (a1_new - a1) * k(i1, s)
@@ -225,7 +225,7 @@ def _predict(self, sample):
def _choose_alphas(self):
locis = yield from self._choose_a1()
if not locis:
- return
+ return None
return locis
def _choose_a1(self):
@@ -423,9 +423,8 @@ def _rbf(self, v1, v2):
return np.exp(-1 * (self.gamma * np.linalg.norm(v1 - v2) ** 2))
def _check(self):
- if self._kernel == self._rbf:
- if self.gamma < 0:
- raise ValueError("gamma value must greater than 0")
+ if self._kernel == self._rbf and self.gamma < 0:
+ raise ValueError("gamma value must greater than 0")
def _get_kernel(self, kernel_name):
maps = {"linear": self._linear, "poly": self._polynomial, "rbf": self._rbf}
diff --git a/maths/abs.py b/maths/abs.py
index cb0ffc8a5b61..b357e98d8680 100644
--- a/maths/abs.py
+++ b/maths/abs.py
@@ -75,9 +75,9 @@ def test_abs_val():
"""
>>> test_abs_val()
"""
- assert 0 == abs_val(0)
- assert 34 == abs_val(34)
- assert 100000000000 == abs_val(-100000000000)
+ assert abs_val(0) == 0
+ assert abs_val(34) == 34
+ assert abs_val(-100000000000) == 100000000000
a = [-3, -1, 2, -11]
assert abs_max(a) == -11
diff --git a/maths/binary_exp_mod.py b/maths/binary_exp_mod.py
index 67dd1e728b18..df688892d690 100644
--- a/maths/binary_exp_mod.py
+++ b/maths/binary_exp_mod.py
@@ -6,7 +6,7 @@ def bin_exp_mod(a, n, b):
7
"""
# mod b
- assert not (b == 0), "This cannot accept modulo that is == 0"
+ assert b != 0, "This cannot accept modulo that is == 0"
if n == 0:
return 1
diff --git a/maths/jaccard_similarity.py b/maths/jaccard_similarity.py
index eab25188b2fd..32054414c0c2 100644
--- a/maths/jaccard_similarity.py
+++ b/maths/jaccard_similarity.py
@@ -71,6 +71,7 @@ def jaccard_similarity(set_a, set_b, alternative_union=False):
return len(intersection) / len(union)
return len(intersection) / len(union)
+ return None
if __name__ == "__main__":
diff --git a/maths/largest_of_very_large_numbers.py b/maths/largest_of_very_large_numbers.py
index d2dc0af18126..7e7fea004958 100644
--- a/maths/largest_of_very_large_numbers.py
+++ b/maths/largest_of_very_large_numbers.py
@@ -12,6 +12,7 @@ def res(x, y):
return 0
elif y == 0:
return 1 # any number raised to 0 is 1
+ raise AssertionError("This should never happen")
if __name__ == "__main__": # Main function
diff --git a/maths/radix2_fft.py b/maths/radix2_fft.py
index 1def58e1f226..af98f24f9538 100644
--- a/maths/radix2_fft.py
+++ b/maths/radix2_fft.py
@@ -80,10 +80,7 @@ def __init__(self, poly_a=None, poly_b=None):
# Discrete fourier transform of A and B
def __dft(self, which):
- if which == "A":
- dft = [[x] for x in self.polyA]
- else:
- dft = [[x] for x in self.polyB]
+ dft = [[x] for x in self.polyA] if which == "A" else [[x] for x in self.polyB]
# Corner case
if len(dft) <= 1:
return dft[0]
diff --git a/neural_network/back_propagation_neural_network.py b/neural_network/back_propagation_neural_network.py
index cb47b829010c..9dd112115f5e 100644
--- a/neural_network/back_propagation_neural_network.py
+++ b/neural_network/back_propagation_neural_network.py
@@ -153,6 +153,7 @@ def train(self, xdata, ydata, train_round, accuracy):
if mse < self.accuracy:
print("----达到精度----")
return mse
+ return None
def cal_loss(self, ydata, ydata_):
self.loss = np.sum(np.power((ydata - ydata_), 2))
diff --git a/other/graham_scan.py b/other/graham_scan.py
index 8e83bfcf4c49..2eadb4e56668 100644
--- a/other/graham_scan.py
+++ b/other/graham_scan.py
@@ -125,10 +125,9 @@ def graham_scan(points: list[tuple[int, int]]) -> list[tuple[int, int]]:
miny = y
minx = x
minidx = i
- if y == miny:
- if x < minx:
- minx = x
- minidx = i
+ if y == miny and x < minx:
+ minx = x
+ minidx = i
# remove the lowest and the most left point from points for preparing for sort
points.pop(minidx)
diff --git a/other/nested_brackets.py b/other/nested_brackets.py
index 3f61a4e7006c..ea48c0a5f532 100644
--- a/other/nested_brackets.py
+++ b/other/nested_brackets.py
@@ -24,11 +24,10 @@ def is_balanced(s):
if s[i] in open_brackets:
stack.append(s[i])
- elif s[i] in closed_brackets:
- if len(stack) == 0 or (
- len(stack) > 0 and open_to_closed[stack.pop()] != s[i]
- ):
- return False
+ elif s[i] in closed_brackets and (
+ len(stack) == 0 or (len(stack) > 0 and open_to_closed[stack.pop()] != s[i])
+ ):
+ return False
return len(stack) == 0
diff --git a/physics/hubble_parameter.py b/physics/hubble_parameter.py
index 6bc62e7131c5..f7b2d28a6716 100644
--- a/physics/hubble_parameter.py
+++ b/physics/hubble_parameter.py
@@ -70,10 +70,10 @@ def hubble_parameter(
68.3
"""
parameters = [redshift, radiation_density, matter_density, dark_energy]
- if any(0 > p for p in parameters):
+ if any(p < 0 for p in parameters):
raise ValueError("All input parameters must be positive")
- if any(1 < p for p in parameters[1:4]):
+ if any(p > 1 for p in parameters[1:4]):
raise ValueError("Relative densities cannot be greater than one")
else:
curvature = 1 - (matter_density + radiation_density + dark_energy)
diff --git a/project_euler/problem_005/sol1.py b/project_euler/problem_005/sol1.py
index f272c102d2bb..01cbd0e15ff7 100644
--- a/project_euler/problem_005/sol1.py
+++ b/project_euler/problem_005/sol1.py
@@ -63,6 +63,7 @@ def solution(n: int = 20) -> int:
if i == 0:
i = 1
return i
+ return None
if __name__ == "__main__":
diff --git a/project_euler/problem_009/sol1.py b/project_euler/problem_009/sol1.py
index 1d908402b6b1..e65c9b857990 100644
--- a/project_euler/problem_009/sol1.py
+++ b/project_euler/problem_009/sol1.py
@@ -32,9 +32,8 @@ def solution() -> int:
for a in range(300):
for b in range(a + 1, 400):
for c in range(b + 1, 500):
- if (a + b + c) == 1000:
- if (a**2) + (b**2) == (c**2):
- return a * b * c
+ if (a + b + c) == 1000 and (a**2) + (b**2) == (c**2):
+ return a * b * c
return -1
diff --git a/project_euler/problem_014/sol2.py b/project_euler/problem_014/sol2.py
index d2a1d9f0e468..2448e652ce5b 100644
--- a/project_euler/problem_014/sol2.py
+++ b/project_euler/problem_014/sol2.py
@@ -34,10 +34,7 @@ def collatz_sequence_length(n: int) -> int:
"""Returns the Collatz sequence length for n."""
if n in COLLATZ_SEQUENCE_LENGTHS:
return COLLATZ_SEQUENCE_LENGTHS[n]
- if n % 2 == 0:
- next_n = n // 2
- else:
- next_n = 3 * n + 1
+ next_n = n // 2 if n % 2 == 0 else 3 * n + 1
sequence_length = collatz_sequence_length(next_n) + 1
COLLATZ_SEQUENCE_LENGTHS[n] = sequence_length
return sequence_length
diff --git a/project_euler/problem_018/solution.py b/project_euler/problem_018/solution.py
index 82fc3ce3c9db..70306148bb9e 100644
--- a/project_euler/problem_018/solution.py
+++ b/project_euler/problem_018/solution.py
@@ -48,14 +48,8 @@ def solution():
for i in range(1, len(a)):
for j in range(len(a[i])):
- if j != len(a[i - 1]):
- number1 = a[i - 1][j]
- else:
- number1 = 0
- if j > 0:
- number2 = a[i - 1][j - 1]
- else:
- number2 = 0
+ number1 = a[i - 1][j] if j != len(a[i - 1]) else 0
+ number2 = a[i - 1][j - 1] if j > 0 else 0
a[i][j] += max(number1, number2)
return max(a[-1])
diff --git a/project_euler/problem_019/sol1.py b/project_euler/problem_019/sol1.py
index ab59365843b2..0e38137d4f01 100644
--- a/project_euler/problem_019/sol1.py
+++ b/project_euler/problem_019/sol1.py
@@ -39,7 +39,7 @@ def solution():
while year < 2001:
day += 7
- if (year % 4 == 0 and not year % 100 == 0) or (year % 400 == 0):
+ if (year % 4 == 0 and year % 100 != 0) or (year % 400 == 0):
if day > days_per_month[month - 1] and month != 2:
month += 1
day = day - days_per_month[month - 2]
diff --git a/project_euler/problem_033/sol1.py b/project_euler/problem_033/sol1.py
index e0c9a058af53..32be424b6a7b 100644
--- a/project_euler/problem_033/sol1.py
+++ b/project_euler/problem_033/sol1.py
@@ -20,11 +20,9 @@
def is_digit_cancelling(num: int, den: int) -> bool:
- if num != den:
- if num % 10 == den // 10:
- if (num // 10) / (den % 10) == num / den:
- return True
- return False
+ return (
+ num != den and num % 10 == den // 10 and (num // 10) / (den % 10) == num / den
+ )
def fraction_list(digit_len: int) -> list[str]:
diff --git a/project_euler/problem_064/sol1.py b/project_euler/problem_064/sol1.py
index 81ebcc7b73c3..12769decc62f 100644
--- a/project_euler/problem_064/sol1.py
+++ b/project_euler/problem_064/sol1.py
@@ -67,9 +67,8 @@ def solution(n: int = 10000) -> int:
count_odd_periods = 0
for i in range(2, n + 1):
sr = sqrt(i)
- if sr - floor(sr) != 0:
- if continuous_fraction_period(i) % 2 == 1:
- count_odd_periods += 1
+ if sr - floor(sr) != 0 and continuous_fraction_period(i) % 2 == 1:
+ count_odd_periods += 1
return count_odd_periods
diff --git a/project_euler/problem_067/sol1.py b/project_euler/problem_067/sol1.py
index f20c206cca11..2b41fedc6784 100644
--- a/project_euler/problem_067/sol1.py
+++ b/project_euler/problem_067/sol1.py
@@ -37,14 +37,8 @@ def solution():
for i in range(1, len(a)):
for j in range(len(a[i])):
- if j != len(a[i - 1]):
- number1 = a[i - 1][j]
- else:
- number1 = 0
- if j > 0:
- number2 = a[i - 1][j - 1]
- else:
- number2 = 0
+ number1 = a[i - 1][j] if j != len(a[i - 1]) else 0
+ number2 = a[i - 1][j - 1] if j > 0 else 0
a[i][j] += max(number1, number2)
return max(a[-1])
diff --git a/project_euler/problem_109/sol1.py b/project_euler/problem_109/sol1.py
index 852f001d38af..ef145dda590b 100644
--- a/project_euler/problem_109/sol1.py
+++ b/project_euler/problem_109/sol1.py
@@ -65,7 +65,7 @@ def solution(limit: int = 100) -> int:
>>> solution(50)
12577
"""
- singles: list[int] = list(range(1, 21)) + [25]
+ singles: list[int] = [*list(range(1, 21)), 25]
doubles: list[int] = [2 * x for x in range(1, 21)] + [50]
triples: list[int] = [3 * x for x in range(1, 21)]
all_values: list[int] = singles + doubles + triples + [0]
diff --git a/project_euler/problem_203/sol1.py b/project_euler/problem_203/sol1.py
index 713b530b6af2..da9436246a7c 100644
--- a/project_euler/problem_203/sol1.py
+++ b/project_euler/problem_203/sol1.py
@@ -50,8 +50,8 @@ def get_pascal_triangle_unique_coefficients(depth: int) -> set[int]:
coefficients = {1}
previous_coefficients = [1]
for _ in range(2, depth + 1):
- coefficients_begins_one = previous_coefficients + [0]
- coefficients_ends_one = [0] + previous_coefficients
+ coefficients_begins_one = [*previous_coefficients, 0]
+ coefficients_ends_one = [0, *previous_coefficients]
previous_coefficients = []
for x, y in zip(coefficients_begins_one, coefficients_ends_one):
coefficients.add(x + y)
diff --git a/scheduling/shortest_job_first.py b/scheduling/shortest_job_first.py
index b3f81bfd10e7..871de8207308 100644
--- a/scheduling/shortest_job_first.py
+++ b/scheduling/shortest_job_first.py
@@ -36,12 +36,11 @@ def calculate_waitingtime(
# Process until all processes are completed
while complete != no_of_processes:
for j in range(no_of_processes):
- if arrival_time[j] <= increment_time:
- if remaining_time[j] > 0:
- if remaining_time[j] < minm:
- minm = remaining_time[j]
- short = j
- check = True
+ if arrival_time[j] <= increment_time and remaining_time[j] > 0:
+ if remaining_time[j] < minm:
+ minm = remaining_time[j]
+ short = j
+ check = True
if not check:
increment_time += 1
diff --git a/scripts/build_directory_md.py b/scripts/build_directory_md.py
index 7572ce342720..b95be9ebc254 100755
--- a/scripts/build_directory_md.py
+++ b/scripts/build_directory_md.py
@@ -21,9 +21,8 @@ def md_prefix(i):
def print_path(old_path: str, new_path: str) -> str:
old_parts = old_path.split(os.sep)
for i, new_part in enumerate(new_path.split(os.sep)):
- if i + 1 > len(old_parts) or old_parts[i] != new_part:
- if new_part:
- print(f"{md_prefix(i)} {new_part.replace('_', ' ').title()}")
+ if (i + 1 > len(old_parts) or old_parts[i] != new_part) and new_part:
+ print(f"{md_prefix(i)} {new_part.replace('_', ' ').title()}")
return new_path
diff --git a/searches/binary_tree_traversal.py b/searches/binary_tree_traversal.py
index 66814b47883d..76e80df25a13 100644
--- a/searches/binary_tree_traversal.py
+++ b/searches/binary_tree_traversal.py
@@ -37,6 +37,7 @@ def build_tree():
right_node = TreeNode(int(check))
node_found.right = right_node
q.put(right_node)
+ return None
def pre_order(node: TreeNode) -> None:
diff --git a/sorts/circle_sort.py b/sorts/circle_sort.py
index da3c59059516..271fa1e8d58a 100644
--- a/sorts/circle_sort.py
+++ b/sorts/circle_sort.py
@@ -58,14 +58,13 @@ def circle_sort_util(collection: list, low: int, high: int) -> bool:
left += 1
right -= 1
- if left == right:
- if collection[left] > collection[right + 1]:
- collection[left], collection[right + 1] = (
- collection[right + 1],
- collection[left],
- )
+ if left == right and collection[left] > collection[right + 1]:
+ collection[left], collection[right + 1] = (
+ collection[right + 1],
+ collection[left],
+ )
- swapped = True
+ swapped = True
mid = low + int((high - low) / 2)
left_swap = circle_sort_util(collection, low, mid)
diff --git a/sorts/counting_sort.py b/sorts/counting_sort.py
index 892ec5d5f344..18c4b0323dcb 100644
--- a/sorts/counting_sort.py
+++ b/sorts/counting_sort.py
@@ -66,7 +66,7 @@ def counting_sort_string(string):
if __name__ == "__main__":
# Test string sort
- assert "eghhiiinrsssttt" == counting_sort_string("thisisthestring")
+ assert counting_sort_string("thisisthestring") == "eghhiiinrsssttt"
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
diff --git a/sorts/msd_radix_sort.py b/sorts/msd_radix_sort.py
index 74ce21762906..03f84c75b9d8 100644
--- a/sorts/msd_radix_sort.py
+++ b/sorts/msd_radix_sort.py
@@ -147,7 +147,7 @@ def _msd_radix_sort_inplace(
list_of_ints[i], list_of_ints[j] = list_of_ints[j], list_of_ints[i]
j -= 1
- if not j == i:
+ if j != i:
i += 1
_msd_radix_sort_inplace(list_of_ints, bit_position, begin_index, i)
diff --git a/sorts/quick_sort.py b/sorts/quick_sort.py
index 70cd19d7afe0..b79d3eac3e48 100644
--- a/sorts/quick_sort.py
+++ b/sorts/quick_sort.py
@@ -39,7 +39,7 @@ def quick_sort(collection: list) -> list:
for element in collection[pivot_index + 1 :]:
(greater if element > pivot else lesser).append(element)
- return quick_sort(lesser) + [pivot] + quick_sort(greater)
+ return [*quick_sort(lesser), pivot, *quick_sort(greater)]
if __name__ == "__main__":
diff --git a/sorts/recursive_quick_sort.py b/sorts/recursive_quick_sort.py
index c28a14e37ebd..c29009aca673 100644
--- a/sorts/recursive_quick_sort.py
+++ b/sorts/recursive_quick_sort.py
@@ -9,11 +9,11 @@ def quick_sort(data: list) -> list:
if len(data) <= 1:
return data
else:
- return (
- quick_sort([e for e in data[1:] if e <= data[0]])
- + [data[0]]
- + quick_sort([e for e in data[1:] if e > data[0]])
- )
+ return [
+ *quick_sort([e for e in data[1:] if e <= data[0]]),
+ data[0],
+ *quick_sort([e for e in data[1:] if e > data[0]]),
+ ]
if __name__ == "__main__":
diff --git a/sorts/tim_sort.py b/sorts/tim_sort.py
index c90c7e80390b..138f11c71bcc 100644
--- a/sorts/tim_sort.py
+++ b/sorts/tim_sort.py
@@ -32,9 +32,9 @@ def merge(left, right):
return left
if left[0] < right[0]:
- return [left[0]] + merge(left[1:], right)
+ return [left[0], *merge(left[1:], right)]
- return [right[0]] + merge(left, right[1:])
+ return [right[0], *merge(left, right[1:])]
def tim_sort(lst):
diff --git a/strings/autocomplete_using_trie.py b/strings/autocomplete_using_trie.py
index 758260292a30..77a3050ab15f 100644
--- a/strings/autocomplete_using_trie.py
+++ b/strings/autocomplete_using_trie.py
@@ -27,10 +27,7 @@ def find_word(self, prefix: str) -> tuple | list:
def _elements(self, d: dict) -> tuple:
result = []
for c, v in d.items():
- if c == END:
- sub_result = [" "]
- else:
- sub_result = [c + s for s in self._elements(v)]
+ sub_result = [" "] if c == END else [(c + s) for s in self._elements(v)]
result.extend(sub_result)
return tuple(result)
diff --git a/strings/check_anagrams.py b/strings/check_anagrams.py
index 0d2f8091a3f0..a364b98212ad 100644
--- a/strings/check_anagrams.py
+++ b/strings/check_anagrams.py
@@ -38,10 +38,7 @@ def check_anagrams(first_str: str, second_str: str) -> bool:
count[first_str[i]] += 1
count[second_str[i]] -= 1
- for _count in count.values():
- if _count != 0:
- return False
- return True
+ return all(_count == 0 for _count in count.values())
if __name__ == "__main__":
diff --git a/strings/is_palindrome.py b/strings/is_palindrome.py
index 9bf2abd98486..406aa2e8d3c3 100644
--- a/strings/is_palindrome.py
+++ b/strings/is_palindrome.py
@@ -30,10 +30,7 @@ def is_palindrome(s: str) -> bool:
# with the help of 1st index (i==n-i-1)
# where n is length of string
- for i in range(end):
- if s[i] != s[n - i - 1]:
- return False
- return True
+ return all(s[i] == s[n - i - 1] for i in range(end))
if __name__ == "__main__":
diff --git a/strings/snake_case_to_camel_pascal_case.py b/strings/snake_case_to_camel_pascal_case.py
index eaabdcb87a0f..28a28b517a01 100644
--- a/strings/snake_case_to_camel_pascal_case.py
+++ b/strings/snake_case_to_camel_pascal_case.py
@@ -43,7 +43,7 @@ def snake_to_camel_case(input_str: str, use_pascal: bool = False) -> str:
initial_word = "" if use_pascal else words[0]
- return "".join([initial_word] + capitalized_words)
+ return "".join([initial_word, *capitalized_words])
if __name__ == "__main__":
diff --git a/web_programming/convert_number_to_words.py b/web_programming/convert_number_to_words.py
index 50612dec20dd..1e293df9660c 100644
--- a/web_programming/convert_number_to_words.py
+++ b/web_programming/convert_number_to_words.py
@@ -63,7 +63,7 @@ def convert(number: int) -> str:
current = temp_num % 10
if counter % 2 == 0:
addition = ""
- if counter in placevalue.keys() and current != 0:
+ if counter in placevalue and current != 0:
addition = placevalue[counter]
if counter == 2:
words = singles[current] + addition + words
@@ -84,12 +84,12 @@ def convert(number: int) -> str:
words = teens[number % 10] + words
else:
addition = ""
- if counter in placevalue.keys():
+ if counter in placevalue:
addition = placevalue[counter]
words = doubles[current] + addition + words
else:
addition = ""
- if counter in placevalue.keys():
+ if counter in placevalue:
if current == 0 and ((temp_num % 100) // 10) == 0:
addition = ""
else:
diff --git a/web_programming/instagram_crawler.py b/web_programming/instagram_crawler.py
index 4536257a984e..0816cd181051 100644
--- a/web_programming/instagram_crawler.py
+++ b/web_programming/instagram_crawler.py
@@ -105,7 +105,7 @@ def test_instagram_user(username: str = "github") -> None:
import os
if os.environ.get("CI"):
- return None # test failing on GitHub Actions
+ return # test failing on GitHub Actions
instagram_user = InstagramUser(username)
assert instagram_user.user_data
assert isinstance(instagram_user.user_data, dict)
diff --git a/web_programming/open_google_results.py b/web_programming/open_google_results.py
index 2685bf62114d..f61e3666dd7e 100644
--- a/web_programming/open_google_results.py
+++ b/web_programming/open_google_results.py
@@ -7,10 +7,7 @@
from fake_useragent import UserAgent
if __name__ == "__main__":
- if len(argv) > 1:
- query = "%20".join(argv[1:])
- else:
- query = quote(str(input("Search: ")))
+ query = "%20".join(argv[1:]) if len(argv) > 1 else quote(str(input("Search: ")))
print("Googling.....")
From 069a14b1c55112bc4f4e08571fc3c2156bb69e5a Mon Sep 17 00:00:00 2001
From: Maxim Smolskiy
Date: Thu, 2 Mar 2023 07:55:47 +0300
Subject: [PATCH 007/808] Add Project Euler problem 082 solution 1 (#6282)
Update DIRECTORY.md
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 2 +
project_euler/problem_082/__init__.py | 0
project_euler/problem_082/input.txt | 80 +++++++++++++++++++++++
project_euler/problem_082/sol1.py | 65 ++++++++++++++++++
project_euler/problem_082/test_matrix.txt | 5 ++
5 files changed, 152 insertions(+)
create mode 100644 project_euler/problem_082/__init__.py
create mode 100644 project_euler/problem_082/input.txt
create mode 100644 project_euler/problem_082/sol1.py
create mode 100644 project_euler/problem_082/test_matrix.txt
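Project Euler 082 ("Path sum: three ways") asks for the minimal path sum through an 80x80 matrix, moving up, down, and right, entering anywhere in the left column and exiting anywhere in the right column. A column-by-column dynamic-programming sketch of the usual approach (an assumption for illustration; the committed sol1.py may differ), valid for positive entries like this input:

    def minimal_path_sum(matrix: list[list[int]]) -> int:
        # dist[i] = cheapest cost of a path ending at row i of the
        # current column; sweep the columns left to right.
        n = len(matrix)
        dist = [row[0] for row in matrix]
        for col in range(1, len(matrix[0])):
            new = [dist[i] + matrix[i][col] for i in range(n)]  # enter from the left
            for i in range(1, n):            # then relax downward moves
                new[i] = min(new[i], new[i - 1] + matrix[i][col])
            for i in range(n - 2, -1, -1):   # then relax upward moves
                new[i] = min(new[i], new[i + 1] + matrix[i][col])
            dist = new
        return min(dist)

    # 5x5 example matrix from the problem statement; its quoted answer is 994.
    EXAMPLE = [
        [131, 673, 234, 103, 18],
        [201, 96, 342, 965, 150],
        [630, 803, 746, 422, 111],
        [537, 699, 497, 121, 956],
        [805, 732, 524, 37, 331],
    ]
    assert minimal_path_sum(EXAMPLE) == 994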
diff --git a/DIRECTORY.md b/DIRECTORY.md
index a8786cc2591f..3d1bc967e4b5 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -918,6 +918,8 @@
* [Sol1](project_euler/problem_080/sol1.py)
* Problem 081
* [Sol1](project_euler/problem_081/sol1.py)
+ * Problem 082
+ * [Sol1](project_euler/problem_082/sol1.py)
* Problem 085
* [Sol1](project_euler/problem_085/sol1.py)
* Problem 086
diff --git a/project_euler/problem_082/__init__.py b/project_euler/problem_082/__init__.py
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/project_euler/problem_082/input.txt b/project_euler/problem_082/input.txt
new file mode 100644
index 000000000000..f65322a7e541
--- /dev/null
+++ b/project_euler/problem_082/input.txt
@@ -0,0 +1,80 @@
+4445,2697,5115,718,2209,2212,654,4348,3079,6821,7668,3276,8874,4190,3785,2752,9473,7817,9137,496,7338,3434,7152,4355,4552,7917,7827,2460,2350,691,3514,5880,3145,7633,7199,3783,5066,7487,3285,1084,8985,760,872,8609,8051,1134,9536,5750,9716,9371,7619,5617,275,9721,2997,2698,1887,8825,6372,3014,2113,7122,7050,6775,5948,2758,1219,3539,348,7989,2735,9862,1263,8089,6401,9462,3168,2758,3748,5870
+1096,20,1318,7586,5167,2642,1443,5741,7621,7030,5526,4244,2348,4641,9827,2448,6918,5883,3737,300,7116,6531,567,5997,3971,6623,820,6148,3287,1874,7981,8424,7672,7575,6797,6717,1078,5008,4051,8795,5820,346,1851,6463,2117,6058,3407,8211,117,4822,1317,4377,4434,5925,8341,4800,1175,4173,690,8978,7470,1295,3799,8724,3509,9849,618,3320,7068,9633,2384,7175,544,6583,1908,9983,481,4187,9353,9377
+9607,7385,521,6084,1364,8983,7623,1585,6935,8551,2574,8267,4781,3834,2764,2084,2669,4656,9343,7709,2203,9328,8004,6192,5856,3555,2260,5118,6504,1839,9227,1259,9451,1388,7909,5733,6968,8519,9973,1663,5315,7571,3035,4325,4283,2304,6438,3815,9213,9806,9536,196,5542,6907,2475,1159,5820,9075,9470,2179,9248,1828,4592,9167,3713,4640,47,3637,309,7344,6955,346,378,9044,8635,7466,5036,9515,6385,9230
+7206,3114,7760,1094,6150,5182,7358,7387,4497,955,101,1478,7777,6966,7010,8417,6453,4955,3496,107,449,8271,131,2948,6185,784,5937,8001,6104,8282,4165,3642,710,2390,575,715,3089,6964,4217,192,5949,7006,715,3328,1152,66,8044,4319,1735,146,4818,5456,6451,4113,1063,4781,6799,602,1504,6245,6550,1417,1343,2363,3785,5448,4545,9371,5420,5068,4613,4882,4241,5043,7873,8042,8434,3939,9256,2187
+3620,8024,577,9997,7377,7682,1314,1158,6282,6310,1896,2509,5436,1732,9480,706,496,101,6232,7375,2207,2306,110,6772,3433,2878,8140,5933,8688,1399,2210,7332,6172,6403,7333,4044,2291,1790,2446,7390,8698,5723,3678,7104,1825,2040,140,3982,4905,4160,2200,5041,2512,1488,2268,1175,7588,8321,8078,7312,977,5257,8465,5068,3453,3096,1651,7906,253,9250,6021,8791,8109,6651,3412,345,4778,5152,4883,7505
+1074,5438,9008,2679,5397,5429,2652,3403,770,9188,4248,2493,4361,8327,9587,707,9525,5913,93,1899,328,2876,3604,673,8576,6908,7659,2544,3359,3883,5273,6587,3065,1749,3223,604,9925,6941,2823,8767,7039,3290,3214,1787,7904,3421,7137,9560,8451,2669,9219,6332,1576,5477,6755,8348,4164,4307,2984,4012,6629,1044,2874,6541,4942,903,1404,9125,5160,8836,4345,2581,460,8438,1538,5507,668,3352,2678,6942
+4295,1176,5596,1521,3061,9868,7037,7129,8933,6659,5947,5063,3653,9447,9245,2679,767,714,116,8558,163,3927,8779,158,5093,2447,5782,3967,1716,931,7772,8164,1117,9244,5783,7776,3846,8862,6014,2330,6947,1777,3112,6008,3491,1906,5952,314,4602,8994,5919,9214,3995,5026,7688,6809,5003,3128,2509,7477,110,8971,3982,8539,2980,4689,6343,5411,2992,5270,5247,9260,2269,7474,1042,7162,5206,1232,4556,4757
+510,3556,5377,1406,5721,4946,2635,7847,4251,8293,8281,6351,4912,287,2870,3380,3948,5322,3840,4738,9563,1906,6298,3234,8959,1562,6297,8835,7861,239,6618,1322,2553,2213,5053,5446,4402,6500,5182,8585,6900,5756,9661,903,5186,7687,5998,7997,8081,8955,4835,6069,2621,1581,732,9564,1082,1853,5442,1342,520,1737,3703,5321,4793,2776,1508,1647,9101,2499,6891,4336,7012,3329,3212,1442,9993,3988,4930,7706
+9444,3401,5891,9716,1228,7107,109,3563,2700,6161,5039,4992,2242,8541,7372,2067,1294,3058,1306,320,8881,5756,9326,411,8650,8824,5495,8282,8397,2000,1228,7817,2099,6473,3571,5994,4447,1299,5991,543,7874,2297,1651,101,2093,3463,9189,6872,6118,872,1008,1779,2805,9084,4048,2123,5877,55,3075,1737,9459,4535,6453,3644,108,5982,4437,5213,1340,6967,9943,5815,669,8074,1838,6979,9132,9315,715,5048
+3327,4030,7177,6336,9933,5296,2621,4785,2755,4832,2512,2118,2244,4407,2170,499,7532,9742,5051,7687,970,6924,3527,4694,5145,1306,2165,5940,2425,8910,3513,1909,6983,346,6377,4304,9330,7203,6605,3709,3346,970,369,9737,5811,4427,9939,3693,8436,5566,1977,3728,2399,3985,8303,2492,5366,9802,9193,7296,1033,5060,9144,2766,1151,7629,5169,5995,58,7619,7565,4208,1713,6279,3209,4908,9224,7409,1325,8540
+6882,1265,1775,3648,4690,959,5837,4520,5394,1378,9485,1360,4018,578,9174,2932,9890,3696,116,1723,1178,9355,7063,1594,1918,8574,7594,7942,1547,6166,7888,354,6932,4651,1010,7759,6905,661,7689,6092,9292,3845,9605,8443,443,8275,5163,7720,7265,6356,7779,1798,1754,5225,6661,1180,8024,5666,88,9153,1840,3508,1193,4445,2648,3538,6243,6375,8107,5902,5423,2520,1122,5015,6113,8859,9370,966,8673,2442
+7338,3423,4723,6533,848,8041,7921,8277,4094,5368,7252,8852,9166,2250,2801,6125,8093,5738,4038,9808,7359,9494,601,9116,4946,2702,5573,2921,9862,1462,1269,2410,4171,2709,7508,6241,7522,615,2407,8200,4189,5492,5649,7353,2590,5203,4274,710,7329,9063,956,8371,3722,4253,4785,1194,4828,4717,4548,940,983,2575,4511,2938,1827,2027,2700,1236,841,5760,1680,6260,2373,3851,1841,4968,1172,5179,7175,3509
+4420,1327,3560,2376,6260,2988,9537,4064,4829,8872,9598,3228,1792,7118,9962,9336,4368,9189,6857,1829,9863,6287,7303,7769,2707,8257,2391,2009,3975,4993,3068,9835,3427,341,8412,2134,4034,8511,6421,3041,9012,2983,7289,100,1355,7904,9186,6920,5856,2008,6545,8331,3655,5011,839,8041,9255,6524,3862,8788,62,7455,3513,5003,8413,3918,2076,7960,6108,3638,6999,3436,1441,4858,4181,1866,8731,7745,3744,1000
+356,8296,8325,1058,1277,4743,3850,2388,6079,6462,2815,5620,8495,5378,75,4324,3441,9870,1113,165,1544,1179,2834,562,6176,2313,6836,8839,2986,9454,5199,6888,1927,5866,8760,320,1792,8296,7898,6121,7241,5886,5814,2815,8336,1576,4314,3109,2572,6011,2086,9061,9403,3947,5487,9731,7281,3159,1819,1334,3181,5844,5114,9898,4634,2531,4412,6430,4262,8482,4546,4555,6804,2607,9421,686,8649,8860,7794,6672
+9870,152,1558,4963,8750,4754,6521,6256,8818,5208,5691,9659,8377,9725,5050,5343,2539,6101,1844,9700,7750,8114,5357,3001,8830,4438,199,9545,8496,43,2078,327,9397,106,6090,8181,8646,6414,7499,5450,4850,6273,5014,4131,7639,3913,6571,8534,9703,4391,7618,445,1320,5,1894,6771,7383,9191,4708,9706,6939,7937,8726,9382,5216,3685,2247,9029,8154,1738,9984,2626,9438,4167,6351,5060,29,1218,1239,4785
+192,5213,8297,8974,4032,6966,5717,1179,6523,4679,9513,1481,3041,5355,9303,9154,1389,8702,6589,7818,6336,3539,5538,3094,6646,6702,6266,2759,4608,4452,617,9406,8064,6379,444,5602,4950,1810,8391,1536,316,8714,1178,5182,5863,5110,5372,4954,1978,2971,5680,4863,2255,4630,5723,2168,538,1692,1319,7540,440,6430,6266,7712,7385,5702,620,641,3136,7350,1478,3155,2820,9109,6261,1122,4470,14,8493,2095
+1046,4301,6082,474,4974,7822,2102,5161,5172,6946,8074,9716,6586,9962,9749,5015,2217,995,5388,4402,7652,6399,6539,1349,8101,3677,1328,9612,7922,2879,231,5887,2655,508,4357,4964,3554,5930,6236,7384,4614,280,3093,9600,2110,7863,2631,6626,6620,68,1311,7198,7561,1768,5139,1431,221,230,2940,968,5283,6517,2146,1646,869,9402,7068,8645,7058,1765,9690,4152,2926,9504,2939,7504,6074,2944,6470,7859
+4659,736,4951,9344,1927,6271,8837,8711,3241,6579,7660,5499,5616,3743,5801,4682,9748,8796,779,1833,4549,8138,4026,775,4170,2432,4174,3741,7540,8017,2833,4027,396,811,2871,1150,9809,2719,9199,8504,1224,540,2051,3519,7982,7367,2761,308,3358,6505,2050,4836,5090,7864,805,2566,2409,6876,3361,8622,5572,5895,3280,441,7893,8105,1634,2929,274,3926,7786,6123,8233,9921,2674,5340,1445,203,4585,3837
+5759,338,7444,7968,7742,3755,1591,4839,1705,650,7061,2461,9230,9391,9373,2413,1213,431,7801,4994,2380,2703,6161,6878,8331,2538,6093,1275,5065,5062,2839,582,1014,8109,3525,1544,1569,8622,7944,2905,6120,1564,1839,5570,7579,1318,2677,5257,4418,5601,7935,7656,5192,1864,5886,6083,5580,6202,8869,1636,7907,4759,9082,5854,3185,7631,6854,5872,5632,5280,1431,2077,9717,7431,4256,8261,9680,4487,4752,4286
+1571,1428,8599,1230,7772,4221,8523,9049,4042,8726,7567,6736,9033,2104,4879,4967,6334,6716,3994,1269,8995,6539,3610,7667,6560,6065,874,848,4597,1711,7161,4811,6734,5723,6356,6026,9183,2586,5636,1092,7779,7923,8747,6887,7505,9909,1792,3233,4526,3176,1508,8043,720,5212,6046,4988,709,5277,8256,3642,1391,5803,1468,2145,3970,6301,7767,2359,8487,9771,8785,7520,856,1605,8972,2402,2386,991,1383,5963
+1822,4824,5957,6511,9868,4113,301,9353,6228,2881,2966,6956,9124,9574,9233,1601,7340,973,9396,540,4747,8590,9535,3650,7333,7583,4806,3593,2738,8157,5215,8472,2284,9473,3906,6982,5505,6053,7936,6074,7179,6688,1564,1103,6860,5839,2022,8490,910,7551,7805,881,7024,1855,9448,4790,1274,3672,2810,774,7623,4223,4850,6071,9975,4935,1915,9771,6690,3846,517,463,7624,4511,614,6394,3661,7409,1395,8127
+8738,3850,9555,3695,4383,2378,87,6256,6740,7682,9546,4255,6105,2000,1851,4073,8957,9022,6547,5189,2487,303,9602,7833,1628,4163,6678,3144,8589,7096,8913,5823,4890,7679,1212,9294,5884,2972,3012,3359,7794,7428,1579,4350,7246,4301,7779,7790,3294,9547,4367,3549,1958,8237,6758,3497,3250,3456,6318,1663,708,7714,6143,6890,3428,6853,9334,7992,591,6449,9786,1412,8500,722,5468,1371,108,3939,4199,2535
+7047,4323,1934,5163,4166,461,3544,2767,6554,203,6098,2265,9078,2075,4644,6641,8412,9183,487,101,7566,5622,1975,5726,2920,5374,7779,5631,3753,3725,2672,3621,4280,1162,5812,345,8173,9785,1525,955,5603,2215,2580,5261,2765,2990,5979,389,3907,2484,1232,5933,5871,3304,1138,1616,5114,9199,5072,7442,7245,6472,4760,6359,9053,7876,2564,9404,3043,9026,2261,3374,4460,7306,2326,966,828,3274,1712,3446
+3975,4565,8131,5800,4570,2306,8838,4392,9147,11,3911,7118,9645,4994,2028,6062,5431,2279,8752,2658,7836,994,7316,5336,7185,3289,1898,9689,2331,5737,3403,1124,2679,3241,7748,16,2724,5441,6640,9368,9081,5618,858,4969,17,2103,6035,8043,7475,2181,939,415,1617,8500,8253,2155,7843,7974,7859,1746,6336,3193,2617,8736,4079,6324,6645,8891,9396,5522,6103,1857,8979,3835,2475,1310,7422,610,8345,7615
+9248,5397,5686,2988,3446,4359,6634,9141,497,9176,6773,7448,1907,8454,916,1596,2241,1626,1384,2741,3649,5362,8791,7170,2903,2475,5325,6451,924,3328,522,90,4813,9737,9557,691,2388,1383,4021,1609,9206,4707,5200,7107,8104,4333,9860,5013,1224,6959,8527,1877,4545,7772,6268,621,4915,9349,5970,706,9583,3071,4127,780,8231,3017,9114,3836,7503,2383,1977,4870,8035,2379,9704,1037,3992,3642,1016,4303
+5093,138,4639,6609,1146,5565,95,7521,9077,2272,974,4388,2465,2650,722,4998,3567,3047,921,2736,7855,173,2065,4238,1048,5,6847,9548,8632,9194,5942,4777,7910,8971,6279,7253,2516,1555,1833,3184,9453,9053,6897,7808,8629,4877,1871,8055,4881,7639,1537,7701,2508,7564,5845,5023,2304,5396,3193,2955,1088,3801,6203,1748,3737,1276,13,4120,7715,8552,3047,2921,106,7508,304,1280,7140,2567,9135,5266
+6237,4607,7527,9047,522,7371,4883,2540,5867,6366,5301,1570,421,276,3361,527,6637,4861,2401,7522,5808,9371,5298,2045,5096,5447,7755,5115,7060,8529,4078,1943,1697,1764,5453,7085,960,2405,739,2100,5800,728,9737,5704,5693,1431,8979,6428,673,7540,6,7773,5857,6823,150,5869,8486,684,5816,9626,7451,5579,8260,3397,5322,6920,1879,2127,2884,5478,4977,9016,6165,6292,3062,5671,5968,78,4619,4763
+9905,7127,9390,5185,6923,3721,9164,9705,4341,1031,1046,5127,7376,6528,3248,4941,1178,7889,3364,4486,5358,9402,9158,8600,1025,874,1839,1783,309,9030,1843,845,8398,1433,7118,70,8071,2877,3904,8866,6722,4299,10,1929,5897,4188,600,1889,3325,2485,6473,4474,7444,6992,4846,6166,4441,2283,2629,4352,7775,1101,2214,9985,215,8270,9750,2740,8361,7103,5930,8664,9690,8302,9267,344,2077,1372,1880,9550
+5825,8517,7769,2405,8204,1060,3603,7025,478,8334,1997,3692,7433,9101,7294,7498,9415,5452,3850,3508,6857,9213,6807,4412,7310,854,5384,686,4978,892,8651,3241,2743,3801,3813,8588,6701,4416,6990,6490,3197,6838,6503,114,8343,5844,8646,8694,65,791,5979,2687,2621,2019,8097,1423,3644,9764,4921,3266,3662,5561,2476,8271,8138,6147,1168,3340,1998,9874,6572,9873,6659,5609,2711,3931,9567,4143,7833,8887
+6223,2099,2700,589,4716,8333,1362,5007,2753,2848,4441,8397,7192,8191,4916,9955,6076,3370,6396,6971,3156,248,3911,2488,4930,2458,7183,5455,170,6809,6417,3390,1956,7188,577,7526,2203,968,8164,479,8699,7915,507,6393,4632,1597,7534,3604,618,3280,6061,9793,9238,8347,568,9645,2070,5198,6482,5000,9212,6655,5961,7513,1323,3872,6170,3812,4146,2736,67,3151,5548,2781,9679,7564,5043,8587,1893,4531
+5826,3690,6724,2121,9308,6986,8106,6659,2142,1642,7170,2877,5757,6494,8026,6571,8387,9961,6043,9758,9607,6450,8631,8334,7359,5256,8523,2225,7487,1977,9555,8048,5763,2414,4948,4265,2427,8978,8088,8841,9208,9601,5810,9398,8866,9138,4176,5875,7212,3272,6759,5678,7649,4922,5422,1343,8197,3154,3600,687,1028,4579,2084,9467,4492,7262,7296,6538,7657,7134,2077,1505,7332,6890,8964,4879,7603,7400,5973,739
+1861,1613,4879,1884,7334,966,2000,7489,2123,4287,1472,3263,4726,9203,1040,4103,6075,6049,330,9253,4062,4268,1635,9960,577,1320,3195,9628,1030,4092,4979,6474,6393,2799,6967,8687,7724,7392,9927,2085,3200,6466,8702,265,7646,8665,7986,7266,4574,6587,612,2724,704,3191,8323,9523,3002,704,5064,3960,8209,2027,2758,8393,4875,4641,9584,6401,7883,7014,768,443,5490,7506,1852,2005,8850,5776,4487,4269
+4052,6687,4705,7260,6645,6715,3706,5504,8672,2853,1136,8187,8203,4016,871,1809,1366,4952,9294,5339,6872,2645,6083,7874,3056,5218,7485,8796,7401,3348,2103,426,8572,4163,9171,3176,948,7654,9344,3217,1650,5580,7971,2622,76,2874,880,2034,9929,1546,2659,5811,3754,7096,7436,9694,9960,7415,2164,953,2360,4194,2397,1047,2196,6827,575,784,2675,8821,6802,7972,5996,6699,2134,7577,2887,1412,4349,4380
+4629,2234,6240,8132,7592,3181,6389,1214,266,1910,2451,8784,2790,1127,6932,1447,8986,2492,5476,397,889,3027,7641,5083,5776,4022,185,3364,5701,2442,2840,4160,9525,4828,6602,2614,7447,3711,4505,7745,8034,6514,4907,2605,7753,6958,7270,6936,3006,8968,439,2326,4652,3085,3425,9863,5049,5361,8688,297,7580,8777,7916,6687,8683,7141,306,9569,2384,1500,3346,4601,7329,9040,6097,2727,6314,4501,4974,2829
+8316,4072,2025,6884,3027,1808,5714,7624,7880,8528,4205,8686,7587,3230,1139,7273,6163,6986,3914,9309,1464,9359,4474,7095,2212,7302,2583,9462,7532,6567,1606,4436,8981,5612,6796,4385,5076,2007,6072,3678,8331,1338,3299,8845,4783,8613,4071,1232,6028,2176,3990,2148,3748,103,9453,538,6745,9110,926,3125,473,5970,8728,7072,9062,1404,1317,5139,9862,6496,6062,3338,464,1600,2532,1088,8232,7739,8274,3873
+2341,523,7096,8397,8301,6541,9844,244,4993,2280,7689,4025,4196,5522,7904,6048,2623,9258,2149,9461,6448,8087,7245,1917,8340,7127,8466,5725,6996,3421,5313,512,9164,9837,9794,8369,4185,1488,7210,1524,1016,4620,9435,2478,7765,8035,697,6677,3724,6988,5853,7662,3895,9593,1185,4727,6025,5734,7665,3070,138,8469,6748,6459,561,7935,8646,2378,462,7755,3115,9690,8877,3946,2728,8793,244,6323,8666,4271
+6430,2406,8994,56,1267,3826,9443,7079,7579,5232,6691,3435,6718,5698,4144,7028,592,2627,217,734,6194,8156,9118,58,2640,8069,4127,3285,694,3197,3377,4143,4802,3324,8134,6953,7625,3598,3584,4289,7065,3434,2106,7132,5802,7920,9060,7531,3321,1725,1067,3751,444,5503,6785,7937,6365,4803,198,6266,8177,1470,6390,1606,2904,7555,9834,8667,2033,1723,5167,1666,8546,8152,473,4475,6451,7947,3062,3281
+2810,3042,7759,1741,2275,2609,7676,8640,4117,1958,7500,8048,1757,3954,9270,1971,4796,2912,660,5511,3553,1012,5757,4525,6084,7198,8352,5775,7726,8591,7710,9589,3122,4392,6856,5016,749,2285,3356,7482,9956,7348,2599,8944,495,3462,3578,551,4543,7207,7169,7796,1247,4278,6916,8176,3742,8385,2310,1345,8692,2667,4568,1770,8319,3585,4920,3890,4928,7343,5385,9772,7947,8786,2056,9266,3454,2807,877,2660
+6206,8252,5928,5837,4177,4333,207,7934,5581,9526,8906,1498,8411,2984,5198,5134,2464,8435,8514,8674,3876,599,5327,826,2152,4084,2433,9327,9697,4800,2728,3608,3849,3861,3498,9943,1407,3991,7191,9110,5666,8434,4704,6545,5944,2357,1163,4995,9619,6754,4200,9682,6654,4862,4744,5953,6632,1054,293,9439,8286,2255,696,8709,1533,1844,6441,430,1999,6063,9431,7018,8057,2920,6266,6799,356,3597,4024,6665
+3847,6356,8541,7225,2325,2946,5199,469,5450,7508,2197,9915,8284,7983,6341,3276,3321,16,1321,7608,5015,3362,8491,6968,6818,797,156,2575,706,9516,5344,5457,9210,5051,8099,1617,9951,7663,8253,9683,2670,1261,4710,1068,8753,4799,1228,2621,3275,6188,4699,1791,9518,8701,5932,4275,6011,9877,2933,4182,6059,2930,6687,6682,9771,654,9437,3169,8596,1827,5471,8909,2352,123,4394,3208,8756,5513,6917,2056
+5458,8173,3138,3290,4570,4892,3317,4251,9699,7973,1163,1935,5477,6648,9614,5655,9592,975,9118,2194,7322,8248,8413,3462,8560,1907,7810,6650,7355,2939,4973,6894,3933,3784,3200,2419,9234,4747,2208,2207,1945,2899,1407,6145,8023,3484,5688,7686,2737,3828,3704,9004,5190,9740,8643,8650,5358,4426,1522,1707,3613,9887,6956,2447,2762,833,1449,9489,2573,1080,4167,3456,6809,2466,227,7125,2759,6250,6472,8089
+3266,7025,9756,3914,1265,9116,7723,9788,6805,5493,2092,8688,6592,9173,4431,4028,6007,7131,4446,4815,3648,6701,759,3312,8355,4485,4187,5188,8746,7759,3528,2177,5243,8379,3838,7233,4607,9187,7216,2190,6967,2920,6082,7910,5354,3609,8958,6949,7731,494,8753,8707,1523,4426,3543,7085,647,6771,9847,646,5049,824,8417,5260,2730,5702,2513,9275,4279,2767,8684,1165,9903,4518,55,9682,8963,6005,2102,6523
+1998,8731,936,1479,5259,7064,4085,91,7745,7136,3773,3810,730,8255,2705,2653,9790,6807,2342,355,9344,2668,3690,2028,9679,8102,574,4318,6481,9175,5423,8062,2867,9657,7553,3442,3920,7430,3945,7639,3714,3392,2525,4995,4850,2867,7951,9667,486,9506,9888,781,8866,1702,3795,90,356,1483,4200,2131,6969,5931,486,6880,4404,1084,5169,4910,6567,8335,4686,5043,2614,3352,2667,4513,6472,7471,5720,1616
+8878,1613,1716,868,1906,2681,564,665,5995,2474,7496,3432,9491,9087,8850,8287,669,823,347,6194,2264,2592,7871,7616,8508,4827,760,2676,4660,4881,7572,3811,9032,939,4384,929,7525,8419,5556,9063,662,8887,7026,8534,3111,1454,2082,7598,5726,6687,9647,7608,73,3014,5063,670,5461,5631,3367,9796,8475,7908,5073,1565,5008,5295,4457,1274,4788,1728,338,600,8415,8535,9351,7750,6887,5845,1741,125
+3637,6489,9634,9464,9055,2413,7824,9517,7532,3577,7050,6186,6980,9365,9782,191,870,2497,8498,2218,2757,5420,6468,586,3320,9230,1034,1393,9886,5072,9391,1178,8464,8042,6869,2075,8275,3601,7715,9470,8786,6475,8373,2159,9237,2066,3264,5000,679,355,3069,4073,494,2308,5512,4334,9438,8786,8637,9774,1169,1949,6594,6072,4270,9158,7916,5752,6794,9391,6301,5842,3285,2141,3898,8027,4310,8821,7079,1307
+8497,6681,4732,7151,7060,5204,9030,7157,833,5014,8723,3207,9796,9286,4913,119,5118,7650,9335,809,3675,2597,5144,3945,5090,8384,187,4102,1260,2445,2792,4422,8389,9290,50,1765,1521,6921,8586,4368,1565,5727,7855,2003,4834,9897,5911,8630,5070,1330,7692,7557,7980,6028,5805,9090,8265,3019,3802,698,9149,5748,1965,9658,4417,5994,5584,8226,2937,272,5743,1278,5698,8736,2595,6475,5342,6596,1149,6920
+8188,8009,9546,6310,8772,2500,9846,6592,6872,3857,1307,8125,7042,1544,6159,2330,643,4604,7899,6848,371,8067,2062,3200,7295,1857,9505,6936,384,2193,2190,301,8535,5503,1462,7380,5114,4824,8833,1763,4974,8711,9262,6698,3999,2645,6937,7747,1128,2933,3556,7943,2885,3122,9105,5447,418,2899,5148,3699,9021,9501,597,4084,175,1621,1,1079,6067,5812,4326,9914,6633,5394,4233,6728,9084,1864,5863,1225
+9935,8793,9117,1825,9542,8246,8437,3331,9128,9675,6086,7075,319,1334,7932,3583,7167,4178,1726,7720,695,8277,7887,6359,5912,1719,2780,8529,1359,2013,4498,8072,1129,9998,1147,8804,9405,6255,1619,2165,7491,1,8882,7378,3337,503,5758,4109,3577,985,3200,7615,8058,5032,1080,6410,6873,5496,1466,2412,9885,5904,4406,3605,8770,4361,6205,9193,1537,9959,214,7260,9566,1685,100,4920,7138,9819,5637,976
+3466,9854,985,1078,7222,8888,5466,5379,3578,4540,6853,8690,3728,6351,7147,3134,6921,9692,857,3307,4998,2172,5783,3931,9417,2541,6299,13,787,2099,9131,9494,896,8600,1643,8419,7248,2660,2609,8579,91,6663,5506,7675,1947,6165,4286,1972,9645,3805,1663,1456,8853,5705,9889,7489,1107,383,4044,2969,3343,152,7805,4980,9929,5033,1737,9953,7197,9158,4071,1324,473,9676,3984,9680,3606,8160,7384,5432
+1005,4512,5186,3953,2164,3372,4097,3247,8697,3022,9896,4101,3871,6791,3219,2742,4630,6967,7829,5991,6134,1197,1414,8923,8787,1394,8852,5019,7768,5147,8004,8825,5062,9625,7988,1110,3992,7984,9966,6516,6251,8270,421,3723,1432,4830,6935,8095,9059,2214,6483,6846,3120,1587,6201,6691,9096,9627,6671,4002,3495,9939,7708,7465,5879,6959,6634,3241,3401,2355,9061,2611,7830,3941,2177,2146,5089,7079,519,6351
+7280,8586,4261,2831,7217,3141,9994,9940,5462,2189,4005,6942,9848,5350,8060,6665,7519,4324,7684,657,9453,9296,2944,6843,7499,7847,1728,9681,3906,6353,5529,2822,3355,3897,7724,4257,7489,8672,4356,3983,1948,6892,7415,4153,5893,4190,621,1736,4045,9532,7701,3671,1211,1622,3176,4524,9317,7800,5638,6644,6943,5463,3531,2821,1347,5958,3436,1438,2999,994,850,4131,2616,1549,3465,5946,690,9273,6954,7991
+9517,399,3249,2596,7736,2142,1322,968,7350,1614,468,3346,3265,7222,6086,1661,5317,2582,7959,4685,2807,2917,1037,5698,1529,3972,8716,2634,3301,3412,8621,743,8001,4734,888,7744,8092,3671,8941,1487,5658,7099,2781,99,1932,4443,4756,4652,9328,1581,7855,4312,5976,7255,6480,3996,2748,1973,9731,4530,2790,9417,7186,5303,3557,351,7182,9428,1342,9020,7599,1392,8304,2070,9138,7215,2008,9937,1106,7110
+7444,769,9688,632,1571,6820,8743,4338,337,3366,3073,1946,8219,104,4210,6986,249,5061,8693,7960,6546,1004,8857,5997,9352,4338,6105,5008,2556,6518,6694,4345,3727,7956,20,3954,8652,4424,9387,2035,8358,5962,5304,5194,8650,8282,1256,1103,2138,6679,1985,3653,2770,2433,4278,615,2863,1715,242,3790,2636,6998,3088,1671,2239,957,5411,4595,6282,2881,9974,2401,875,7574,2987,4587,3147,6766,9885,2965
+3287,3016,3619,6818,9073,6120,5423,557,2900,2015,8111,3873,1314,4189,1846,4399,7041,7583,2427,2864,3525,5002,2069,748,1948,6015,2684,438,770,8367,1663,7887,7759,1885,157,7770,4520,4878,3857,1137,3525,3050,6276,5569,7649,904,4533,7843,2199,5648,7628,9075,9441,3600,7231,2388,5640,9096,958,3058,584,5899,8150,1181,9616,1098,8162,6819,8171,1519,1140,7665,8801,2632,1299,9192,707,9955,2710,7314
+1772,2963,7578,3541,3095,1488,7026,2634,6015,4633,4370,2762,1650,2174,909,8158,2922,8467,4198,4280,9092,8856,8835,5457,2790,8574,9742,5054,9547,4156,7940,8126,9824,7340,8840,6574,3547,1477,3014,6798,7134,435,9484,9859,3031,4,1502,4133,1738,1807,4825,463,6343,9701,8506,9822,9555,8688,8168,3467,3234,6318,1787,5591,419,6593,7974,8486,9861,6381,6758,194,3061,4315,2863,4665,3789,2201,1492,4416
+126,8927,6608,5682,8986,6867,1715,6076,3159,788,3140,4744,830,9253,5812,5021,7616,8534,1546,9590,1101,9012,9821,8132,7857,4086,1069,7491,2988,1579,2442,4321,2149,7642,6108,250,6086,3167,24,9528,7663,2685,1220,9196,1397,5776,1577,1730,5481,977,6115,199,6326,2183,3767,5928,5586,7561,663,8649,9688,949,5913,9160,1870,5764,9887,4477,6703,1413,4995,5494,7131,2192,8969,7138,3997,8697,646,1028
+8074,1731,8245,624,4601,8706,155,8891,309,2552,8208,8452,2954,3124,3469,4246,3352,1105,4509,8677,9901,4416,8191,9283,5625,7120,2952,8881,7693,830,4580,8228,9459,8611,4499,1179,4988,1394,550,2336,6089,6872,269,7213,1848,917,6672,4890,656,1478,6536,3165,4743,4990,1176,6211,7207,5284,9730,4738,1549,4986,4942,8645,3698,9429,1439,2175,6549,3058,6513,1574,6988,8333,3406,5245,5431,7140,7085,6407
+7845,4694,2530,8249,290,5948,5509,1588,5940,4495,5866,5021,4626,3979,3296,7589,4854,1998,5627,3926,8346,6512,9608,1918,7070,4747,4182,2858,2766,4606,6269,4107,8982,8568,9053,4244,5604,102,2756,727,5887,2566,7922,44,5986,621,1202,374,6988,4130,3627,6744,9443,4568,1398,8679,397,3928,9159,367,2917,6127,5788,3304,8129,911,2669,1463,9749,264,4478,8940,1109,7309,2462,117,4692,7724,225,2312
+4164,3637,2000,941,8903,39,3443,7172,1031,3687,4901,8082,4945,4515,7204,9310,9349,9535,9940,218,1788,9245,2237,1541,5670,6538,6047,5553,9807,8101,1925,8714,445,8332,7309,6830,5786,5736,7306,2710,3034,1838,7969,6318,7912,2584,2080,7437,6705,2254,7428,820,782,9861,7596,3842,3631,8063,5240,6666,394,4565,7865,4895,9890,6028,6117,4724,9156,4473,4552,602,470,6191,4927,5387,884,3146,1978,3000
+4258,6880,1696,3582,5793,4923,2119,1155,9056,9698,6603,3768,5514,9927,9609,6166,6566,4536,4985,4934,8076,9062,6741,6163,7399,4562,2337,5600,2919,9012,8459,1308,6072,1225,9306,8818,5886,7243,7365,8792,6007,9256,6699,7171,4230,7002,8720,7839,4533,1671,478,7774,1607,2317,5437,4705,7886,4760,6760,7271,3081,2997,3088,7675,6208,3101,6821,6840,122,9633,4900,2067,8546,4549,2091,7188,5605,8599,6758,5229
+7854,5243,9155,3556,8812,7047,2202,1541,5993,4600,4760,713,434,7911,7426,7414,8729,322,803,7960,7563,4908,6285,6291,736,3389,9339,4132,8701,7534,5287,3646,592,3065,7582,2592,8755,6068,8597,1982,5782,1894,2900,6236,4039,6569,3037,5837,7698,700,7815,2491,7272,5878,3083,6778,6639,3589,5010,8313,2581,6617,5869,8402,6808,2951,2321,5195,497,2190,6187,1342,1316,4453,7740,4154,2959,1781,1482,8256
+7178,2046,4419,744,8312,5356,6855,8839,319,2962,5662,47,6307,8662,68,4813,567,2712,9931,1678,3101,8227,6533,4933,6656,92,5846,4780,6256,6361,4323,9985,1231,2175,7178,3034,9744,6155,9165,7787,5836,9318,7860,9644,8941,6480,9443,8188,5928,161,6979,2352,5628,6991,1198,8067,5867,6620,3778,8426,2994,3122,3124,6335,3918,8897,2655,9670,634,1088,1576,8935,7255,474,8166,7417,9547,2886,5560,3842
+6957,3111,26,7530,7143,1295,1744,6057,3009,1854,8098,5405,2234,4874,9447,2620,9303,27,7410,969,40,2966,5648,7596,8637,4238,3143,3679,7187,690,9980,7085,7714,9373,5632,7526,6707,3951,9734,4216,2146,3602,5371,6029,3039,4433,4855,4151,1449,3376,8009,7240,7027,4602,2947,9081,4045,8424,9352,8742,923,2705,4266,3232,2264,6761,363,2651,3383,7770,6730,7856,7340,9679,2158,610,4471,4608,910,6241
+4417,6756,1013,8797,658,8809,5032,8703,7541,846,3357,2920,9817,1745,9980,7593,4667,3087,779,3218,6233,5568,4296,2289,2654,7898,5021,9461,5593,8214,9173,4203,2271,7980,2983,5952,9992,8399,3468,1776,3188,9314,1720,6523,2933,621,8685,5483,8986,6163,3444,9539,4320,155,3992,2828,2150,6071,524,2895,5468,8063,1210,3348,9071,4862,483,9017,4097,6186,9815,3610,5048,1644,1003,9865,9332,2145,1944,2213
+9284,3803,4920,1927,6706,4344,7383,4786,9890,2010,5228,1224,3158,6967,8580,8990,8883,5213,76,8306,2031,4980,5639,9519,7184,5645,7769,3259,8077,9130,1317,3096,9624,3818,1770,695,2454,947,6029,3474,9938,3527,5696,4760,7724,7738,2848,6442,5767,6845,8323,4131,2859,7595,2500,4815,3660,9130,8580,7016,8231,4391,8369,3444,4069,4021,556,6154,627,2778,1496,4206,6356,8434,8491,3816,8231,3190,5575,1015
+3787,7572,1788,6803,5641,6844,1961,4811,8535,9914,9999,1450,8857,738,4662,8569,6679,2225,7839,8618,286,2648,5342,2294,3205,4546,176,8705,3741,6134,8324,8021,7004,5205,7032,6637,9442,5539,5584,4819,5874,5807,8589,6871,9016,983,1758,3786,1519,6241,185,8398,495,3370,9133,3051,4549,9674,7311,9738,3316,9383,2658,2776,9481,7558,619,3943,3324,6491,4933,153,9738,4623,912,3595,7771,7939,1219,4405
+2650,3883,4154,5809,315,7756,4430,1788,4451,1631,6461,7230,6017,5751,138,588,5282,2442,9110,9035,6349,2515,1570,6122,4192,4174,3530,1933,4186,4420,4609,5739,4135,2963,6308,1161,8809,8619,2796,3819,6971,8228,4188,1492,909,8048,2328,6772,8467,7671,9068,2226,7579,6422,7056,8042,3296,2272,3006,2196,7320,3238,3490,3102,37,1293,3212,4767,5041,8773,5794,4456,6174,7279,7054,2835,7053,9088,790,6640
+3101,1057,7057,3826,6077,1025,2955,1224,1114,6729,5902,4698,6239,7203,9423,1804,4417,6686,1426,6941,8071,1029,4985,9010,6122,6597,1622,1574,3513,1684,7086,5505,3244,411,9638,4150,907,9135,829,981,1707,5359,8781,9751,5,9131,3973,7159,1340,6955,7514,7993,6964,8198,1933,2797,877,3993,4453,8020,9349,8646,2779,8679,2961,3547,3374,3510,1129,3568,2241,2625,9138,5974,8206,7669,7678,1833,8700,4480
+4865,9912,8038,8238,782,3095,8199,1127,4501,7280,2112,2487,3626,2790,9432,1475,6312,8277,4827,2218,5806,7132,8752,1468,7471,6386,739,8762,8323,8120,5169,9078,9058,3370,9560,7987,8585,8531,5347,9312,1058,4271,1159,5286,5404,6925,8606,9204,7361,2415,560,586,4002,2644,1927,2824,768,4409,2942,3345,1002,808,4941,6267,7979,5140,8643,7553,9438,7320,4938,2666,4609,2778,8158,6730,3748,3867,1866,7181
+171,3771,7134,8927,4778,2913,3326,2004,3089,7853,1378,1729,4777,2706,9578,1360,5693,3036,1851,7248,2403,2273,8536,6501,9216,613,9671,7131,7719,6425,773,717,8803,160,1114,7554,7197,753,4513,4322,8499,4533,2609,4226,8710,6627,644,9666,6260,4870,5744,7385,6542,6203,7703,6130,8944,5589,2262,6803,6381,7414,6888,5123,7320,9392,9061,6780,322,8975,7050,5089,1061,2260,3199,1150,1865,5386,9699,6501
+3744,8454,6885,8277,919,1923,4001,6864,7854,5519,2491,6057,8794,9645,1776,5714,9786,9281,7538,6916,3215,395,2501,9618,4835,8846,9708,2813,3303,1794,8309,7176,2206,1602,1838,236,4593,2245,8993,4017,10,8215,6921,5206,4023,5932,6997,7801,262,7640,3107,8275,4938,7822,2425,3223,3886,2105,8700,9526,2088,8662,8034,7004,5710,2124,7164,3574,6630,9980,4242,2901,9471,1491,2117,4562,1130,9086,4117,6698
+2810,2280,2331,1170,4554,4071,8387,1215,2274,9848,6738,1604,7281,8805,439,1298,8318,7834,9426,8603,6092,7944,1309,8828,303,3157,4638,4439,9175,1921,4695,7716,1494,1015,1772,5913,1127,1952,1950,8905,4064,9890,385,9357,7945,5035,7082,5369,4093,6546,5187,5637,2041,8946,1758,7111,6566,1027,1049,5148,7224,7248,296,6169,375,1656,7993,2816,3717,4279,4675,1609,3317,42,6201,3100,3144,163,9530,4531
+7096,6070,1009,4988,3538,5801,7149,3063,2324,2912,7911,7002,4338,7880,2481,7368,3516,2016,7556,2193,1388,3865,8125,4637,4096,8114,750,3144,1938,7002,9343,4095,1392,4220,3455,6969,9647,1321,9048,1996,1640,6626,1788,314,9578,6630,2813,6626,4981,9908,7024,4355,3201,3521,3864,3303,464,1923,595,9801,3391,8366,8084,9374,1041,8807,9085,1892,9431,8317,9016,9221,8574,9981,9240,5395,2009,6310,2854,9255
+8830,3145,2960,9615,8220,6061,3452,2918,6481,9278,2297,3385,6565,7066,7316,5682,107,7646,4466,68,1952,9603,8615,54,7191,791,6833,2560,693,9733,4168,570,9127,9537,1925,8287,5508,4297,8452,8795,6213,7994,2420,4208,524,5915,8602,8330,2651,8547,6156,1812,6271,7991,9407,9804,1553,6866,1128,2119,4691,9711,8315,5879,9935,6900,482,682,4126,1041,428,6247,3720,5882,7526,2582,4327,7725,3503,2631
+2738,9323,721,7434,1453,6294,2957,3786,5722,6019,8685,4386,3066,9057,6860,499,5315,3045,5194,7111,3137,9104,941,586,3066,755,4177,8819,7040,5309,3583,3897,4428,7788,4721,7249,6559,7324,825,7311,3760,6064,6070,9672,4882,584,1365,9739,9331,5783,2624,7889,1604,1303,1555,7125,8312,425,8936,3233,7724,1480,403,7440,1784,1754,4721,1569,652,3893,4574,5692,9730,4813,9844,8291,9199,7101,3391,8914
+6044,2928,9332,3328,8588,447,3830,1176,3523,2705,8365,6136,5442,9049,5526,8575,8869,9031,7280,706,2794,8814,5767,4241,7696,78,6570,556,5083,1426,4502,3336,9518,2292,1885,3740,3153,9348,9331,8051,2759,5407,9028,7840,9255,831,515,2612,9747,7435,8964,4971,2048,4900,5967,8271,1719,9670,2810,6777,1594,6367,6259,8316,3815,1689,6840,9437,4361,822,9619,3065,83,6344,7486,8657,8228,9635,6932,4864
+8478,4777,6334,4678,7476,4963,6735,3096,5860,1405,5127,7269,7793,4738,227,9168,2996,8928,765,733,1276,7677,6258,1528,9558,3329,302,8901,1422,8277,6340,645,9125,8869,5952,141,8141,1816,9635,4025,4184,3093,83,2344,2747,9352,7966,1206,1126,1826,218,7939,2957,2729,810,8752,5247,4174,4038,8884,7899,9567,301,5265,5752,7524,4381,1669,3106,8270,6228,6373,754,2547,4240,2313,5514,3022,1040,9738
+2265,8192,1763,1369,8469,8789,4836,52,1212,6690,5257,8918,6723,6319,378,4039,2421,8555,8184,9577,1432,7139,8078,5452,9628,7579,4161,7490,5159,8559,1011,81,478,5840,1964,1334,6875,8670,9900,739,1514,8692,522,9316,6955,1345,8132,2277,3193,9773,3923,4177,2183,1236,6747,6575,4874,6003,6409,8187,745,8776,9440,7543,9825,2582,7381,8147,7236,5185,7564,6125,218,7991,6394,391,7659,7456,5128,5294
+2132,8992,8160,5782,4420,3371,3798,5054,552,5631,7546,4716,1332,6486,7892,7441,4370,6231,4579,2121,8615,1145,9391,1524,1385,2400,9437,2454,7896,7467,2928,8400,3299,4025,7458,4703,7206,6358,792,6200,725,4275,4136,7390,5984,4502,7929,5085,8176,4600,119,3568,76,9363,6943,2248,9077,9731,6213,5817,6729,4190,3092,6910,759,2682,8380,1254,9604,3011,9291,5329,9453,9746,2739,6522,3765,5634,1113,5789
+5304,5499,564,2801,679,2653,1783,3608,7359,7797,3284,796,3222,437,7185,6135,8571,2778,7488,5746,678,6140,861,7750,803,9859,9918,2425,3734,2698,9005,4864,9818,6743,2475,132,9486,3825,5472,919,292,4411,7213,7699,6435,9019,6769,1388,802,2124,1345,8493,9487,8558,7061,8777,8833,2427,2238,5409,4957,8503,3171,7622,5779,6145,2417,5873,5563,5693,9574,9491,1937,7384,4563,6842,5432,2751,3406,7981
diff --git a/project_euler/problem_082/sol1.py b/project_euler/problem_082/sol1.py
new file mode 100644
index 000000000000..7b50dc887719
--- /dev/null
+++ b/project_euler/problem_082/sol1.py
@@ -0,0 +1,65 @@
+"""
+Project Euler Problem 82: https://projecteuler.net/problem=82
+
+The minimal path sum in the 5 by 5 matrix below, by starting in any cell
+in the left column and finishing in any cell in the right column,
+and only moving up, down, and right, is indicated in red and bold;
+the sum is equal to 994.
+
+ 131 673 [234] [103] [18]
+ [201] [96] [342] 965 150
+ 630 803 746 422 111
+ 537 699 497 121 956
+ 805 732 524 37 331
+
+Find the minimal path sum from the left column to the right column in matrix.txt
+(https://projecteuler.net/project/resources/p082_matrix.txt)
+(right click and "Save Link/Target As..."),
+a 31K text file containing an 80 by 80 matrix.
+"""
+
+import os
+
+
+def solution(filename: str = "input.txt") -> int:
+ """
+ Returns the minimal path sum in the matrix from the file, by starting in any cell
+ in the left column and finishing in any cell in the right column,
+ and only moving up, down, and right
+
+ >>> solution("test_matrix.txt")
+ 994
+ """
+
+ with open(os.path.join(os.path.dirname(__file__), filename)) as input_file:
+ matrix = [
+ [int(element) for element in line.split(",")]
+ for line in input_file.readlines()
+ ]
+
+ rows = len(matrix)
+ cols = len(matrix[0])
+
+ minimal_path_sums = [[-1 for _ in range(cols)] for _ in range(rows)]
+ for i in range(rows):
+ minimal_path_sums[i][0] = matrix[i][0]
+
+ for j in range(1, cols):
+ for i in range(rows):
+ minimal_path_sums[i][j] = minimal_path_sums[i][j - 1] + matrix[i][j]
+
+ for i in range(1, rows):
+ minimal_path_sums[i][j] = min(
+ minimal_path_sums[i][j], minimal_path_sums[i - 1][j] + matrix[i][j]
+ )
+
+ for i in range(rows - 2, -1, -1):
+ minimal_path_sums[i][j] = min(
+ minimal_path_sums[i][j], minimal_path_sums[i + 1][j] + matrix[i][j]
+ )
+
+ return min(minimal_path_sums_row[-1] for minimal_path_sums_row in minimal_path_sums)
+
+
+if __name__ == "__main__":
+ print(f"{solution() = }")
diff --git a/project_euler/problem_082/test_matrix.txt b/project_euler/problem_082/test_matrix.txt
new file mode 100644
index 000000000000..76167d9e7fc1
--- /dev/null
+++ b/project_euler/problem_082/test_matrix.txt
@@ -0,0 +1,5 @@
+131,673,234,103,18
+201,96,342,965,150
+630,803,746,422,111
+537,699,497,121,956
+805,732,524,37,331
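The three column passes in sol1.py above (carry the left column forward, then relax downward and upward moves) can be sanity-checked against the 5 by 5 example with a self-contained sketch; minimal_path_sum is a hypothetical helper name, not part of the patch:

# Standalone sketch of the column-by-column DP used in sol1.py above.
# minimal_path_sum is a hypothetical name, not part of the patch.
def minimal_path_sum(matrix: list[list[int]]) -> int:
    sums = [row[0] for row in matrix]  # cheapest way to reach each cell of column 0
    for j in range(1, len(matrix[0])):
        # step right out of the previous column
        sums = [sums[i] + matrix[i][j] for i in range(len(matrix))]
        for i in range(1, len(matrix)):  # relax moves coming from above
            sums[i] = min(sums[i], sums[i - 1] + matrix[i][j])
        for i in range(len(matrix) - 2, -1, -1):  # relax moves coming from below
            sums[i] = min(sums[i], sums[i + 1] + matrix[i][j])
    return min(sums)

example = [
    [131, 673, 234, 103, 18],
    [201, 96, 342, 965, 150],
    [630, 803, 746, 422, 111],
    [537, 699, 497, 121, 956],
    [805, 732, 524, 37, 331],
]
print(minimal_path_sum(example))  # 994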
From ee778128bdf8d4d6d386cfdc500f3b3173f56c06 Mon Sep 17 00:00:00 2001
From: Maxim Smolskiy
Date: Thu, 2 Mar 2023 07:57:07 +0300
Subject: [PATCH 008/808] Reduce the complexity of other/scoring_algorithm.py
(#8045)
* Lower the --max-complexity threshold in the file .flake8
---
other/scoring_algorithm.py | 57 ++++++++++++++++++++++++++++----------
1 file changed, 43 insertions(+), 14 deletions(-)
diff --git a/other/scoring_algorithm.py b/other/scoring_algorithm.py
index 00d87cfc0b73..8e04a8f30dd7 100644
--- a/other/scoring_algorithm.py
+++ b/other/scoring_algorithm.py
@@ -23,29 +23,29 @@
"""
-def procentual_proximity(
- source_data: list[list[float]], weights: list[int]
-) -> list[list[float]]:
+def get_data(source_data: list[list[float]]) -> list[list[float]]:
"""
- weights - int list
- possible values - 0 / 1
- 0 if lower values have higher weight in the data set
- 1 if higher values have higher weight in the data set
-
- >>> procentual_proximity([[20, 60, 2012],[23, 90, 2015],[22, 50, 2011]], [0, 0, 1])
- [[20, 60, 2012, 2.0], [23, 90, 2015, 1.0], [22, 50, 2011, 1.3333333333333335]]
+ >>> get_data([[20, 60, 2012],[23, 90, 2015],[22, 50, 2011]])
+ [[20.0, 23.0, 22.0], [60.0, 90.0, 50.0], [2012.0, 2015.0, 2011.0]]
"""
-
- # getting data
data_lists: list[list[float]] = []
for data in source_data:
for i, el in enumerate(data):
if len(data_lists) < i + 1:
data_lists.append([])
data_lists[i].append(float(el))
+ return data_lists
+
+def calculate_each_score(
+ data_lists: list[list[float]], weights: list[int]
+) -> list[list[float]]:
+ """
+ >>> calculate_each_score([[20, 23, 22], [60, 90, 50], [2012, 2015, 2011]],
+ ... [0, 0, 1])
+ [[1.0, 0.0, 0.33333333333333337], [0.75, 0.0, 1.0], [0.25, 1.0, 0.0]]
+ """
score_lists: list[list[float]] = []
- # calculating each score
for dlist, weight in zip(data_lists, weights):
mind = min(dlist)
maxd = max(dlist)
@@ -72,14 +72,43 @@ def procentual_proximity(
score_lists.append(score)
+ return score_lists
+
+
+def generate_final_scores(score_lists: list[list[float]]) -> list[float]:
+ """
+ >>> generate_final_scores([[1.0, 0.0, 0.33333333333333337],
+ ... [0.75, 0.0, 1.0],
+ ... [0.25, 1.0, 0.0]])
+ [2.0, 1.0, 1.3333333333333335]
+ """
# initialize final scores
final_scores: list[float] = [0 for i in range(len(score_lists[0]))]
- # generate final scores
for slist in score_lists:
for j, ele in enumerate(slist):
final_scores[j] = final_scores[j] + ele
+ return final_scores
+
+
+def procentual_proximity(
+ source_data: list[list[float]], weights: list[int]
+) -> list[list[float]]:
+ """
+ weights - int list
+ possible values - 0 / 1
+ 0 if lower values have higher weight in the data set
+ 1 if higher values have higher weight in the data set
+
+ >>> procentual_proximity([[20, 60, 2012],[23, 90, 2015],[22, 50, 2011]], [0, 0, 1])
+ [[20, 60, 2012, 2.0], [23, 90, 2015, 1.0], [22, 50, 2011, 1.3333333333333335]]
+ """
+
+ data_lists = get_data(source_data)
+ score_lists = calculate_each_score(data_lists, weights)
+ final_scores = generate_final_scores(score_lists)
+
# append scores to source data
for i, ele in enumerate(final_scores):
source_data[i].append(ele)
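With the monolithic function split into three stages, each stage can also be exercised on its own. A minimal sketch using the doctest data from this patch; the import assumes the script is run from the repository root:

# Hedged sketch: drive the refactored pipeline stage by stage.
from other.scoring_algorithm import (
    calculate_each_score,
    generate_final_scores,
    get_data,
)

source_data = [[20, 60, 2012], [23, 90, 2015], [22, 50, 2011]]
weights = [0, 0, 1]  # 0: lower values score higher, 1: higher values score higher

data_lists = get_data(source_data)  # columns of source_data as float lists
score_lists = calculate_each_score(data_lists, weights)
print(generate_final_scores(score_lists))  # [2.0, 1.0, 1.3333333333333335]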
From 9720e6a6cf52e2395e2d7ef3ef6ae91a355d318e Mon Sep 17 00:00:00 2001
From: Maxim Smolskiy
Date: Thu, 2 Mar 2023 19:51:48 +0300
Subject: [PATCH 009/808] Add Project Euler problem 117 solution 1 (#6872)
Update DIRECTORY.md
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 2 +
project_euler/problem_117/__init__.py | 0
project_euler/problem_117/sol1.py | 53 +++++++++++++++++++++++++++
3 files changed, 55 insertions(+)
create mode 100644 project_euler/problem_117/__init__.py
create mode 100644 project_euler/problem_117/sol1.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 3d1bc967e4b5..4844841040d9 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -956,6 +956,8 @@
* [Sol1](project_euler/problem_115/sol1.py)
* Problem 116
* [Sol1](project_euler/problem_116/sol1.py)
+ * Problem 117
+ * [Sol1](project_euler/problem_117/sol1.py)
* Problem 119
* [Sol1](project_euler/problem_119/sol1.py)
* Problem 120
diff --git a/project_euler/problem_117/__init__.py b/project_euler/problem_117/__init__.py
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/project_euler/problem_117/sol1.py b/project_euler/problem_117/sol1.py
new file mode 100644
index 000000000000..e8214454fac5
--- /dev/null
+++ b/project_euler/problem_117/sol1.py
@@ -0,0 +1,53 @@
+"""
+Project Euler Problem 117: https://projecteuler.net/problem=117
+
+Using a combination of grey square tiles and oblong tiles chosen from:
+red tiles (measuring two units), green tiles (measuring three units),
+and blue tiles (measuring four units),
+it is possible to tile a row measuring five units in length
+in exactly fifteen different ways.
+
+ |grey|grey|grey|grey|grey| |red,red|grey|grey|grey|
+
+ |grey|red,red|grey|grey| |grey|grey|red,red|grey|
+
+ |grey|grey|grey|red,red| |red,red|red,red|grey|
+
+ |red,red|grey|red,red| |grey|red,red|red,red|
+
+ |green,green,green|grey|grey| |grey|green,green,green|grey|
+
+ |grey|grey|green,green,green| |red,red|green,green,green|
+
+ |green,green,green|red,red| |blue,blue,blue,blue|grey|
+
+ |grey|blue,blue,blue,blue|
+
+How many ways can a row measuring fifty units in length be tiled?
+
+NOTE: This is related to Problem 116 (https://projecteuler.net/problem=116).
+"""
+
+
+def solution(length: int = 50) -> int:
+ """
+ Returns the number of ways a row of the given length can be tiled
+
+ >>> solution(5)
+ 15
+ """
+
+ ways_number = [1] * (length + 1)
+
+ for row_length in range(length + 1):
+ for tile_length in range(2, 5):
+ for tile_start in range(row_length - tile_length + 1):
+ ways_number[row_length] += ways_number[
+ row_length - tile_start - tile_length
+ ]
+
+ return ways_number[length]
+
+
+if __name__ == "__main__":
+ print(f"{solution() = }")
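The triple loop above counts each tiling by the position and length of its leftmost oblong tile. The same numbers satisfy the linear recurrence f(n) = f(n-1) + f(n-2) + f(n-3) + f(n-4) with f(0) = f(1) = 1 (a row shorter than two units can only be grey), which gives an O(n) cross-check; a hedged sketch, not part of the patch:

# Cross-check via the equivalent linear recurrence: a row of length n
# ends in a grey square or in an oblong tile of length 2, 3 or 4.
def ways(length: int = 50) -> int:
    f = [1, 1]  # f(0) = f(1) = 1
    for n in range(2, length + 1):
        f.append(sum(f[max(0, n - 4) : n]))  # f(n-1) + ... + f(n-4)
    return f[length]

assert ways(5) == 15  # agrees with the doctest above
print(ways())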
From 41b633a841084acac5a640042d365c985e23b357 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Tue, 7 Mar 2023 00:10:39 +0100
Subject: [PATCH 010/808] [pre-commit.ci] pre-commit autoupdate (#8168)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/charliermarsh/ruff-pre-commit: v0.0.253 → v0.0.254](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.253...v0.0.254)
* Rename get_top_billionaires.py to get_top_billionaires.py.disabled
* updating DIRECTORY.md
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
.pre-commit-config.yaml | 2 +-
DIRECTORY.md | 1 -
...get_top_billionaires.py => get_top_billionaires.py.disabled} | 0
3 files changed, 1 insertion(+), 2 deletions(-)
rename web_programming/{get_top_billionaires.py => get_top_billionaires.py.disabled} (100%)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 9f27f985bb6a..329407265a5a 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -44,7 +44,7 @@ repos:
- --py311-plus
- repo: https://github.com/charliermarsh/ruff-pre-commit
- rev: v0.0.253
+ rev: v0.0.254
hooks:
- id: ruff
args:
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 4844841040d9..f25b0c6ff4e3 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -1167,7 +1167,6 @@
* [Get Amazon Product Data](web_programming/get_amazon_product_data.py)
* [Get Imdb Top 250 Movies Csv](web_programming/get_imdb_top_250_movies_csv.py)
* [Get Imdbtop](web_programming/get_imdbtop.py)
- * [Get Top Billionaires](web_programming/get_top_billionaires.py)
* [Get Top Hn Posts](web_programming/get_top_hn_posts.py)
* [Get User Tweets](web_programming/get_user_tweets.py)
* [Giphy](web_programming/giphy.py)
diff --git a/web_programming/get_top_billionaires.py b/web_programming/get_top_billionaires.py.disabled
similarity index 100%
rename from web_programming/get_top_billionaires.py
rename to web_programming/get_top_billionaires.py.disabled
From 9e28ecca28176254c39bcc791733589c6091422e Mon Sep 17 00:00:00 2001
From: Subhendu Dash <71781104+subhendudash02@users.noreply.github.com>
Date: Tue, 7 Mar 2023 21:46:25 +0530
Subject: [PATCH 011/808] Add circular convolution (#8158)
* add circular convolution
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* add type hint for __init__
* rounding off final values to 2 decimal places and minor changes
* add test case for unequal signals
* changes in list comprehension and enumeration
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
electronics/circular_convolution.py | 99 +++++++++++++++++++++++++++++
1 file changed, 99 insertions(+)
create mode 100644 electronics/circular_convolution.py
diff --git a/electronics/circular_convolution.py b/electronics/circular_convolution.py
new file mode 100644
index 000000000000..f2e35742e944
--- /dev/null
+++ b/electronics/circular_convolution.py
@@ -0,0 +1,99 @@
+# https://en.wikipedia.org/wiki/Circular_convolution
+
+"""
+Circular convolution, also known as cyclic convolution,
+is a special case of periodic convolution, which is the convolution of two
+periodic functions that have the same period. Periodic convolution arises,
+for example, in the context of the discrete-time Fourier transform (DTFT).
+In particular, the DTFT of the product of two discrete sequences is the periodic
+convolution of the DTFTs of the individual sequences. And each DTFT is a periodic
+summation of a continuous Fourier transform function.
+
+Source: https://en.wikipedia.org/wiki/Circular_convolution
+"""
+
+import doctest
+from collections import deque
+
+import numpy as np
+
+
+class CircularConvolution:
+ """
+ This class stores the first and second signal and performs the circular convolution
+ """
+
+ def __init__(self) -> None:
+ """
+ First signal and second signal are stored as 1-D array
+ """
+
+ self.first_signal = [2, 1, 2, -1]
+ self.second_signal = [1, 2, 3, 4]
+
+ def circular_convolution(self) -> list[float]:
+ """
+ This function performs the circular convolution of the first and second signal
+ using the matrix method
+
+ Usage:
+ >>> import circular_convolution as cc
+ >>> convolution = cc.CircularConvolution()
+ >>> convolution.circular_convolution()
+ [10, 10, 6, 14]
+
+ >>> convolution.first_signal = [0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6]
+ >>> convolution.second_signal = [0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.3, 1.5]
+ >>> convolution.circular_convolution()
+ [5.2, 6.0, 6.48, 6.64, 6.48, 6.0, 5.2, 4.08]
+
+ >>> convolution.first_signal = [-1, 1, 2, -2]
+ >>> convolution.second_signal = [0.5, 1, -1, 2, 0.75]
+ >>> convolution.circular_convolution()
+ [6.25, -3.0, 1.5, -2.0, -2.75]
+
+ >>> convolution.first_signal = [1, -1, 2, 3, -1]
+ >>> convolution.second_signal = [1, 2, 3]
+ >>> convolution.circular_convolution()
+ [8, -2, 3, 4, 11]
+
+ """
+
+ length_first_signal = len(self.first_signal)
+ length_second_signal = len(self.second_signal)
+
+ max_length = max(length_first_signal, length_second_signal)
+
+ # create a zero matrix of max_length x max_length
+ matrix = [[0] * max_length for i in range(max_length)]
+
+ # pads the smaller signal with zeros so both signals have the same length
+ if length_first_signal < length_second_signal:
+ self.first_signal += [0] * (max_length - length_first_signal)
+ elif length_first_signal > length_second_signal:
+ self.second_signal += [0] * (max_length - length_second_signal)
+
+ """
+ Fills the matrix in the following way assuming 'x' is the signal of length 4
+ [
+ [x[0], x[3], x[2], x[1]],
+ [x[1], x[0], x[3], x[2]],
+ [x[2], x[1], x[0], x[3]],
+ [x[3], x[2], x[1], x[0]]
+ ]
+ """
+ for i in range(max_length):
+ rotated_signal = deque(self.second_signal)
+ rotated_signal.rotate(i)
+ for j, item in enumerate(rotated_signal):
+ matrix[i][j] += item
+
+ # multiply the matrix with the first signal
+ final_signal = np.matmul(np.transpose(matrix), np.transpose(self.first_signal))
+
+ # rounding-off to two decimal places
+ return [round(i, 2) for i in final_signal]
+
+
+if __name__ == "__main__":
+ doctest.testmod()
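By the circular convolution theorem, the matrix construction above is equivalent to pointwise multiplication of DFTs, which makes for an independent cross-check. A hedged sketch using only NumPy; the signal values are taken from the first doctest:

# Cross-check via the circular convolution theorem:
# DFT both signals (zero-padded to equal length), multiply, invert.
import numpy as np

first_signal = [2, 1, 2, -1]
second_signal = [1, 2, 3, 4]
n = max(len(first_signal), len(second_signal))
spectrum = np.fft.fft(first_signal, n) * np.fft.fft(second_signal, n)
print(np.fft.ifft(spectrum).real.round(2).tolist())  # [10.0, 10.0, 6.0, 14.0]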
From f9cc25221c1521a0da9ee27d6a9bea1f14f4c986 Mon Sep 17 00:00:00 2001
From: Maxim Smolskiy
Date: Fri, 10 Mar 2023 12:48:05 +0300
Subject: [PATCH 012/808] Reduce the complexity of backtracking/word_search.py
(#8166)
* Lower the --max-complexity threshold in the file .flake8
---
backtracking/word_search.py | 112 +++++++++++++++++++-----------------
1 file changed, 60 insertions(+), 52 deletions(-)
diff --git a/backtracking/word_search.py b/backtracking/word_search.py
index 25d1436be36e..c9d52012b42b 100644
--- a/backtracking/word_search.py
+++ b/backtracking/word_search.py
@@ -33,6 +33,61 @@
"""
+def get_point_key(len_board: int, len_board_column: int, row: int, column: int) -> int:
+ """
+ Returns the hash key of matrix indexes.
+
+ >>> get_point_key(10, 20, 1, 0)
+ 200
+ """
+
+ return len_board * len_board_column * row + column
+
+
+def exits_word(
+ board: list[list[str]],
+ word: str,
+ row: int,
+ column: int,
+ word_index: int,
+ visited_points_set: set[int],
+) -> bool:
+ """
+ Return True if it's possible to search the word suffix
+ starting from the word_index.
+
+ >>> exits_word([["A"]], "B", 0, 0, 0, set())
+ False
+ """
+
+ if board[row][column] != word[word_index]:
+ return False
+
+ if word_index == len(word) - 1:
+ return True
+
+ traverts_directions = [(0, 1), (0, -1), (-1, 0), (1, 0)]
+ len_board = len(board)
+ len_board_column = len(board[0])
+ for direction in traverts_directions:
+ next_i = row + direction[0]
+ next_j = column + direction[1]
+ if not (0 <= next_i < len_board and 0 <= next_j < len_board_column):
+ continue
+
+ key = get_point_key(len_board, len_board_column, next_i, next_j)
+ if key in visited_points_set:
+ continue
+
+ visited_points_set.add(key)
+ if exits_word(board, word, next_i, next_j, word_index + 1, visited_points_set):
+ return True
+
+ visited_points_set.remove(key)
+
+ return False
+
+
def word_exists(board: list[list[str]], word: str) -> bool:
"""
>>> word_exists([["A","B","C","E"],["S","F","C","S"],["A","D","E","E"]], "ABCCED")
@@ -77,6 +132,8 @@ def word_exists(board: list[list[str]], word: str) -> bool:
board_error_message = (
"The board should be a non empty matrix of single chars strings."
)
+
+ len_board = len(board)
if not isinstance(board, list) or len(board) == 0:
raise ValueError(board_error_message)
@@ -94,61 +151,12 @@ def word_exists(board: list[list[str]], word: str) -> bool:
"The word parameter should be a string of length greater than 0."
)
- traverts_directions = [(0, 1), (0, -1), (-1, 0), (1, 0)]
- len_word = len(word)
- len_board = len(board)
len_board_column = len(board[0])
-
- # Returns the hash key of matrix indexes.
- def get_point_key(row: int, column: int) -> int:
- """
- >>> len_board=10
- >>> len_board_column=20
- >>> get_point_key(0, 0)
- 200
- """
-
- return len_board * len_board_column * row + column
-
- # Return True if it's possible to search the word suffix
- # starting from the word_index.
- def exits_word(
- row: int, column: int, word_index: int, visited_points_set: set[int]
- ) -> bool:
- """
- >>> board=[["A"]]
- >>> word="B"
- >>> exits_word(0, 0, 0, set())
- False
- """
-
- if board[row][column] != word[word_index]:
- return False
-
- if word_index == len_word - 1:
- return True
-
- for direction in traverts_directions:
- next_i = row + direction[0]
- next_j = column + direction[1]
- if not (0 <= next_i < len_board and 0 <= next_j < len_board_column):
- continue
-
- key = get_point_key(next_i, next_j)
- if key in visited_points_set:
- continue
-
- visited_points_set.add(key)
- if exits_word(next_i, next_j, word_index + 1, visited_points_set):
- return True
-
- visited_points_set.remove(key)
-
- return False
-
for i in range(len_board):
for j in range(len_board_column):
- if exits_word(i, j, 0, {get_point_key(i, j)}):
+ if exits_word(
+ board, word, i, j, 0, {get_point_key(len_board, len_board_column, i, j)}
+ ):
return True
return False
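Since the helpers now live at module level, the backtracking search can also be driven directly; a small sketch against the board from the doctests, assuming it is run from the repository root:

# Hedged usage sketch for the refactored word search.
from backtracking.word_search import word_exists

board = [["A", "B", "C", "E"], ["S", "F", "C", "S"], ["A", "D", "E", "E"]]
print(word_exists(board, "ABCCED"))  # True
print(word_exists(board, "ABCB"))  # False: the visited B cannot be reused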
From 8959211100ba7a612d42a6e7db4755303b78c5a7 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Mon, 13 Mar 2023 23:18:35 +0100
Subject: [PATCH 013/808] [pre-commit.ci] pre-commit autoupdate (#8177)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/charliermarsh/ruff-pre-commit: v0.0.254 → v0.0.255](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.254...v0.0.255)
- [github.com/pre-commit/mirrors-mypy: v1.0.1 → v1.1.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.0.1...v1.1.1)
- [github.com/codespell-project/codespell: v2.2.2 → v2.2.4](https://github.com/codespell-project/codespell/compare/v2.2.2...v2.2.4)
* updating DIRECTORY.md
* Fixes for new version of codespell
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
.pre-commit-config.yaml | 8 ++++----
DIRECTORY.md | 1 +
machine_learning/sequential_minimum_optimization.py | 2 +-
physics/lorentz_transformation_four_vector.py | 2 +-
4 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 329407265a5a..9aa965e42aec 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -44,7 +44,7 @@ repos:
- --py311-plus
- repo: https://github.com/charliermarsh/ruff-pre-commit
- rev: v0.0.254
+ rev: v0.0.255
hooks:
- id: ruff
args:
@@ -69,7 +69,7 @@ repos:
*flake8-plugins
- repo: https://github.com/pre-commit/mirrors-mypy
- rev: v1.0.1
+ rev: v1.1.1
hooks:
- id: mypy
args:
@@ -79,11 +79,11 @@ repos:
additional_dependencies: [types-requests]
- repo: https://github.com/codespell-project/codespell
- rev: v2.2.2
+ rev: v2.2.4
hooks:
- id: codespell
args:
- - --ignore-words-list=ans,crate,damon,fo,followings,hist,iff,mater,secant,som,sur,tim,zar
+ - --ignore-words-list=3rt,ans,crate,damon,fo,followings,hist,iff,kwanza,mater,secant,som,sur,tim,zar
exclude: |
(?x)^(
ciphers/prehistoric_men.txt |
diff --git a/DIRECTORY.md b/DIRECTORY.md
index f25b0c6ff4e3..b2daaaa9c47d 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -334,6 +334,7 @@
## Electronics
* [Builtin Voltage](electronics/builtin_voltage.py)
* [Carrier Concentration](electronics/carrier_concentration.py)
+ * [Circular Convolution](electronics/circular_convolution.py)
* [Coulombs Law](electronics/coulombs_law.py)
* [Electric Conductivity](electronics/electric_conductivity.py)
* [Electric Power](electronics/electric_power.py)
diff --git a/machine_learning/sequential_minimum_optimization.py b/machine_learning/sequential_minimum_optimization.py
index 37172c8e9bf6..b68bd52f4de9 100644
--- a/machine_learning/sequential_minimum_optimization.py
+++ b/machine_learning/sequential_minimum_optimization.py
@@ -569,7 +569,7 @@ def plot_partition_boundary(
"""
We can not get the optimum w of our kernel svm model which is different from linear
svm. For this reason, we generate randomly distributed points with high desity and
- prediced values of these points are calculated by using our tained model. Then we
+ predicted values of these points are calculated by using our trained model. Then we
 could use these predicted values to draw the contour map.
And this contour map can represent svm's partition boundary.
"""
diff --git a/physics/lorentz_transformation_four_vector.py b/physics/lorentz_transformation_four_vector.py
index 64be97245f29..f4fda4dff8cd 100644
--- a/physics/lorentz_transformation_four_vector.py
+++ b/physics/lorentz_transformation_four_vector.py
@@ -2,7 +2,7 @@
Lorentz transformations describe the transition between two inertial reference
frames F and F', each of which is moving in some direction with respect to the
other. This code only calculates Lorentz transformations for movement in the x
-direction with no spacial rotation (i.e., a Lorentz boost in the x direction).
+direction with no spatial rotation (i.e., a Lorentz boost in the x direction).
The Lorentz transformations are calculated here as linear transformations of
four-vectors [ct, x, y, z] described by Minkowski space. Note that t (time) is
multiplied by c (the speed of light) in the first entry of each four-vector.
From b797e437aeadcac50556d6606a547dc634cf5329 Mon Sep 17 00:00:00 2001
From: Andrey
Date: Tue, 14 Mar 2023 01:31:27 +0100
Subject: [PATCH 014/808] Add hashmap implementation (#7967)
---
data_structures/hashing/hash_map.py | 162 ++++++++++++++++++
.../hashing/tests/test_hash_map.py | 97 +++++++++++
2 files changed, 259 insertions(+)
create mode 100644 data_structures/hashing/hash_map.py
create mode 100644 data_structures/hashing/tests/test_hash_map.py
diff --git a/data_structures/hashing/hash_map.py b/data_structures/hashing/hash_map.py
new file mode 100644
index 000000000000..1dfcc8bbf906
--- /dev/null
+++ b/data_structures/hashing/hash_map.py
@@ -0,0 +1,162 @@
+"""
+Hash map with open addressing.
+
+https://en.wikipedia.org/wiki/Hash_table
+
+Another hash map implementation, with a good explanation.
+Modern Dictionaries by Raymond Hettinger
+https://www.youtube.com/watch?v=p33CVV29OG8
+"""
+from collections.abc import Iterator, MutableMapping
+from dataclasses import dataclass
+from typing import Generic, TypeVar
+
+KEY = TypeVar("KEY")
+VAL = TypeVar("VAL")
+
+
+@dataclass(frozen=True, slots=True)
+class _Item(Generic[KEY, VAL]):
+ key: KEY
+ val: VAL
+
+
+class _DeletedItem(_Item):
+ def __init__(self) -> None:
+ super().__init__(None, None)
+
+ def __bool__(self) -> bool:
+ return False
+
+
+_deleted = _DeletedItem()
+
+
+class HashMap(MutableMapping[KEY, VAL]):
+ """
+ Hash map with open addressing.
+ """
+
+ def __init__(
+ self, initial_block_size: int = 8, capacity_factor: float = 0.75
+ ) -> None:
+ self._initial_block_size = initial_block_size
+ self._buckets: list[_Item | None] = [None] * initial_block_size
+ assert 0.0 < capacity_factor < 1.0
+ self._capacity_factor = capacity_factor
+ self._len = 0
+
+ def _get_bucket_index(self, key: KEY) -> int:
+ return hash(key) % len(self._buckets)
+
+ def _get_next_ind(self, ind: int) -> int:
+ """
+ Get next index.
+
+ Implements linear open addressing.
+ """
+ return (ind + 1) % len(self._buckets)
+
+ def _try_set(self, ind: int, key: KEY, val: VAL) -> bool:
+ """
+ Try to add value to the bucket.
+
+ If the bucket is empty or already holds the same key, performs the insert and returns True.
+
+ If the bucket holds another key or a deleted placeholder,
+ the next bucket in the probe sequence needs to be checked.
+ """
+ stored = self._buckets[ind]
+ if not stored:
+ self._buckets[ind] = _Item(key, val)
+ self._len += 1
+ return True
+ elif stored.key == key:
+ self._buckets[ind] = _Item(key, val)
+ return True
+ else:
+ return False
+
+ def _is_full(self) -> bool:
+ """
+ Return true if we have reached the safe capacity.
+
+ Beyond it, the number of buckets must grow to avoid collisions.
+ """
+ limit = len(self._buckets) * self._capacity_factor
+ return len(self) >= int(limit)
+
+ def _is_sparse(self) -> bool:
+ """Return true if we need twice fewer buckets when we have now."""
+ if len(self._buckets) <= self._initial_block_size:
+ return False
+ limit = len(self._buckets) * self._capacity_factor / 2
+ return len(self) < limit
+
+ def _resize(self, new_size: int) -> None:
+ old_buckets = self._buckets
+ self._buckets = [None] * new_size
+ self._len = 0
+ for item in old_buckets:
+ if item:
+ self._add_item(item.key, item.val)
+
+ def _size_up(self) -> None:
+ self._resize(len(self._buckets) * 2)
+
+ def _size_down(self) -> None:
+ self._resize(len(self._buckets) // 2)
+
+ def _iterate_buckets(self, key: KEY) -> Iterator[int]:
+ ind = self._get_bucket_index(key)
+ for _ in range(len(self._buckets)):
+ yield ind
+ ind = self._get_next_ind(ind)
+
+ def _add_item(self, key: KEY, val: VAL) -> None:
+ for ind in self._iterate_buckets(key):
+ if self._try_set(ind, key, val):
+ break
+
+ def __setitem__(self, key: KEY, val: VAL) -> None:
+ if self._is_full():
+ self._size_up()
+
+ self._add_item(key, val)
+
+ def __delitem__(self, key: KEY) -> None:
+ for ind in self._iterate_buckets(key):
+ item = self._buckets[ind]
+ if item is None:
+ raise KeyError(key)
+ if item is _deleted:
+ continue
+ if item.key == key:
+ self._buckets[ind] = _deleted
+ self._len -= 1
+ break
+ if self._is_sparse():
+ self._size_down()
+
+ def __getitem__(self, key: KEY) -> VAL:
+ for ind in self._iterate_buckets(key):
+ item = self._buckets[ind]
+ if item is None:
+ break
+ if item is _deleted:
+ continue
+ if item.key == key:
+ return item.val
+ raise KeyError(key)
+
+ def __len__(self) -> int:
+ return self._len
+
+ def __iter__(self) -> Iterator[KEY]:
+ yield from (item.key for item in self._buckets if item)
+
+ def __repr__(self) -> str:
+ val_string = ", ".join(
+ f"{item.key}: {item.val}" for item in self._buckets if item
+ )
+ return f"HashMap({val_string})"
diff --git a/data_structures/hashing/tests/test_hash_map.py b/data_structures/hashing/tests/test_hash_map.py
new file mode 100644
index 000000000000..929e67311996
--- /dev/null
+++ b/data_structures/hashing/tests/test_hash_map.py
@@ -0,0 +1,97 @@
+from operator import delitem, getitem, setitem
+
+import pytest
+
+from data_structures.hashing.hash_map import HashMap
+
+
+def _get(k):
+ return getitem, k
+
+
+def _set(k, v):
+ return setitem, k, v
+
+
+def _del(k):
+ return delitem, k
+
+
+def _run_operation(obj, fun, *args):
+ try:
+ return fun(obj, *args), None
+ except Exception as e:
+ return None, e
+
+
+_add_items = (
+ _set("key_a", "val_a"),
+ _set("key_b", "val_b"),
+)
+
+_overwrite_items = [
+ _set("key_a", "val_a"),
+ _set("key_a", "val_b"),
+]
+
+_delete_items = [
+ _set("key_a", "val_a"),
+ _set("key_b", "val_b"),
+ _del("key_a"),
+ _del("key_b"),
+ _set("key_a", "val_a"),
+ _del("key_a"),
+]
+
+_access_absent_items = [
+ _get("key_a"),
+ _del("key_a"),
+ _set("key_a", "val_a"),
+ _del("key_a"),
+ _del("key_a"),
+ _get("key_a"),
+]
+
+_add_with_resize_up = [
+ *[_set(x, x) for x in range(5)], # guaranteed upsize
+]
+
+_add_with_resize_down = [
+ *[_set(x, x) for x in range(5)], # guaranteed upsize
+ *[_del(x) for x in range(5)],
+ _set("key_a", "val_b"),
+]
+
+
+@pytest.mark.parametrize(
+ "operations",
+ (
+ pytest.param(_add_items, id="add items"),
+ pytest.param(_overwrite_items, id="overwrite items"),
+ pytest.param(_delete_items, id="delete items"),
+ pytest.param(_access_absent_items, id="access absent items"),
+ pytest.param(_add_with_resize_up, id="add with resize up"),
+ pytest.param(_add_with_resize_down, id="add with resize down"),
+ ),
+)
+def test_hash_map_is_the_same_as_dict(operations):
+ my = HashMap(initial_block_size=4)
+ py = {}
+ for fun, *args in operations:
+ my_res, my_exc = _run_operation(my, fun, *args)
+ py_res, py_exc = _run_operation(py, fun, *args)
+ assert my_res == py_res
+ assert str(my_exc) == str(py_exc)
+ assert set(py) == set(my)
+ assert len(py) == len(my)
+ assert set(my.items()) == set(py.items())
+
+
+ def test_no_new_methods_were_added_to_api():
+ def is_public(name: str) -> bool:
+ return not name.startswith("_")
+
+ dict_public_names = {name for name in dir({}) if is_public(name)}
+ hash_public_names = {name for name in dir(HashMap()) if is_public(name)}
+
+ assert dict_public_names > hash_public_names
From 9701e459e884e883fc720277452ec592eae305d0 Mon Sep 17 00:00:00 2001
From: Maxim Smolskiy
Date: Tue, 14 Mar 2023 08:39:36 +0300
Subject: [PATCH 015/808] Add Project Euler problem 100 solution 1 (#8175)
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
DIRECTORY.md | 2 ++
project_euler/problem_100/__init__.py | 0
project_euler/problem_100/sol1.py | 48 +++++++++++++++++++++++++++
3 files changed, 50 insertions(+)
create mode 100644 project_euler/problem_100/__init__.py
create mode 100644 project_euler/problem_100/sol1.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index b2daaaa9c47d..e1ce44eedce1 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -937,6 +937,8 @@
* [Sol1](project_euler/problem_097/sol1.py)
* Problem 099
* [Sol1](project_euler/problem_099/sol1.py)
+ * Problem 100
+ * [Sol1](project_euler/problem_100/sol1.py)
* Problem 101
* [Sol1](project_euler/problem_101/sol1.py)
* Problem 102
diff --git a/project_euler/problem_100/__init__.py b/project_euler/problem_100/__init__.py
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/project_euler/problem_100/sol1.py b/project_euler/problem_100/sol1.py
new file mode 100644
index 000000000000..367378e7ab17
--- /dev/null
+++ b/project_euler/problem_100/sol1.py
@@ -0,0 +1,48 @@
+"""
+Project Euler Problem 100: https://projecteuler.net/problem=100
+
+If a box contains twenty-one coloured discs, composed of fifteen blue discs and
+six red discs, and two discs were taken at random, it can be seen that
+the probability of taking two blue discs, P(BB) = (15/21) x (14/20) = 1/2.
+
+The next such arrangement, for which there is exactly 50% chance of taking two blue
+discs at random, is a box containing eighty-five blue discs and thirty-five red discs.
+
+By finding the first arrangement to contain over 10^12 = 1,000,000,000,000 discs
+in total, determine the number of blue discs that the box would contain.
+"""
+
+
+def solution(min_total: int = 10**12) -> int:
+ """
+ Returns the number of blue discs for the first arrangement to contain
+ over min_total discs in total.
+
+ >>> solution(2)
+ 3
+
+ >>> solution(4)
+ 15
+
+ >>> solution(21)
+ 85
+ """
+
+ prev_numerator = 1
+ prev_denominator = 0
+
+ numerator = 1
+ denominator = 1
+
+ while numerator <= 2 * min_total - 1:
+ prev_numerator += 2 * numerator
+ numerator += 2 * prev_numerator
+
+ prev_denominator += 2 * denominator
+ denominator += 2 * prev_denominator
+
+ return (denominator + 1) // 2
+
+
+if __name__ == "__main__":
+ print(f"{solution() = }")
From 47b3c729826e864fb1d0a30b03cf95fa2adae591 Mon Sep 17 00:00:00 2001
From: David Leal
Date: Mon, 13 Mar 2023 23:46:52 -0600
Subject: [PATCH 016/808] docs: add the other/miscellaneous form (#8163)
Co-authored-by: Christian Clauss
Co-authored-by: Dhruv Manilawala
---
.github/ISSUE_TEMPLATE/other.yml | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
create mode 100644 .github/ISSUE_TEMPLATE/other.yml
diff --git a/.github/ISSUE_TEMPLATE/other.yml b/.github/ISSUE_TEMPLATE/other.yml
new file mode 100644
index 000000000000..44d6ff541506
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/other.yml
@@ -0,0 +1,19 @@
+name: Other
+description: Use this for any other issues. PLEASE do not create blank issues
+labels: ["awaiting triage"]
+body:
+ - type: textarea
+ id: issuedescription
+ attributes:
+ label: What would you like to share?
+ description: Provide a clear and concise explanation of your issue.
+ validations:
+ required: true
+
+ - type: textarea
+ id: extrainfo
+ attributes:
+ label: Additional information
+ description: Is there anything else we should know about this issue?
+ validations:
+ required: false
From adc3ccdabede375df5cff62c3c8f06d8a191a803 Mon Sep 17 00:00:00 2001
From: Maxim Smolskiy
Date: Wed, 15 Mar 2023 15:56:03 +0300
Subject: [PATCH 017/808] Add Project Euler problem 131 solution 1 (#8179)
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
DIRECTORY.md | 5 +++
project_euler/problem_131/__init__.py | 0
project_euler/problem_131/sol1.py | 56 +++++++++++++++++++++++++++
3 files changed, 61 insertions(+)
create mode 100644 project_euler/problem_131/__init__.py
create mode 100644 project_euler/problem_131/sol1.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index e1ce44eedce1..1d3177801a2c 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -196,11 +196,14 @@
* [Disjoint Set](data_structures/disjoint_set/disjoint_set.py)
* Hashing
* [Double Hash](data_structures/hashing/double_hash.py)
+ * [Hash Map](data_structures/hashing/hash_map.py)
* [Hash Table](data_structures/hashing/hash_table.py)
* [Hash Table With Linked List](data_structures/hashing/hash_table_with_linked_list.py)
* Number Theory
* [Prime Numbers](data_structures/hashing/number_theory/prime_numbers.py)
* [Quadratic Probing](data_structures/hashing/quadratic_probing.py)
+ * Tests
+ * [Test Hash Map](data_structures/hashing/tests/test_hash_map.py)
* Heap
* [Binomial Heap](data_structures/heap/binomial_heap.py)
* [Heap](data_structures/heap/heap.py)
@@ -973,6 +976,8 @@
* [Sol1](project_euler/problem_125/sol1.py)
* Problem 129
* [Sol1](project_euler/problem_129/sol1.py)
+ * Problem 131
+ * [Sol1](project_euler/problem_131/sol1.py)
* Problem 135
* [Sol1](project_euler/problem_135/sol1.py)
* Problem 144
diff --git a/project_euler/problem_131/__init__.py b/project_euler/problem_131/__init__.py
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/project_euler/problem_131/sol1.py b/project_euler/problem_131/sol1.py
new file mode 100644
index 000000000000..f5302aac8644
--- /dev/null
+++ b/project_euler/problem_131/sol1.py
@@ -0,0 +1,56 @@
+"""
+Project Euler Problem 131: https://projecteuler.net/problem=131
+
+There are some prime values, p, for which there exists a positive integer, n,
+such that the expression n^3 + n^2p is a perfect cube.
+
+For example, when p = 19, 8^3 + 8^2 x 19 = 12^3.
+
+What is perhaps most surprising is that for each prime with this property
+the value of n is unique, and there are only four such primes below one-hundred.
+
+How many primes below one million have this remarkable property?
+"""
+
+from math import isqrt
+
+
+def is_prime(number: int) -> bool:
+ """
+ Determines whether the given number (assumed >= 2) is prime
+
+ >>> is_prime(3)
+ True
+
+ >>> is_prime(4)
+ False
+ """
+
+ for divisor in range(2, isqrt(number) + 1):
+ if number % divisor == 0:
+ return False
+ return True
+
+
+def solution(max_prime: int = 10**6) -> int:
+ """
+ Returns the number of primes below max_prime with this property.
+
+ >>> solution(100)
+ 4
+ """
+
+ primes_count = 0
+ cube_index = 1
+ prime_candidate = 7
+ while prime_candidate < max_prime:
+ primes_count += is_prime(prime_candidate)
+
+ cube_index += 1
+ prime_candidate += 6 * cube_index
+
+ return primes_count
+
+
+if __name__ == "__main__":
+ print(f"{solution() = }")
From c96241b5a5052af466894ef90c7a7c749ba872eb Mon Sep 17 00:00:00 2001
From: Christian Clauss
Date: Wed, 15 Mar 2023 13:58:25 +0100
Subject: [PATCH 018/808] Replace bandit, flake8, isort, and pyupgrade with
ruff (#8178)
* Replace bandit, flake8, isort, and pyupgrade with ruff
* Comment on ruff rules
* updating DIRECTORY.md
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
.flake8 | 10 ---
.github/workflows/ruff.yml | 16 ++++
.pre-commit-config.yaml | 78 +++++--------------
arithmetic_analysis/newton_raphson.py | 2 +-
arithmetic_analysis/newton_raphson_new.py | 2 +-
data_structures/heap/heap_generic.py | 1 -
dynamic_programming/min_distance_up_bottom.py | 9 +--
dynamic_programming/minimum_tickets_cost.py | 4 +-
dynamic_programming/word_break.py | 4 +-
hashes/sha1.py | 12 +--
machine_learning/support_vector_machines.py | 4 +-
maths/eulers_totient.py | 34 ++++----
maths/fibonacci.py | 4 +-
maths/pythagoras.py | 6 +-
other/quine.py | 1 +
project_euler/problem_075/sol1.py | 3 +-
pyproject.toml | 59 ++++++++++++--
sorts/external_sort.py | 2 +-
strings/check_anagrams.py | 3 +-
strings/word_occurrence.py | 3 +-
web_programming/currency_converter.py | 2 +-
21 files changed, 127 insertions(+), 132 deletions(-)
delete mode 100644 .flake8
create mode 100644 .github/workflows/ruff.yml
diff --git a/.flake8 b/.flake8
deleted file mode 100644
index b68ee8533a61..000000000000
--- a/.flake8
+++ /dev/null
@@ -1,10 +0,0 @@
-[flake8]
-max-line-length = 88
-# max-complexity should be 10
-max-complexity = 19
-extend-ignore =
- # Formatting style for `black`
- # E203 is whitespace before ':'
- E203,
- # W503 is line break occurred before a binary operator
- W503
diff --git a/.github/workflows/ruff.yml b/.github/workflows/ruff.yml
new file mode 100644
index 000000000000..ca2d5be47327
--- /dev/null
+++ b/.github/workflows/ruff.yml
@@ -0,0 +1,16 @@
+# https://beta.ruff.rs
+name: ruff
+on:
+ push:
+ branches:
+ - master
+ pull_request:
+ branches:
+ - master
+jobs:
+ ruff:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v3
+ - run: pip install --user ruff
+ - run: ruff --format=github .
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 9aa965e42aec..82aad6c65a9b 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -3,6 +3,7 @@ repos:
rev: v4.4.0
hooks:
- id: check-executables-have-shebangs
+ - id: check-toml
- id: check-yaml
- id: end-of-file-fixer
types: [python]
@@ -14,60 +15,41 @@ repos:
hooks:
- id: auto-walrus
+ - repo: https://github.com/charliermarsh/ruff-pre-commit
+ rev: v0.0.255
+ hooks:
+ - id: ruff
+
- repo: https://github.com/psf/black
rev: 23.1.0
hooks:
- id: black
- - repo: https://github.com/PyCQA/isort
- rev: 5.12.0
+ - repo: https://github.com/codespell-project/codespell
+ rev: v2.2.4
hooks:
- - id: isort
- args:
- - --profile=black
+ - id: codespell
+ additional_dependencies:
+ - tomli
- repo: https://github.com/tox-dev/pyproject-fmt
rev: "0.9.2"
hooks:
- id: pyproject-fmt
+ - repo: local
+ hooks:
+ - id: validate-filenames
+ name: Validate filenames
+ entry: ./scripts/validate_filenames.py
+ language: script
+ pass_filenames: false
+
- repo: https://github.com/abravalheri/validate-pyproject
rev: v0.12.1
hooks:
- id: validate-pyproject
- - repo: https://github.com/asottile/pyupgrade
- rev: v3.3.1
- hooks:
- - id: pyupgrade
- args:
- - --py311-plus
-
- - repo: https://github.com/charliermarsh/ruff-pre-commit
- rev: v0.0.255
- hooks:
- - id: ruff
- args:
- - --ignore=E741
-
- - repo: https://github.com/PyCQA/flake8
- rev: 6.0.0
- hooks:
- - id: flake8 # See .flake8 for args
- additional_dependencies: &flake8-plugins
- - flake8-bugbear
- - flake8-builtins
- # - flake8-broken-line
- - flake8-comprehensions
- - pep8-naming
-
- - repo: https://github.com/asottile/yesqa
- rev: v1.4.0
- hooks:
- - id: yesqa
- additional_dependencies:
- *flake8-plugins
-
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.1.1
hooks:
@@ -77,25 +59,3 @@ repos:
- --install-types # See mirrors-mypy README.md
- --non-interactive
additional_dependencies: [types-requests]
-
- - repo: https://github.com/codespell-project/codespell
- rev: v2.2.4
- hooks:
- - id: codespell
- args:
- - --ignore-words-list=3rt,ans,crate,damon,fo,followings,hist,iff,kwanza,mater,secant,som,sur,tim,zar
- exclude: |
- (?x)^(
- ciphers/prehistoric_men.txt |
- strings/dictionary.txt |
- strings/words.txt |
- project_euler/problem_022/p022_names.txt
- )$
-
- - repo: local
- hooks:
- - id: validate-filenames
- name: Validate filenames
- entry: ./scripts/validate_filenames.py
- language: script
- pass_filenames: false
diff --git a/arithmetic_analysis/newton_raphson.py b/arithmetic_analysis/newton_raphson.py
index 86ff9d350dde..aee2f07e5743 100644
--- a/arithmetic_analysis/newton_raphson.py
+++ b/arithmetic_analysis/newton_raphson.py
@@ -5,7 +5,7 @@
from __future__ import annotations
from decimal import Decimal
-from math import * # noqa: F401, F403
+from math import * # noqa: F403
from sympy import diff
diff --git a/arithmetic_analysis/newton_raphson_new.py b/arithmetic_analysis/newton_raphson_new.py
index 472cb5b5ac54..f61841e2eb84 100644
--- a/arithmetic_analysis/newton_raphson_new.py
+++ b/arithmetic_analysis/newton_raphson_new.py
@@ -8,7 +8,7 @@
# Newton's Method - https://en.wikipedia.org/wiki/Newton's_method
from sympy import diff, lambdify, symbols
-from sympy.functions import * # noqa: F401, F403
+from sympy.functions import * # noqa: F403
def newton_raphson(
diff --git a/data_structures/heap/heap_generic.py b/data_structures/heap/heap_generic.py
index b4d7019f41f9..ee92149e25a9 100644
--- a/data_structures/heap/heap_generic.py
+++ b/data_structures/heap/heap_generic.py
@@ -166,7 +166,6 @@ def test_heap() -> None:
>>> h.get_top()
[9, -40]
"""
- pass
if __name__ == "__main__":
diff --git a/dynamic_programming/min_distance_up_bottom.py b/dynamic_programming/min_distance_up_bottom.py
index 49c361f24d45..4870c7ef4499 100644
--- a/dynamic_programming/min_distance_up_bottom.py
+++ b/dynamic_programming/min_distance_up_bottom.py
@@ -6,13 +6,13 @@
The aim is to demonstrate the top-down approach to solving the task.
The implementation was tested on the
leetcode: https://leetcode.com/problems/edit-distance/
-"""
-"""
Levenshtein distance
Dynamic Programming: up -> down.
"""
+import functools
+
def min_distance_up_bottom(word1: str, word2: str) -> int:
"""
@@ -25,13 +25,10 @@ def min_distance_up_bottom(word1: str, word2: str) -> int:
>>> min_distance_up_bottom("zooicoarchaeologist", "zoologist")
10
"""
-
- from functools import lru_cache
-
len_word1 = len(word1)
len_word2 = len(word2)
- @lru_cache(maxsize=None)
+ @functools.cache
def min_distance(index1: int, index2: int) -> int:
# if first word index is overflow - delete all from the second word
if index1 >= len_word1:
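Several files in this patch replace `@lru_cache(maxsize=None)` with `@functools.cache`, which has been an exact alias for it since Python 3.9. A minimal illustration with a hypothetical function (not from the patch):

```python
import functools

@functools.cache  # equivalent to @lru_cache(maxsize=None)
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040, each subproblem computed once
```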
diff --git a/dynamic_programming/minimum_tickets_cost.py b/dynamic_programming/minimum_tickets_cost.py
index d07056d9217f..6790c21f16ed 100644
--- a/dynamic_programming/minimum_tickets_cost.py
+++ b/dynamic_programming/minimum_tickets_cost.py
@@ -22,7 +22,7 @@
Dynamic Programming: up -> down.
"""
-from functools import lru_cache
+import functools
def mincost_tickets(days: list[int], costs: list[int]) -> int:
@@ -106,7 +106,7 @@ def mincost_tickets(days: list[int], costs: list[int]) -> int:
days_set = set(days)
- @lru_cache(maxsize=None)
+ @functools.cache
def dynamic_programming(index: int) -> int:
if index > 365:
return 0
diff --git a/dynamic_programming/word_break.py b/dynamic_programming/word_break.py
index 642ea0edf40d..4d7ac869080c 100644
--- a/dynamic_programming/word_break.py
+++ b/dynamic_programming/word_break.py
@@ -20,7 +20,7 @@
Space: O(n)
"""
-from functools import lru_cache
+import functools
from typing import Any
@@ -80,7 +80,7 @@ def word_break(string: str, words: list[str]) -> bool:
len_string = len(string)
# Dynamic programming method
- @lru_cache(maxsize=None)
+ @functools.cache
def is_breakable(index: int) -> bool:
"""
>>> string = 'a'
diff --git a/hashes/sha1.py b/hashes/sha1.py
index b19e0cfafea3..9f0437f208fa 100644
--- a/hashes/sha1.py
+++ b/hashes/sha1.py
@@ -26,7 +26,6 @@
import argparse
import hashlib # hashlib is only used inside the Test class
import struct
-import unittest
class SHA1Hash:
@@ -128,14 +127,9 @@ def final_hash(self):
return "%08x%08x%08x%08x%08x" % tuple(self.h)
-class SHA1HashTest(unittest.TestCase):
- """
- Test class for the SHA1Hash class. Inherits the TestCase class from unittest
- """
-
- def testMatchHashes(self): # noqa: N802
- msg = bytes("Test String", "utf-8")
- self.assertEqual(SHA1Hash(msg).final_hash(), hashlib.sha1(msg).hexdigest())
+def test_sha1_hash():
+ msg = b"Test String"
+ assert SHA1Hash(msg).final_hash() == hashlib.sha1(msg).hexdigest() # noqa: S324
def main():
diff --git a/machine_learning/support_vector_machines.py b/machine_learning/support_vector_machines.py
index caec10175c50..df854cc850b1 100644
--- a/machine_learning/support_vector_machines.py
+++ b/machine_learning/support_vector_machines.py
@@ -56,7 +56,7 @@ def __init__(
*,
regularization: float = np.inf,
kernel: str = "linear",
- gamma: float = 0,
+ gamma: float = 0.0,
) -> None:
self.regularization = regularization
self.gamma = gamma
@@ -65,7 +65,7 @@ def __init__(
elif kernel == "rbf":
if self.gamma == 0:
raise ValueError("rbf kernel requires gamma")
- if not (isinstance(self.gamma, float) or isinstance(self.gamma, int)):
+ if not isinstance(self.gamma, (float, int)):
raise ValueError("gamma must be float or int")
if not self.gamma > 0:
raise ValueError("gamma must be > 0")
diff --git a/maths/eulers_totient.py b/maths/eulers_totient.py
index 6a35e69bde0b..a156647037b4 100644
--- a/maths/eulers_totient.py
+++ b/maths/eulers_totient.py
@@ -1,5 +1,20 @@
# Euler's totient function finds the number of integers from 1 to n that are relatively prime to n
def totient(n: int) -> list:
+ """
+ >>> n = 10
+ >>> totient_calculation = totient(n)
+ >>> for i in range(1, n):
+ ... print(f"{i} has {totient_calculation[i]} relative primes.")
+ 1 has 0 relative primes.
+ 2 has 1 relative primes.
+ 3 has 2 relative primes.
+ 4 has 2 relative primes.
+ 5 has 4 relative primes.
+ 6 has 2 relative primes.
+ 7 has 6 relative primes.
+ 8 has 4 relative primes.
+ 9 has 6 relative primes.
+ """
is_prime = [True for i in range(n + 1)]
totients = [i - 1 for i in range(n + 1)]
primes = []
@@ -20,25 +35,6 @@ def totient(n: int) -> list:
return totients
-def test_totient() -> None:
- """
- >>> n = 10
- >>> totient_calculation = totient(n)
- >>> for i in range(1, n):
- ... print(f"{i} has {totient_calculation[i]} relative primes.")
- 1 has 0 relative primes.
- 2 has 1 relative primes.
- 3 has 2 relative primes.
- 4 has 2 relative primes.
- 5 has 4 relative primes.
- 6 has 2 relative primes.
- 7 has 6 relative primes.
- 8 has 4 relative primes.
- 9 has 6 relative primes.
- """
- pass
-
-
if __name__ == "__main__":
import doctest
diff --git a/maths/fibonacci.py b/maths/fibonacci.py
index d58c9fc68c67..e810add69dc7 100644
--- a/maths/fibonacci.py
+++ b/maths/fibonacci.py
@@ -16,7 +16,7 @@
fib_binet runtime: 0.0174 ms
"""
-from functools import lru_cache
+import functools
from math import sqrt
from time import time
@@ -110,7 +110,7 @@ def fib_recursive_cached(n: int) -> list[int]:
Exception: n is negative
"""
- @lru_cache(maxsize=None)
+ @functools.cache
def fib_recursive_term(i: int) -> int:
"""
Calculates the i-th (0-indexed) Fibonacci number using recursion
diff --git a/maths/pythagoras.py b/maths/pythagoras.py
index 69a17731a0fd..7770e981d44d 100644
--- a/maths/pythagoras.py
+++ b/maths/pythagoras.py
@@ -14,17 +14,13 @@ def __repr__(self) -> str:
def distance(a: Point, b: Point) -> float:
- return math.sqrt(abs((b.x - a.x) ** 2 + (b.y - a.y) ** 2 + (b.z - a.z) ** 2))
-
-
-def test_distance() -> None:
"""
>>> point1 = Point(2, -1, 7)
>>> point2 = Point(1, -3, 5)
>>> print(f"Distance from {point1} to {point2} is {distance(point1, point2)}")
Distance from Point(2, -1, 7) to Point(1, -3, 5) is 3.0
"""
- pass
+ return math.sqrt(abs((b.x - a.x) ** 2 + (b.y - a.y) ** 2 + (b.z - a.z) ** 2))
if __name__ == "__main__":
diff --git a/other/quine.py b/other/quine.py
index 01e03bbb02cb..500a351d38dc 100644
--- a/other/quine.py
+++ b/other/quine.py
@@ -1,4 +1,5 @@
#!/bin/python3
+# ruff: noqa
"""
Quine:
diff --git a/project_euler/problem_075/sol1.py b/project_euler/problem_075/sol1.py
index b57604d76a86..0ccaf5dee7ec 100644
--- a/project_euler/problem_075/sol1.py
+++ b/project_euler/problem_075/sol1.py
@@ -29,7 +29,6 @@
from collections import defaultdict
from math import gcd
-from typing import DefaultDict
def solution(limit: int = 1500000) -> int:
@@ -43,7 +42,7 @@ def solution(limit: int = 1500000) -> int:
>>> solution(50000)
5502
"""
- frequencies: DefaultDict = defaultdict(int)
+ frequencies: defaultdict = defaultdict(int)
euclid_m = 2
while 2 * euclid_m * (euclid_m + 1) <= limit:
for euclid_n in range((euclid_m % 2) + 1, euclid_m, 2):
diff --git a/pyproject.toml b/pyproject.toml
index 5f9b1aa06c0e..6552101d2faa 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -12,8 +12,57 @@ addopts = [
omit = [".env/*"]
sort = "Cover"
-#[report]
-#sort = Cover
-#omit =
-# .env/*
-# backtracking/*
+[tool.codespell]
+ignore-words-list = "3rt,ans,crate,damon,fo,followings,hist,iff,kwanza,mater,secant,som,sur,tim,zar"
+skip = "./.*,*.json,ciphers/prehistoric_men.txt,project_euler/problem_022/p022_names.txt,pyproject.toml,strings/dictionary.txt,strings/words.txt"
+
+[tool.ruff]
+ignore = [ # `ruff rule S101` for a description of that rule
+ "B904", # B904: Within an `except` clause, raise exceptions with `raise ... from err`
+ "B905", # B905: `zip()` without an explicit `strict=` parameter
+ "E741", # E741: Ambiguous variable name 'l'
+ "G004", # G004 Logging statement uses f-string
+ "N999", # N999: Invalid module name
+ "PLC1901", # PLC1901: `{}` can be simplified to `{}` as an empty string is falsey
+ "PLR2004", # PLR2004: Magic value used in comparison
+ "PLR5501", # PLR5501: Consider using `elif` instead of `else`
+ "PLW0120", # PLW0120: `else` clause on loop without a `break` statement
+ "PLW060", # PLW060: Using global for `{name}` but no assignment is done -- DO NOT FIX
+ "PLW2901", # PLW2901: Redefined loop variable
+ "RUF00", # RUF00: Ambiguous unicode character -- DO NOT FIX
+ "RUF100", # RUF100: Unused `noqa` directive
+ "S101", # S101: Use of `assert` detected -- DO NOT FIX
+ "S105", # S105: Possible hardcoded password: 'password'
+ "S113", # S113: Probable use of requests call without timeout
+ "UP038", # UP038: Use `X | Y` in `{}` call instead of `(X, Y)` -- DO NOT FIX
+]
+select = [ # https://beta.ruff.rs/docs/rules
+ "A", # A: builtins
+ "B", # B: bugbear
+ "C40", # C40: comprehensions
+ "C90", # C90: mccabe code complexity
+ "E", # E: pycodestyle errors
+ "F", # F: pyflakes
+ "G", # G: logging format
+ "I", # I: isort
+ "N", # N: pep8 naming
+ "PL", # PL: pylint
+ "PIE", # PIE: pie
+ "PYI", # PYI: type hinting stub files
+ "RUF", # RUF: ruff
+ "S", # S: bandit
+ "TID", # TID: tidy imports
+ "UP", # UP: pyupgrade
+ "W", # W: pycodestyle warnings
+ "YTT", # YTT: year 2020
+]
+target-version = "py311"
+
+[tool.ruff.mccabe] # DO NOT INCREASE THIS VALUE
+max-complexity = 20 # default: 10
+
+[tool.ruff.pylint] # DO NOT INCREASE THESE VALUES
+max-args = 10 # default: 5
+max-branches = 20 # default: 12
+max-returns = 8 # default: 6
+max-statements = 88 # default: 50
diff --git a/sorts/external_sort.py b/sorts/external_sort.py
index 7af7dc0a609d..e6b0d47f79f5 100644
--- a/sorts/external_sort.py
+++ b/sorts/external_sort.py
@@ -104,7 +104,7 @@ def get_file_handles(self, filenames, buffer_size):
files = {}
for i in range(len(filenames)):
- files[i] = open(filenames[i], "r", buffer_size)
+ files[i] = open(filenames[i], "r", buffer_size) # noqa: UP015
return files
diff --git a/strings/check_anagrams.py b/strings/check_anagrams.py
index a364b98212ad..9dcdffcfb921 100644
--- a/strings/check_anagrams.py
+++ b/strings/check_anagrams.py
@@ -2,7 +2,6 @@
wiki: https://en.wikipedia.org/wiki/Anagram
"""
from collections import defaultdict
-from typing import DefaultDict
def check_anagrams(first_str: str, second_str: str) -> bool:
@@ -30,7 +29,7 @@ def check_anagrams(first_str: str, second_str: str) -> bool:
return False
# Default values for count should be 0
- count: DefaultDict[str, int] = defaultdict(int)
+ count: defaultdict[str, int] = defaultdict(int)
# For each character in input strings,
# increment count in the corresponding
diff --git a/strings/word_occurrence.py b/strings/word_occurrence.py
index 8260620c38a4..5a18ebf771e4 100644
--- a/strings/word_occurrence.py
+++ b/strings/word_occurrence.py
@@ -1,7 +1,6 @@
# Created by sarathkaul on 17/11/19
# Modified by Arkadip Bhattacharya(@darkmatter18) on 20/04/2020
from collections import defaultdict
-from typing import DefaultDict
def word_occurrence(sentence: str) -> dict:
@@ -15,7 +14,7 @@ def word_occurrence(sentence: str) -> dict:
>>> dict(word_occurrence("Two spaces"))
{'Two': 1, 'spaces': 1}
"""
- occurrence: DefaultDict[str, int] = defaultdict(int)
+ occurrence: defaultdict[str, int] = defaultdict(int)
# Creating a dictionary containing count of each word
for word in sentence.split():
occurrence[word] += 1
diff --git a/web_programming/currency_converter.py b/web_programming/currency_converter.py
index 6fcc60e8feeb..69f2a2c4d421 100644
--- a/web_programming/currency_converter.py
+++ b/web_programming/currency_converter.py
@@ -8,7 +8,7 @@
import requests
URL_BASE = "https://www.amdoren.com/api/currency.php"
-TESTING = os.getenv("CI", False)
+TESTING = os.getenv("CI", "")
API_KEY = os.getenv("AMDOREN_API_KEY", "")
if not API_KEY and not TESTING:
From 521fbca61c6bdb84746564eb58c2ef2131260187 Mon Sep 17 00:00:00 2001
From: Christian Clauss
Date: Thu, 16 Mar 2023 13:31:29 +0100
Subject: [PATCH 019/808] Replace flake8 with ruff (#8184)
---
CONTRIBUTING.md | 6 +++---
audio_filters/equal_loudness_filter.py.broken.txt | 2 +-
data_structures/binary_tree/red_black_tree.py | 4 ++--
digital_image_processing/change_contrast.py | 4 ++--
maths/is_square_free.py | 4 ++--
maths/mobius_function.py | 4 ++--
other/linear_congruential_generator.py | 8 ++++----
pyproject.toml | 1 +
quantum/ripple_adder_classic.py | 6 +++---
9 files changed, 20 insertions(+), 19 deletions(-)
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 3ce5bd1edf68..6b6e4d21bfc7 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -81,11 +81,11 @@ We want your work to be readable by others; therefore, we encourage you to note
black .
```
-- All submissions will need to pass the test `flake8 . --ignore=E203,W503 --max-line-length=88` before they will be accepted so if possible, try this test locally on your Python file(s) before submitting your pull request.
+- All submissions will need to pass the test `ruff .` before they will be accepted so if possible, try this test locally on your Python file(s) before submitting your pull request.
```bash
- python3 -m pip install flake8 # only required the first time
- flake8 . --ignore=E203,W503 --max-line-length=88 --show-source
+ python3 -m pip install ruff # only required the first time
+ ruff .
```
- Original code submission require docstrings or comments to describe your work.
diff --git a/audio_filters/equal_loudness_filter.py.broken.txt b/audio_filters/equal_loudness_filter.py.broken.txt
index b9a3c50e1c33..88cba8533cf7 100644
--- a/audio_filters/equal_loudness_filter.py.broken.txt
+++ b/audio_filters/equal_loudness_filter.py.broken.txt
@@ -20,7 +20,7 @@ class EqualLoudnessFilter:
samplerate, use with caution.
Code based on matlab implementation at https://bit.ly/3eqh2HU
- (url shortened for flake8)
+ (url shortened for ruff)
Target curve: https://i.imgur.com/3g2VfaM.png
Yulewalk response: https://i.imgur.com/J9LnJ4C.png
diff --git a/data_structures/binary_tree/red_black_tree.py b/data_structures/binary_tree/red_black_tree.py
index b50d75d33689..3ebc8d63939b 100644
--- a/data_structures/binary_tree/red_black_tree.py
+++ b/data_structures/binary_tree/red_black_tree.py
@@ -1,6 +1,6 @@
"""
-python/black : true
-flake8 : passed
+psf/black : true
+ruff : passed
"""
from __future__ import annotations
diff --git a/digital_image_processing/change_contrast.py b/digital_image_processing/change_contrast.py
index 6a150400249f..7e49694708f8 100644
--- a/digital_image_processing/change_contrast.py
+++ b/digital_image_processing/change_contrast.py
@@ -4,8 +4,8 @@
This algorithm is used in
https://noivce.pythonanywhere.com/ Python web app.
-python/black: True
-flake8 : True
+psf/black: True
+ruff : True
"""
from PIL import Image
diff --git a/maths/is_square_free.py b/maths/is_square_free.py
index 4134398d258b..08c70dc32c38 100644
--- a/maths/is_square_free.py
+++ b/maths/is_square_free.py
@@ -1,7 +1,7 @@
"""
References: wikipedia:square free number
-python/black : True
-flake8 : True
+psf/black : True
+ruff : True
"""
from __future__ import annotations
diff --git a/maths/mobius_function.py b/maths/mobius_function.py
index 4fcf35f21813..8abdc4cafcb4 100644
--- a/maths/mobius_function.py
+++ b/maths/mobius_function.py
@@ -1,8 +1,8 @@
"""
References: https://en.wikipedia.org/wiki/M%C3%B6bius_function
References: wikipedia:square free number
-python/black : True
-flake8 : True
+psf/black : True
+ruff : True
"""
from maths.is_square_free import is_square_free
diff --git a/other/linear_congruential_generator.py b/other/linear_congruential_generator.py
index 777ee6355b9b..c016310f9cfa 100644
--- a/other/linear_congruential_generator.py
+++ b/other/linear_congruential_generator.py
@@ -9,10 +9,10 @@ class LinearCongruentialGenerator:
"""
# The default value for **seed** is the result of a function call which is not
- # normally recommended and causes flake8-bugbear to raise a B008 error. However,
- # in this case, it is accptable because `LinearCongruentialGenerator.__init__()`
- # will only be called once per instance and it ensures that each instance will
- # generate a unique sequence of numbers.
+ # normally recommended and causes ruff to raise a B008 error. However, in this case,
+ # it is acceptable because `LinearCongruentialGenerator.__init__()` will only be
+ # called once per instance and it ensures that each instance will generate a unique
+ # sequence of numbers.
def __init__(self, multiplier, increment, modulo, seed=int(time())): # noqa: B008
"""
diff --git a/pyproject.toml b/pyproject.toml
index 6552101d2faa..169c3a71ba6c 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -56,6 +56,7 @@ select = [ # https://beta.ruff.rs/docs/rules
"W", # W: pycodestyle warnings
"YTT", # YTT: year 2020
]
+show-source = true
target-version = "py311"
[tool.ruff.mccabe] # DO NOT INCREASE THIS VALUE
diff --git a/quantum/ripple_adder_classic.py b/quantum/ripple_adder_classic.py
index c07757af7fff..b604395bc583 100644
--- a/quantum/ripple_adder_classic.py
+++ b/quantum/ripple_adder_classic.py
@@ -54,9 +54,9 @@ def full_adder(
# The default value for **backend** is the result of a function call which is not
-# normally recommended and causes flake8-bugbear to raise a B008 error. However,
-# in this case, this is acceptable because `Aer.get_backend()` is called when the
-# function is defined and that same backend is then reused for all function calls.
+# normally recommended and causes ruff to raise a B008 error. However, in this case,
+# this is acceptable because `Aer.get_backend()` is called when the function is defined
+# and that same backend is then reused for all function calls.
def ripple_adder(
From 3f9150c1b2dd15808a4962e03a1455f8d825512c Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Mon, 20 Mar 2023 22:16:13 +0100
Subject: [PATCH 020/808] [pre-commit.ci] pre-commit autoupdate (#8294)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/charliermarsh/ruff-pre-commit: v0.0.255 → v0.0.257](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.255...v0.0.257)
* Fix PLR1711 Useless statement at end of function
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
.pre-commit-config.yaml | 2 +-
data_structures/binary_tree/avl_tree.py | 4 ----
machine_learning/polymonial_regression.py | 1 -
3 files changed, 1 insertion(+), 6 deletions(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 82aad6c65a9b..58cec4ff6ee6 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,7 @@ repos:
- id: auto-walrus
- repo: https://github.com/charliermarsh/ruff-pre-commit
- rev: v0.0.255
+ rev: v0.0.257
hooks:
- id: ruff
diff --git a/data_structures/binary_tree/avl_tree.py b/data_structures/binary_tree/avl_tree.py
index 320e7ed0d792..4c1fb17afe86 100644
--- a/data_structures/binary_tree/avl_tree.py
+++ b/data_structures/binary_tree/avl_tree.py
@@ -60,19 +60,15 @@ def get_height(self) -> int:
def set_data(self, data: Any) -> None:
self.data = data
- return
def set_left(self, node: MyNode | None) -> None:
self.left = node
- return
def set_right(self, node: MyNode | None) -> None:
self.right = node
- return
def set_height(self, height: int) -> None:
self.height = height
- return
def get_height(node: MyNode | None) -> int:
diff --git a/machine_learning/polymonial_regression.py b/machine_learning/polymonial_regression.py
index 374c35f7f905..487fb814526f 100644
--- a/machine_learning/polymonial_regression.py
+++ b/machine_learning/polymonial_regression.py
@@ -34,7 +34,6 @@ def viz_polymonial():
plt.xlabel("Position level")
plt.ylabel("Salary")
plt.show()
- return
if __name__ == "__main__":
From 7cdb011ba440a07768179bfaea190bddefc890d8 Mon Sep 17 00:00:00 2001
From: Genesis <128913081+KaixLina@users.noreply.github.com>
Date: Sun, 26 Mar 2023 20:49:18 +0530
Subject: [PATCH 021/808] New gitter link added or replaced (#8551)
* New gitter link added
* ruff==0.0.258
* noqa: S310
* noqa: S310
* Update ruff.yml
* Add Ruff rule S311
* Ruff v0.0.259
* return ("{:08x}" * 5).format(*self.h)
* pickle.load(f) # noqa: S301
---------
Co-authored-by: Christian Clauss
---
.github/stale.yml | 4 ++--
.pre-commit-config.yaml | 2 +-
CONTRIBUTING.md | 4 ++--
README.md | 4 ++--
hashes/sha1.py | 2 +-
machine_learning/sequential_minimum_optimization.py | 2 +-
neural_network/convolution_neural_network.py | 2 +-
project_euler/README.md | 2 +-
pyproject.toml | 1 +
web_programming/download_images_from_google_query.py | 2 +-
10 files changed, 13 insertions(+), 12 deletions(-)
diff --git a/.github/stale.yml b/.github/stale.yml
index 36ca56266b26..813f688348d8 100644
--- a/.github/stale.yml
+++ b/.github/stale.yml
@@ -45,7 +45,7 @@ pulls:
closeComment: >
Please reopen this pull request once you commit the changes requested
or make improvements on the code. If this is not the case and you need
- some help, feel free to seek help from our [Gitter](https://gitter.im/TheAlgorithms)
+ some help, feel free to seek help from our [Gitter](https://app.gitter.im/#/room/#TheAlgorithms_community:gitter.im)
or ping one of the reviewers. Thank you for your contributions!
issues:
@@ -59,5 +59,5 @@ issues:
closeComment: >
Please reopen this issue once you add more information and updates here.
If this is not the case and you need some help, feel free to seek help
- from our [Gitter](https://gitter.im/TheAlgorithms) or ping one of the
+ from our [Gitter](https://app.gitter.im/#/room/#TheAlgorithms_community:gitter.im) or ping one of the
reviewers. Thank you for your contributions!
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 58cec4ff6ee6..72a878387e15 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,7 @@ repos:
- id: auto-walrus
- repo: https://github.com/charliermarsh/ruff-pre-commit
- rev: v0.0.257
+ rev: v0.0.259
hooks:
- id: ruff
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 6b6e4d21bfc7..75e4fb893723 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -2,7 +2,7 @@
## Before contributing
-Welcome to [TheAlgorithms/Python](https://github.com/TheAlgorithms/Python)! Before sending your pull requests, make sure that you __read the whole guidelines__. If you have any doubt on the contributing guide, please feel free to [state it clearly in an issue](https://github.com/TheAlgorithms/Python/issues/new) or ask the community in [Gitter](https://gitter.im/TheAlgorithms).
+Welcome to [TheAlgorithms/Python](https://github.com/TheAlgorithms/Python)! Before sending your pull requests, make sure that you __read the whole guidelines__. If you have any doubt on the contributing guide, please feel free to [state it clearly in an issue](https://github.com/TheAlgorithms/Python/issues/new) or ask the community in [Gitter](https://app.gitter.im/#/room/#TheAlgorithms_community:gitter.im).
## Contributing
@@ -176,7 +176,7 @@ We want your work to be readable by others; therefore, we encourage you to note
- Most importantly,
- __Be consistent in the use of these guidelines when submitting.__
- - __Join__ us on [Discord](https://discord.com/invite/c7MnfGFGa6) and [Gitter](https://gitter.im/TheAlgorithms) __now!__
+ - __Join__ us on [Discord](https://discord.com/invite/c7MnfGFGa6) and [Gitter](https://app.gitter.im/#/room/#TheAlgorithms_community:gitter.im) __now!__
- Happy coding!
Writer [@poyea](https://github.com/poyea), Jun 2019.
diff --git a/README.md b/README.md
index 68a6e5e6fbce..3d2f1a110780 100644
--- a/README.md
+++ b/README.md
@@ -16,7 +16,7 @@
-
+
@@ -42,7 +42,7 @@ Read through our [Contribution Guidelines](CONTRIBUTING.md) before you contribut
## Community Channels
-We are on [Discord](https://discord.gg/c7MnfGFGa6) and [Gitter](https://gitter.im/TheAlgorithms)! Community channels are a great way for you to ask questions and get help. Please join us!
+We are on [Discord](https://discord.gg/c7MnfGFGa6) and [Gitter](https://app.gitter.im/#/room/#TheAlgorithms_community:gitter.im)! Community channels are a great way for you to ask questions and get help. Please join us!
## List of Algorithms
diff --git a/hashes/sha1.py b/hashes/sha1.py
index 9f0437f208fa..b325ce3e43bb 100644
--- a/hashes/sha1.py
+++ b/hashes/sha1.py
@@ -124,7 +124,7 @@ def final_hash(self):
self.h[3] + d & 0xFFFFFFFF,
self.h[4] + e & 0xFFFFFFFF,
)
- return "%08x%08x%08x%08x%08x" % tuple(self.h)
+ return ("{:08x}" * 5).format(*self.h)
def test_sha1_hash():
diff --git a/machine_learning/sequential_minimum_optimization.py b/machine_learning/sequential_minimum_optimization.py
index b68bd52f4de9..b24f5669e2e8 100644
--- a/machine_learning/sequential_minimum_optimization.py
+++ b/machine_learning/sequential_minimum_optimization.py
@@ -458,7 +458,7 @@ def test_cancel_data():
CANCER_DATASET_URL,
headers={"User-Agent": "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)"},
)
- response = urllib.request.urlopen(request)
+ response = urllib.request.urlopen(request) # noqa: S310
content = response.read().decode("utf-8")
with open(r"cancel_data.csv", "w") as f:
f.write(content)
diff --git a/neural_network/convolution_neural_network.py b/neural_network/convolution_neural_network.py
index bd0550212157..f5ec156f3593 100644
--- a/neural_network/convolution_neural_network.py
+++ b/neural_network/convolution_neural_network.py
@@ -77,7 +77,7 @@ def save_model(self, save_path):
def read_model(cls, model_path):
# read saved model
with open(model_path, "rb") as f:
- model_dic = pickle.load(f)
+ model_dic = pickle.load(f) # noqa: S301
conv_get = model_dic.get("conv1")
conv_get.append(model_dic.get("step_conv1"))
diff --git a/project_euler/README.md b/project_euler/README.md
index e3dc035eee5e..4832d0078ebf 100644
--- a/project_euler/README.md
+++ b/project_euler/README.md
@@ -10,7 +10,7 @@ The solutions will be checked by our [automated testing on GitHub Actions](https
## Solution Guidelines
-Welcome to [TheAlgorithms/Python](https://github.com/TheAlgorithms/Python)! Before reading the solution guidelines, make sure you read the whole [Contributing Guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) as it won't be repeated in here. If you have any doubt on the guidelines, please feel free to [state it clearly in an issue](https://github.com/TheAlgorithms/Python/issues/new) or ask the community in [Gitter](https://gitter.im/TheAlgorithms). You can use the [template](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#solution-template) we have provided below as your starting point but be sure to read the [Coding Style](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#coding-style) part first.
+Welcome to [TheAlgorithms/Python](https://github.com/TheAlgorithms/Python)! Before reading the solution guidelines, make sure you read the whole [Contributing Guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) as it won't be repeated in here. If you have any doubt on the guidelines, please feel free to [state it clearly in an issue](https://github.com/TheAlgorithms/Python/issues/new) or ask the community in [Gitter](https://app.gitter.im/#/room/#TheAlgorithms_community:gitter.im). You can use the [template](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#solution-template) we have provided below as your starting point but be sure to read the [Coding Style](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#coding-style) part first.
### Coding Style
diff --git a/pyproject.toml b/pyproject.toml
index 169c3a71ba6c..23fe45e97d20 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -34,6 +34,7 @@ ignore = [ # `ruff rule S101` for a description of that rule
"S101", # S101: Use of `assert` detected -- DO NOT FIX
"S105", # S105: Possible hardcoded password: 'password'
"S113", # S113: Probable use of requests call without timeout
+ "S311", # S311: Standard pseudo-random generators are not suitable for cryptographic purposes
"UP038", # UP038: Use `X | Y` in `{}` call instead of `(X, Y)` -- DO NOT FIX
]
select = [ # https://beta.ruff.rs/docs/rules
diff --git a/web_programming/download_images_from_google_query.py b/web_programming/download_images_from_google_query.py
index 9c0c21dc804e..441347459f8e 100644
--- a/web_programming/download_images_from_google_query.py
+++ b/web_programming/download_images_from_google_query.py
@@ -86,7 +86,7 @@ def download_images_from_google_query(query: str = "dhaka", max_images: int = 5)
path_name = f"query_{query.replace(' ', '_')}"
if not os.path.exists(path_name):
os.makedirs(path_name)
- urllib.request.urlretrieve(
+ urllib.request.urlretrieve( # noqa: S310
original_size_img, f"{path_name}/original_size_img_{index}.jpg"
)
return index
From 86b2ab09aab359ef1b4bea58ed3c1fdf5b989500 Mon Sep 17 00:00:00 2001
From: Christian Veenhuis
Date: Sun, 26 Mar 2023 18:20:47 +0200
Subject: [PATCH 022/808] Fix broken links to Gitter Community (Fixes: #8197)
(#8546)
Co-authored-by: Christian Clauss
---
.github/stale.yml | 4 ++--
CONTRIBUTING.md | 4 ++--
README.md | 4 ++--
project_euler/README.md | 2 +-
4 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/.github/stale.yml b/.github/stale.yml
index 813f688348d8..0939e1f223ff 100644
--- a/.github/stale.yml
+++ b/.github/stale.yml
@@ -45,7 +45,7 @@ pulls:
closeComment: >
Please reopen this pull request once you commit the changes requested
or make improvements on the code. If this is not the case and you need
- some help, feel free to seek help from our [Gitter](https://app.gitter.im/#/room/#TheAlgorithms_community:gitter.im)
+ some help, feel free to seek help from our [Gitter](https://gitter.im/TheAlgorithms/community)
or ping one of the reviewers. Thank you for your contributions!
issues:
@@ -59,5 +59,5 @@ issues:
closeComment: >
Please reopen this issue once you add more information and updates here.
If this is not the case and you need some help, feel free to seek help
- from our [Gitter](https://app.gitter.im/#/room/#TheAlgorithms_community:gitter.im) or ping one of the
+ from our [Gitter](https://gitter.im/TheAlgorithms/community) or ping one of the
reviewers. Thank you for your contributions!
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 75e4fb893723..2bb0c2e39eee 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -2,7 +2,7 @@
## Before contributing
-Welcome to [TheAlgorithms/Python](https://github.com/TheAlgorithms/Python)! Before sending your pull requests, make sure that you __read the whole guidelines__. If you have any doubt on the contributing guide, please feel free to [state it clearly in an issue](https://github.com/TheAlgorithms/Python/issues/new) or ask the community in [Gitter](https://app.gitter.im/#/room/#TheAlgorithms_community:gitter.im).
+Welcome to [TheAlgorithms/Python](https://github.com/TheAlgorithms/Python)! Before sending your pull requests, make sure that you __read the whole guidelines__. If you have any doubt on the contributing guide, please feel free to [state it clearly in an issue](https://github.com/TheAlgorithms/Python/issues/new) or ask the community in [Gitter](https://gitter.im/TheAlgorithms/community).
## Contributing
@@ -176,7 +176,7 @@ We want your work to be readable by others; therefore, we encourage you to note
- Most importantly,
- __Be consistent in the use of these guidelines when submitting.__
- - __Join__ us on [Discord](https://discord.com/invite/c7MnfGFGa6) and [Gitter](https://app.gitter.im/#/room/#TheAlgorithms_community:gitter.im) __now!__
+ - __Join__ us on [Discord](https://discord.com/invite/c7MnfGFGa6) and [Gitter](https://gitter.im/TheAlgorithms/community) __now!__
- Happy coding!
Writer [@poyea](https://github.com/poyea), Jun 2019.
diff --git a/README.md b/README.md
index 3d2f1a110780..bf6e0ed3cf75 100644
--- a/README.md
+++ b/README.md
@@ -16,7 +16,7 @@
-
+
@@ -42,7 +42,7 @@ Read through our [Contribution Guidelines](CONTRIBUTING.md) before you contribut
## Community Channels
-We are on [Discord](https://discord.gg/c7MnfGFGa6) and [Gitter](https://app.gitter.im/#/room/#TheAlgorithms_community:gitter.im)! Community channels are a great way for you to ask questions and get help. Please join us!
+We are on [Discord](https://discord.gg/c7MnfGFGa6) and [Gitter](https://gitter.im/TheAlgorithms/community)! Community channels are a great way for you to ask questions and get help. Please join us!
## List of Algorithms
diff --git a/project_euler/README.md b/project_euler/README.md
index 4832d0078ebf..16865edf2a67 100644
--- a/project_euler/README.md
+++ b/project_euler/README.md
@@ -10,7 +10,7 @@ The solutions will be checked by our [automated testing on GitHub Actions](https
## Solution Guidelines
-Welcome to [TheAlgorithms/Python](https://github.com/TheAlgorithms/Python)! Before reading the solution guidelines, make sure you read the whole [Contributing Guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) as it won't be repeated in here. If you have any doubt on the guidelines, please feel free to [state it clearly in an issue](https://github.com/TheAlgorithms/Python/issues/new) or ask the community in [Gitter](https://app.gitter.im/#/room/#TheAlgorithms_community:gitter.im). You can use the [template](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#solution-template) we have provided below as your starting point but be sure to read the [Coding Style](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#coding-style) part first.
+Welcome to [TheAlgorithms/Python](https://github.com/TheAlgorithms/Python)! Before reading the solution guidelines, make sure you read the whole [Contributing Guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) as it won't be repeated in here. If you have any doubt on the guidelines, please feel free to [state it clearly in an issue](https://github.com/TheAlgorithms/Python/issues/new) or ask the community in [Gitter](https://gitter.im/TheAlgorithms/community). You can use the [template](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#solution-template) we have provided below as your starting point but be sure to read the [Coding Style](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#coding-style) part first.
### Coding Style
From ac111ee463065e372ad148dbafba630045ecf94c Mon Sep 17 00:00:00 2001
From: Maxim Smolskiy
Date: Wed, 29 Mar 2023 00:41:54 +0300
Subject: [PATCH 023/808] Reduce the complexity of
graphs/bi_directional_dijkstra.py (#8165)
* Reduce the complexity of graphs/bi_directional_dijkstra.py
* Try to lower the --max-complexity threshold in the file .flake8
* Lower the --max-complexity threshold in the file .flake8
* updating DIRECTORY.md
* updating DIRECTORY.md
* Try to lower max-complexity
* Try to lower max-complexity
* Try to lower max-complexity
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
graphs/bi_directional_dijkstra.py | 95 +++++++++++++++++--------------
pyproject.toml | 2 +-
2 files changed, 53 insertions(+), 44 deletions(-)
diff --git a/graphs/bi_directional_dijkstra.py b/graphs/bi_directional_dijkstra.py
index fc53e2f0d8f3..a4489026be80 100644
--- a/graphs/bi_directional_dijkstra.py
+++ b/graphs/bi_directional_dijkstra.py
@@ -17,6 +17,32 @@
import numpy as np
+def pass_and_relaxation(
+ graph: dict,
+ v: str,
+ visited_forward: set,
+ visited_backward: set,
+ cst_fwd: dict,
+ cst_bwd: dict,
+ queue: PriorityQueue,
+ parent: dict,
+ shortest_distance: float | int,
+) -> float | int:
+ for nxt, d in graph[v]:
+ if nxt in visited_forward:
+ continue
+ old_cost_f = cst_fwd.get(nxt, np.inf)
+ new_cost_f = cst_fwd[v] + d
+ if new_cost_f < old_cost_f:
+ queue.put((new_cost_f, nxt))
+ cst_fwd[nxt] = new_cost_f
+ parent[nxt] = v
+ if nxt in visited_backward:
+ if cst_fwd[v] + d + cst_bwd[nxt] < shortest_distance:
+ shortest_distance = cst_fwd[v] + d + cst_bwd[nxt]
+ return shortest_distance
+
+
def bidirectional_dij(
source: str, destination: str, graph_forward: dict, graph_backward: dict
) -> int:
@@ -51,53 +77,36 @@ def bidirectional_dij(
if source == destination:
return 0
- while queue_forward and queue_backward:
- while not queue_forward.empty():
- _, v_fwd = queue_forward.get()
-
- if v_fwd not in visited_forward:
- break
- else:
- break
+ while not queue_forward.empty() and not queue_backward.empty():
+ _, v_fwd = queue_forward.get()
visited_forward.add(v_fwd)
- while not queue_backward.empty():
- _, v_bwd = queue_backward.get()
-
- if v_bwd not in visited_backward:
- break
- else:
- break
+ _, v_bwd = queue_backward.get()
visited_backward.add(v_bwd)
- # forward pass and relaxation
- for nxt_fwd, d_forward in graph_forward[v_fwd]:
- if nxt_fwd in visited_forward:
- continue
- old_cost_f = cst_fwd.get(nxt_fwd, np.inf)
- new_cost_f = cst_fwd[v_fwd] + d_forward
- if new_cost_f < old_cost_f:
- queue_forward.put((new_cost_f, nxt_fwd))
- cst_fwd[nxt_fwd] = new_cost_f
- parent_forward[nxt_fwd] = v_fwd
- if nxt_fwd in visited_backward:
- if cst_fwd[v_fwd] + d_forward + cst_bwd[nxt_fwd] < shortest_distance:
- shortest_distance = cst_fwd[v_fwd] + d_forward + cst_bwd[nxt_fwd]
-
- # backward pass and relaxation
- for nxt_bwd, d_backward in graph_backward[v_bwd]:
- if nxt_bwd in visited_backward:
- continue
- old_cost_b = cst_bwd.get(nxt_bwd, np.inf)
- new_cost_b = cst_bwd[v_bwd] + d_backward
- if new_cost_b < old_cost_b:
- queue_backward.put((new_cost_b, nxt_bwd))
- cst_bwd[nxt_bwd] = new_cost_b
- parent_backward[nxt_bwd] = v_bwd
-
- if nxt_bwd in visited_forward:
- if cst_bwd[v_bwd] + d_backward + cst_fwd[nxt_bwd] < shortest_distance:
- shortest_distance = cst_bwd[v_bwd] + d_backward + cst_fwd[nxt_bwd]
+ shortest_distance = pass_and_relaxation(
+ graph_forward,
+ v_fwd,
+ visited_forward,
+ visited_backward,
+ cst_fwd,
+ cst_bwd,
+ queue_forward,
+ parent_forward,
+ shortest_distance,
+ )
+
+ shortest_distance = pass_and_relaxation(
+ graph_backward,
+ v_bwd,
+ visited_backward,
+ visited_forward,
+ cst_bwd,
+ cst_fwd,
+ queue_backward,
+ parent_backward,
+ shortest_distance,
+ )
if cst_fwd[v_fwd] + cst_bwd[v_bwd] >= shortest_distance:
break
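A hedged usage sketch for the refactored search, assuming the module's graph format of node -> list of (neighbor, cost) pairs and that the function returns the found distance (or -1 when no path exists, per the unshown tail of the function):

```python
graph_fwd = {
    "E": [("B", 1), ("G", 2)],
    "B": [("C", 1)],
    "C": [("D", 1)],
    "D": [("F", 1)],
    "G": [("F", 1)],
    "F": [],
}
# The backward graph is every forward edge reversed.
graph_bwd = {
    "B": [("E", 1)],
    "C": [("B", 1)],
    "D": [("C", 1)],
    "F": [("D", 1), ("G", 1)],
    "G": [("E", 2)],
    "E": [],
}
print(bidirectional_dij("E", "F", graph_fwd, graph_bwd))  # 3, via E -> G -> F
```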
diff --git a/pyproject.toml b/pyproject.toml
index 23fe45e97d20..48c3fbd4009d 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -61,7 +61,7 @@ show-source = true
target-version = "py311"
[tool.ruff.mccabe] # DO NOT INCREASE THIS VALUE
-max-complexity = 20 # default: 10
+max-complexity = 17 # default: 10
[tool.ruff.pylint] # DO NOT INCREASE THESE VALUES
max-args = 10 # default: 5
From a71f22dae54f830dbf68b3bd5e5e8d540e338a4c Mon Sep 17 00:00:00 2001
From: Rohan Anand <96521078+rohan472000@users.noreply.github.com>
Date: Thu, 30 Mar 2023 10:39:21 +0530
Subject: [PATCH 024/808] Update cnn_classification.py (#8570)
---
computer_vision/cnn_classification.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/computer_vision/cnn_classification.py b/computer_vision/cnn_classification.py
index 1c193fcbb50b..9b5f8c95eebf 100644
--- a/computer_vision/cnn_classification.py
+++ b/computer_vision/cnn_classification.py
@@ -93,7 +93,7 @@
test_image = tf.keras.preprocessing.image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis=0)
result = classifier.predict(test_image)
- training_set.class_indices
+ # training_set.class_indices
if result[0][0] == 0:
prediction = "Normal"
if result[0][0] == 1:
From a00492911a949a1e59072367bbabee22cd884106 Mon Sep 17 00:00:00 2001
From: Rohan Anand <96521078+rohan472000@users.noreply.github.com>
Date: Fri, 31 Mar 2023 16:47:13 +0530
Subject: [PATCH 025/808] added a problem on kadane's algo and its solution.
(#8569)
* added kadane's algorithm directory with one problem's solution.
* added type hints
* Rename kaadne_algorithm/max_product_subarray.py to dynamic_programming/max_product_subarray.py
* Update dynamic_programming/max_product_subarray.py
Co-authored-by: Christian Clauss
* Update max_product_subarray.py
* Update max_product_subarray.py
* Update dynamic_programming/max_product_subarray.py
Co-authored-by: Christian Clauss
* Update max_product_subarray.py
* Update max_product_subarray.py
* Update max_product_subarray.py
* Update max_product_subarray.py
* Update max_product_subarray.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update max_product_subarray.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update max_product_subarray.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update max_product_subarray.py
* Update max_product_subarray.py
* Update dynamic_programming/max_product_subarray.py
Co-authored-by: Christian Clauss
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update dynamic_programming/max_product_subarray.py
Co-authored-by: Christian Clauss
* Update max_product_subarray.py
---------
Co-authored-by: Christian Clauss
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
dynamic_programming/max_product_subarray.py | 53 +++++++++++++++++++++
1 file changed, 53 insertions(+)
create mode 100644 dynamic_programming/max_product_subarray.py
diff --git a/dynamic_programming/max_product_subarray.py b/dynamic_programming/max_product_subarray.py
new file mode 100644
index 000000000000..425859bc03e3
--- /dev/null
+++ b/dynamic_programming/max_product_subarray.py
@@ -0,0 +1,53 @@
+def max_product_subarray(numbers: list[int]) -> int:
+ """
+ Returns the maximum product that can be obtained by multiplying a
+ contiguous subarray of the given integer list `numbers`.
+
+ Example:
+ >>> max_product_subarray([2, 3, -2, 4])
+ 6
+ >>> max_product_subarray((-2, 0, -1))
+ 0
+ >>> max_product_subarray([2, 3, -2, 4, -1])
+ 48
+ >>> max_product_subarray([-1])
+ -1
+ >>> max_product_subarray([0])
+ 0
+ >>> max_product_subarray([])
+ 0
+ >>> max_product_subarray("")
+ 0
+ >>> max_product_subarray(None)
+ 0
+ >>> max_product_subarray([2, 3, -2, 4.5, -1])
+ Traceback (most recent call last):
+ ...
+ ValueError: numbers must be an iterable of integers
+ >>> max_product_subarray("ABC")
+ Traceback (most recent call last):
+ ...
+ ValueError: numbers must be an iterable of integers
+ """
+ if not numbers:
+ return 0
+
+ if not isinstance(numbers, (list, tuple)) or not all(
+ isinstance(number, int) for number in numbers
+ ):
+ raise ValueError("numbers must be an iterable of integers")
+
+ max_till_now = min_till_now = max_prod = numbers[0]
+
+ for i in range(1, len(numbers)):
+ # update the maximum and minimum subarray products
+ number = numbers[i]
+ if number < 0:
+ max_till_now, min_till_now = min_till_now, max_till_now
+ max_till_now = max(number, max_till_now * number)
+ min_till_now = min(number, min_till_now * number)
+
+ # update the maximum product found till now
+ max_prod = max(max_prod, max_till_now)
+
+ return max_prod
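As a sanity check on the sign-swapping recurrence above, an O(n^2) brute force over every contiguous subarray should agree with it; a minimal sketch (the helper name is illustrative, not part of the patch):

    def brute_force_max_product(numbers: list[int]) -> int:
        # Reference implementation: multiply out every contiguous subarray.
        best = numbers[0]
        for start in range(len(numbers)):
            product = 1
            for end in range(start, len(numbers)):
                product *= numbers[end]
                best = max(best, product)
        return best

    assert brute_force_max_product([2, 3, -2, 4, -1]) == 48  # matches the doctest above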
From 238fe8c494ab5be80c96441095d1c8958f95c04d Mon Sep 17 00:00:00 2001
From: NIKITA PANDEY <113332472+nikitapandeyy@users.noreply.github.com>
Date: Fri, 31 Mar 2023 19:38:13 +0530
Subject: [PATCH 026/808] Update receive_file.py (#8541)
* Update receive_file.py
Here are the changes made:
* Added a main() function and called it from the if __name__ == "__main__" block. This makes the code easier to test and to import into other programs.
* Passed socket.AF_INET and socket.SOCK_STREAM explicitly to socket.socket(). This makes the address family and socket type explicit instead of relying on implicit defaults.
* Removed the print(f"{data = }") debug statement from the receive loop.
* Changed the final print statement to "Successfully received the file". This makes it more accurate and descriptive.
* Moved the import statement to the top of the file. This is the standard convention in Python.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
file_transfer/receive_file.py | 18 +++++++++++-------
1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/file_transfer/receive_file.py b/file_transfer/receive_file.py
index 37a503036dc2..f50ad9fe1107 100644
--- a/file_transfer/receive_file.py
+++ b/file_transfer/receive_file.py
@@ -1,8 +1,9 @@
-if __name__ == "__main__":
- import socket # Import socket module
+import socket
+
- sock = socket.socket() # Create a socket object
- host = socket.gethostname() # Get local machine name
+def main():
+ sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ host = socket.gethostname()
port = 12312
sock.connect((host, port))
@@ -13,11 +14,14 @@
print("Receiving data...")
while True:
data = sock.recv(1024)
- print(f"{data = }")
if not data:
break
- out_file.write(data) # Write data to a file
+ out_file.write(data)
- print("Successfully got the file")
+ print("Successfully received the file")
sock.close()
print("Connection closed")
+
+
+if __name__ == "__main__":
+ main()
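For local testing, the rewritten receiver needs a peer listening on port 12312. A minimal sender sketch (hypothetical, not part of the repository) that serves a single file to the first client and exits:

    import socket

    def serve_file_once(path: str, port: int = 12312) -> None:
        # Bind, accept one connection, and stream the file in 1 KiB chunks.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
            server.bind(("", port))
            server.listen(1)
            conn, _addr = server.accept()
            with conn, open(path, "rb") as in_file:
                while chunk := in_file.read(1024):
                    conn.sendall(chunk)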
From 5ce63b5966b6ad9c7ce36c449fb31112c3e1d084 Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Sat, 1 Apr 2023 01:11:24 -0400
Subject: [PATCH 027/808] Fix `mypy` errors in `lu_decomposition.py` (attempt
2) (#8100)
* updating DIRECTORY.md
* Fix mypy errors in lu_decomposition.py
* Replace for-loops with comprehensions
* Add explanation of LU decomposition and extra doctests
Add an explanation of LU decomposition with conditions for when an LU
decomposition exists
Add extra doctests to handle each of the possible conditions for when a
decomposition exists/doesn't exist
* updating DIRECTORY.md
* updating DIRECTORY.md
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
arithmetic_analysis/lu_decomposition.py | 91 ++++++++++++++++++-------
1 file changed, 65 insertions(+), 26 deletions(-)
diff --git a/arithmetic_analysis/lu_decomposition.py b/arithmetic_analysis/lu_decomposition.py
index 217719cf4da1..941c1dadf556 100644
--- a/arithmetic_analysis/lu_decomposition.py
+++ b/arithmetic_analysis/lu_decomposition.py
@@ -1,62 +1,101 @@
-"""Lower-Upper (LU) Decomposition.
+"""
+Lower–upper (LU) decomposition factors a matrix as a product of a lower
+triangular matrix and an upper triangular matrix. A square matrix has an LU
+decomposition under the following conditions:
+ - If the matrix is invertible, then it has an LU decomposition if and only
+ if all of its leading principal minors are non-zero (see
+ https://en.wikipedia.org/wiki/Minor_(linear_algebra) for an explanation of
+ leading principal minors of a matrix).
+ - If the matrix is singular (i.e., not invertible) and it has a rank of k
+ (i.e., it has k linearly independent columns), then it has an LU
+ decomposition if its first k leading principal minors are non-zero.
+
+This algorithm will simply attempt to perform LU decomposition on any square
+matrix and raise an error if no such decomposition exists.
-Reference:
-- https://en.wikipedia.org/wiki/LU_decomposition
+Reference: https://en.wikipedia.org/wiki/LU_decomposition
"""
from __future__ import annotations
import numpy as np
-from numpy import float64
-from numpy.typing import ArrayLike
-
-def lower_upper_decomposition(
- table: ArrayLike[float64],
-) -> tuple[ArrayLike[float64], ArrayLike[float64]]:
- """Lower-Upper (LU) Decomposition
-
- Example:
+def lower_upper_decomposition(table: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
+ """
+    Perform LU decomposition on a given matrix and raise an error if the matrix
+    isn't square or if no such decomposition exists.
>>> matrix = np.array([[2, -2, 1], [0, 1, 2], [5, 3, 1]])
- >>> outcome = lower_upper_decomposition(matrix)
- >>> outcome[0]
+ >>> lower_mat, upper_mat = lower_upper_decomposition(matrix)
+ >>> lower_mat
array([[1. , 0. , 0. ],
[0. , 1. , 0. ],
[2.5, 8. , 1. ]])
- >>> outcome[1]
+ >>> upper_mat
array([[ 2. , -2. , 1. ],
[ 0. , 1. , 2. ],
[ 0. , 0. , -17.5]])
+ >>> matrix = np.array([[4, 3], [6, 3]])
+ >>> lower_mat, upper_mat = lower_upper_decomposition(matrix)
+ >>> lower_mat
+ array([[1. , 0. ],
+ [1.5, 1. ]])
+ >>> upper_mat
+ array([[ 4. , 3. ],
+ [ 0. , -1.5]])
+
+ # Matrix is not square
>>> matrix = np.array([[2, -2, 1], [0, 1, 2]])
- >>> lower_upper_decomposition(matrix)
+ >>> lower_mat, upper_mat = lower_upper_decomposition(matrix)
Traceback (most recent call last):
...
ValueError: 'table' has to be of square shaped array but got a 2x3 array:
[[ 2 -2 1]
[ 0 1 2]]
+
+ # Matrix is invertible, but its first leading principal minor is 0
+ >>> matrix = np.array([[0, 1], [1, 0]])
+ >>> lower_mat, upper_mat = lower_upper_decomposition(matrix)
+ Traceback (most recent call last):
+ ...
+ ArithmeticError: No LU decomposition exists
+
+ # Matrix is singular, but its first leading principal minor is 1
+ >>> matrix = np.array([[1, 0], [1, 0]])
+ >>> lower_mat, upper_mat = lower_upper_decomposition(matrix)
+ >>> lower_mat
+ array([[1., 0.],
+ [1., 1.]])
+ >>> upper_mat
+ array([[1., 0.],
+ [0., 0.]])
+
+ # Matrix is singular, but its first leading principal minor is 0
+ >>> matrix = np.array([[0, 1], [0, 1]])
+ >>> lower_mat, upper_mat = lower_upper_decomposition(matrix)
+ Traceback (most recent call last):
+ ...
+ ArithmeticError: No LU decomposition exists
"""
- # Table that contains our data
- # Table has to be a square array so we need to check first
+ # Ensure that table is a square array
rows, columns = np.shape(table)
if rows != columns:
raise ValueError(
- f"'table' has to be of square shaped array but got a {rows}x{columns} "
- + f"array:\n{table}"
+ f"'table' has to be of square shaped array but got a "
+ f"{rows}x{columns} array:\n{table}"
)
+
lower = np.zeros((rows, columns))
upper = np.zeros((rows, columns))
for i in range(columns):
for j in range(i):
- total = 0
- for k in range(j):
- total += lower[i][k] * upper[k][j]
+ total = sum(lower[i][k] * upper[k][j] for k in range(j))
+ if upper[j][j] == 0:
+ raise ArithmeticError("No LU decomposition exists")
lower[i][j] = (table[i][j] - total) / upper[j][j]
lower[i][i] = 1
for j in range(i, columns):
- total = 0
- for k in range(i):
- total += lower[i][k] * upper[k][j]
+ total = sum(lower[i][k] * upper[k][j] for k in range(j))
upper[i][j] = table[i][j] - total
return lower, upper
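A quick way to validate the factors returned above is to multiply them back together; a small check sketch (assuming the function as defined in this patch):

    import numpy as np

    matrix = np.array([[2, -2, 1], [0, 1, 2], [5, 3, 1]])
    lower_mat, upper_mat = lower_upper_decomposition(matrix)
    assert np.allclose(lower_mat @ upper_mat, matrix)  # L @ U reproduces the input
    assert np.allclose(np.tril(lower_mat), lower_mat)  # L is lower triangular
    assert np.allclose(np.triu(upper_mat), upper_mat)  # U is upper triangular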
From dc4f603dad22eab31892855555999b552e97e9d8 Mon Sep 17 00:00:00 2001
From: Maxim Smolskiy
Date: Sat, 1 Apr 2023 08:47:24 +0300
Subject: [PATCH 028/808] Add Project Euler problem 187 solution 1 (#8182)
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
DIRECTORY.md | 2 +
project_euler/problem_187/__init__.py | 0
project_euler/problem_187/sol1.py | 58 +++++++++++++++++++++++++++
3 files changed, 60 insertions(+)
create mode 100644 project_euler/problem_187/__init__.py
create mode 100644 project_euler/problem_187/sol1.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 1d3177801a2c..1a641d8ecb59 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -990,6 +990,8 @@
* [Sol1](project_euler/problem_174/sol1.py)
* Problem 180
* [Sol1](project_euler/problem_180/sol1.py)
+ * Problem 187
+ * [Sol1](project_euler/problem_187/sol1.py)
* Problem 188
* [Sol1](project_euler/problem_188/sol1.py)
* Problem 191
diff --git a/project_euler/problem_187/__init__.py b/project_euler/problem_187/__init__.py
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/project_euler/problem_187/sol1.py b/project_euler/problem_187/sol1.py
new file mode 100644
index 000000000000..12f03e2a7023
--- /dev/null
+++ b/project_euler/problem_187/sol1.py
@@ -0,0 +1,58 @@
+"""
+Project Euler Problem 187: https://projecteuler.net/problem=187
+
+A composite is a number containing at least two prime factors.
+For example, 15 = 3 x 5; 9 = 3 x 3; 12 = 2 x 2 x 3.
+
+There are ten composites below thirty containing precisely two,
+not necessarily distinct, prime factors: 4, 6, 9, 10, 14, 15, 21, 22, 25, 26.
+
+How many composite integers, n < 10^8, have precisely two,
+not necessarily distinct, prime factors?
+"""
+
+from math import isqrt
+
+
+def calculate_prime_numbers(max_number: int) -> list[int]:
+ """
+ Returns prime numbers below max_number
+
+ >>> calculate_prime_numbers(10)
+ [2, 3, 5, 7]
+ """
+
+ is_prime = [True] * max_number
+ for i in range(2, isqrt(max_number - 1) + 1):
+ if is_prime[i]:
+ for j in range(i**2, max_number, i):
+ is_prime[j] = False
+
+ return [i for i in range(2, max_number) if is_prime[i]]
+
+
+def solution(max_number: int = 10**8) -> int:
+ """
+    Returns the number of composite integers below max_number that have precisely
+    two, not necessarily distinct, prime factors.
+
+ >>> solution(30)
+ 10
+ """
+
+ prime_numbers = calculate_prime_numbers(max_number // 2)
+
+ semiprimes_count = 0
+ left = 0
+ right = len(prime_numbers) - 1
+ while left <= right:
+ while prime_numbers[left] * prime_numbers[right] >= max_number:
+ right -= 1
+ semiprimes_count += right - left + 1
+ left += 1
+
+ return semiprimes_count
+
+
+if __name__ == "__main__":
+ print(f"{solution() = }")
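The two-pointer loop relies on monotonicity: as the left prime p grows, the largest prime q with p * q < max_number can only shrink, so `right` never needs to move back up, and each iteration counts all semiprimes p * q with p <= q at once. A brute-force cross-check (assuming both functions above are in scope):

    def brute_force_semiprimes(max_number: int) -> int:
        # Directly count products p * q < max_number over prime pairs with p <= q.
        primes = calculate_prime_numbers(max_number)
        count = 0
        for i, p in enumerate(primes):
            for q in primes[i:]:
                if p * q >= max_number:
                    break
                count += 1
        return count

    assert brute_force_semiprimes(30) == solution(30) == 10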
From e4d90e2d5b92fdcff558f1848843dfbe20d81035 Mon Sep 17 00:00:00 2001
From: amirsoroush <114881632+amirsoroush@users.noreply.github.com>
Date: Sat, 1 Apr 2023 09:26:43 +0300
Subject: [PATCH 029/808] change space complexity of linked list's __len__ from
O(n) to O(1) (#8183)
---
data_structures/linked_list/circular_linked_list.py | 2 +-
data_structures/linked_list/doubly_linked_list.py | 2 +-
data_structures/linked_list/merge_two_lists.py | 2 +-
data_structures/linked_list/singly_linked_list.py | 2 +-
4 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/data_structures/linked_list/circular_linked_list.py b/data_structures/linked_list/circular_linked_list.py
index 67a63cd55e19..9092fb29e3ff 100644
--- a/data_structures/linked_list/circular_linked_list.py
+++ b/data_structures/linked_list/circular_linked_list.py
@@ -24,7 +24,7 @@ def __iter__(self) -> Iterator[Any]:
break
def __len__(self) -> int:
- return len(tuple(iter(self)))
+ return sum(1 for _ in self)
def __repr__(self):
return "->".join(str(item) for item in iter(self))
diff --git a/data_structures/linked_list/doubly_linked_list.py b/data_structures/linked_list/doubly_linked_list.py
index 6c81493fff85..41d07d63e005 100644
--- a/data_structures/linked_list/doubly_linked_list.py
+++ b/data_structures/linked_list/doubly_linked_list.py
@@ -51,7 +51,7 @@ def __len__(self):
>>> len(linked_list) == 5
True
"""
- return len(tuple(iter(self)))
+ return sum(1 for _ in self)
def insert_at_head(self, data):
self.insert_at_nth(0, data)
diff --git a/data_structures/linked_list/merge_two_lists.py b/data_structures/linked_list/merge_two_lists.py
index 61e2412aa7fd..ca0d3bb48540 100644
--- a/data_structures/linked_list/merge_two_lists.py
+++ b/data_structures/linked_list/merge_two_lists.py
@@ -44,7 +44,7 @@ def __len__(self) -> int:
>>> len(SortedLinkedList(test_data_odd))
8
"""
- return len(tuple(iter(self)))
+ return sum(1 for _ in self)
def __str__(self) -> str:
"""
diff --git a/data_structures/linked_list/singly_linked_list.py b/data_structures/linked_list/singly_linked_list.py
index bdeb5922ac67..a8f9e8ebb977 100644
--- a/data_structures/linked_list/singly_linked_list.py
+++ b/data_structures/linked_list/singly_linked_list.py
@@ -72,7 +72,7 @@ def __len__(self) -> int:
>>> len(linked_list)
0
"""
- return len(tuple(iter(self)))
+ return sum(1 for _ in self)
def __repr__(self) -> str:
"""
From 9e0c357a57f76abc354d704012040f3f5511a941 Mon Sep 17 00:00:00 2001
From: Dhruv Manilawala
Date: Sat, 1 Apr 2023 11:59:26 +0530
Subject: [PATCH 030/808] chore: additional Project Euler solution hash (#8593)
---
scripts/project_euler_answers.json | 109 ++++++++++++++++++++++++++++-
1 file changed, 108 insertions(+), 1 deletion(-)
diff --git a/scripts/project_euler_answers.json b/scripts/project_euler_answers.json
index 6d354363ee5f..f2b876934766 100644
--- a/scripts/project_euler_answers.json
+++ b/scripts/project_euler_answers.json
@@ -723,5 +723,112 @@
"722": "9687101dfe209fd65f57a10603baa38ba83c9152e43a8b802b96f1e07f568e0e",
"723": "74832787e7d4e0cb7991256c8f6d02775dffec0684de234786f25f898003f2de",
"724": "fa05e2b497e7eafa64574017a4c45aadef6b163d907b03d63ba3f4021096d329",
- "725": "005c873563f51bbebfdb1f8dbc383259e9a98e506bc87ae8d8c9044b81fc6418"
+ "725": "005c873563f51bbebfdb1f8dbc383259e9a98e506bc87ae8d8c9044b81fc6418",
+ "726": "93e41c533136bf4b436e493090fd4e7b277234db2a69c62a871f775ff26681bf",
+ "727": "c366f7426ca9351dcdde2e3bea01181897cda4d9b44977678ea3828419b84851",
+ "728": "8de62a644511d27c7c23c7722f56112b3c1ab9b05a078a98a0891f09f92464c6",
+ "729": "0ae82177174eef99fc80a2ec921295f61a6ac4dfed86a1bf333a50c26d01955c",
+ "730": "78cd876a176c8fbf7c2155b80dccbdededdbc43c28ef17b5a6e554d649325d38",
+ "731": "54afb9f829be51d29f90eecbfe40e5ba91f3a3bf538de62f3e34674af15eb542",
+ "732": "c4dc4610dcafc806b30e5d3f5560b57f462218a04397809843a7110838f0ebac",
+ "733": "bdde7d98d057d6a6ae360fd2f872d8bccb7e7f2971df37a3c5f20712ea3c618f",
+ "734": "9a514875bd9af26fcc565337771f852d311cd77033186e4d957e7b6c7b8ce018",
+ "735": "8bbc5a27c0031d8c44f3f73c99622a202cd6ea9a080049d615a7ae80ce6024f9",
+ "736": "e0d4c78b9b3dae51940877aff28275d036eccfc641111c8e34227ff6015a0fab",
+ "737": "a600884bcaa01797310c83b198bad58c98530289305af29b0bf75f679af38d3a",
+ "738": "c85f15fdaafe7d5525acff960afef7e4b8ffded5a7ee0d1dc2b0e8d0c26b9b46",
+ "739": "8716e9302f0fb90153e2f522bd88a710361a897480e4ccc0542473c704793518",
+ "740": "6ff41ee34b263b742cda109aee3be9ad6c95eec2ce31d6a9fc5353bba1b41afd",
+ "741": "99ac0eb9589b895e5755895206bbad5febd6bc29b2912df1c7544c547e26bca3",
+ "742": "7d2761a240aa577348df4813ea248088d0d6d8d421142c712ed576cdc90d4df9",
+ "743": "d93c42a129c0961b4e36738efae3b7e8ffae3a4daeced20e85bb740d3d72522d",
+ "744": "211f76700a010461486dde6c723720be85e68c192cd8a8ed0a88860b8ae9b0f0",
+ "745": "2d32dc1fea2f1b8600c0ada927b057b566870ceb5362cce71ac3693dcb7136ae",
+ "746": "2df1c2a0181f0c25e8d13d2a1eadba55a6b06267a2b22075fcf6867fb2e10c02",
+ "747": "a8d8f93142e320c6f0dd386c7a3bfb011bbdc15b85291a9be8f0266b3608175e",
+ "748": "7de937e04c10386b240afb8bb2ff590009946df8b7850a0329ccdb59fca8955f",
+ "749": "1a55f5484ccf964aeb186faedefa01db05d87180891dc2280b6eb85b6efb4779",
+ "750": "fa4318c213179e6af1c949be7cf47210f4383e0a44d191e2bad44228d3192f14",
+ "751": "12fe650fcb3afc214b3d647c655070e8142cfd397441fc7636ad7e6ffcaefde2",
+ "752": "e416c0123bc6b82df8726b328494db31aa4781d938a0a6e2107b1e44c73c0434",
+ "753": "0ee3299bc89e1e4c2fc79285fb1cd84c887456358a825e56be92244b7115f5af",
+ "754": "1370574b16207c41d3dafb62aa898379ec101ac36843634b1633b7b509d4c35a",
+ "755": "78bb4b18b13f5254cfafe872c0e93791ab5206b2851960dc6aebea8f62b9580c",
+ "756": "6becaabbda2e9ea22373e62e989b6b70467efa24fbe2f0d124d7a99a53e93f74",
+ "757": "fbfee0a5c4fa57a1dd6cf0c9bb2423cf7e7bcb130e67114aa360e42234987314",
+ "758": "8e4dfc259cec9dfd89d4b4ac8c33c75af6e0f5f7926526ee22ad4d45f93d3c18",
+ "759": "40bac0ed2e4f7861a6d9a2d87191a9034e177c319aa40a43638cc1b69572e5f2",
+ "760": "7ab50386a211f0815593389ab05b57a1a5eb5cbf5b9a85fe4afc517dcab74e06",
+ "761": "1cdb0318ac16e11c8d2ae7b1d7ca7138f7b1a461e9d75bd69be0f9cdd3add0c5",
+ "762": "84c4662267d5809380a540dfc2881665b3019047d74d5ef0a01f86e45f4b5b59",
+ "763": "f0def5903139447fabe7d106db5fff660d94b45af7b8b48d789596cf65ab2514",
+ "764": "7b4131f4d1e13d091ca7dd4d32317a14a2a24e6e1abd214df1c14c215287b330",
+ "765": "7558b775727426bccd945f5aa6b3e131e6034a7b1ff8576332329ef65d6a1663",
+ "766": "23c309430fa9546adb617457dbfd30fb7432904595c8c000e9b67ea23f32a53b",
+ "767": "70aef22ac2db8a5bdfcc42ff8dafbd2901e85e268f5f3c45085aa40c590b1d42",
+ "768": "b69a808dfc654b037e2f47ace16f48fe3bb553b3c8eed3e2b6421942fbf521d0",
+ "769": "78537a30577e806c6d8d94725e54d2d52e56f7f39f89c133cd5d0a2aad7e46e4",
+ "770": "c9d80c19c4895d1498bf809fcc37c447fa961fb325e5667eb35d6aa992966b41",
+ "771": "9803ace30c0d90d422e703fdf25a10a9342d0178a277ebc20c7bd6feac4c7a15",
+ "772": "f5a1e391af815ea6453db58a1bd71790f433c44ed63e5e93d8f5c045dfd5a464",
+ "773": "e1b93fc323c4d9c383100603339548e1e56ce9c38bcdcc425024c12b862ea8cb",
+ "774": "3646cd098b213014fb7bbc9597871585e62ee0cf2770e141f1df771237cc09ab",
+ "775": "d9d7d515ce7350c9e5696d85f68bbb42daa74b9e171a601dd04c823b18bb7757",
+ "776": "83286074d3bc86a5b449facb5fe5eafc91eb4c8031e2fb5e716443402cd8ed0f",
+ "777": "e62616a387d05b619d47cee3d49d5d2db19393736bf54b6cdd20933c0531cb7e",
+ "778": "d4de958ba44d25353de5b380e04d06c7968794ad50dbf6231ad0049ff53e106b",
+ "779": "c08ce54a59afc4af62f28b80a9c9a5190822d124eed8d73fd6db3e19c81e2157",
+ "780": "fc7ba646c16482f0f4f5ce2b06d21183dba2bdeaf9469b36b55bc7bc2d87baf3",
+ "781": "8fa5733f06838fb61b55b3e9d59c5061d922147e59947fe52e566dd975b2199f",
+ "782": "9f757d92df401ee049bc066bb2625c6287e5e4bcd38c958396a77a578f036a24",
+ "783": "270ff37f60c267a673bd4b223e44941f01ae9cfbf6bbdf99ca57af89b1e9a66f",
+ "784": "388b17c4c7b829cef767f83b4686c903faeec1241edfe5f58ee91d2b0c7f8dfc",
+ "785": "77cf600204c5265e1d5d3d26bf28ba1e92e6f24def040c16977450bec8b1cb99",
+ "786": "fb14022b7edbc6c7bfde27f35b49f6acaa4f0fc383af27614cb9d4a1980e626b",
+ "787": "7516ba0ac1951665723dcc4adcc52764d9497e7b6ed30bdb9937ac9df82b7c4f",
+ "788": "adede1d30258bb0f353af11f559b67f8b823304c71e967f52db52d002760c24f",
+ "789": "0c82e744a1f9bc57fd8ae8b2f479998455bc45126de971c59b68541c254e303a",
+ "790": "319847122251afd20d4d650047c55981a509fa2be78abd7c9c3caa0555e60a05",
+ "791": "2e0bbdcd0a8460e1e33c55668d0dc9752379a78b9f3561d7a17b922a5541a3fb",
+ "792": "5f77834c5a509023dd95dd98411eae1dd4bafd125deca590632f409f92fd257b",
+ "793": "dbfd900a3b31eeec2f14b916f5151611541cb716d80b7b9a1229de12293a02ea",
+ "794": "d019fe415aba832c4c761140d60c466c9aaad52b504df3167c17f2d3f0b277a7",
+ "795": "617b259349da44c2af2664acde113673ab3bb03a85d31f1be8f01027d0ebd4d3",
+ "796": "cba6b30a818d073398e5802211987f0897523e4752987bb445b2bca079670e22",
+ "797": "61e42cac3d7858b8850111a8c64c56432a18dd058dfb6afd773f07d703703b1a",
+ "798": "ae8b155d6b77522af79f7e4017fefe92aaa5d45eff132c83dc4d4bcfc9686020",
+ "799": "a41cb14ddf8f1948a01f590fbe53d9ca4e2faf48375ce1c306f91acf7c94e005",
+ "800": "c6a47bc6f02cf06be16728fb308c83f2f2ae350325ef7016867f5bdaea849d71",
+ "801": "d14b358c76b55106613f9c0a2112393338dfd01513b0fd231b79fc8db20e41f0",
+ "802": "22ae33e67fb48accfaa3b36e70c5a19066b974194c3130680de0c7cdce2d0f2e",
+ "803": "d95b3f9bbb7054042c1fba4db02f7223a2dad94977a36f08c8aaf92f373f9e78",
+ "804": "b0b1cf7253593eb2334c75e66dbe22b4b4540347485f1ea24e80226b4b18171c",
+ "805": "41b1ff5db0e70984ad20c50d1a9ac2b5a53ccd5f42796c8e948ae8880005fbb9",
+ "806": "b9c813beb39671adb8e1530555cadca44c21ddc7127932274918df2091dbd9ca",
+ "807": "745fd9ba97970d85a29877942839e41fc192794420e86f3bde39fd26db7a8bff",
+ "808": "6c73b947eb603602a7e8afadc83eaaa381a46db8b82a6fb89c9c1d93cb023fce",
+ "809": "eebac7753da4c1230dfce0f15fc124ffff01b0e432f0b74623b60cff71bbc9a9",
+ "810": "42be7899672a1a0046823603ce60dbeda7250a56fcb8d0913093850c85394307",
+ "811": "8698cd28ae4d93db36631870c33e4a8a527d970050d994666115f54260b64138",
+ "812": "dc2495924f37353db8b846323b8085fae9db502e890c513ed2e64ed7281f567f",
+ "813": "92179dde05aa6557baca65699fda50ca024d33a77078d8e128caa3c5db84064b",
+ "814": "344ed8cb7684307c00b7f03d751729a7f9d2a5f4a4cb4574594113d69593c0c1",
+ "815": "f642cf15345af3feab60e26a02aee038f759914906a5b2b469b46fdeee50ff59",
+ "816": "058178444e85f2aedb2f75d824a469747381f0bd3235d8c72df4385fec86eb07",
+ "817": "582fdc2233298192b09ceaf1463d6be06a09894075532630aa9d9efcfcb31da4",
+ "818": "67f6964d6ff114a43371b8375c44db2f1362df4f110b4a7ce8d79cf1b76621a0",
+ "819": "c7a82513ad48dfc87f2c1e0f2915b71464b7f5a16501c71df4ae4a8741dceef3",
+ "820": "9b23ae0181f320aadda2637ac2179c8b41b00715630c3acb643c7aee3b81cf90",
+ "821": "0941e396ff15b98fd7827de8e33ef94996d48ba719a88ba8e2da7f2605df3e5c",
+ "822": "ed8ef7f568939b9df1b77ae58344940b91c7e154a4367fe2b179bc7b9484d4e6",
+ "823": "05139328571a86096032b57e3a6a02a61acad4fb0d8f8e1b5d0ffb0d063ba697",
+ "826": "7f40f14ca65e5c06dd9ec9bbb212adb4d97a503199cb3c30ed921a04373bbe1c",
+ "827": "80461f02c63654c642382a6ffb7a44d0a3554434dfcfcea00ba91537724c7106",
+ "828": "520c196175625a0230afb76579ea26033372de3ef4c78aceb146b84322bfa871",
+ "829": "ed0089e61cf5540dd4a8fef1c468b96cf57f1d2bb79968755ba856d547ddafdf",
+ "831": "8ec445084427419ca6da405e0ded9814a4b4e11a2be84d88a8dea421f8e49992",
+ "832": "cfcb9ebef9308823f64798b5e12a59bf77ff6f92b0eae3790a61c0a26f577010",
+ "833": "e6ff3a5b257eb53366a32bfc8ea410a00a78bafa63650c76ac2bceddfbb42ff5",
+ "834": "b0d2a7e7d629ef14db9e7352a9a06d6ca66f750429170bb169ca52c172b8cc96",
+ "835": "bdfa1b1eecbad79f5de48bc6daee4d2b07689d7fb172aa306dd6094172b396f0"
}
From d66e1e873288bf399559c9ca40310d4b031aec50 Mon Sep 17 00:00:00 2001
From: Maxim Smolskiy
Date: Sat, 1 Apr 2023 15:18:13 +0300
Subject: [PATCH 031/808] Add Project Euler problem 800 solution 1 (#8567)
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
DIRECTORY.md | 3 ++
project_euler/problem_800/__init__.py | 0
project_euler/problem_800/sol1.py | 65 +++++++++++++++++++++++++++
3 files changed, 68 insertions(+)
create mode 100644 project_euler/problem_800/__init__.py
create mode 100644 project_euler/problem_800/sol1.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 1a641d8ecb59..18c573909773 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -317,6 +317,7 @@
* [Longest Sub Array](dynamic_programming/longest_sub_array.py)
* [Matrix Chain Order](dynamic_programming/matrix_chain_order.py)
* [Max Non Adjacent Sum](dynamic_programming/max_non_adjacent_sum.py)
+ * [Max Product Subarray](dynamic_programming/max_product_subarray.py)
* [Max Sub Array](dynamic_programming/max_sub_array.py)
* [Max Sum Contiguous Subsequence](dynamic_programming/max_sum_contiguous_subsequence.py)
* [Min Distance Up Bottom](dynamic_programming/min_distance_up_bottom.py)
@@ -1016,6 +1017,8 @@
* [Sol1](project_euler/problem_587/sol1.py)
* Problem 686
* [Sol1](project_euler/problem_686/sol1.py)
+ * Problem 800
+ * [Sol1](project_euler/problem_800/sol1.py)
## Quantum
* [Bb84](quantum/bb84.py)
diff --git a/project_euler/problem_800/__init__.py b/project_euler/problem_800/__init__.py
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/project_euler/problem_800/sol1.py b/project_euler/problem_800/sol1.py
new file mode 100644
index 000000000000..f887787bcbc6
--- /dev/null
+++ b/project_euler/problem_800/sol1.py
@@ -0,0 +1,65 @@
+"""
+Project Euler Problem 800: https://projecteuler.net/problem=800
+
+An integer of the form p^q q^p with prime numbers p != q is called a hybrid-integer.
+For example, 800 = 2^5 5^2 is a hybrid-integer.
+
+We define C(n) to be the number of hybrid-integers less than or equal to n.
+You are given C(800) = 2 and C(800^800) = 10790.
+
+Find C(800800^800800).
+"""
+
+from math import isqrt, log2
+
+
+def calculate_prime_numbers(max_number: int) -> list[int]:
+ """
+ Returns prime numbers below max_number
+
+ >>> calculate_prime_numbers(10)
+ [2, 3, 5, 7]
+ """
+
+ is_prime = [True] * max_number
+ for i in range(2, isqrt(max_number - 1) + 1):
+ if is_prime[i]:
+ for j in range(i**2, max_number, i):
+ is_prime[j] = False
+
+ return [i for i in range(2, max_number) if is_prime[i]]
+
+
+def solution(base: int = 800800, degree: int = 800800) -> int:
+ """
+ Returns the number of hybrid-integers less than or equal to base^degree
+
+ >>> solution(800, 1)
+ 2
+
+ >>> solution(800, 800)
+ 10790
+ """
+
+ upper_bound = degree * log2(base)
+ max_prime = int(upper_bound)
+ prime_numbers = calculate_prime_numbers(max_prime)
+
+ hybrid_integers_count = 0
+ left = 0
+ right = len(prime_numbers) - 1
+ while left < right:
+ while (
+ prime_numbers[right] * log2(prime_numbers[left])
+ + prime_numbers[left] * log2(prime_numbers[right])
+ > upper_bound
+ ):
+ right -= 1
+ hybrid_integers_count += right - left
+ left += 1
+
+ return hybrid_integers_count
+
+
+if __name__ == "__main__":
+ print(f"{solution() = }")
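Taking base-2 logarithms keeps the arithmetic in ordinary floats instead of materializing the astronomically large p^q * q^p: the condition p^q * q^p <= base^degree is equivalent to q * log2(p) + p * log2(q) <= degree * log2(base), which is exactly what the inner while-loop tests. A small check of that equivalence (the helper name is hypothetical):

    from math import log2

    def is_hybrid_below(p: int, q: int, base: int, degree: int) -> bool:
        # Compare p^q * q^p against base^degree in log space.
        return q * log2(p) + p * log2(q) <= degree * log2(base)

    assert is_hybrid_below(2, 3, 800, 1)      # 2^3 * 3^2 = 72 <= 800
    assert not is_hybrid_below(3, 5, 800, 1)  # 3^5 * 5^3 = 30375 > 800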
From 3d2012c4ba3a9d9ddd80e518f0b5b9ba6c52df7d Mon Sep 17 00:00:00 2001
From: Maxim Smolskiy
Date: Sat, 1 Apr 2023 15:20:08 +0300
Subject: [PATCH 032/808] Add Project Euler problem 94 solution 1 (#8599)
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
DIRECTORY.md | 2 ++
project_euler/problem_094/__init__.py | 0
project_euler/problem_094/sol1.py | 44 +++++++++++++++++++++++++++
3 files changed, 46 insertions(+)
create mode 100644 project_euler/problem_094/__init__.py
create mode 100644 project_euler/problem_094/sol1.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 18c573909773..c781b17bf05f 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -937,6 +937,8 @@
* [Sol1](project_euler/problem_091/sol1.py)
* Problem 092
* [Sol1](project_euler/problem_092/sol1.py)
+ * Problem 094
+ * [Sol1](project_euler/problem_094/sol1.py)
* Problem 097
* [Sol1](project_euler/problem_097/sol1.py)
* Problem 099
diff --git a/project_euler/problem_094/__init__.py b/project_euler/problem_094/__init__.py
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/project_euler/problem_094/sol1.py b/project_euler/problem_094/sol1.py
new file mode 100644
index 000000000000..a41292fe26fd
--- /dev/null
+++ b/project_euler/problem_094/sol1.py
@@ -0,0 +1,44 @@
+"""
+Project Euler Problem 94: https://projecteuler.net/problem=94
+
+It is easily proved that no equilateral triangle exists with integral length sides and
+integral area. However, the almost equilateral triangle 5-5-6 has an area of 12 square
+units.
+
+We shall define an almost equilateral triangle to be a triangle for which two sides are
+equal and the third differs by no more than one unit.
+
+Find the sum of the perimeters of all almost equilateral triangles with integral side
+lengths and area and whose perimeters do not exceed one billion (1,000,000,000).
+"""
+
+
+def solution(max_perimeter: int = 10**9) -> int:
+ """
+ Returns the sum of the perimeters of all almost equilateral triangles with integral
+ side lengths and area and whose perimeters do not exceed max_perimeter
+
+ >>> solution(20)
+ 16
+ """
+
+ prev_value = 1
+ value = 2
+
+ perimeters_sum = 0
+ i = 0
+ perimeter = 0
+ while perimeter <= max_perimeter:
+ perimeters_sum += perimeter
+
+ prev_value += 2 * value
+ value += prev_value
+
+ perimeter = 2 * value + 2 if i % 2 == 0 else 2 * value - 2
+ i += 1
+
+ return perimeters_sum
+
+
+if __name__ == "__main__":
+ print(f"{solution() = }")
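The recurrence generates the qualifying perimeters directly, alternating between the base = side + 1 and base = side - 1 families of almost equilateral Heronian triangles (5-5-6, 17-17-16, 65-65-66, 241-241-240, ...), so no per-triangle area check is needed. A cross-check against the first four known perimeters (16, 50, 196, 722):

    # All further qualifying triangles have perimeters above 1000.
    assert solution(1000) == 16 + 50 + 196 + 722 == 984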
From 63710883c8634772fadf0145899cea4a1eadc31d Mon Sep 17 00:00:00 2001
From: amirsoroush <114881632+amirsoroush@users.noreply.github.com>
Date: Sat, 1 Apr 2023 15:23:21 +0300
Subject: [PATCH 033/808] Remove extra `len` calls in doubly-linked-list's
methods (#8600)
---
data_structures/linked_list/doubly_linked_list.py | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/data_structures/linked_list/doubly_linked_list.py b/data_structures/linked_list/doubly_linked_list.py
index 41d07d63e005..69763d12da15 100644
--- a/data_structures/linked_list/doubly_linked_list.py
+++ b/data_structures/linked_list/doubly_linked_list.py
@@ -81,7 +81,9 @@ def insert_at_nth(self, index: int, data):
....
IndexError: list index out of range
"""
- if not 0 <= index <= len(self):
+ length = len(self)
+
+ if not 0 <= index <= length:
raise IndexError("list index out of range")
new_node = Node(data)
if self.head is None:
@@ -90,7 +92,7 @@ def insert_at_nth(self, index: int, data):
self.head.previous = new_node
new_node.next = self.head
self.head = new_node
- elif index == len(self):
+ elif index == length:
self.tail.next = new_node
new_node.previous = self.tail
self.tail = new_node
@@ -131,15 +133,17 @@ def delete_at_nth(self, index: int):
....
IndexError: list index out of range
"""
- if not 0 <= index <= len(self) - 1:
+ length = len(self)
+
+ if not 0 <= index <= length - 1:
raise IndexError("list index out of range")
delete_node = self.head # default first node
- if len(self) == 1:
+ if length == 1:
self.head = self.tail = None
elif index == 0:
self.head = self.head.next
self.head.previous = None
- elif index == len(self) - 1:
+ elif index == length - 1:
delete_node = self.tail
self.tail = self.tail.previous
self.tail.next = None
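These hoists matter because `len(self)`, now implemented as `sum(1 for _ in self)`, still walks the entire list; only its extra space dropped to O(1). Caching the result in a local therefore saves repeated full re-traversals per call. The counting pattern in isolation (illustrative helper):

    def count_items(iterable) -> int:
        # One pass, constant extra memory: items are consumed one at a time
        # instead of first being materialized into a tuple.
        return sum(1 for _ in iterable)

    assert count_items(iter(range(1000))) == 1000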
From 59cae167e0e6b830b7ff5c89f5f2b8c747fb84c2 Mon Sep 17 00:00:00 2001
From: Maxim Smolskiy
Date: Sat, 1 Apr 2023 19:22:33 +0300
Subject: [PATCH 034/808] Reduce the complexity of
 digital_image_processing/edge_detection/canny.py (#8167)
* Reduce the complexity of digital_image_processing/edge_detection/canny.py
* Fix
* updating DIRECTORY.md
* updating DIRECTORY.md
* updating DIRECTORY.md
* Fix review issues
* Rename dst to destination
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
.../edge_detection/canny.py | 129 ++++++++++--------
1 file changed, 75 insertions(+), 54 deletions(-)
diff --git a/digital_image_processing/edge_detection/canny.py b/digital_image_processing/edge_detection/canny.py
index a830355267c4..f8cbeedb3874 100644
--- a/digital_image_processing/edge_detection/canny.py
+++ b/digital_image_processing/edge_detection/canny.py
@@ -18,105 +18,126 @@ def gen_gaussian_kernel(k_size, sigma):
return g
-def canny(image, threshold_low=15, threshold_high=30, weak=128, strong=255):
- image_row, image_col = image.shape[0], image.shape[1]
- # gaussian_filter
- gaussian_out = img_convolve(image, gen_gaussian_kernel(9, sigma=1.4))
- # get the gradient and degree by sobel_filter
- sobel_grad, sobel_theta = sobel_filter(gaussian_out)
- gradient_direction = np.rad2deg(sobel_theta)
- gradient_direction += PI
-
- dst = np.zeros((image_row, image_col))
-
+def suppress_non_maximum(image_shape, gradient_direction, sobel_grad):
"""
Non-maximum suppression. If the edge strength of the current pixel is the largest
compared to the other pixels in the mask with the same direction, the value will be
preserved. Otherwise, the value will be suppressed.
"""
- for row in range(1, image_row - 1):
- for col in range(1, image_col - 1):
+ destination = np.zeros(image_shape)
+
+ for row in range(1, image_shape[0] - 1):
+ for col in range(1, image_shape[1] - 1):
direction = gradient_direction[row, col]
if (
- 0 <= direction < 22.5
+ 0 <= direction < PI / 8
or 15 * PI / 8 <= direction <= 2 * PI
or 7 * PI / 8 <= direction <= 9 * PI / 8
):
w = sobel_grad[row, col - 1]
e = sobel_grad[row, col + 1]
if sobel_grad[row, col] >= w and sobel_grad[row, col] >= e:
- dst[row, col] = sobel_grad[row, col]
+ destination[row, col] = sobel_grad[row, col]
- elif (PI / 8 <= direction < 3 * PI / 8) or (
- 9 * PI / 8 <= direction < 11 * PI / 8
+ elif (
+ PI / 8 <= direction < 3 * PI / 8
+ or 9 * PI / 8 <= direction < 11 * PI / 8
):
sw = sobel_grad[row + 1, col - 1]
ne = sobel_grad[row - 1, col + 1]
if sobel_grad[row, col] >= sw and sobel_grad[row, col] >= ne:
- dst[row, col] = sobel_grad[row, col]
+ destination[row, col] = sobel_grad[row, col]
- elif (3 * PI / 8 <= direction < 5 * PI / 8) or (
- 11 * PI / 8 <= direction < 13 * PI / 8
+ elif (
+ 3 * PI / 8 <= direction < 5 * PI / 8
+ or 11 * PI / 8 <= direction < 13 * PI / 8
):
n = sobel_grad[row - 1, col]
s = sobel_grad[row + 1, col]
if sobel_grad[row, col] >= n and sobel_grad[row, col] >= s:
- dst[row, col] = sobel_grad[row, col]
+ destination[row, col] = sobel_grad[row, col]
- elif (5 * PI / 8 <= direction < 7 * PI / 8) or (
- 13 * PI / 8 <= direction < 15 * PI / 8
+ elif (
+ 5 * PI / 8 <= direction < 7 * PI / 8
+ or 13 * PI / 8 <= direction < 15 * PI / 8
):
nw = sobel_grad[row - 1, col - 1]
se = sobel_grad[row + 1, col + 1]
if sobel_grad[row, col] >= nw and sobel_grad[row, col] >= se:
- dst[row, col] = sobel_grad[row, col]
-
- """
- High-Low threshold detection. If an edge pixel’s gradient value is higher
- than the high threshold value, it is marked as a strong edge pixel. If an
- edge pixel’s gradient value is smaller than the high threshold value and
- larger than the low threshold value, it is marked as a weak edge pixel. If
- an edge pixel's value is smaller than the low threshold value, it will be
- suppressed.
- """
- if dst[row, col] >= threshold_high:
- dst[row, col] = strong
- elif dst[row, col] <= threshold_low:
- dst[row, col] = 0
+ destination[row, col] = sobel_grad[row, col]
+
+ return destination
+
+
+def detect_high_low_threshold(
+ image_shape, destination, threshold_low, threshold_high, weak, strong
+):
+ """
+ High-Low threshold detection. If an edge pixel’s gradient value is higher
+ than the high threshold value, it is marked as a strong edge pixel. If an
+ edge pixel’s gradient value is smaller than the high threshold value and
+ larger than the low threshold value, it is marked as a weak edge pixel. If
+ an edge pixel's value is smaller than the low threshold value, it will be
+ suppressed.
+ """
+ for row in range(1, image_shape[0] - 1):
+ for col in range(1, image_shape[1] - 1):
+ if destination[row, col] >= threshold_high:
+ destination[row, col] = strong
+ elif destination[row, col] <= threshold_low:
+ destination[row, col] = 0
else:
- dst[row, col] = weak
+ destination[row, col] = weak
+
+def track_edge(image_shape, destination, weak, strong):
"""
Edge tracking. Usually a weak edge pixel caused from true edges will be connected
to a strong edge pixel while noise responses are unconnected. As long as there is
one strong edge pixel that is involved in its 8-connected neighborhood, that weak
edge point can be identified as one that should be preserved.
"""
- for row in range(1, image_row):
- for col in range(1, image_col):
- if dst[row, col] == weak:
+ for row in range(1, image_shape[0]):
+ for col in range(1, image_shape[1]):
+ if destination[row, col] == weak:
if 255 in (
- dst[row, col + 1],
- dst[row, col - 1],
- dst[row - 1, col],
- dst[row + 1, col],
- dst[row - 1, col - 1],
- dst[row + 1, col - 1],
- dst[row - 1, col + 1],
- dst[row + 1, col + 1],
+ destination[row, col + 1],
+ destination[row, col - 1],
+ destination[row - 1, col],
+ destination[row + 1, col],
+ destination[row - 1, col - 1],
+ destination[row + 1, col - 1],
+ destination[row - 1, col + 1],
+ destination[row + 1, col + 1],
):
- dst[row, col] = strong
+ destination[row, col] = strong
else:
- dst[row, col] = 0
+ destination[row, col] = 0
+
+
+def canny(image, threshold_low=15, threshold_high=30, weak=128, strong=255):
+ # gaussian_filter
+ gaussian_out = img_convolve(image, gen_gaussian_kernel(9, sigma=1.4))
+ # get the gradient and degree by sobel_filter
+ sobel_grad, sobel_theta = sobel_filter(gaussian_out)
+ gradient_direction = PI + np.rad2deg(sobel_theta)
+
+ destination = suppress_non_maximum(image.shape, gradient_direction, sobel_grad)
+
+ detect_high_low_threshold(
+ image.shape, destination, threshold_low, threshold_high, weak, strong
+ )
+
+ track_edge(image.shape, destination, weak, strong)
- return dst
+ return destination
if __name__ == "__main__":
# read original image in gray mode
lena = cv2.imread(r"../image_data/lena.jpg", 0)
# canny edge detection
- canny_dst = canny(lena)
- cv2.imshow("canny", canny_dst)
+ canny_destination = canny(lena)
+ cv2.imshow("canny", canny_destination)
cv2.waitKey(0)
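After the refactor, canny() reads as a pipeline of named stages: Gaussian smoothing, Sobel gradients, non-maximum suppression, double thresholding, and edge tracking. A usage sketch equivalent to the __main__ block (assuming OpenCV is installed; the image path is illustrative):

    import cv2

    # Read in grayscale mode (flag 0), then run the full pipeline.
    image = cv2.imread("image_data/lena.jpg", 0)
    edges = canny(image, threshold_low=15, threshold_high=30)
    cv2.imwrite("canny_result.png", edges)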
From a213cea5f5a74e0a6b19240526779a3b0b1f270d Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Sat, 1 Apr 2023 12:39:22 -0400
Subject: [PATCH 035/808] Fix `mypy` errors in `dilation_operation.py` (#8595)
* updating DIRECTORY.md
* Fix mypy errors in dilation_operation.py
* Rename functions to use snake case
* updating DIRECTORY.md
* updating DIRECTORY.md
* Replace raw file string with pathlib Path
* Update digital_image_processing/morphological_operations/dilation_operation.py
Co-authored-by: Christian Clauss
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
.../dilation_operation.py | 35 ++++++++++---------
1 file changed, 18 insertions(+), 17 deletions(-)
diff --git a/digital_image_processing/morphological_operations/dilation_operation.py b/digital_image_processing/morphological_operations/dilation_operation.py
index c8380737d219..e49b955c1480 100644
--- a/digital_image_processing/morphological_operations/dilation_operation.py
+++ b/digital_image_processing/morphological_operations/dilation_operation.py
@@ -1,33 +1,35 @@
+from pathlib import Path
+
import numpy as np
from PIL import Image
-def rgb2gray(rgb: np.array) -> np.array:
+def rgb_to_gray(rgb: np.ndarray) -> np.ndarray:
"""
Return gray image from rgb image
- >>> rgb2gray(np.array([[[127, 255, 0]]]))
+ >>> rgb_to_gray(np.array([[[127, 255, 0]]]))
array([[187.6453]])
- >>> rgb2gray(np.array([[[0, 0, 0]]]))
+ >>> rgb_to_gray(np.array([[[0, 0, 0]]]))
array([[0.]])
- >>> rgb2gray(np.array([[[2, 4, 1]]]))
+ >>> rgb_to_gray(np.array([[[2, 4, 1]]]))
array([[3.0598]])
- >>> rgb2gray(np.array([[[26, 255, 14], [5, 147, 20], [1, 200, 0]]]))
+ >>> rgb_to_gray(np.array([[[26, 255, 14], [5, 147, 20], [1, 200, 0]]]))
array([[159.0524, 90.0635, 117.6989]])
"""
r, g, b = rgb[:, :, 0], rgb[:, :, 1], rgb[:, :, 2]
return 0.2989 * r + 0.5870 * g + 0.1140 * b
-def gray2binary(gray: np.array) -> np.array:
+def gray_to_binary(gray: np.ndarray) -> np.ndarray:
"""
Return binary image from gray image
- >>> gray2binary(np.array([[127, 255, 0]]))
+ >>> gray_to_binary(np.array([[127, 255, 0]]))
array([[False, True, False]])
- >>> gray2binary(np.array([[0]]))
+ >>> gray_to_binary(np.array([[0]]))
array([[False]])
- >>> gray2binary(np.array([[26.2409, 4.9315, 1.4729]]))
+ >>> gray_to_binary(np.array([[26.2409, 4.9315, 1.4729]]))
array([[False, False, False]])
- >>> gray2binary(np.array([[26, 255, 14], [5, 147, 20], [1, 200, 0]]))
+ >>> gray_to_binary(np.array([[26, 255, 14], [5, 147, 20], [1, 200, 0]]))
array([[False, True, False],
[False, True, False],
[False, True, False]])
@@ -35,7 +37,7 @@ def gray2binary(gray: np.array) -> np.array:
return (gray > 127) & (gray <= 255)
-def dilation(image: np.array, kernel: np.array) -> np.array:
+def dilation(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
"""
Return dilated image
>>> dilation(np.array([[True, False, True]]), np.array([[0, 1, 0]]))
@@ -61,14 +63,13 @@ def dilation(image: np.array, kernel: np.array) -> np.array:
return output
-# kernel to be applied
-structuring_element = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
-
-
if __name__ == "__main__":
# read original image
- image = np.array(Image.open(r"..\image_data\lena.jpg"))
- output = dilation(gray2binary(rgb2gray(image)), structuring_element)
+ lena_path = Path(__file__).resolve().parent / "image_data" / "lena.jpg"
+ lena = np.array(Image.open(lena_path))
+ # kernel to be applied
+ structuring_element = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
+ output = dilation(gray_to_binary(rgb_to_gray(lena)), structuring_element)
# Save the output image
pil_img = Image.fromarray(output).convert("RGB")
pil_img.save("result_dilation.png")
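Morphological dilation turns a pixel on whenever the structuring element, centered on it, overlaps at least one on-pixel. A self-contained sketch of that textbook definition (independent of this file's `dilation` implementation details):

    import numpy as np

    def dilate(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
        # Boolean dilation: OR together the kernel-shaped neighborhood of each pixel.
        pad_y, pad_x = kernel.shape[0] // 2, kernel.shape[1] // 2
        padded = np.pad(image, ((pad_y, pad_y), (pad_x, pad_x)))
        out = np.zeros_like(image, dtype=bool)
        for y in range(image.shape[0]):
            for x in range(image.shape[1]):
                window = padded[y : y + kernel.shape[0], x : x + kernel.shape[1]]
                out[y, x] = bool((window & (kernel > 0)).any())
        return out

    single = np.zeros((3, 3), dtype=bool)
    single[1, 1] = True
    cross = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
    assert dilate(single, cross).sum() == 5  # one pixel grows into a plus shape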
From 84b6852de80bb51c185c30942bff47f9c451c74d Mon Sep 17 00:00:00 2001
From: Blake Reimer
Date: Sat, 1 Apr 2023 10:43:07 -0600
Subject: [PATCH 036/808] Graham's Law (#8162)
* grahams law
* doctest and type hints
* doctest formatting
* peer review updates
---
physics/grahams_law.py | 208 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 208 insertions(+)
create mode 100644 physics/grahams_law.py
diff --git a/physics/grahams_law.py b/physics/grahams_law.py
new file mode 100644
index 000000000000..6e5d75127e83
--- /dev/null
+++ b/physics/grahams_law.py
@@ -0,0 +1,208 @@
+"""
+Title: Graham's Law of Effusion
+
+Description: Graham's law of effusion states that the rate of effusion of a gas is
+inversely proportional to the square root of the molar mass of its particles:
+
+r1/r2 = sqrt(m2/m1)
+
+r1 = Rate of effusion for the first gas.
+r2 = Rate of effusion for the second gas.
+m1 = Molar mass of the first gas.
+m2 = Molar mass of the second gas.
+
+(Description adapted from https://en.wikipedia.org/wiki/Graham%27s_law)
+"""
+
+from math import pow, sqrt
+
+
+def validate(*values: float) -> bool:
+ """
+ Input Parameters:
+ -----------------
+    effusion_rate_1: Effusion rate of first gas (m^2/s, mm^2/s, etc.)
+    effusion_rate_2: Effusion rate of second gas (m^2/s, mm^2/s, etc.)
+ molar_mass_1: Molar mass of the first gas (g/mol, kg/kmol, etc.)
+ molar_mass_2: Molar mass of the second gas (g/mol, kg/kmol, etc.)
+
+ Returns:
+ --------
+ >>> validate(2.016, 4.002)
+ True
+ >>> validate(-2.016, 4.002)
+ False
+ >>> validate()
+ False
+ """
+ result = len(values) > 0 and all(value > 0.0 for value in values)
+ return result
+
+
+def effusion_ratio(molar_mass_1: float, molar_mass_2: float) -> float | ValueError:
+ """
+ Input Parameters:
+ -----------------
+ molar_mass_1: Molar mass of the first gas (g/mol, kg/kmol, etc.)
+ molar_mass_2: Molar mass of the second gas (g/mol, kg/kmol, etc.)
+
+ Returns:
+ --------
+ >>> effusion_ratio(2.016, 4.002)
+ 1.408943
+ >>> effusion_ratio(-2.016, 4.002)
+    ValueError('Input Error: Molar mass values must be greater than 0.')
+ >>> effusion_ratio(2.016)
+ Traceback (most recent call last):
+ ...
+ TypeError: effusion_ratio() missing 1 required positional argument: 'molar_mass_2'
+ """
+ return (
+ round(sqrt(molar_mass_2 / molar_mass_1), 6)
+ if validate(molar_mass_1, molar_mass_2)
+        else ValueError("Input Error: Molar mass values must be greater than 0.")
+ )
+
+
+def first_effusion_rate(
+ effusion_rate: float, molar_mass_1: float, molar_mass_2: float
+) -> float | ValueError:
+ """
+ Input Parameters:
+ -----------------
+    effusion_rate: Effusion rate of second gas (m^2/s, mm^2/s, etc.)
+ molar_mass_1: Molar mass of the first gas (g/mol, kg/kmol, etc.)
+ molar_mass_2: Molar mass of the second gas (g/mol, kg/kmol, etc.)
+
+ Returns:
+ --------
+ >>> first_effusion_rate(1, 2.016, 4.002)
+ 1.408943
+ >>> first_effusion_rate(-1, 2.016, 4.002)
+    ValueError('Input Error: Molar mass and effusion rate values must be greater than 0.')
+ >>> first_effusion_rate(1)
+ Traceback (most recent call last):
+ ...
+ TypeError: first_effusion_rate() missing 2 required positional arguments: \
+'molar_mass_1' and 'molar_mass_2'
+ >>> first_effusion_rate(1, 2.016)
+ Traceback (most recent call last):
+ ...
+ TypeError: first_effusion_rate() missing 1 required positional argument: \
+'molar_mass_2'
+ """
+ return (
+ round(effusion_rate * sqrt(molar_mass_2 / molar_mass_1), 6)
+ if validate(effusion_rate, molar_mass_1, molar_mass_2)
+        else ValueError(
+            "Input Error: Molar mass and effusion rate values must be greater than 0."
+        )
+ )
+
+
+def second_effusion_rate(
+ effusion_rate: float, molar_mass_1: float, molar_mass_2: float
+) -> float | ValueError:
+ """
+ Input Parameters:
+ -----------------
+    effusion_rate: Effusion rate of first gas (m^2/s, mm^2/s, etc.)
+ molar_mass_1: Molar mass of the first gas (g/mol, kg/kmol, etc.)
+ molar_mass_2: Molar mass of the second gas (g/mol, kg/kmol, etc.)
+
+ Returns:
+ --------
+ >>> second_effusion_rate(1, 2.016, 4.002)
+ 0.709752
+ >>> second_effusion_rate(-1, 2.016, 4.002)
+    ValueError('Input Error: Molar mass and effusion rate values must be greater than 0.')
+ >>> second_effusion_rate(1)
+ Traceback (most recent call last):
+ ...
+ TypeError: second_effusion_rate() missing 2 required positional arguments: \
+'molar_mass_1' and 'molar_mass_2'
+ >>> second_effusion_rate(1, 2.016)
+ Traceback (most recent call last):
+ ...
+ TypeError: second_effusion_rate() missing 1 required positional argument: \
+'molar_mass_2'
+ """
+ return (
+ round(effusion_rate / sqrt(molar_mass_2 / molar_mass_1), 6)
+ if validate(effusion_rate, molar_mass_1, molar_mass_2)
+        else ValueError(
+            "Input Error: Molar mass and effusion rate values must be greater than 0."
+        )
+ )
+
+
+def first_molar_mass(
+ molar_mass: float, effusion_rate_1: float, effusion_rate_2: float
+) -> float | ValueError:
+ """
+ Input Parameters:
+ -----------------
+    molar_mass: Molar mass of the second gas (g/mol, kg/kmol, etc.)
+    effusion_rate_1: Effusion rate of first gas (m^2/s, mm^2/s, etc.)
+    effusion_rate_2: Effusion rate of second gas (m^2/s, mm^2/s, etc.)
+
+ Returns:
+ --------
+ >>> first_molar_mass(2, 1.408943, 0.709752)
+ 0.507524
+ >>> first_molar_mass(-1, 2.016, 4.002)
+    ValueError('Input Error: Molar mass and effusion rate values must be greater than 0.')
+ >>> first_molar_mass(1)
+ Traceback (most recent call last):
+ ...
+ TypeError: first_molar_mass() missing 2 required positional arguments: \
+'effusion_rate_1' and 'effusion_rate_2'
+ >>> first_molar_mass(1, 2.016)
+ Traceback (most recent call last):
+ ...
+ TypeError: first_molar_mass() missing 1 required positional argument: \
+'effusion_rate_2'
+ """
+ return (
+ round(molar_mass / pow(effusion_rate_1 / effusion_rate_2, 2), 6)
+ if validate(molar_mass, effusion_rate_1, effusion_rate_2)
+        else ValueError(
+            "Input Error: Molar mass and effusion rate values must be greater than 0."
+        )
+ )
+
+
+def second_molar_mass(
+ molar_mass: float, effusion_rate_1: float, effusion_rate_2: float
+) -> float | ValueError:
+ """
+ Input Parameters:
+ -----------------
+ molar_mass: Molar mass of the first gas (g/mol, kg/kmol, etc.)
+    effusion_rate_1: Effusion rate of first gas (m^2/s, mm^2/s, etc.)
+    effusion_rate_2: Effusion rate of second gas (m^2/s, mm^2/s, etc.)
+
+ Returns:
+ --------
+ >>> second_molar_mass(2, 1.408943, 0.709752)
+    7.881404
+ >>> second_molar_mass(-2, 1.408943, 0.709752)
+    ValueError('Input Error: Molar mass and effusion rate values must be greater than 0.')
+ >>> second_molar_mass(1)
+ Traceback (most recent call last):
+ ...
+ TypeError: second_molar_mass() missing 2 required positional arguments: \
+'effusion_rate_1' and 'effusion_rate_2'
+ >>> second_molar_mass(1, 2.016)
+ Traceback (most recent call last):
+ ...
+ TypeError: second_molar_mass() missing 1 required positional argument: \
+'effusion_rate_2'
+ """
+ return (
+        round(pow(effusion_rate_1 / effusion_rate_2, 2) * molar_mass, 6)
+ if validate(molar_mass, effusion_rate_1, effusion_rate_2)
+        else ValueError(
+            "Input Error: Molar mass and effusion rate values must be greater than 0."
+        )
+ )
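A quick usage check tying the functions together, using hydrogen (2.016 g/mol) and helium (4.002 g/mol) as in the doctests above:

    # Hydrogen effuses about 1.41 times faster than helium.
    assert effusion_ratio(2.016, 4.002) == 1.408943
    # Given one gas's rate of 1, recover the other's rate.
    assert first_effusion_rate(1, 2.016, 4.002) == 1.408943
    assert second_effusion_rate(1, 2.016, 4.002) == 0.709752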
From 56a40eb3ee9aa151defd97597f4e67acf294089f Mon Sep 17 00:00:00 2001
From: Maxim Smolskiy
Date: Sat, 1 Apr 2023 20:43:11 +0300
Subject: [PATCH 037/808] Reenable files when TensorFlow supports the current
Python (#8602)
* Remove python_version < "3.11" for tensorflow
* Reenable neural_network/input_data.py_tf
* updating DIRECTORY.md
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Try to fix ruff
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Try to fix ruff
* Try to fix ruff
* Try to fix ruff
* Try to fix pre-commit
* Try to fix
* Fix
* Fix
* Reenable dynamic_programming/k_means_clustering_tensorflow.py_tf
* updating DIRECTORY.md
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Try to fix ruff
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
DIRECTORY.md | 2 +
...py_tf => k_means_clustering_tensorflow.py} | 9 +-
.../{input_data.py_tf => input_data.py} | 98 +++++++++----------
requirements.txt | 2 +-
4 files changed, 55 insertions(+), 56 deletions(-)
rename dynamic_programming/{k_means_clustering_tensorflow.py_tf => k_means_clustering_tensorflow.py} (98%)
rename neural_network/{input_data.py_tf => input_data.py} (83%)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index c781b17bf05f..34967082b359 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -309,6 +309,7 @@
* [Floyd Warshall](dynamic_programming/floyd_warshall.py)
* [Integer Partition](dynamic_programming/integer_partition.py)
* [Iterating Through Submasks](dynamic_programming/iterating_through_submasks.py)
+ * [K Means Clustering Tensorflow](dynamic_programming/k_means_clustering_tensorflow.py)
* [Knapsack](dynamic_programming/knapsack.py)
* [Longest Common Subsequence](dynamic_programming/longest_common_subsequence.py)
* [Longest Common Substring](dynamic_programming/longest_common_substring.py)
@@ -685,6 +686,7 @@
* [2 Hidden Layers Neural Network](neural_network/2_hidden_layers_neural_network.py)
* [Back Propagation Neural Network](neural_network/back_propagation_neural_network.py)
* [Convolution Neural Network](neural_network/convolution_neural_network.py)
+ * [Input Data](neural_network/input_data.py)
* [Perceptron](neural_network/perceptron.py)
* [Simple Neural Network](neural_network/simple_neural_network.py)
diff --git a/dynamic_programming/k_means_clustering_tensorflow.py_tf b/dynamic_programming/k_means_clustering_tensorflow.py
similarity index 98%
rename from dynamic_programming/k_means_clustering_tensorflow.py_tf
rename to dynamic_programming/k_means_clustering_tensorflow.py
index 4fbcedeaa0dc..8d3f6f0dfbcb 100644
--- a/dynamic_programming/k_means_clustering_tensorflow.py_tf
+++ b/dynamic_programming/k_means_clustering_tensorflow.py
@@ -1,9 +1,10 @@
-import tensorflow as tf
from random import shuffle
+
+import tensorflow as tf
from numpy import array
-def TFKMeansCluster(vectors, noofclusters):
+def tf_k_means_cluster(vectors, noofclusters):
"""
K-Means Clustering using TensorFlow.
'vectors' should be a n*k 2-D NumPy array, where n is the number
@@ -30,7 +31,6 @@ def TFKMeansCluster(vectors, noofclusters):
graph = tf.Graph()
with graph.as_default():
-
# SESSION OF COMPUTATION
sess = tf.Session()
@@ -95,8 +95,7 @@ def TFKMeansCluster(vectors, noofclusters):
# iterations. To keep things simple, we will only do a set number of
# iterations, instead of using a Stopping Criterion.
noofiterations = 100
- for iteration_n in range(noofiterations):
-
+ for _ in range(noofiterations):
##EXPECTATION STEP
##Based on the centroid locations till last iteration, compute
##the _expected_ centroid assignments.
diff --git a/neural_network/input_data.py_tf b/neural_network/input_data.py
similarity index 83%
rename from neural_network/input_data.py_tf
rename to neural_network/input_data.py
index 0e22ac0bcda5..2a32f0b82c37 100644
--- a/neural_network/input_data.py_tf
+++ b/neural_network/input_data.py
@@ -21,13 +21,10 @@
import collections
import gzip
import os
+import urllib.request
import numpy
-from six.moves import urllib
-from six.moves import xrange # pylint: disable=redefined-builtin
-
-from tensorflow.python.framework import dtypes
-from tensorflow.python.framework import random_seed
+from tensorflow.python.framework import dtypes, random_seed
from tensorflow.python.platform import gfile
from tensorflow.python.util.deprecation import deprecated
@@ -46,16 +43,16 @@ def _read32(bytestream):
def _extract_images(f):
"""Extract the images into a 4D uint8 numpy array [index, y, x, depth].
- Args:
- f: A file object that can be passed into a gzip reader.
+ Args:
+ f: A file object that can be passed into a gzip reader.
- Returns:
- data: A 4D uint8 numpy array [index, y, x, depth].
+ Returns:
+ data: A 4D uint8 numpy array [index, y, x, depth].
- Raises:
- ValueError: If the bytestream does not start with 2051.
+ Raises:
+ ValueError: If the bytestream does not start with 2051.
- """
+ """
print("Extracting", f.name)
with gzip.GzipFile(fileobj=f) as bytestream:
magic = _read32(bytestream)
@@ -86,17 +83,17 @@ def _dense_to_one_hot(labels_dense, num_classes):
def _extract_labels(f, one_hot=False, num_classes=10):
"""Extract the labels into a 1D uint8 numpy array [index].
- Args:
- f: A file object that can be passed into a gzip reader.
- one_hot: Does one hot encoding for the result.
- num_classes: Number of classes for the one hot encoding.
+ Args:
+ f: A file object that can be passed into a gzip reader.
+ one_hot: Does one hot encoding for the result.
+ num_classes: Number of classes for the one hot encoding.
- Returns:
- labels: a 1D uint8 numpy array.
+ Returns:
+ labels: a 1D uint8 numpy array.
- Raises:
- ValueError: If the bystream doesn't start with 2049.
- """
+ Raises:
+      ValueError: If the bytestream doesn't start with 2049.
+ """
print("Extracting", f.name)
with gzip.GzipFile(fileobj=f) as bytestream:
magic = _read32(bytestream)
@@ -115,8 +112,8 @@ def _extract_labels(f, one_hot=False, num_classes=10):
class _DataSet:
"""Container class for a _DataSet (deprecated).
- THIS CLASS IS DEPRECATED.
- """
+ THIS CLASS IS DEPRECATED.
+ """
@deprecated(
None,
@@ -135,21 +132,21 @@ def __init__(
):
"""Construct a _DataSet.
- one_hot arg is used only if fake_data is true. `dtype` can be either
- `uint8` to leave the input as `[0, 255]`, or `float32` to rescale into
- `[0, 1]`. Seed arg provides for convenient deterministic testing.
-
- Args:
- images: The images
- labels: The labels
- fake_data: Ignore inages and labels, use fake data.
- one_hot: Bool, return the labels as one hot vectors (if True) or ints (if
- False).
- dtype: Output image dtype. One of [uint8, float32]. `uint8` output has
- range [0,255]. float32 output has range [0,1].
- reshape: Bool. If True returned images are returned flattened to vectors.
- seed: The random seed to use.
- """
+ one_hot arg is used only if fake_data is true. `dtype` can be either
+ `uint8` to leave the input as `[0, 255]`, or `float32` to rescale into
+ `[0, 1]`. Seed arg provides for convenient deterministic testing.
+
+ Args:
+ images: The images
+ labels: The labels
+      fake_data: Ignore images and labels, use fake data.
+ one_hot: Bool, return the labels as one hot vectors (if True) or ints (if
+ False).
+ dtype: Output image dtype. One of [uint8, float32]. `uint8` output has
+ range [0,255]. float32 output has range [0,1].
+ reshape: Bool. If True returned images are returned flattened to vectors.
+ seed: The random seed to use.
+ """
seed1, seed2 = random_seed.get_seed(seed)
# If op level seed is not set, use whatever graph level seed is returned
numpy.random.seed(seed1 if seed is None else seed2)
@@ -206,8 +203,8 @@ def next_batch(self, batch_size, fake_data=False, shuffle=True):
else:
fake_label = 0
return (
- [fake_image for _ in xrange(batch_size)],
- [fake_label for _ in xrange(batch_size)],
+ [fake_image for _ in range(batch_size)],
+ [fake_label for _ in range(batch_size)],
)
start = self._index_in_epoch
# Shuffle for the first epoch
@@ -250,19 +247,19 @@ def next_batch(self, batch_size, fake_data=False, shuffle=True):
def _maybe_download(filename, work_directory, source_url):
"""Download the data from source url, unless it's already here.
- Args:
- filename: string, name of the file in the directory.
- work_directory: string, path to working directory.
- source_url: url to download from if file doesn't exist.
+ Args:
+ filename: string, name of the file in the directory.
+ work_directory: string, path to working directory.
+ source_url: url to download from if file doesn't exist.
- Returns:
- Path to resulting file.
- """
+ Returns:
+ Path to resulting file.
+ """
if not gfile.Exists(work_directory):
gfile.MakeDirs(work_directory)
filepath = os.path.join(work_directory, filename)
if not gfile.Exists(filepath):
- urllib.request.urlretrieve(source_url, filepath)
+ urllib.request.urlretrieve(source_url, filepath) # noqa: S310
with gfile.GFile(filepath) as f:
size = f.size()
print("Successfully downloaded", filename, size, "bytes.")
@@ -328,7 +325,8 @@ def fake():
if not 0 <= validation_size <= len(train_images):
raise ValueError(
- f"Validation size should be between 0 and {len(train_images)}. Received: {validation_size}."
+ f"Validation size should be between 0 and {len(train_images)}. "
+ f"Received: {validation_size}."
)
validation_images = train_images[:validation_size]
@@ -336,7 +334,7 @@ def fake():
train_images = train_images[validation_size:]
train_labels = train_labels[validation_size:]
- options = dict(dtype=dtype, reshape=reshape, seed=seed)
+ options = {"dtype": dtype, "reshape": reshape, "seed": seed}
train = _DataSet(train_images, train_labels, **options)
validation = _DataSet(validation_images, validation_labels, **options)
diff --git a/requirements.txt b/requirements.txt
index a1d607df07e1..acfbc823e77f 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -15,7 +15,7 @@ scikit-fuzzy
scikit-learn
statsmodels
sympy
-tensorflow; python_version < "3.11"
+tensorflow
texttable
tweepy
xgboost
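A note on the `# noqa: S310` added above: ruff's S310 check flags `urllib.request.urlretrieve` because it accepts any URL scheme, including `file://`; the noqa opts out instead of validating. A hedged sketch of the validating alternative (`safe_urlretrieve` is an illustrative name, not part of the patch):

    from urllib.parse import urlparse
    from urllib.request import urlretrieve


    def safe_urlretrieve(source_url: str, filepath: str) -> None:
        # Allow only http(s); S310's concern is file:// and custom schemes.
        if urlparse(source_url).scheme not in ("http", "https"):
            raise ValueError(f"Refusing to fetch {source_url!r}")
        urlretrieve(source_url, filepath)  # noqa: S310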
From 33114f0272bcc1fafa6ce0f40d92ded908747ce3 Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Sat, 1 Apr 2023 16:05:01 -0400
Subject: [PATCH 038/808] Revamp `md5.py` (#8065)
* Add type hints to md5.py
* Rename some vars to snake case
* Specify functions imported from math
* Rename vars and functions to be more descriptive
* Make tests from test function into doctests
* Clarify more var names
* Refactor some MD5 code into preprocess function
* Simplify loop indices in get_block_words
* Add more detailed comments, docs, and doctests
* updating DIRECTORY.md
* updating DIRECTORY.md
* updating DIRECTORY.md
* updating DIRECTORY.md
* updating DIRECTORY.md
* Add type hints to md5.py
* Rename some vars to snake case
* Specify functions imported from math
* Rename vars and functions to be more descriptive
* Make tests from test function into doctests
* Clarify more var names
* Refactor some MD5 code into preprocess function
* Simplify loop indices in get_block_words
* Add more detailed comments, docs, and doctests
* updating DIRECTORY.md
* updating DIRECTORY.md
* updating DIRECTORY.md
* updating DIRECTORY.md
* Convert str types to bytes
* Add tests comparing md5_me to hashlib's md5
* Replace line-break backslashes with parentheses
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 1 +
hashes/md5.py | 372 +++++++++++++++++++++++++++++++++++++++-----------
2 files changed, 290 insertions(+), 83 deletions(-)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 34967082b359..b1adc23f6e61 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -717,6 +717,7 @@
* [Archimedes Principle](physics/archimedes_principle.py)
* [Casimir Effect](physics/casimir_effect.py)
* [Centripetal Force](physics/centripetal_force.py)
+ * [Grahams Law](physics/grahams_law.py)
* [Horizontal Projectile Motion](physics/horizontal_projectile_motion.py)
* [Hubble Parameter](physics/hubble_parameter.py)
* [Ideal Gas Law](physics/ideal_gas_law.py)
diff --git a/hashes/md5.py b/hashes/md5.py
index 2020bf2e53bf..2187006ec8a9 100644
--- a/hashes/md5.py
+++ b/hashes/md5.py
@@ -1,91 +1,223 @@
-import math
+"""
+The MD5 algorithm is a hash function that's commonly used as a checksum to
+detect data corruption. The algorithm works by processing a given message in
+blocks of 512 bits, padding the message as needed. It uses the blocks to update
+a 128-bit state, performing 64 operations on it per block. Note that all values
+are little-endian, so inputs are converted as needed.
+Although MD5 was used as a cryptographic hash function in the past, it's since
+been cracked, so it shouldn't be used for security purposes.
-def rearrange(bit_string_32):
- """[summary]
- Regroups the given binary string.
+For more info, see https://en.wikipedia.org/wiki/MD5
+"""
+
+from collections.abc import Generator
+from math import sin
+
+
+def to_little_endian(string_32: bytes) -> bytes:
+ """
+ Converts the given string to little-endian in groups of 8 chars.
Arguments:
- bitString32 {[string]} -- [32 bit binary]
+ string_32 {[string]} -- [32-char string]
Raises:
- ValueError -- [if the given string not are 32 bit binary string]
+ ValueError -- [input is not 32 char]
Returns:
- [string] -- [32 bit binary string]
- >>> rearrange('1234567890abcdfghijklmnopqrstuvw')
- 'pqrstuvwhijklmno90abcdfg12345678'
+ 32-char little-endian string
+ >>> to_little_endian(b'1234567890abcdfghijklmnopqrstuvw')
+ b'pqrstuvwhijklmno90abcdfg12345678'
+ >>> to_little_endian(b'1234567890')
+ Traceback (most recent call last):
+ ...
+ ValueError: Input must be of length 32
"""
+ if len(string_32) != 32:
+ raise ValueError("Input must be of length 32")
- if len(bit_string_32) != 32:
- raise ValueError("Need length 32")
- new_string = ""
+ little_endian = b""
for i in [3, 2, 1, 0]:
- new_string += bit_string_32[8 * i : 8 * i + 8]
- return new_string
+ little_endian += string_32[8 * i : 8 * i + 8]
+ return little_endian
+
+
+def reformat_hex(i: int) -> bytes:
+ """
+ Converts the given non-negative integer to hex string.
+ Example: Suppose the input is the following:
+ i = 1234
-def reformat_hex(i):
- """[summary]
- Converts the given integer into 8-digit hex number.
+ The input is 0x000004d2 in hex, so the little-endian hex string is
+ "d2040000".
Arguments:
- i {[int]} -- [integer]
+ i {[int]} -- [integer]
+
+ Raises:
+ ValueError -- [input is negative]
+
+ Returns:
+ 8-char little-endian hex string
+
+ >>> reformat_hex(1234)
+ b'd2040000'
>>> reformat_hex(666)
- '9a020000'
+ b'9a020000'
+ >>> reformat_hex(0)
+ b'00000000'
+ >>> reformat_hex(1234567890)
+ b'd2029649'
+ >>> reformat_hex(1234567890987654321)
+ b'b11c6cb1'
+ >>> reformat_hex(-1)
+ Traceback (most recent call last):
+ ...
+ ValueError: Input must be non-negative
"""
+ if i < 0:
+ raise ValueError("Input must be non-negative")
- hexrep = format(i, "08x")
- thing = ""
+ hex_rep = format(i, "08x")[-8:]
+ little_endian_hex = b""
for i in [3, 2, 1, 0]:
- thing += hexrep[2 * i : 2 * i + 2]
- return thing
+ little_endian_hex += hex_rep[2 * i : 2 * i + 2].encode("utf-8")
+ return little_endian_hex
-def pad(bit_string):
- """[summary]
- Fills up the binary string to a 512 bit binary string
+def preprocess(message: bytes) -> bytes:
+ """
+ Preprocesses the message string:
+ - Convert message to bit string
+ - Pad bit string to a multiple of 512 chars:
+ - Append a 1
+ - Append 0's until length = 448 (mod 512)
+ - Append length of original message (64 chars)
+
+ Example: Suppose the input is the following:
+ message = "a"
+
+ The message bit string is "01100001", which is 8 bits long. Thus, the
+ bit string needs 439 bits of padding so that
+    len(bit_string + "1" + padding) = 448 (mod 512).
+ The message length is "000010000...0" in 64-bit little-endian binary.
+ The combined bit string is then 512 bits long.
Arguments:
- bitString {[string]} -- [binary string]
+ message {[string]} -- [message string]
Returns:
- [string] -- [binary string]
+ processed bit string padded to a multiple of 512 chars
+
+ >>> preprocess(b"a") == (b"01100001" + b"1" +
+ ... (b"0" * 439) + b"00001000" + (b"0" * 56))
+ True
+ >>> preprocess(b"") == b"1" + (b"0" * 447) + (b"0" * 64)
+ True
"""
- start_length = len(bit_string)
- bit_string += "1"
+ bit_string = b""
+ for char in message:
+ bit_string += format(char, "08b").encode("utf-8")
+ start_len = format(len(bit_string), "064b").encode("utf-8")
+
+ # Pad bit_string to a multiple of 512 chars
+ bit_string += b"1"
while len(bit_string) % 512 != 448:
- bit_string += "0"
- last_part = format(start_length, "064b")
- bit_string += rearrange(last_part[32:]) + rearrange(last_part[:32])
+ bit_string += b"0"
+ bit_string += to_little_endian(start_len[32:]) + to_little_endian(start_len[:32])
+
return bit_string
-def get_block(bit_string):
- """[summary]
- Iterator:
- Returns by each call a list of length 16 with the 32 bit
- integer blocks.
+def get_block_words(bit_string: bytes) -> Generator[list[int], None, None]:
+ """
+ Splits bit string into blocks of 512 chars and yields each block as a list
+ of 32-bit words
+
+ Example: Suppose the input is the following:
+ bit_string =
+ "000000000...0" + # 0x00 (32 bits, padded to the right)
+ "000000010...0" + # 0x01 (32 bits, padded to the right)
+ "000000100...0" + # 0x02 (32 bits, padded to the right)
+ "000000110...0" + # 0x03 (32 bits, padded to the right)
+ ...
+ "000011110...0" # 0x0a (32 bits, padded to the right)
+
+ Then len(bit_string) == 512, so there'll be 1 block. The block is split
+ into 32-bit words, and each word is converted to little endian. The
+ first word is interpreted as 0 in decimal, the second word is
+ interpreted as 1 in decimal, etc.
+
+ Thus, block_words == [[0, 1, 2, 3, ..., 15]].
Arguments:
- bit_string {[string]} -- [binary string >= 512]
+ bit_string {[string]} -- [bit string with multiple of 512 as length]
+
+ Raises:
+ ValueError -- [length of bit string isn't multiple of 512]
+
+ Yields:
+ a list of 16 32-bit words
+
+ >>> test_string = ("".join(format(n << 24, "032b") for n in range(16))
+ ... .encode("utf-8"))
+ >>> list(get_block_words(test_string))
+ [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]]
+ >>> list(get_block_words(test_string * 4)) == [list(range(16))] * 4
+ True
+ >>> list(get_block_words(b"1" * 512)) == [[4294967295] * 16]
+ True
+ >>> list(get_block_words(b""))
+ []
+ >>> list(get_block_words(b"1111"))
+ Traceback (most recent call last):
+ ...
+ ValueError: Input must have length that's a multiple of 512
"""
+ if len(bit_string) % 512 != 0:
+ raise ValueError("Input must have length that's a multiple of 512")
- curr_pos = 0
- while curr_pos < len(bit_string):
- curr_part = bit_string[curr_pos : curr_pos + 512]
- my_splits = []
- for i in range(16):
- my_splits.append(int(rearrange(curr_part[32 * i : 32 * i + 32]), 2))
- yield my_splits
- curr_pos += 512
+ for pos in range(0, len(bit_string), 512):
+ block = bit_string[pos : pos + 512]
+ block_words = []
+ for i in range(0, 512, 32):
+ block_words.append(int(to_little_endian(block[i : i + 32]), 2))
+ yield block_words
-def not32(i):
+def not_32(i: int) -> int:
"""
- >>> not32(34)
+ Perform bitwise NOT on given int.
+
+ Arguments:
+ i {[int]} -- [given int]
+
+ Raises:
+ ValueError -- [input is negative]
+
+ Returns:
+ Result of bitwise NOT on i
+
+ >>> not_32(34)
4294967261
+ >>> not_32(1234)
+ 4294966061
+ >>> not_32(4294966061)
+ 1234
+ >>> not_32(0)
+ 4294967295
+ >>> not_32(1)
+ 4294967294
+ >>> not_32(-1)
+ Traceback (most recent call last):
+ ...
+ ValueError: Input must be non-negative
"""
+ if i < 0:
+ raise ValueError("Input must be non-negative")
+
i_str = format(i, "032b")
new_str = ""
for c in i_str:
@@ -93,35 +225,114 @@ def not32(i):
return int(new_str, 2)
-def sum32(a, b):
+def sum_32(a: int, b: int) -> int:
+ """
+ Add two numbers as 32-bit ints.
+
+ Arguments:
+ a {[int]} -- [first given int]
+ b {[int]} -- [second given int]
+
+ Returns:
+ (a + b) as an unsigned 32-bit int
+
+ >>> sum_32(1, 1)
+ 2
+ >>> sum_32(2, 3)
+ 5
+ >>> sum_32(0, 0)
+ 0
+ >>> sum_32(-1, -1)
+ 4294967294
+ >>> sum_32(4294967295, 1)
+ 0
+ """
return (a + b) % 2**32
-def leftrot32(i, s):
- return (i << s) ^ (i >> (32 - s))
+def left_rotate_32(i: int, shift: int) -> int:
+ """
+ Rotate the bits of a given int left by a given amount.
+
+ Arguments:
+ i {[int]} -- [given int]
+ shift {[int]} -- [shift amount]
+
+ Raises:
+ ValueError -- [either given int or shift is negative]
+ Returns:
+ `i` rotated to the left by `shift` bits
+
+ >>> left_rotate_32(1234, 1)
+ 2468
+ >>> left_rotate_32(1111, 4)
+ 17776
+ >>> left_rotate_32(2147483648, 1)
+ 1
+ >>> left_rotate_32(2147483648, 3)
+ 4
+ >>> left_rotate_32(4294967295, 4)
+ 4294967295
+ >>> left_rotate_32(1234, 0)
+ 1234
+ >>> left_rotate_32(0, 0)
+ 0
+ >>> left_rotate_32(-1, 0)
+ Traceback (most recent call last):
+ ...
+ ValueError: Input must be non-negative
+ >>> left_rotate_32(0, -1)
+ Traceback (most recent call last):
+ ...
+ ValueError: Shift must be non-negative
+ """
+ if i < 0:
+ raise ValueError("Input must be non-negative")
+ if shift < 0:
+ raise ValueError("Shift must be non-negative")
+ return ((i << shift) ^ (i >> (32 - shift))) % 2**32
+
+
+def md5_me(message: bytes) -> bytes:
+ """
+ Returns the 32-char MD5 hash of a given message.
-def md5me(test_string):
- """[summary]
- Returns a 32-bit hash code of the string 'testString'
+ Reference: https://en.wikipedia.org/wiki/MD5#Algorithm
Arguments:
- testString {[string]} -- [message]
+ message {[string]} -- [message]
+
+ Returns:
+ 32-char MD5 hash string
+
+ >>> md5_me(b"")
+ b'd41d8cd98f00b204e9800998ecf8427e'
+ >>> md5_me(b"The quick brown fox jumps over the lazy dog")
+ b'9e107d9d372bb6826bd81d3542a419d6'
+ >>> md5_me(b"The quick brown fox jumps over the lazy dog.")
+ b'e4d909c290d0fb1ca068ffaddf22cbd0'
+
+ >>> import hashlib
+ >>> from string import ascii_letters
+ >>> msgs = [b"", ascii_letters.encode("utf-8"), "Üñîçø∂é".encode("utf-8"),
+ ... b"The quick brown fox jumps over the lazy dog."]
+ >>> all(md5_me(msg) == hashlib.md5(msg).hexdigest().encode("utf-8") for msg in msgs)
+ True
"""
- bs = ""
- for i in test_string:
- bs += format(ord(i), "08b")
- bs = pad(bs)
+ # Convert to bit string, add padding and append message length
+ bit_string = preprocess(message)
- tvals = [int(2**32 * abs(math.sin(i + 1))) for i in range(64)]
+ added_consts = [int(2**32 * abs(sin(i + 1))) for i in range(64)]
+ # Starting states
a0 = 0x67452301
b0 = 0xEFCDAB89
c0 = 0x98BADCFE
d0 = 0x10325476
- s = [
+ shift_amounts = [
7,
12,
17,
@@ -188,51 +399,46 @@ def md5me(test_string):
21,
]
- for m in get_block(bs):
+ # Process bit string in chunks, each with 16 32-char words
+ for block_words in get_block_words(bit_string):
a = a0
b = b0
c = c0
d = d0
+
+ # Hash current chunk
for i in range(64):
if i <= 15:
- # f = (B & C) | (not32(B) & D)
+ # f = (b & c) | (not_32(b) & d) # Alternate definition for f
f = d ^ (b & (c ^ d))
g = i
elif i <= 31:
- # f = (D & B) | (not32(D) & C)
+ # f = (d & b) | (not_32(d) & c) # Alternate definition for f
f = c ^ (d & (b ^ c))
g = (5 * i + 1) % 16
elif i <= 47:
f = b ^ c ^ d
g = (3 * i + 5) % 16
else:
- f = c ^ (b | not32(d))
+ f = c ^ (b | not_32(d))
g = (7 * i) % 16
- dtemp = d
+ f = (f + a + added_consts[i] + block_words[g]) % 2**32
+ a = d
d = c
c = b
- b = sum32(b, leftrot32((a + f + tvals[i] + m[g]) % 2**32, s[i]))
- a = dtemp
- a0 = sum32(a0, a)
- b0 = sum32(b0, b)
- c0 = sum32(c0, c)
- d0 = sum32(d0, d)
+ b = sum_32(b, left_rotate_32(f, shift_amounts[i]))
+
+ # Add hashed chunk to running total
+ a0 = sum_32(a0, a)
+ b0 = sum_32(b0, b)
+ c0 = sum_32(c0, c)
+ d0 = sum_32(d0, d)
digest = reformat_hex(a0) + reformat_hex(b0) + reformat_hex(c0) + reformat_hex(d0)
return digest
-def test():
- assert md5me("") == "d41d8cd98f00b204e9800998ecf8427e"
- assert (
- md5me("The quick brown fox jumps over the lazy dog")
- == "9e107d9d372bb6826bd81d3542a419d6"
- )
- print("Success.")
-
-
if __name__ == "__main__":
- test()
import doctest
doctest.testmod()
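With the revamp in place, the module can be sanity-checked against the standard library exactly as its new doctests do (assuming `hashes/md5.py` is importable from the repository root):

    import hashlib

    from hashes.md5 import md5_me

    message = b"The quick brown fox jumps over the lazy dog"
    assert md5_me(message) == hashlib.md5(message).hexdigest().encode("utf-8")
    print(md5_me(message).decode())  # 9e107d9d372bb6826bd81d3542a419d6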
From 5ca71895630719cc41f8171aba8be461fb8cc9d2 Mon Sep 17 00:00:00 2001
From: Christian Clauss
Date: Sun, 2 Apr 2023 06:48:19 +0200
Subject: [PATCH 039/808] Rename quantum_random.py.DISABLED.txt to
quantum_random.py (#8601)
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 1 +
quantum/{quantum_random.py.DISABLED.txt => quantum_random.py} | 0
2 files changed, 1 insertion(+)
rename quantum/{quantum_random.py.DISABLED.txt => quantum_random.py} (100%)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index b1adc23f6e61..8dd3fb5d9af1 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -1033,6 +1033,7 @@
* [Q Fourier Transform](quantum/q_fourier_transform.py)
* [Q Full Adder](quantum/q_full_adder.py)
* [Quantum Entanglement](quantum/quantum_entanglement.py)
+ * [Quantum Random](quantum/quantum_random.py)
* [Quantum Teleportation](quantum/quantum_teleportation.py)
* [Ripple Adder Classic](quantum/ripple_adder_classic.py)
* [Single Qubit Measure](quantum/single_qubit_measure.py)
diff --git a/quantum/quantum_random.py.DISABLED.txt b/quantum/quantum_random.py
similarity index 100%
rename from quantum/quantum_random.py.DISABLED.txt
rename to quantum/quantum_random.py
From ebc2d5d79f837931e80f7d5e7e1dece9ef48f760 Mon Sep 17 00:00:00 2001
From: Ishab
Date: Sun, 2 Apr 2023 13:04:11 +0100
Subject: [PATCH 040/808] Add Project Euler problem 79 solution 1 (#8607)
Co-authored-by: Dhruv Manilawala
---
project_euler/problem_079/__init__.py | 0
project_euler/problem_079/keylog.txt | 50 ++++++++++++++++
project_euler/problem_079/keylog_test.txt | 16 ++++++
project_euler/problem_079/sol1.py | 69 +++++++++++++++++++++++
4 files changed, 135 insertions(+)
create mode 100644 project_euler/problem_079/__init__.py
create mode 100644 project_euler/problem_079/keylog.txt
create mode 100644 project_euler/problem_079/keylog_test.txt
create mode 100644 project_euler/problem_079/sol1.py
diff --git a/project_euler/problem_079/__init__.py b/project_euler/problem_079/__init__.py
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/project_euler/problem_079/keylog.txt b/project_euler/problem_079/keylog.txt
new file mode 100644
index 000000000000..41f15673248d
--- /dev/null
+++ b/project_euler/problem_079/keylog.txt
@@ -0,0 +1,50 @@
+319
+680
+180
+690
+129
+620
+762
+689
+762
+318
+368
+710
+720
+710
+629
+168
+160
+689
+716
+731
+736
+729
+316
+729
+729
+710
+769
+290
+719
+680
+318
+389
+162
+289
+162
+718
+729
+319
+790
+680
+890
+362
+319
+760
+316
+729
+380
+319
+728
+716
diff --git a/project_euler/problem_079/keylog_test.txt b/project_euler/problem_079/keylog_test.txt
new file mode 100644
index 000000000000..2c7024bde948
--- /dev/null
+++ b/project_euler/problem_079/keylog_test.txt
@@ -0,0 +1,16 @@
+319
+680
+180
+690
+129
+620
+698
+318
+328
+310
+320
+610
+629
+198
+190
+631
diff --git a/project_euler/problem_079/sol1.py b/project_euler/problem_079/sol1.py
new file mode 100644
index 000000000000..d34adcd243b0
--- /dev/null
+++ b/project_euler/problem_079/sol1.py
@@ -0,0 +1,69 @@
+"""
+Project Euler Problem 79: https://projecteuler.net/problem=79
+
+Passcode derivation
+
+A common security method used for online banking is to ask the user for three
+random characters from a passcode. For example, if the passcode was 531278,
+they may ask for the 2nd, 3rd, and 5th characters; the expected reply would
+be: 317.
+
+The text file, keylog.txt, contains fifty successful login attempts.
+
+Given that the three characters are always asked for in order, analyse the file
+so as to determine the shortest possible secret passcode of unknown length.
+"""
+import itertools
+from pathlib import Path
+
+
+def find_secret_passcode(logins: list[str]) -> int:
+ """
+ Returns the shortest possible secret passcode of unknown length.
+
+ >>> find_secret_passcode(["135", "259", "235", "189", "690", "168", "120",
+ ... "136", "289", "589", "160", "165", "580", "369", "250", "280"])
+ 12365890
+
+    >>> find_secret_passcode(["426", "281", "061", "819", "268", "406", "420",
+ ... "428", "209", "689", "019", "421", "469", "261", "681", "201"])
+ 4206819
+ """
+
+ # Split each login by character e.g. '319' -> ('3', '1', '9')
+ split_logins = [tuple(login) for login in logins]
+
+ unique_chars = {char for login in split_logins for char in login}
+
+ for permutation in itertools.permutations(unique_chars):
+ satisfied = True
+ for login in logins:
+ if not (
+ permutation.index(login[0])
+ < permutation.index(login[1])
+ < permutation.index(login[2])
+ ):
+ satisfied = False
+ break
+
+ if satisfied:
+ return int("".join(permutation))
+
+ raise Exception("Unable to find the secret passcode")
+
+
+def solution(input_file: str = "keylog.txt") -> int:
+ """
+ Returns the shortest possible secret passcode of unknown length
+ for successful login attempts given by `input_file` text file.
+
+ >>> solution("keylog_test.txt")
+ 6312980
+ """
+ logins = Path(__file__).parent.joinpath(input_file).read_text().splitlines()
+
+ return find_secret_passcode(logins)
+
+
+if __name__ == "__main__":
+ print(f"{solution() = }")
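The committed solution tries every permutation of the unique characters, which is cheap for the eight distinct digits in keylog.txt but factorial in general. When the logins pin down a total order, a topological sort (a sketch, not part of the patch) finds it in linear time:

    from graphlib import TopologicalSorter


    def passcode_via_toposort(logins: list[str]) -> int:
        # Each login "abc" implies a < b and b < c in the passcode.
        predecessors: dict[str, set[str]] = {}
        for login in logins:
            for before, after in zip(login, login[1:]):
                predecessors.setdefault(after, set()).add(before)
                predecessors.setdefault(before, set())
        return int("".join(TopologicalSorter(predecessors).static_order()))


    # The chain 1 < 3 < 5 < 9 is fully determined by these two logins.
    assert passcode_via_toposort(["135", "359"]) == 1359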
From 740ecfb121009612310ab9e1bc9d6ffe22b62ae4 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Tue, 4 Apr 2023 07:00:31 +0530
Subject: [PATCH 041/808] [pre-commit.ci] pre-commit autoupdate (#8611)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/charliermarsh/ruff-pre-commit: v0.0.259 → v0.0.260](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.259...v0.0.260)
- [github.com/psf/black: 23.1.0 → 23.3.0](https://github.com/psf/black/compare/23.1.0...23.3.0)
- [github.com/abravalheri/validate-pyproject: v0.12.1 → v0.12.2](https://github.com/abravalheri/validate-pyproject/compare/v0.12.1...v0.12.2)
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
.pre-commit-config.yaml | 6 +++---
DIRECTORY.md | 2 ++
2 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 72a878387e15..d54ce5adddce 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,12 +16,12 @@ repos:
- id: auto-walrus
- repo: https://github.com/charliermarsh/ruff-pre-commit
- rev: v0.0.259
+ rev: v0.0.260
hooks:
- id: ruff
- repo: https://github.com/psf/black
- rev: 23.1.0
+ rev: 23.3.0
hooks:
- id: black
@@ -46,7 +46,7 @@ repos:
pass_filenames: false
- repo: https://github.com/abravalheri/validate-pyproject
- rev: v0.12.1
+ rev: v0.12.2
hooks:
- id: validate-pyproject
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 8dd3fb5d9af1..3764c471ce70 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -922,6 +922,8 @@
* [Sol1](project_euler/problem_077/sol1.py)
* Problem 078
* [Sol1](project_euler/problem_078/sol1.py)
+ * Problem 079
+ * [Sol1](project_euler/problem_079/sol1.py)
* Problem 080
* [Sol1](project_euler/problem_080/sol1.py)
* Problem 081
From b2b8585e63664a0c7aa18b95528e345c2738c4ae Mon Sep 17 00:00:00 2001
From: Ishan Dutta
Date: Fri, 7 Apr 2023 21:21:25 +0530
Subject: [PATCH 042/808] Add LeNet Implementation in PyTorch (#7070)
* add torch to requirements
* add lenet architecture in pytorch
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* add type hints
* remove file
* add type hints
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* update variable name
* add fail test
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* add newline
* reformatting
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
computer_vision/lenet_pytorch.py | 82 ++++++++++++++++++++++++++++++++
requirements.txt | 1 +
2 files changed, 83 insertions(+)
create mode 100644 computer_vision/lenet_pytorch.py
diff --git a/computer_vision/lenet_pytorch.py b/computer_vision/lenet_pytorch.py
new file mode 100644
index 000000000000..177a5ebfcdb4
--- /dev/null
+++ b/computer_vision/lenet_pytorch.py
@@ -0,0 +1,82 @@
+"""
+LeNet Network
+
+Paper: http://vision.stanford.edu/cs598_spring07/papers/Lecun98.pdf
+"""
+
+import numpy
+import torch
+import torch.nn as nn
+
+
+class LeNet(nn.Module):
+ def __init__(self) -> None:
+ super().__init__()
+
+ self.tanh = nn.Tanh()
+ self.avgpool = nn.AvgPool2d(kernel_size=2, stride=2)
+
+ self.conv1 = nn.Conv2d(
+ in_channels=1,
+ out_channels=6,
+ kernel_size=(5, 5),
+ stride=(1, 1),
+ padding=(0, 0),
+ )
+ self.conv2 = nn.Conv2d(
+ in_channels=6,
+ out_channels=16,
+ kernel_size=(5, 5),
+ stride=(1, 1),
+ padding=(0, 0),
+ )
+ self.conv3 = nn.Conv2d(
+ in_channels=16,
+ out_channels=120,
+ kernel_size=(5, 5),
+ stride=(1, 1),
+ padding=(0, 0),
+ )
+
+ self.linear1 = nn.Linear(120, 84)
+ self.linear2 = nn.Linear(84, 10)
+
+ def forward(self, image_array: numpy.ndarray) -> numpy.ndarray:
+ image_array = self.tanh(self.conv1(image_array))
+ image_array = self.avgpool(image_array)
+ image_array = self.tanh(self.conv2(image_array))
+ image_array = self.avgpool(image_array)
+ image_array = self.tanh(self.conv3(image_array))
+
+ image_array = image_array.reshape(image_array.shape[0], -1)
+ image_array = self.tanh(self.linear1(image_array))
+ image_array = self.linear2(image_array)
+ return image_array
+
+
+def test_model(image_tensor: torch.tensor) -> bool:
+ """
+ Test the model on an input batch of 64 images
+
+ Args:
+ image_tensor (torch.tensor): Batch of Images for the model
+
+ >>> test_model(torch.randn(64, 1, 32, 32))
+ True
+
+ """
+ try:
+ model = LeNet()
+ output = model(image_tensor)
+ except RuntimeError:
+ return False
+
+ return output.shape == torch.zeros([64, 10]).shape
+
+
+if __name__ == "__main__":
+ random_image_1 = torch.randn(64, 1, 32, 32)
+ random_image_2 = torch.randn(1, 32, 32)
+
+ print(f"random_image_1 Model Passed: {test_model(random_image_1)}")
+ print(f"\nrandom_image_2 Model Passed: {test_model(random_image_2)}")
diff --git a/requirements.txt b/requirements.txt
index acfbc823e77f..e159fe010dc4 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -17,6 +17,7 @@ statsmodels
sympy
tensorflow
texttable
+torch
tweepy
xgboost
yulewalker
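Two things worth noting before the revert that follows: the committed annotations say `numpy.ndarray` where the values are really `torch.Tensor`, and `torch.tensor` (lowercase) is the factory function rather than the type. Usage is otherwise straightforward; a quick shape check while the file existed:

    import torch

    from computer_vision.lenet_pytorch import LeNet

    model = LeNet()
    batch = torch.randn(64, 1, 32, 32)  # LeNet-5 expects 32x32 single-channel inputs
    logits = model(batch)
    assert logits.shape == (64, 10)  # one score per digit class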
From 179298e3a291470ef30e850f23d98c2fb9055202 Mon Sep 17 00:00:00 2001
From: Christian Clauss
Date: Sat, 8 Apr 2023 02:52:26 +0200
Subject: [PATCH 043/808] Revert "Add LeNet Implementation in PyTorch (#7070)"
(#8621)
This reverts commit b2b8585e63664a0c7aa18b95528e345c2738c4ae.
---
computer_vision/lenet_pytorch.py | 82 --------------------------------
requirements.txt | 1 -
2 files changed, 83 deletions(-)
delete mode 100644 computer_vision/lenet_pytorch.py
diff --git a/computer_vision/lenet_pytorch.py b/computer_vision/lenet_pytorch.py
deleted file mode 100644
index 177a5ebfcdb4..000000000000
--- a/computer_vision/lenet_pytorch.py
+++ /dev/null
@@ -1,82 +0,0 @@
-"""
-LeNet Network
-
-Paper: http://vision.stanford.edu/cs598_spring07/papers/Lecun98.pdf
-"""
-
-import numpy
-import torch
-import torch.nn as nn
-
-
-class LeNet(nn.Module):
- def __init__(self) -> None:
- super().__init__()
-
- self.tanh = nn.Tanh()
- self.avgpool = nn.AvgPool2d(kernel_size=2, stride=2)
-
- self.conv1 = nn.Conv2d(
- in_channels=1,
- out_channels=6,
- kernel_size=(5, 5),
- stride=(1, 1),
- padding=(0, 0),
- )
- self.conv2 = nn.Conv2d(
- in_channels=6,
- out_channels=16,
- kernel_size=(5, 5),
- stride=(1, 1),
- padding=(0, 0),
- )
- self.conv3 = nn.Conv2d(
- in_channels=16,
- out_channels=120,
- kernel_size=(5, 5),
- stride=(1, 1),
- padding=(0, 0),
- )
-
- self.linear1 = nn.Linear(120, 84)
- self.linear2 = nn.Linear(84, 10)
-
- def forward(self, image_array: numpy.ndarray) -> numpy.ndarray:
- image_array = self.tanh(self.conv1(image_array))
- image_array = self.avgpool(image_array)
- image_array = self.tanh(self.conv2(image_array))
- image_array = self.avgpool(image_array)
- image_array = self.tanh(self.conv3(image_array))
-
- image_array = image_array.reshape(image_array.shape[0], -1)
- image_array = self.tanh(self.linear1(image_array))
- image_array = self.linear2(image_array)
- return image_array
-
-
-def test_model(image_tensor: torch.tensor) -> bool:
- """
- Test the model on an input batch of 64 images
-
- Args:
- image_tensor (torch.tensor): Batch of Images for the model
-
- >>> test_model(torch.randn(64, 1, 32, 32))
- True
-
- """
- try:
- model = LeNet()
- output = model(image_tensor)
- except RuntimeError:
- return False
-
- return output.shape == torch.zeros([64, 10]).shape
-
-
-if __name__ == "__main__":
- random_image_1 = torch.randn(64, 1, 32, 32)
- random_image_2 = torch.randn(1, 32, 32)
-
- print(f"random_image_1 Model Passed: {test_model(random_image_1)}")
- print(f"\nrandom_image_2 Model Passed: {test_model(random_image_2)}")
diff --git a/requirements.txt b/requirements.txt
index e159fe010dc4..acfbc823e77f 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -17,7 +17,6 @@ statsmodels
sympy
tensorflow
texttable
-torch
tweepy
xgboost
yulewalker
From 5cb0a000c47398c6d8af1ac43e2f83ae018f7182 Mon Sep 17 00:00:00 2001
From: amirsoroush <114881632+amirsoroush@users.noreply.github.com>
Date: Sat, 8 Apr 2023 14:41:08 +0300
Subject: [PATCH 044/808] Queue implementation using two Stacks (#8617)
* Queue implementation using two Stacks
* fix typo in queue/queue_on_two_stacks.py
* add 'iterable' to queue_on_two_stacks initializer
* make queue_on_two_stacks.py generic class
* fix ruff-UP007 in queue_on_two_stacks.py
* enhance readability in queue_on_two_stacks.py
* Create queue_by_two_stacks.py
---------
Co-authored-by: Christian Clauss
---
data_structures/queue/queue_by_two_stacks.py | 115 ++++++++++++++++
data_structures/queue/queue_on_two_stacks.py | 137 +++++++++++++++++++
2 files changed, 252 insertions(+)
create mode 100644 data_structures/queue/queue_by_two_stacks.py
create mode 100644 data_structures/queue/queue_on_two_stacks.py
diff --git a/data_structures/queue/queue_by_two_stacks.py b/data_structures/queue/queue_by_two_stacks.py
new file mode 100644
index 000000000000..cd62f155a63b
--- /dev/null
+++ b/data_structures/queue/queue_by_two_stacks.py
@@ -0,0 +1,115 @@
+"""Queue implementation using two stacks"""
+
+from collections.abc import Iterable
+from typing import Generic, TypeVar
+
+_T = TypeVar("_T")
+
+
+class QueueByTwoStacks(Generic[_T]):
+ def __init__(self, iterable: Iterable[_T] | None = None) -> None:
+ """
+ >>> QueueByTwoStacks()
+ Queue(())
+ >>> QueueByTwoStacks([10, 20, 30])
+ Queue((10, 20, 30))
+ >>> QueueByTwoStacks((i**2 for i in range(1, 4)))
+ Queue((1, 4, 9))
+ """
+ self._stack1: list[_T] = list(iterable or [])
+ self._stack2: list[_T] = []
+
+ def __len__(self) -> int:
+ """
+ >>> len(QueueByTwoStacks())
+ 0
+ >>> from string import ascii_lowercase
+ >>> len(QueueByTwoStacks(ascii_lowercase))
+ 26
+ >>> queue = QueueByTwoStacks()
+ >>> for i in range(1, 11):
+ ... queue.put(i)
+ ...
+ >>> len(queue)
+ 10
+ >>> for i in range(2):
+ ... queue.get()
+ 1
+ 2
+ >>> len(queue)
+ 8
+ """
+
+ return len(self._stack1) + len(self._stack2)
+
+ def __repr__(self) -> str:
+ """
+ >>> queue = QueueByTwoStacks()
+ >>> queue
+ Queue(())
+ >>> str(queue)
+ 'Queue(())'
+ >>> queue.put(10)
+ >>> queue
+ Queue((10,))
+ >>> queue.put(20)
+ >>> queue.put(30)
+ >>> queue
+ Queue((10, 20, 30))
+ """
+ return f"Queue({tuple(self._stack2[::-1] + self._stack1)})"
+
+ def put(self, item: _T) -> None:
+ """
+ Put `item` into the Queue
+
+ >>> queue = QueueByTwoStacks()
+ >>> queue.put(10)
+ >>> queue.put(20)
+ >>> len(queue)
+ 2
+ >>> queue
+ Queue((10, 20))
+ """
+
+ self._stack1.append(item)
+
+ def get(self) -> _T:
+ """
+ Get `item` from the Queue
+
+ >>> queue = QueueByTwoStacks((10, 20, 30))
+ >>> queue.get()
+ 10
+ >>> queue.put(40)
+ >>> queue.get()
+ 20
+ >>> queue.get()
+ 30
+ >>> len(queue)
+ 1
+ >>> queue.get()
+ 40
+ >>> queue.get()
+ Traceback (most recent call last):
+ ...
+ IndexError: Queue is empty
+ """
+
+        # To reduce the number of attribute look-ups in the `while` loop.
+ stack1_pop = self._stack1.pop
+ stack2_append = self._stack2.append
+
+ if not self._stack2:
+ while self._stack1:
+ stack2_append(stack1_pop())
+
+ if not self._stack2:
+ raise IndexError("Queue is empty")
+ return self._stack2.pop()
+
+
+if __name__ == "__main__":
+ from doctest import testmod
+
+ testmod()
diff --git a/data_structures/queue/queue_on_two_stacks.py b/data_structures/queue/queue_on_two_stacks.py
new file mode 100644
index 000000000000..61db2b512136
--- /dev/null
+++ b/data_structures/queue/queue_on_two_stacks.py
@@ -0,0 +1,137 @@
+"""Queue implementation using two stacks"""
+
+from collections.abc import Iterable
+from typing import Generic, TypeVar
+
+_T = TypeVar("_T")
+
+
+class QueueByTwoStacks(Generic[_T]):
+ def __init__(self, iterable: Iterable[_T] | None = None) -> None:
+ """
+ >>> queue1 = QueueByTwoStacks()
+ >>> str(queue1)
+ 'Queue([])'
+ >>> queue2 = QueueByTwoStacks([10, 20, 30])
+ >>> str(queue2)
+ 'Queue([10, 20, 30])'
+ >>> queue3 = QueueByTwoStacks((i**2 for i in range(1, 4)))
+ >>> str(queue3)
+ 'Queue([1, 4, 9])'
+ """
+
+ self._stack1: list[_T] = [] if iterable is None else list(iterable)
+ self._stack2: list[_T] = []
+
+ def __len__(self) -> int:
+ """
+ >>> queue = QueueByTwoStacks()
+ >>> for i in range(1, 11):
+ ... queue.put(i)
+ ...
+ >>> len(queue) == 10
+ True
+ >>> for i in range(2):
+ ... queue.get()
+ 1
+ 2
+ >>> len(queue) == 8
+ True
+ """
+
+ return len(self._stack1) + len(self._stack2)
+
+ def __repr__(self) -> str:
+ """
+ >>> queue = QueueByTwoStacks()
+ >>> queue
+ Queue([])
+ >>> str(queue)
+ 'Queue([])'
+ >>> queue.put(10)
+ >>> queue
+ Queue([10])
+ >>> queue.put(20)
+ >>> queue.put(30)
+ >>> queue
+ Queue([10, 20, 30])
+ """
+
+ items = self._stack2[::-1] + self._stack1
+ return f"Queue({items})"
+
+ def put(self, item: _T) -> None:
+ """
+ Put `item` into the Queue
+
+ >>> queue = QueueByTwoStacks()
+ >>> queue.put(10)
+ >>> queue.put(20)
+ >>> len(queue) == 2
+ True
+ >>> str(queue)
+ 'Queue([10, 20])'
+ """
+
+ self._stack1.append(item)
+
+ def get(self) -> _T:
+ """
+ Get `item` from the Queue
+
+ >>> queue = QueueByTwoStacks()
+ >>> for i in (10, 20, 30):
+ ... queue.put(i)
+ >>> queue.get()
+ 10
+ >>> queue.put(40)
+ >>> queue.get()
+ 20
+ >>> queue.get()
+ 30
+ >>> len(queue) == 1
+ True
+ >>> queue.get()
+ 40
+ >>> queue.get()
+ Traceback (most recent call last):
+ ...
+ IndexError: Queue is empty
+ """
+
+        # To reduce the number of attribute look-ups in the `while` loop.
+ stack1_pop = self._stack1.pop
+ stack2_append = self._stack2.append
+
+ if not self._stack2:
+ while self._stack1:
+ stack2_append(stack1_pop())
+
+ if not self._stack2:
+ raise IndexError("Queue is empty")
+ return self._stack2.pop()
+
+ def size(self) -> int:
+ """
+ Returns the length of the Queue
+
+ >>> queue = QueueByTwoStacks()
+ >>> queue.size()
+ 0
+ >>> queue.put(10)
+ >>> queue.put(20)
+ >>> queue.size()
+ 2
+ >>> queue.get()
+ 10
+ >>> queue.size() == 1
+ True
+ """
+
+ return len(self)
+
+
+if __name__ == "__main__":
+ from doctest import testmod
+
+ testmod()
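Each element is appended to `_stack1` once and moved to `_stack2` at most once, so `get` is amortized O(1) even when a single call drains the whole backlog. A quick FIFO check against the new module:

    from data_structures.queue.queue_by_two_stacks import QueueByTwoStacks

    queue = QueueByTwoStacks(range(3))
    queue.put(99)
    assert [queue.get() for _ in range(4)] == [0, 1, 2, 99]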
From 2f9b03393c75f3ab14b491becae4ac5caf26de17 Mon Sep 17 00:00:00 2001
From: Christian Clauss
Date: Sat, 8 Apr 2023 14:16:19 +0200
Subject: [PATCH 045/808] Delete queue_on_two_stacks.py which duplicates
queue_by_two_stacks.py (#8624)
* Delete queue_on_two_stacks.py which duplicates queue_by_two_stacks.py
* updating DIRECTORY.md
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 1 +
data_structures/queue/queue_on_two_stacks.py | 137 -------------------
2 files changed, 1 insertion(+), 137 deletions(-)
delete mode 100644 data_structures/queue/queue_on_two_stacks.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 3764c471ce70..e3e0748ecf75 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -232,6 +232,7 @@
* [Double Ended Queue](data_structures/queue/double_ended_queue.py)
* [Linked Queue](data_structures/queue/linked_queue.py)
* [Priority Queue Using List](data_structures/queue/priority_queue_using_list.py)
+ * [Queue By Two Stacks](data_structures/queue/queue_by_two_stacks.py)
* [Queue On List](data_structures/queue/queue_on_list.py)
* [Queue On Pseudo Stack](data_structures/queue/queue_on_pseudo_stack.py)
* Stacks
diff --git a/data_structures/queue/queue_on_two_stacks.py b/data_structures/queue/queue_on_two_stacks.py
deleted file mode 100644
index 61db2b512136..000000000000
--- a/data_structures/queue/queue_on_two_stacks.py
+++ /dev/null
@@ -1,137 +0,0 @@
-"""Queue implementation using two stacks"""
-
-from collections.abc import Iterable
-from typing import Generic, TypeVar
-
-_T = TypeVar("_T")
-
-
-class QueueByTwoStacks(Generic[_T]):
- def __init__(self, iterable: Iterable[_T] | None = None) -> None:
- """
- >>> queue1 = QueueByTwoStacks()
- >>> str(queue1)
- 'Queue([])'
- >>> queue2 = QueueByTwoStacks([10, 20, 30])
- >>> str(queue2)
- 'Queue([10, 20, 30])'
- >>> queue3 = QueueByTwoStacks((i**2 for i in range(1, 4)))
- >>> str(queue3)
- 'Queue([1, 4, 9])'
- """
-
- self._stack1: list[_T] = [] if iterable is None else list(iterable)
- self._stack2: list[_T] = []
-
- def __len__(self) -> int:
- """
- >>> queue = QueueByTwoStacks()
- >>> for i in range(1, 11):
- ... queue.put(i)
- ...
- >>> len(queue) == 10
- True
- >>> for i in range(2):
- ... queue.get()
- 1
- 2
- >>> len(queue) == 8
- True
- """
-
- return len(self._stack1) + len(self._stack2)
-
- def __repr__(self) -> str:
- """
- >>> queue = QueueByTwoStacks()
- >>> queue
- Queue([])
- >>> str(queue)
- 'Queue([])'
- >>> queue.put(10)
- >>> queue
- Queue([10])
- >>> queue.put(20)
- >>> queue.put(30)
- >>> queue
- Queue([10, 20, 30])
- """
-
- items = self._stack2[::-1] + self._stack1
- return f"Queue({items})"
-
- def put(self, item: _T) -> None:
- """
- Put `item` into the Queue
-
- >>> queue = QueueByTwoStacks()
- >>> queue.put(10)
- >>> queue.put(20)
- >>> len(queue) == 2
- True
- >>> str(queue)
- 'Queue([10, 20])'
- """
-
- self._stack1.append(item)
-
- def get(self) -> _T:
- """
- Get `item` from the Queue
-
- >>> queue = QueueByTwoStacks()
- >>> for i in (10, 20, 30):
- ... queue.put(i)
- >>> queue.get()
- 10
- >>> queue.put(40)
- >>> queue.get()
- 20
- >>> queue.get()
- 30
- >>> len(queue) == 1
- True
- >>> queue.get()
- 40
- >>> queue.get()
- Traceback (most recent call last):
- ...
- IndexError: Queue is empty
- """
-
- # To reduce number of attribute look-ups in `while` loop.
- stack1_pop = self._stack1.pop
- stack2_append = self._stack2.append
-
- if not self._stack2:
- while self._stack1:
- stack2_append(stack1_pop())
-
- if not self._stack2:
- raise IndexError("Queue is empty")
- return self._stack2.pop()
-
- def size(self) -> int:
- """
- Returns the length of the Queue
-
- >>> queue = QueueByTwoStacks()
- >>> queue.size()
- 0
- >>> queue.put(10)
- >>> queue.put(20)
- >>> queue.size()
- 2
- >>> queue.get()
- 10
- >>> queue.size() == 1
- True
- """
-
- return len(self)
-
-
-if __name__ == "__main__":
- from doctest import testmod
-
- testmod()
From 14bdd174bba7828ac2bf476f3697aa13fa179492 Mon Sep 17 00:00:00 2001
From: isidroas
Date: Sat, 8 Apr 2023 19:39:24 +0200
Subject: [PATCH 046/808] Bloom Filter (#8615)
* Bloom filter with tests
* has functions constant
* fix type
* isort
* passing ruff
* type hints
* type hints
* from fail to error
* capital letter
* type hints requested by bot
* descriptive name for m
* more descriptive arguments II
* moved movies_test to doctest
* commented doctest
* removed test_probability
* estimated error
* added types
* again hash_
* Update data_structures/hashing/bloom_filter.py
Co-authored-by: Christian Clauss
* from b to bloom
* Update data_structures/hashing/bloom_filter.py
Co-authored-by: Christian Clauss
* Update data_structures/hashing/bloom_filter.py
Co-authored-by: Christian Clauss
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* syntax error in dict comprehension
* from goodfather to godfather
* removed Interestellar
* forgot the last Godfather
* Revert "removed Interestellar"
This reverts commit 35fa5f5c4bf101d073aad43c37b0a423d8975071.
* pretty dict
* Apply suggestions from code review
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update bloom_filter.py
---------
Co-authored-by: Christian Clauss
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
data_structures/hashing/bloom_filter.py | 105 ++++++++++++++++++++++++
1 file changed, 105 insertions(+)
create mode 100644 data_structures/hashing/bloom_filter.py
diff --git a/data_structures/hashing/bloom_filter.py b/data_structures/hashing/bloom_filter.py
new file mode 100644
index 000000000000..7fd0985bdc33
--- /dev/null
+++ b/data_structures/hashing/bloom_filter.py
@@ -0,0 +1,105 @@
+"""
+See https://en.wikipedia.org/wiki/Bloom_filter
+
+The use of this data structure is to test membership in a set.
+Compared to Python's built-in set() it is more space-efficient.
+In the following example, only 8 bits of memory will be used:
+>>> bloom = Bloom(size=8)
+
+Initially, the filter contains all zeros:
+>>> bloom.bitstring
+'00000000'
+
+When an element is added, two bits are set to 1
+since there are 2 hash functions in this implementation:
+>>> "Titanic" in bloom
+False
+>>> bloom.add("Titanic")
+>>> bloom.bitstring
+'01100000'
+>>> "Titanic" in bloom
+True
+
+However, sometimes only one bit is added
+because both hash functions return the same value:
+>>> bloom.add("Avatar")
+>>> "Avatar" in bloom
+True
+>>> bloom.format_hash("Avatar")
+'00000100'
+>>> bloom.bitstring
+'01100100'
+
+Elements that were not added should return False ...
+>>> not_present_films = ("The Godfather", "Interstellar", "Parasite", "Pulp Fiction")
+>>> {
+... film: bloom.format_hash(film) for film in not_present_films
+... } # doctest: +NORMALIZE_WHITESPACE
+{'The Godfather': '00000101',
+ 'Interstellar': '00000011',
+ 'Parasite': '00010010',
+ 'Pulp Fiction': '10000100'}
+>>> any(film in bloom for film in not_present_films)
+False
+
+but sometimes there are false positives:
+>>> "Ratatouille" in bloom
+True
+>>> bloom.format_hash("Ratatouille")
+'01100000'
+
+The probability increases with the number of elements added.
+The probability decreases with the number of bits in the bitarray.
+>>> bloom.estimated_error_rate
+0.140625
+>>> bloom.add("The Godfather")
+>>> bloom.estimated_error_rate
+0.25
+>>> bloom.bitstring
+'01100101'
+"""
+from hashlib import md5, sha256
+
+HASH_FUNCTIONS = (sha256, md5)
+
+
+class Bloom:
+ def __init__(self, size: int = 8) -> None:
+ self.bitarray = 0b0
+ self.size = size
+
+ def add(self, value: str) -> None:
+ h = self.hash_(value)
+ self.bitarray |= h
+
+ def exists(self, value: str) -> bool:
+ h = self.hash_(value)
+ return (h & self.bitarray) == h
+
+ def __contains__(self, other: str) -> bool:
+ return self.exists(other)
+
+ def format_bin(self, bitarray: int) -> str:
+ res = bin(bitarray)[2:]
+ return res.zfill(self.size)
+
+ @property
+ def bitstring(self) -> str:
+ return self.format_bin(self.bitarray)
+
+ def hash_(self, value: str) -> int:
+ res = 0b0
+ for func in HASH_FUNCTIONS:
+ position = (
+ int.from_bytes(func(value.encode()).digest(), "little") % self.size
+ )
+ res |= 2**position
+ return res
+
+ def format_hash(self, value: str) -> str:
+ return self.format_bin(self.hash_(value))
+
+ @property
+ def estimated_error_rate(self) -> float:
+ n_ones = bin(self.bitarray).count("1")
+ return (n_ones / self.size) ** len(HASH_FUNCTIONS)
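The `estimated_error_rate` property is the empirical counterpart of the classic Bloom filter bound: with k hash functions, m bits, and n inserted items, the false-positive probability is roughly (1 - e^(-kn/m))^k. Plugging in the docstring session's constants:

    from math import exp

    k, m, n = 2, 8, 2  # hash functions, bits, items added ("Titanic", "Avatar")
    theoretical = (1 - exp(-k * n / m)) ** k
    print(f"{theoretical:.4f}")  # ~0.1548, near the 0.140625 observed above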
From d182f95646aa7c515afe0912a34e8c2a11a34ca3 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Mon, 10 Apr 2023 23:43:17 +0200
Subject: [PATCH 047/808] [pre-commit.ci] pre-commit autoupdate (#8634)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/charliermarsh/ruff-pre-commit: v0.0.260 → v0.0.261](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.260...v0.0.261)
- [github.com/pre-commit/mirrors-mypy: v1.1.1 → v1.2.0](https://github.com/pre-commit/mirrors-mypy/compare/v1.1.1...v1.2.0)
* updating DIRECTORY.md
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
.pre-commit-config.yaml | 4 ++--
DIRECTORY.md | 1 +
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index d54ce5adddce..55345a574ce9 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,7 @@ repos:
- id: auto-walrus
- repo: https://github.com/charliermarsh/ruff-pre-commit
- rev: v0.0.260
+ rev: v0.0.261
hooks:
- id: ruff
@@ -51,7 +51,7 @@ repos:
- id: validate-pyproject
- repo: https://github.com/pre-commit/mirrors-mypy
- rev: v1.1.1
+ rev: v1.2.0
hooks:
- id: mypy
args:
diff --git a/DIRECTORY.md b/DIRECTORY.md
index e3e0748ecf75..36f5a752c48b 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -195,6 +195,7 @@
* [Alternate Disjoint Set](data_structures/disjoint_set/alternate_disjoint_set.py)
* [Disjoint Set](data_structures/disjoint_set/disjoint_set.py)
* Hashing
+ * [Bloom Filter](data_structures/hashing/bloom_filter.py)
* [Double Hash](data_structures/hashing/double_hash.py)
* [Hash Map](data_structures/hashing/hash_map.py)
* [Hash Table](data_structures/hashing/hash_table.py)
From 54dedf844a30d39bd42c66ebf9cd67ec186f47bb Mon Sep 17 00:00:00 2001
From: Diego Gasco <62801631+Diegomangasco@users.noreply.github.com>
Date: Mon, 17 Apr 2023 00:34:22 +0200
Subject: [PATCH 048/808] Dimensionality reduction (#8590)
---
machine_learning/dimensionality_reduction.py | 198 +++++++++++++++++++
1 file changed, 198 insertions(+)
create mode 100644 machine_learning/dimensionality_reduction.py
diff --git a/machine_learning/dimensionality_reduction.py b/machine_learning/dimensionality_reduction.py
new file mode 100644
index 000000000000..d2046f81af04
--- /dev/null
+++ b/machine_learning/dimensionality_reduction.py
@@ -0,0 +1,198 @@
+# Copyright (c) 2023 Diego Gasco (diego.gasco99@gmail.com), Diegomangasco on GitHub
+
+"""
+Requirements:
+ - numpy version 1.21
+ - scipy version 1.3.3
+Notes:
+ - Each column of the features matrix corresponds to a class item
+"""
+
+import logging
+
+import numpy as np
+import pytest
+from scipy.linalg import eigh
+
+logging.basicConfig(level=logging.INFO, format="%(message)s")
+
+
+def column_reshape(input_array: np.ndarray) -> np.ndarray:
+ """Function to reshape a row Numpy array into a column Numpy array
+ >>> input_array = np.array([1, 2, 3])
+ >>> column_reshape(input_array)
+ array([[1],
+ [2],
+ [3]])
+ """
+
+ return input_array.reshape((input_array.size, 1))
+
+
+def covariance_within_classes(
+ features: np.ndarray, labels: np.ndarray, classes: int
+) -> np.ndarray:
+ """Function to compute the covariance matrix inside each class.
+ >>> features = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
+ >>> labels = np.array([0, 1, 0])
+ >>> covariance_within_classes(features, labels, 2)
+ array([[0.66666667, 0.66666667, 0.66666667],
+ [0.66666667, 0.66666667, 0.66666667],
+ [0.66666667, 0.66666667, 0.66666667]])
+ """
+
+ covariance_sum = np.nan
+ for i in range(classes):
+ data = features[:, labels == i]
+ data_mean = data.mean(1)
+ # Centralize the data of class i
+ centered_data = data - column_reshape(data_mean)
+ if i > 0:
+            # covariance_sum was initialized on a previous iteration
+ covariance_sum += np.dot(centered_data, centered_data.T)
+ else:
+ # If covariance_sum is np.nan (i.e. first loop)
+ covariance_sum = np.dot(centered_data, centered_data.T)
+
+ return covariance_sum / features.shape[1]
+
+
+def covariance_between_classes(
+ features: np.ndarray, labels: np.ndarray, classes: int
+) -> np.ndarray:
+ """Function to compute the covariance matrix between multiple classes
+ >>> features = np.array([[9, 2, 3], [4, 3, 6], [1, 8, 9]])
+ >>> labels = np.array([0, 1, 0])
+ >>> covariance_between_classes(features, labels, 2)
+ array([[ 3.55555556, 1.77777778, -2.66666667],
+ [ 1.77777778, 0.88888889, -1.33333333],
+ [-2.66666667, -1.33333333, 2. ]])
+ """
+
+ general_data_mean = features.mean(1)
+ covariance_sum = np.nan
+ for i in range(classes):
+ data = features[:, labels == i]
+ device_data = data.shape[1]
+ data_mean = data.mean(1)
+ if i > 0:
+            # covariance_sum was initialized on a previous iteration
+ covariance_sum += device_data * np.dot(
+ column_reshape(data_mean) - column_reshape(general_data_mean),
+ (column_reshape(data_mean) - column_reshape(general_data_mean)).T,
+ )
+ else:
+ # If covariance_sum is np.nan (i.e. first loop)
+ covariance_sum = device_data * np.dot(
+ column_reshape(data_mean) - column_reshape(general_data_mean),
+ (column_reshape(data_mean) - column_reshape(general_data_mean)).T,
+ )
+
+ return covariance_sum / features.shape[1]
+
+
+def principal_component_analysis(features: np.ndarray, dimensions: int) -> np.ndarray:
+ """
+ Principal Component Analysis.
+
+ For more details, see: https://en.wikipedia.org/wiki/Principal_component_analysis.
+ Parameters:
+ * features: the features extracted from the dataset
+ * dimensions: to filter the projected data for the desired dimension
+
+ >>> test_principal_component_analysis()
+ """
+
+ # Check if the features have been loaded
+ if features.any():
+ data_mean = features.mean(1)
+ # Center the dataset
+ centered_data = features - np.reshape(data_mean, (data_mean.size, 1))
+ covariance_matrix = np.dot(centered_data, centered_data.T) / features.shape[1]
+ _, eigenvectors = np.linalg.eigh(covariance_matrix)
+        # Take the columns in reverse order (-1), then keep the first `dimensions`
+        filtered_eigenvectors = eigenvectors[:, ::-1][:, 0:dimensions]
+        # Project the dataset onto the new space
+ projected_data = np.dot(filtered_eigenvectors.T, features)
+ logging.info("Principal Component Analysis computed")
+
+ return projected_data
+ else:
+ logging.basicConfig(level=logging.ERROR, format="%(message)s", force=True)
+ logging.error("Dataset empty")
+ raise AssertionError
+
+
+def linear_discriminant_analysis(
+ features: np.ndarray, labels: np.ndarray, classes: int, dimensions: int
+) -> np.ndarray:
+ """
+ Linear Discriminant Analysis.
+
+ For more details, see: https://en.wikipedia.org/wiki/Linear_discriminant_analysis.
+ Parameters:
+ * features: the features extracted from the dataset
+ * labels: the class labels of the features
+ * classes: the number of classes present in the dataset
+ * dimensions: to filter the projected data for the desired dimension
+
+ >>> test_linear_discriminant_analysis()
+ """
+
+ # Check if the dimension desired is less than the number of classes
+ assert classes > dimensions
+
+ # Check if features have been already loaded
+    if features.any():
+ _, eigenvectors = eigh(
+ covariance_between_classes(features, labels, classes),
+ covariance_within_classes(features, labels, classes),
+ )
+ filtered_eigenvectors = eigenvectors[:, ::-1][:, :dimensions]
+ svd_matrix, _, _ = np.linalg.svd(filtered_eigenvectors)
+ filtered_svd_matrix = svd_matrix[:, 0:dimensions]
+ projected_data = np.dot(filtered_svd_matrix.T, features)
+ logging.info("Linear Discriminant Analysis computed")
+
+ return projected_data
+ else:
+ logging.basicConfig(level=logging.ERROR, format="%(message)s", force=True)
+ logging.error("Dataset empty")
+ raise AssertionError
+
+
+def test_linear_discriminant_analysis() -> None:
+ # Create dummy dataset with 2 classes and 3 features
+ features = np.array([[1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [3, 4, 5, 6, 7]])
+ labels = np.array([0, 0, 0, 1, 1])
+ classes = 2
+ dimensions = 2
+
+ # Assert that the function raises an AssertionError if dimensions > classes
+ with pytest.raises(AssertionError) as error_info:
+ projected_data = linear_discriminant_analysis(
+ features, labels, classes, dimensions
+ )
+ if isinstance(projected_data, np.ndarray):
+ raise AssertionError(
+ "Did not raise AssertionError for dimensions > classes"
+ )
+ assert error_info.type is AssertionError
+
+
+def test_principal_component_analysis() -> None:
+ features = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
+ dimensions = 2
+ expected_output = np.array([[6.92820323, 8.66025404, 10.39230485], [3.0, 3.0, 3.0]])
+
+ with pytest.raises(AssertionError) as error_info:
+ output = principal_component_analysis(features, dimensions)
+ if not np.allclose(expected_output, output):
+ raise AssertionError
+ assert error_info.type is AssertionError
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
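A quick usage sketch, with the column-per-sample layout the module's notes require (the toy matrix is illustrative, not from the patch):

    import numpy as np

    from machine_learning.dimensionality_reduction import principal_component_analysis

    # 3 features x 5 samples; each column is one sample.
    features = np.array(
        [[1.0, 2.0, 3.0, 4.0, 5.0], [2.0, 4.0, 6.0, 8.0, 10.0], [0.5, 1.0, 1.5, 2.0, 2.5]]
    )
    projected = principal_component_analysis(features, dimensions=2)
    assert projected.shape == (2, 5)  # 2 components kept, 5 samples preserved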
From 2b051a2de4adf711857f5453286dff47d1d87636 Mon Sep 17 00:00:00 2001
From: Rohan Anand <96521078+rohan472000@users.noreply.github.com>
Date: Tue, 18 Apr 2023 03:47:48 +0530
Subject: [PATCH 049/808] Create real_and_reactive_power.py (#8665)
---
electronics/real_and_reactive_power.py | 49 ++++++++++++++++++++++++++
1 file changed, 49 insertions(+)
create mode 100644 electronics/real_and_reactive_power.py
diff --git a/electronics/real_and_reactive_power.py b/electronics/real_and_reactive_power.py
new file mode 100644
index 000000000000..81dcba800e82
--- /dev/null
+++ b/electronics/real_and_reactive_power.py
@@ -0,0 +1,49 @@
+import math
+
+
+def real_power(apparent_power: float, power_factor: float) -> float:
+ """
+ Calculate real power from apparent power and power factor.
+
+ Examples:
+ >>> real_power(100, 0.9)
+ 90.0
+ >>> real_power(0, 0.8)
+ 0.0
+ >>> real_power(100, -0.9)
+ -90.0
+ """
+ if (
+ not isinstance(power_factor, (int, float))
+ or power_factor < -1
+ or power_factor > 1
+ ):
+ raise ValueError("power_factor must be a valid float value between -1 and 1.")
+ return apparent_power * power_factor
+
+
+def reactive_power(apparent_power: float, power_factor: float) -> float:
+ """
+ Calculate reactive power from apparent power and power factor.
+
+ Examples:
+ >>> reactive_power(100, 0.9)
+ 43.58898943540673
+ >>> reactive_power(0, 0.8)
+ 0.0
+ >>> reactive_power(100, -0.9)
+ 43.58898943540673
+ """
+ if (
+ not isinstance(power_factor, (int, float))
+ or power_factor < -1
+ or power_factor > 1
+ ):
+ raise ValueError("power_factor must be a valid float value between -1 and 1.")
+ return apparent_power * math.sqrt(1 - power_factor**2)
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
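Real and reactive power are the two legs of the power triangle, so P^2 + Q^2 recovers the apparent power S^2. A quick check with the functions above:

    from electronics.real_and_reactive_power import reactive_power, real_power

    apparent, power_factor = 100.0, 0.9
    p = real_power(apparent, power_factor)      # 90.0
    q = reactive_power(apparent, power_factor)  # 43.588...
    assert abs(p**2 + q**2 - apparent**2) < 1e-9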
From b5047cfa114c6343b92370419772b9cf0f13e634 Mon Sep 17 00:00:00 2001
From: Rohan Anand <96521078+rohan472000@users.noreply.github.com>
Date: Tue, 18 Apr 2023 13:00:01 +0530
Subject: [PATCH 050/808] Create apparent_power.py (#8664)
* Create apparent_power.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update apparent_power.py
* Update apparent_power.py
* Update apparent_power.py
* Update electronics/apparent_power.py
Co-authored-by: Christian Clauss
* Update electronics/apparent_power.py
Co-authored-by: Christian Clauss
* Update apparent_power.py
* Update electronics/apparent_power.py
Co-authored-by: Christian Clauss
* Update apparent_power.py
* Update apparent_power.py
* Update apparent_power.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update apparent_power.py
* Update apparent_power.py
* Update apparent_power.py
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
electronics/apparent_power.py | 35 +++++++++++++++++++++++++++++++++++
1 file changed, 35 insertions(+)
create mode 100644 electronics/apparent_power.py
diff --git a/electronics/apparent_power.py b/electronics/apparent_power.py
new file mode 100644
index 000000000000..a6f1a50822f7
--- /dev/null
+++ b/electronics/apparent_power.py
@@ -0,0 +1,35 @@
+import cmath
+import math
+
+
+def apparent_power(
+ voltage: float, current: float, voltage_angle: float, current_angle: float
+) -> complex:
+ """
+ Calculate the apparent power in a single-phase AC circuit.
+
+ >>> apparent_power(100, 5, 0, 0)
+ (500+0j)
+ >>> apparent_power(100, 5, 90, 0)
+ (3.061616997868383e-14+500j)
+ >>> apparent_power(100, 5, -45, -60)
+ (-129.40952255126027-482.9629131445341j)
+ >>> apparent_power(200, 10, -30, -90)
+ (-999.9999999999998-1732.0508075688776j)
+ """
+ # Convert angles from degrees to radians
+ voltage_angle_rad = math.radians(voltage_angle)
+ current_angle_rad = math.radians(current_angle)
+
+ # Convert voltage and current to rectangular form
+ voltage_rect = cmath.rect(voltage, voltage_angle_rad)
+ current_rect = cmath.rect(current, current_angle_rad)
+
+ # Calculate apparent power
+ return voltage_rect * current_rect
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
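One convention caveat: textbook complex power is S = V * conj(I), so the reactive sign tracks the angle difference voltage_angle - current_angle, while the function above multiplies the raw phasors and therefore tracks their sum. A conjugated sketch for comparison (`complex_power` is an illustrative name, not part of the patch):

    import cmath
    import math


    def complex_power(
        voltage: float, current: float, voltage_angle: float, current_angle: float
    ) -> complex:
        # S = V * conj(I): the reactive part follows voltage_angle - current_angle.
        v = cmath.rect(voltage, math.radians(voltage_angle))
        i = cmath.rect(current, math.radians(current_angle))
        return v * i.conjugate()


    print(complex_power(100, 5, 90, 0))  # ~0+500j, matching the doctest above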
From 93ce8cb75da2740089df8db23fa493ce104a011b Mon Sep 17 00:00:00 2001
From: Rohan Anand <96521078+rohan472000@users.noreply.github.com>
Date: Tue, 18 Apr 2023 13:14:06 +0530
Subject: [PATCH 051/808] added reference link. (#8667)
* added reference link.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
electronics/apparent_power.py | 2 ++
1 file changed, 2 insertions(+)
diff --git a/electronics/apparent_power.py b/electronics/apparent_power.py
index a6f1a50822f7..0ce1c2aa95b9 100644
--- a/electronics/apparent_power.py
+++ b/electronics/apparent_power.py
@@ -8,6 +8,8 @@ def apparent_power(
"""
Calculate the apparent power in a single-phase AC circuit.
+ Reference: https://en.wikipedia.org/wiki/AC_power#Apparent_power
+
>>> apparent_power(100, 5, 0, 0)
(500+0j)
>>> apparent_power(100, 5, 90, 0)
From 458debc237d41752c6c4223264a4bb23efb2ecec Mon Sep 17 00:00:00 2001
From: Rohan Anand <96521078+rohan472000@users.noreply.github.com>
Date: Tue, 18 Apr 2023 13:32:20 +0530
Subject: [PATCH 052/808] added a problem with solution on sliding window.
(#8566)
* added a problem with solution on sliding window.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* added hint for return type and parameter
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update minimum_size_subarray_sum.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update minimum_size_subarray_sum.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update minimum_size_subarray_sum.py
* Update minimum_size_subarray_sum.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update minimum_size_subarray_sum.py
* removed un-necessary docs and added 2 test cases
* Rename sliding_window/minimum_size_subarray_sum.py to dynamic_programming/minimum_size_subarray_sum.py
* Update minimum_size_subarray_sum.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update minimum_size_subarray_sum.py
* Update minimum_size_subarray_sum.py
* Update minimum_size_subarray_sum.py
* Update minimum_size_subarray_sum.py
* Update minimum_size_subarray_sum.py
* Update minimum_size_subarray_sum.py
* Update dynamic_programming/minimum_size_subarray_sum.py
Co-authored-by: Christian Clauss
* Update dynamic_programming/minimum_size_subarray_sum.py
Co-authored-by: Christian Clauss
* Update dynamic_programming/minimum_size_subarray_sum.py
Co-authored-by: Christian Clauss
* Update dynamic_programming/minimum_size_subarray_sum.py
Co-authored-by: Christian Clauss
* Update dynamic_programming/minimum_size_subarray_sum.py
Co-authored-by: Christian Clauss
* Update dynamic_programming/minimum_size_subarray_sum.py
Co-authored-by: Christian Clauss
* Update dynamic_programming/minimum_size_subarray_sum.py
Co-authored-by: Christian Clauss
* Update minimum_size_subarray_sum.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update minimum_size_subarray_sum.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update minimum_size_subarray_sum.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update minimum_size_subarray_sum.py
* Update minimum_size_subarray_sum.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
.../minimum_size_subarray_sum.py | 62 +++++++++++++++++++
1 file changed, 62 insertions(+)
create mode 100644 dynamic_programming/minimum_size_subarray_sum.py
diff --git a/dynamic_programming/minimum_size_subarray_sum.py b/dynamic_programming/minimum_size_subarray_sum.py
new file mode 100644
index 000000000000..3868d73535fb
--- /dev/null
+++ b/dynamic_programming/minimum_size_subarray_sum.py
@@ -0,0 +1,62 @@
+import sys
+
+
+def minimum_subarray_sum(target: int, numbers: list[int]) -> int:
+ """
+ Return the length of the shortest contiguous subarray in a list of numbers whose sum
+ is at least target. Reference: https://stackoverflow.com/questions/8269916
+
+ >>> minimum_subarray_sum(7, [2, 3, 1, 2, 4, 3])
+ 2
+ >>> minimum_subarray_sum(7, [2, 3, -1, 2, 4, -3])
+ 4
+ >>> minimum_subarray_sum(11, [1, 1, 1, 1, 1, 1, 1, 1])
+ 0
+ >>> minimum_subarray_sum(10, [1, 2, 3, 4, 5, 6, 7])
+ 2
+ >>> minimum_subarray_sum(5, [1, 1, 1, 1, 1, 5])
+ 1
+ >>> minimum_subarray_sum(0, [])
+ 0
+ >>> minimum_subarray_sum(0, [1, 2, 3])
+ 1
+ >>> minimum_subarray_sum(10, [10, 20, 30])
+ 1
+ >>> minimum_subarray_sum(7, [1, 1, 1, 1, 1, 1, 10])
+ 1
+ >>> minimum_subarray_sum(6, [])
+ 0
+ >>> minimum_subarray_sum(2, [1, 2, 3])
+ 1
+ >>> minimum_subarray_sum(-6, [])
+ 0
+ >>> minimum_subarray_sum(-6, [3, 4, 5])
+ 1
+ >>> minimum_subarray_sum(8, None)
+ 0
+ >>> minimum_subarray_sum(2, "ABC")
+ Traceback (most recent call last):
+ ...
+ ValueError: numbers must be an iterable of integers
+ """
+ if not numbers:
+ return 0
+ if target == 0 and target in numbers:
+ return 0
+ if not isinstance(numbers, (list, tuple)) or not all(
+ isinstance(number, int) for number in numbers
+ ):
+ raise ValueError("numbers must be an iterable of integers")
+
+ left = right = curr_sum = 0
+ min_len = sys.maxsize
+
+ while right < len(numbers):
+ curr_sum += numbers[right]
+ while curr_sum >= target and left <= right:
+ min_len = min(min_len, right - left + 1)
+ curr_sum -= numbers[left]
+ left += 1
+ right += 1
+
+ return 0 if min_len == sys.maxsize else min_len
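[Illustrative trace, not part of the patch: the two-pointer scan grows the window on the right and shrinks it from the left while the sum still meets the target, so each element enters and leaves the window at most once.]

    target, numbers = 7, [2, 3, 1, 2, 4, 3]

    left = curr_sum = 0
    best = len(numbers) + 1
    for right, value in enumerate(numbers):
        curr_sum += value
        while curr_sum >= target:  # shrink while the window is still valid
            best = min(best, right - left + 1)
            print(f"window {numbers[left : right + 1]} sums to {curr_sum}")
            curr_sum -= numbers[left]
            left += 1
    print(f"shortest length: {best}")  # 2, from the window [4, 3]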
From 11582943a555ae3b6a22938df6d3645b0327562e Mon Sep 17 00:00:00 2001
From: JulianStiebler <68881884+JulianStiebler@users.noreply.github.com>
Date: Tue, 18 Apr 2023 11:57:48 +0200
Subject: [PATCH 053/808] Create maths/pi_generator.py (#8666)
* Create pi_generator.py
* Update pi_generator.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update pi_generator.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update pi_generator.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update pi_generator.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update pi_generator.py
* Update pi_generator.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Updated commentary on line 28, added math.pi comparison & math.isclose() test
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Removed # noqa: E501
* printf() added as recommended by cclaus
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
maths/pi_generator.py | 94 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 94 insertions(+)
create mode 100644 maths/pi_generator.py
diff --git a/maths/pi_generator.py b/maths/pi_generator.py
new file mode 100644
index 000000000000..dcd218aae309
--- /dev/null
+++ b/maths/pi_generator.py
@@ -0,0 +1,94 @@
+def calculate_pi(limit: int) -> str:
+ """
+ https://en.wikipedia.org/wiki/Leibniz_formula_for_%CF%80
+ Leibniz Formula for Pi
+
+ The Leibniz formula is the special case arctan(1) = pi/4.
+ Leibniz's formula converges extremely slowly: it exhibits sublinear convergence.
+
+ Convergence (https://en.wikipedia.org/wiki/Leibniz_formula_for_%CF%80#Convergence)
+
+ We cannot verify the digits of an interrupted, incomplete generation.
+ https://en.wikipedia.org/wiki/Leibniz_formula_for_%CF%80#Unusual_behaviour
+ The errors can in fact be predicted,
+ but those calculations also approach infinity for accuracy.
+
+ Our output will always be a string, since a string can definitely store all digits.
+ For simplicity's sake, let's just compare against known values; since our output
+ is a string, we need to convert it to float.
+
+ >>> import math
+ >>> float(calculate_pi(15)) == math.pi
+ True
+
+ Since the alternating series is infinite and we can only ever generate
+ finitely many digits, longer generations are compared
+ with math.isclose() rather than with strict equality:
+
+ >>> math.isclose(float(calculate_pi(50)), math.pi)
+ True
+
+ >>> math.isclose(float(calculate_pi(100)), math.pi)
+ True
+
+ Since the math.pi constant contains only 16 digits, here are some tests against known values:
+
+ >>> calculate_pi(50)
+ '3.14159265358979323846264338327950288419716939937510'
+ >>> calculate_pi(80)
+ '3.14159265358979323846264338327950288419716939937510582097494459230781640628620899'
+
+ To apply the Leibniz formula for calculating pi,
+ the variables q, r, t, k, n, and l are used for the iteration process.
+ """
+ q = 1
+ r = 0
+ t = 1
+ k = 1
+ n = 3
+ l = 3
+ decimal = limit
+ counter = 0
+
+ result = ""
+
+ """
+ We will avoid using yield, since we would otherwise get a generator object,
+ which we can't directly compare against anything. We would have to make a list
+ out of it after the generation, so we will just stick to plain return logic:
+ """
+ while counter != decimal + 1:
+ if 4 * q + r - t < n * t:
+ result += str(n)
+ if counter == 0:
+ result += "."
+
+ if decimal == counter:
+ break
+
+ counter += 1
+ nr = 10 * (r - n * t)
+ n = ((10 * (3 * q + r)) // t) - 10 * n
+ q *= 10
+ r = nr
+ else:
+ nr = (2 * q + r) * l
+ nn = (q * (7 * k) + 2 + (r * l)) // (t * l)
+ q *= k
+ t *= l
+ l += 2
+ k += 1
+ n = nn
+ r = nr
+ return result
+
+
+def main() -> None:
+ print(f"{calculate_pi(50) = }")
+ import doctest
+
+ doctest.testmod()
+
+
+if __name__ == "__main__":
+ main()
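[Illustrative comparison, not part of the patch: summing the Leibniz series directly converges far too slowly to produce digits one by one, which is why the function above streams them with integer arithmetic instead.]

    # Partial sums of pi = 4 * (1 - 1/3 + 1/5 - 1/7 + ...)
    for terms in (10, 1_000, 100_000):
        approx = 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))
        print(f"{terms:>7} terms -> {approx:.10f}")
    # Roughly one more correct digit per tenfold increase in terms.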
From bf30b18192dd7ff9a43523ee6efe5c015ae6b99c Mon Sep 17 00:00:00 2001
From: Rohan Anand <96521078+rohan472000@users.noreply.github.com>
Date: Mon, 24 Apr 2023 10:58:30 +0530
Subject: [PATCH 054/808] Update linear_discriminant_analysis.py and
rsa_cipher.py (#8680)
* Update rsa_cipher.py by replacing %s with {}
* Update rsa_cipher.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update linear_discriminant_analysis.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update linear_discriminant_analysis.py
* Update linear_discriminant_analysis.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update linear_discriminant_analysis.py
* Update linear_discriminant_analysis.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update linear_discriminant_analysis.py
* Update machine_learning/linear_discriminant_analysis.py
Co-authored-by: Christian Clauss
* Update linear_discriminant_analysis.py
* updated
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
ciphers/rsa_cipher.py | 14 ++++++++------
machine_learning/linear_discriminant_analysis.py | 2 +-
2 files changed, 9 insertions(+), 7 deletions(-)
diff --git a/ciphers/rsa_cipher.py b/ciphers/rsa_cipher.py
index de26992f5eeb..9c41cdc5d472 100644
--- a/ciphers/rsa_cipher.py
+++ b/ciphers/rsa_cipher.py
@@ -76,10 +76,11 @@ def encrypt_and_write_to_file(
key_size, n, e = read_key_file(key_filename)
if key_size < block_size * 8:
sys.exit(
- "ERROR: Block size is %s bits and key size is %s bits. The RSA cipher "
+ "ERROR: Block size is {} bits and key size is {} bits. The RSA cipher "
"requires the block size to be equal to or greater than the key size. "
- "Either decrease the block size or use different keys."
- % (block_size * 8, key_size)
+ "Either decrease the block size or use different keys.".format(
+ block_size * 8, key_size
+ )
)
encrypted_blocks = [str(i) for i in encrypt_message(message, (n, e), block_size)]
@@ -101,10 +102,11 @@ def read_from_file_and_decrypt(message_filename: str, key_filename: str) -> str:
if key_size < block_size * 8:
sys.exit(
- "ERROR: Block size is %s bits and key size is %s bits. The RSA cipher "
+ "ERROR: Block size is {} bits and key size is {} bits. The RSA cipher "
"requires the block size to be equal to or greater than the key size. "
- "Did you specify the correct key file and encrypted file?"
- % (block_size * 8, key_size)
+ "Did you specify the correct key file and encrypted file?".format(
+ block_size * 8, key_size
+ )
)
encrypted_blocks = []
diff --git a/machine_learning/linear_discriminant_analysis.py b/machine_learning/linear_discriminant_analysis.py
index f4fb5ba76b64..c0a477be10c7 100644
--- a/machine_learning/linear_discriminant_analysis.py
+++ b/machine_learning/linear_discriminant_analysis.py
@@ -399,7 +399,7 @@ def main():
if input("Press any key to restart or 'q' for quit: ").strip().lower() == "q":
print("\n" + "GoodBye!".center(100, "-") + "\n")
break
- system("cls" if name == "nt" else "clear")
+ system("clear" if name == "posix" else "cls") # noqa: S605
if __name__ == "__main__":
From a650426350dc7833ff1110bc2e434763caed631e Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Tue, 25 Apr 2023 06:05:45 +0200
Subject: [PATCH 055/808] [pre-commit.ci] pre-commit autoupdate (#8691)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/charliermarsh/ruff-pre-commit: v0.0.261 → v0.0.262](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.261...v0.0.262)
- [github.com/tox-dev/pyproject-fmt: 0.9.2 → 0.10.0](https://github.com/tox-dev/pyproject-fmt/compare/0.9.2...0.10.0)
* updating DIRECTORY.md
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
.pre-commit-config.yaml | 4 ++--
DIRECTORY.md | 5 +++++
2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 55345a574ce9..288473ca365f 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,7 @@ repos:
- id: auto-walrus
- repo: https://github.com/charliermarsh/ruff-pre-commit
- rev: v0.0.261
+ rev: v0.0.262
hooks:
- id: ruff
@@ -33,7 +33,7 @@ repos:
- tomli
- repo: https://github.com/tox-dev/pyproject-fmt
- rev: "0.9.2"
+ rev: "0.10.0"
hooks:
- id: pyproject-fmt
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 36f5a752c48b..8e67c85c6fa8 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -327,6 +327,7 @@
* [Minimum Coin Change](dynamic_programming/minimum_coin_change.py)
* [Minimum Cost Path](dynamic_programming/minimum_cost_path.py)
* [Minimum Partition](dynamic_programming/minimum_partition.py)
+ * [Minimum Size Subarray Sum](dynamic_programming/minimum_size_subarray_sum.py)
* [Minimum Squares To Represent A Number](dynamic_programming/minimum_squares_to_represent_a_number.py)
* [Minimum Steps To One](dynamic_programming/minimum_steps_to_one.py)
* [Minimum Tickets Cost](dynamic_programming/minimum_tickets_cost.py)
@@ -339,6 +340,7 @@
* [Word Break](dynamic_programming/word_break.py)
## Electronics
+ * [Apparent Power](electronics/apparent_power.py)
* [Builtin Voltage](electronics/builtin_voltage.py)
* [Carrier Concentration](electronics/carrier_concentration.py)
* [Circular Convolution](electronics/circular_convolution.py)
@@ -348,6 +350,7 @@
* [Electrical Impedance](electronics/electrical_impedance.py)
* [Ind Reactance](electronics/ind_reactance.py)
* [Ohms Law](electronics/ohms_law.py)
+ * [Real And Reactive Power](electronics/real_and_reactive_power.py)
* [Resistor Equivalence](electronics/resistor_equivalence.py)
* [Resonant Frequency](electronics/resonant_frequency.py)
@@ -483,6 +486,7 @@
* [Astar](machine_learning/astar.py)
* [Data Transformations](machine_learning/data_transformations.py)
* [Decision Tree](machine_learning/decision_tree.py)
+ * [Dimensionality Reduction](machine_learning/dimensionality_reduction.py)
* Forecasting
* [Run](machine_learning/forecasting/run.py)
* [Gradient Descent](machine_learning/gradient_descent.py)
@@ -604,6 +608,7 @@
* [Perfect Number](maths/perfect_number.py)
* [Perfect Square](maths/perfect_square.py)
* [Persistence](maths/persistence.py)
+ * [Pi Generator](maths/pi_generator.py)
* [Pi Monte Carlo Estimation](maths/pi_monte_carlo_estimation.py)
* [Points Are Collinear 3D](maths/points_are_collinear_3d.py)
* [Pollard Rho](maths/pollard_rho.py)
From c1b3ea5355266bb47daba378ca10200c4d359453 Mon Sep 17 00:00:00 2001
From: Dipankar Mitra <50228537+Mitra-babu@users.noreply.github.com>
Date: Tue, 25 Apr 2023 21:36:14 +0530
Subject: [PATCH 056/808] The tanh activation function is added (#8689)
* tanh function been added
* tanh function been added
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* tanh function is added
* tanh function is added
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* tanh function added
* tanh function added
* tanh function is added
* Apply suggestions from code review
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
maths/tanh.py | 42 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 42 insertions(+)
create mode 100644 maths/tanh.py
diff --git a/maths/tanh.py b/maths/tanh.py
new file mode 100644
index 000000000000..ddab3e1ab717
--- /dev/null
+++ b/maths/tanh.py
@@ -0,0 +1,42 @@
+"""
+This script demonstrates the implementation of the hyperbolic tangent,
+or tanh, function.
+
+The function takes a vector of K real numbers as input and
+applies (e^x - e^(-x))/(e^x + e^(-x)) to each element. After
+passing through tanh, each element of the vector lies between -1 and 1.
+
+Script inspired from its corresponding Wikipedia article
+https://en.wikipedia.org/wiki/Activation_function
+"""
+import numpy as np
+
+
+def tangent_hyperbolic(vector: np.ndarray) -> np.ndarray:
+ """
+ Implements the tanh function
+
+ Parameters:
+ vector: np.ndarray
+
+ Returns:
+ tanh (np.ndarray): The input numpy array after applying tanh.
+
+ Mathematically, (e^x - e^(-x))/(e^x + e^(-x)) can be written as 2/(1 + e^(-2x)) - 1
+
+ Examples:
+ >>> tangent_hyperbolic(np.array([1,5,6,-0.67]))
+ array([ 0.76159416, 0.9999092 , 0.99998771, -0.58497988])
+
+ >>> tangent_hyperbolic(np.array([8,10,2,-0.98,13]))
+ array([ 0.99999977, 1. , 0.96402758, -0.7530659 , 1. ])
+
+ """
+
+ return (2 / (1 + np.exp(-2 * vector))) - 1
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
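[Quick check, not part of the patch: the algebraic form used above is a standard tanh identity, so it should agree with NumPy's built-in tanh up to floating point error.]

    import numpy as np

    vector = np.array([1.0, 5.0, 6.0, -0.67])
    ours = (2 / (1 + np.exp(-2 * vector))) - 1
    assert np.allclose(ours, np.tanh(vector))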
From 4c1f876567673db0934ba65d662ea221465ec921 Mon Sep 17 00:00:00 2001
From: Christian Clauss
Date: Thu, 27 Apr 2023 19:32:07 +0200
Subject: [PATCH 057/808] Solving the `Top k most frequent words` problem using
a max-heap (#8685)
* Solving the `Top k most frequent words` problem using a max-heap
* Mentioning Python standard library solution in `Top k most frequent words` docstring
* ruff --fix .
* updating DIRECTORY.md
---------
Co-authored-by: Amos Paribocci
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 1 +
data_structures/heap/heap.py | 31 ++++--
.../linear_discriminant_analysis.py | 2 +-
strings/top_k_frequent_words.py | 101 ++++++++++++++++++
4 files changed, 128 insertions(+), 7 deletions(-)
create mode 100644 strings/top_k_frequent_words.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 8e67c85c6fa8..681d252b232d 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -1167,6 +1167,7 @@
* [Snake Case To Camel Pascal Case](strings/snake_case_to_camel_pascal_case.py)
* [Split](strings/split.py)
* [Text Justification](strings/text_justification.py)
+ * [Top K Frequent Words](strings/top_k_frequent_words.py)
* [Upper](strings/upper.py)
* [Wave](strings/wave.py)
* [Wildcard Pattern Matching](strings/wildcard_pattern_matching.py)
diff --git a/data_structures/heap/heap.py b/data_structures/heap/heap.py
index b14c55d9db4c..c1004f349479 100644
--- a/data_structures/heap/heap.py
+++ b/data_structures/heap/heap.py
@@ -1,9 +1,28 @@
from __future__ import annotations
+from abc import abstractmethod
from collections.abc import Iterable
+from typing import Generic, Protocol, TypeVar
-class Heap:
+class Comparable(Protocol):
+ @abstractmethod
+ def __lt__(self: T, other: T) -> bool:
+ pass
+
+ @abstractmethod
+ def __gt__(self: T, other: T) -> bool:
+ pass
+
+ @abstractmethod
+ def __eq__(self: T, other: object) -> bool:
+ pass
+
+
+T = TypeVar("T", bound=Comparable)
+
+
+class Heap(Generic[T]):
"""A Max Heap Implementation
>>> unsorted = [103, 9, 1, 7, 11, 15, 25, 201, 209, 107, 5]
@@ -27,7 +46,7 @@ class Heap:
"""
def __init__(self) -> None:
- self.h: list[float] = []
+ self.h: list[T] = []
self.heap_size: int = 0
def __repr__(self) -> str:
@@ -79,7 +98,7 @@ def max_heapify(self, index: int) -> None:
# fix the subsequent violation recursively if any
self.max_heapify(violation)
- def build_max_heap(self, collection: Iterable[float]) -> None:
+ def build_max_heap(self, collection: Iterable[T]) -> None:
"""build max heap from an unsorted array"""
self.h = list(collection)
self.heap_size = len(self.h)
@@ -88,7 +107,7 @@ def build_max_heap(self, collection: Iterable[float]) -> None:
for i in range(self.heap_size // 2 - 1, -1, -1):
self.max_heapify(i)
- def extract_max(self) -> float:
+ def extract_max(self) -> T:
"""get and remove max from heap"""
if self.heap_size >= 2:
me = self.h[0]
@@ -102,7 +121,7 @@ def extract_max(self) -> float:
else:
raise Exception("Empty heap")
- def insert(self, value: float) -> None:
+ def insert(self, value: T) -> None:
"""insert a new value into the max heap"""
self.h.append(value)
idx = (self.heap_size - 1) // 2
@@ -144,7 +163,7 @@ def heap_sort(self) -> None:
]:
print(f"unsorted array: {unsorted}")
- heap = Heap()
+ heap: Heap[int] = Heap()
heap.build_max_heap(unsorted)
print(f"after build heap: {heap}")
diff --git a/machine_learning/linear_discriminant_analysis.py b/machine_learning/linear_discriminant_analysis.py
index c0a477be10c7..88c047157893 100644
--- a/machine_learning/linear_discriminant_analysis.py
+++ b/machine_learning/linear_discriminant_analysis.py
@@ -399,7 +399,7 @@ def main():
if input("Press any key to restart or 'q' for quit: ").strip().lower() == "q":
print("\n" + "GoodBye!".center(100, "-") + "\n")
break
- system("clear" if name == "posix" else "cls") # noqa: S605
+ system("cls" if name == "nt" else "clear") # noqa: S605
if __name__ == "__main__":
diff --git a/strings/top_k_frequent_words.py b/strings/top_k_frequent_words.py
new file mode 100644
index 000000000000..f3d1e0cd5ca7
--- /dev/null
+++ b/strings/top_k_frequent_words.py
@@ -0,0 +1,101 @@
+"""
+Finds the top K most frequent words from the provided word list.
+
+This implementation aims to show how to solve the problem using the Heap class
+already present in this repository.
+Computing order statistics is, in fact, a typical usage of heaps.
+
+This is mostly shown for educational purposes, since the problem can be solved
+in a few lines using collections.Counter from the Python standard library:
+
+from collections import Counter
+def top_k_frequent_words(words, k_value):
+ return [x[0] for x in Counter(words).most_common(k_value)]
+"""
+
+
+from collections import Counter
+from functools import total_ordering
+
+from data_structures.heap.heap import Heap
+
+
+@total_ordering
+class WordCount:
+ def __init__(self, word: str, count: int) -> None:
+ self.word = word
+ self.count = count
+
+ def __eq__(self, other: object) -> bool:
+ """
+ >>> WordCount('a', 1).__eq__(WordCount('b', 1))
+ True
+ >>> WordCount('a', 1).__eq__(WordCount('a', 1))
+ True
+ >>> WordCount('a', 1).__eq__(WordCount('a', 2))
+ False
+ >>> WordCount('a', 1).__eq__(WordCount('b', 2))
+ False
+ >>> WordCount('a', 1).__eq__(1)
+ NotImplemented
+ """
+ if not isinstance(other, WordCount):
+ return NotImplemented
+ return self.count == other.count
+
+ def __lt__(self, other: object) -> bool:
+ """
+ >>> WordCount('a', 1).__lt__(WordCount('b', 1))
+ False
+ >>> WordCount('a', 1).__lt__(WordCount('a', 1))
+ False
+ >>> WordCount('a', 1).__lt__(WordCount('a', 2))
+ True
+ >>> WordCount('a', 1).__lt__(WordCount('b', 2))
+ True
+ >>> WordCount('a', 2).__lt__(WordCount('a', 1))
+ False
+ >>> WordCount('a', 2).__lt__(WordCount('b', 1))
+ False
+ >>> WordCount('a', 1).__lt__(1)
+ NotImplemented
+ """
+ if not isinstance(other, WordCount):
+ return NotImplemented
+ return self.count < other.count
+
+
+def top_k_frequent_words(words: list[str], k_value: int) -> list[str]:
+ """
+ Returns the `k_value` most frequently occurring words,
+ in non-increasing order of occurrence.
+ In this context, a word is defined as an element in the provided list.
+
+ In case `k_value` is greater than the number of distinct words, a value of k equal
+ to the number of distinct words will be used instead.
+
+ >>> top_k_frequent_words(['a', 'b', 'c', 'a', 'c', 'c'], 3)
+ ['c', 'a', 'b']
+ >>> top_k_frequent_words(['a', 'b', 'c', 'a', 'c', 'c'], 2)
+ ['c', 'a']
+ >>> top_k_frequent_words(['a', 'b', 'c', 'a', 'c', 'c'], 1)
+ ['c']
+ >>> top_k_frequent_words(['a', 'b', 'c', 'a', 'c', 'c'], 0)
+ []
+ >>> top_k_frequent_words([], 1)
+ []
+ >>> top_k_frequent_words(['a', 'a'], 2)
+ ['a']
+ """
+ heap: Heap[WordCount] = Heap()
+ count_by_word = Counter(words)
+ heap.build_max_heap(
+ [WordCount(word, count) for word, count in count_by_word.items()]
+ )
+ return [heap.extract_max().word for _ in range(min(k_value, len(count_by_word)))]
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
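[Illustrative alternative, not part of the patch: besides collections.Counter shown in the module docstring, heapq.nlargest gives another heap-based standard-library solution; ordering among equal counts may differ from the WordCount version above.]

    import heapq
    from collections import Counter

    def top_k_frequent_words(words: list[str], k_value: int) -> list[str]:
        counts = Counter(words)
        return heapq.nlargest(k_value, counts, key=counts.get)

    print(top_k_frequent_words(["a", "b", "c", "a", "c", "c"], 2))  # ['c', 'a']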
From c4dcc44dd44f7e3e7c65debc8e173080fc693150 Mon Sep 17 00:00:00 2001
From: Sahil Goel <55365655+sahilg13@users.noreply.github.com>
Date: Sun, 30 Apr 2023 13:33:22 -0400
Subject: [PATCH 058/808] Added an algorithm to calculate the present value of
cash flows (#8700)
* Added an algorithm to calculate the present value of cash flows
* added doctest and reference
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Resolving deprecation issues with typing module
* Fixing argument type checks and adding doctest case
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fixing failing doctest case by requiring less precision due to floating point imprecision
* Updating return type
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Added test cases for more coverage
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Make improvements based on Rohan's suggestions
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update financial/present_value.py
Committed first suggestion
Co-authored-by: Christian Clauss
* Update financial/present_value.py
Committed second suggestion
Co-authored-by: Christian Clauss
* Update financial/present_value.py
Committed third suggestion
Co-authored-by: Christian Clauss
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
financial/present_value.py | 41 ++++++++++++++++++++++++++++++++++++++
1 file changed, 41 insertions(+)
create mode 100644 financial/present_value.py
diff --git a/financial/present_value.py b/financial/present_value.py
new file mode 100644
index 000000000000..dc8191a6ef53
--- /dev/null
+++ b/financial/present_value.py
@@ -0,0 +1,41 @@
+"""
+Reference: https://www.investopedia.com/terms/p/presentvalue.asp
+
+An algorithm that calculates the present value of a stream of yearly cash flows given...
+1. The discount rate (as a decimal, not a percent)
+2. An array of cash flows, with the index of the cash flow being the associated year
+
+Note: This algorithm assumes that cash flows are paid at the end of the specified year
+
+
+def present_value(discount_rate: float, cash_flows: list[float]) -> float:
+ """
+ >>> present_value(0.13, [10, 20.70, -293, 297])
+ 4.69
+ >>> present_value(0.07, [-109129.39, 30923.23, 15098.93, 29734,39])
+ -42739.63
+ >>> present_value(0.07, [109129.39, 30923.23, 15098.93, 29734,39])
+ 175519.15
+ >>> present_value(-1, [109129.39, 30923.23, 15098.93, 29734,39])
+ Traceback (most recent call last):
+ ...
+ ValueError: Discount rate cannot be negative
+ >>> present_value(0.03, [])
+ Traceback (most recent call last):
+ ...
+ ValueError: Cash flows list cannot be empty
+ """
+ if discount_rate < 0:
+ raise ValueError("Discount rate cannot be negative")
+ if not cash_flows:
+ raise ValueError("Cash flows list cannot be empty")
+ present_value = sum(
+ cash_flow / ((1 + discount_rate) ** i) for i, cash_flow in enumerate(cash_flows)
+ )
+ return round(present_value, ndigits=2)
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
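[Worked expansion of the first doctest, not part of the patch: enumerate starts at year 0, so the first cash flow is not discounted.]

    discount_rate, cash_flows = 0.13, [10, 20.70, -293, 297]
    terms = [cf / (1 + discount_rate) ** year for year, cf in enumerate(cash_flows)]
    print([round(term, 3) for term in terms])  # [10.0, 18.319, -229.462, 205.836]
    print(round(sum(terms), 2))  # 4.69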
From f6df26bf0f5c05d53b6fd24552de9e3eec2334aa Mon Sep 17 00:00:00 2001
From: Christian Clauss
Date: Mon, 1 May 2023 02:59:42 +0200
Subject: [PATCH 059/808] Fix docstring in present_value.py (#8702)
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 2 ++
financial/present_value.py | 1 +
2 files changed, 3 insertions(+)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 681d252b232d..167d062b4a9f 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -363,6 +363,7 @@
## Financial
* [Equated Monthly Installments](financial/equated_monthly_installments.py)
* [Interest](financial/interest.py)
+ * [Present Value](financial/present_value.py)
* [Price Plus Tax](financial/price_plus_tax.py)
## Fractals
@@ -655,6 +656,7 @@
* [Sum Of Harmonic Series](maths/sum_of_harmonic_series.py)
* [Sumset](maths/sumset.py)
* [Sylvester Sequence](maths/sylvester_sequence.py)
+ * [Tanh](maths/tanh.py)
* [Test Prime Check](maths/test_prime_check.py)
* [Trapezoidal Rule](maths/trapezoidal_rule.py)
* [Triplet Sum](maths/triplet_sum.py)
diff --git a/financial/present_value.py b/financial/present_value.py
index dc8191a6ef53..f74612b923af 100644
--- a/financial/present_value.py
+++ b/financial/present_value.py
@@ -6,6 +6,7 @@
2. An array of cash flows, with the index of the cash flow being the associated year
Note: This algorithm assumes that cash flows are paid at the end of the specified year
+"""
def present_value(discount_rate: float, cash_flows: list[float]) -> float:
From e966c5cc0f856afab11a8bb150ef3b48f0c63112 Mon Sep 17 00:00:00 2001
From: Himanshu Tomar
Date: Mon, 1 May 2023 15:53:03 +0530
Subject: [PATCH 060/808] Added minimum waiting time problem solution using
greedy algorithm (#8701)
* Added minimum waiting time problem solution using greedy algorithm
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* ruff --fix
* Add type hints
* Added two more doc test
* Removed unnecessary comments
* updated type hints
* Updated the code as per the code review
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
DIRECTORY.md | 1 +
greedy_methods/minimum_waiting_time.py | 48 ++++++++++++++++++++++++++
2 files changed, 49 insertions(+)
create mode 100644 greedy_methods/minimum_waiting_time.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 167d062b4a9f..021669d13b4a 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -450,6 +450,7 @@
* [Fractional Knapsack](greedy_methods/fractional_knapsack.py)
* [Fractional Knapsack 2](greedy_methods/fractional_knapsack_2.py)
* [Optimal Merge Pattern](greedy_methods/optimal_merge_pattern.py)
+ * [Minimum Waiting Time ](greedy_methods/minimum_waiting_time.py)
## Hashes
* [Adler32](hashes/adler32.py)
diff --git a/greedy_methods/minimum_waiting_time.py b/greedy_methods/minimum_waiting_time.py
new file mode 100644
index 000000000000..aaae8cf8f720
--- /dev/null
+++ b/greedy_methods/minimum_waiting_time.py
@@ -0,0 +1,48 @@
+"""
+Calculate the minimum waiting time using a greedy algorithm.
+reference: https://www.youtube.com/watch?v=Sf3eiO12eJs
+
+For doctests run following command:
+python -m doctest -v minimum_waiting_time.py
+
+The minimum_waiting_time function uses a greedy algorithm to calculate the minimum
+time for all queries to complete. It sorts the list in non-decreasing order, then
+multiplies each query's duration by the number of queries remaining after it and
+returns the sum as the total waiting time. Doctests
+ensure that the function produces the correct output.
+"""
+
+
+def minimum_waiting_time(queries: list[int]) -> int:
+ """
+ This function takes a list of query times and returns the minimum waiting time
+ for all queries to be completed.
+
+ Args:
+ queries: A list of queries measured in picoseconds
+
+ Returns:
+ total_waiting_time: Minimum waiting time measured in picoseconds
+
+ Examples:
+ >>> minimum_waiting_time([3, 2, 1, 2, 6])
+ 17
+ >>> minimum_waiting_time([3, 2, 1])
+ 4
+ >>> minimum_waiting_time([1, 2, 3, 4])
+ 10
+ >>> minimum_waiting_time([5, 5, 5, 5])
+ 30
+ >>> minimum_waiting_time([])
+ 0
+ """
+ n = len(queries)
+ if n in (0, 1):
+ return 0
+ return sum(query * (n - i - 1) for i, query in enumerate(sorted(queries)))
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
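[Equivalence check, not part of the patch: each query's duration is waited through once by every query scheduled after it, so the closed form used above matches an explicit simulation.]

    queries = sorted([3, 2, 1, 2, 6])  # greedy: run the shortest queries first

    total = elapsed = 0
    for duration in queries:
        total += elapsed   # this query waits for everything before it
        elapsed += duration
    print(total)  # 17

    n = len(queries)
    print(sum(q * (n - i - 1) for i, q in enumerate(queries)))  # 17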
From 777f966893d7042d350b44b05ce7f8431f561509 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Mon, 1 May 2023 23:48:56 +0200
Subject: [PATCH 061/808] [pre-commit.ci] pre-commit autoupdate (#8704)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/charliermarsh/ruff-pre-commit: v0.0.262 → v0.0.263](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.262...v0.0.263)
- [github.com/tox-dev/pyproject-fmt: 0.10.0 → 0.11.1](https://github.com/tox-dev/pyproject-fmt/compare/0.10.0...0.11.1)
* updating DIRECTORY.md
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
.pre-commit-config.yaml | 4 ++--
DIRECTORY.md | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 288473ca365f..accb57da35d3 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,7 @@ repos:
- id: auto-walrus
- repo: https://github.com/charliermarsh/ruff-pre-commit
- rev: v0.0.262
+ rev: v0.0.263
hooks:
- id: ruff
@@ -33,7 +33,7 @@ repos:
- tomli
- repo: https://github.com/tox-dev/pyproject-fmt
- rev: "0.10.0"
+ rev: "0.11.1"
hooks:
- id: pyproject-fmt
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 021669d13b4a..826bd6fd39d4 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -449,8 +449,8 @@
## Greedy Methods
* [Fractional Knapsack](greedy_methods/fractional_knapsack.py)
* [Fractional Knapsack 2](greedy_methods/fractional_knapsack_2.py)
+ * [Minimum Waiting Time](greedy_methods/minimum_waiting_time.py)
* [Optimal Merge Pattern](greedy_methods/optimal_merge_pattern.py)
- * [Minimum Waiting Time ](greedy_methods/minimum_waiting_time.py)
## Hashes
* [Adler32](hashes/adler32.py)
From 73105145090f0ce972f6fa29cc5d71f012dd8c92 Mon Sep 17 00:00:00 2001
From: Dipankar Mitra <50228537+Mitra-babu@users.noreply.github.com>
Date: Tue, 2 May 2023 20:06:28 +0530
Subject: [PATCH 062/808] The ELU activation is added (#8699)
* tanh function been added
* tanh function been added
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* tanh function is added
* tanh function is added
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* tanh function added
* tanh function added
* tanh function is added
* Apply suggestions from code review
* ELU activation function is added
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* elu activation is added
* ELU activation is added
* Update maths/elu_activation.py
Co-authored-by: Christian Clauss
* Exponential_linear_unit activation is added
* Exponential_linear_unit activation is added
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
.../exponential_linear_unit.py | 40 +++++++++++++++++++
1 file changed, 40 insertions(+)
create mode 100644 neural_network/activation_functions/exponential_linear_unit.py
diff --git a/neural_network/activation_functions/exponential_linear_unit.py b/neural_network/activation_functions/exponential_linear_unit.py
new file mode 100644
index 000000000000..7a3cf1d84e71
--- /dev/null
+++ b/neural_network/activation_functions/exponential_linear_unit.py
@@ -0,0 +1,40 @@
+"""
+Implements the Exponential Linear Unit or ELU function.
+
+The function takes a vector of K real numbers and a real number alpha as
+input and then applies the ELU function to each element of the vector.
+
+Script inspired from its corresponding Wikipedia article
+https://en.wikipedia.org/wiki/Rectifier_(neural_networks)
+"""
+
+import numpy as np
+
+
+def exponential_linear_unit(vector: np.ndarray, alpha: float) -> np.ndarray:
+ """
+ Implements the ELU activation function.
+ Parameters:
+ vector: the array containing the inputs for the ELU activation
+ alpha: hyper-parameter controlling the saturation for negative inputs
+ Returns:
+ elu (np.ndarray): The input numpy array after applying ELU.
+
+ Mathematically, f(x) = x for x > 0 and f(x) = alpha * (e^x - 1) for x <= 0, with alpha >= 0
+
+ Examples:
+ >>> exponential_linear_unit(vector=np.array([2.3,0.6,-2,-3.8]), alpha=0.3)
+ array([ 2.3 , 0.6 , -0.25939942, -0.29328877])
+
+ >>> exponential_linear_unit(vector=np.array([-9.2,-0.3,0.45,-4.56]), alpha=0.067)
+ array([-0.06699323, -0.01736518, 0.45 , -0.06629904])
+
+
+ """
+ return np.where(vector > 0, vector, (alpha * (np.exp(vector) - 1)))
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
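[Small check, not part of the patch: the vectorised np.where form matches the piecewise definition applied one element at a time.]

    import math

    import numpy as np

    def elu_scalar(x: float, alpha: float) -> float:
        return x if x > 0 else alpha * (math.exp(x) - 1)

    vector, alpha = np.array([2.3, 0.6, -2.0, -3.8]), 0.3
    vectorised = np.where(vector > 0, vector, alpha * (np.exp(vector) - 1))
    assert np.allclose(vectorised, [elu_scalar(x, alpha) for x in vector])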
From 91cc3a240f05922024d4c5523422138857c48ae0 Mon Sep 17 00:00:00 2001
From: Pronoy Mandal
Date: Wed, 10 May 2023 15:04:36 +0530
Subject: [PATCH 063/808] Update game_of_life.py (#8703)
Rectify spelling in docstring
---
cellular_automata/game_of_life.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/cellular_automata/game_of_life.py b/cellular_automata/game_of_life.py
index 8e54702519b9..3382af7b5db6 100644
--- a/cellular_automata/game_of_life.py
+++ b/cellular_automata/game_of_life.py
@@ -34,7 +34,7 @@
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
-usage_doc = "Usage of script: script_nama "
+usage_doc = "Usage of script: script_name "
choice = [0] * 100 + [1] * 10
random.shuffle(choice)
From 209a59ee562dd4b0358d8d1a12b112ec3f3e68ed Mon Sep 17 00:00:00 2001
From: Rohan Anand <96521078+rohan472000@users.noreply.github.com>
Date: Wed, 10 May 2023 15:08:52 +0530
Subject: [PATCH 064/808] Update and_gate.py (#8690)
* Update and_gate.py
addressing issue #8656 by calling `test_and_gate()`, ensuring that all the assertions are verified before the actual output is printed.
* Update and_gate.py
addressing issue #8632
---
boolean_algebra/and_gate.py | 2 ++
1 file changed, 2 insertions(+)
diff --git a/boolean_algebra/and_gate.py b/boolean_algebra/and_gate.py
index cbbcfde79f33..834116772ee7 100644
--- a/boolean_algebra/and_gate.py
+++ b/boolean_algebra/and_gate.py
@@ -43,6 +43,8 @@ def test_and_gate() -> None:
if __name__ == "__main__":
+ test_and_gate()
+ print(and_gate(1, 0))
print(and_gate(0, 0))
print(and_gate(0, 1))
print(and_gate(1, 1))
From 44aa17fb86b0c04508580425b588c0f8a0cf4ce9 Mon Sep 17 00:00:00 2001
From: shricubed
Date: Wed, 10 May 2023 14:50:32 -0400
Subject: [PATCH 065/808] Working binary insertion sort in Python (#8024)
---
sorts/binary_insertion_sort.py | 61 ++++++++++++++++++++++++++++++++++
1 file changed, 61 insertions(+)
create mode 100644 sorts/binary_insertion_sort.py
diff --git a/sorts/binary_insertion_sort.py b/sorts/binary_insertion_sort.py
new file mode 100644
index 000000000000..8d41025583b1
--- /dev/null
+++ b/sorts/binary_insertion_sort.py
@@ -0,0 +1,61 @@
+"""
+This is a pure Python implementation of the binary insertion sort algorithm
+
+For doctests run following command:
+python -m doctest -v binary_insertion_sort.py
+or
+python3 -m doctest -v binary_insertion_sort.py
+
+For manual testing run:
+python binary_insertion_sort.py
+"""
+
+
+def binary_insertion_sort(collection: list) -> list:
+ """Pure implementation of the binary insertion sort algorithm in Python
+ :param collection: some mutable ordered collection with heterogeneous
+ comparable items inside
+ :return: the same collection ordered by ascending
+
+ Examples:
+ >>> binary_insertion_sort([0, 4, 1234, 4, 1])
+ [0, 1, 4, 4, 1234]
+ >>> binary_insertion_sort([]) == sorted([])
+ True
+ >>> binary_insertion_sort([-1, -2, -3]) == sorted([-1, -2, -3])
+ True
+ >>> lst = ['d', 'a', 'b', 'e', 'c']
+ >>> binary_insertion_sort(lst) == sorted(lst)
+ True
+ >>> import random
+ >>> collection = random.sample(range(-50, 50), 100)
+ >>> binary_insertion_sort(collection) == sorted(collection)
+ True
+ >>> import string
+ >>> collection = random.choices(string.ascii_letters + string.digits, k=100)
+ >>> binary_insertion_sort(collection) == sorted(collection)
+ True
+ """
+
+ n = len(collection)
+ for i in range(1, n):
+ val = collection[i]
+ low = 0
+ high = i - 1
+
+ while low <= high:
+ mid = (low + high) // 2
+ if val < collection[mid]:
+ high = mid - 1
+ else:
+ low = mid + 1
+ for j in range(i, low, -1):
+ collection[j] = collection[j - 1]
+ collection[low] = val
+ return collection
+
+
+if __name__ == "__main__":
+ user_input = input("Enter numbers separated by a comma:\n").strip()
+ unsorted = [int(item) for item in user_input.split(",")]
+ print(binary_insertion_sort(unsorted))
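[Illustrative sketch, not part of the patch: the standard library's bisect module performs the same binary search for the insertion point. The search costs O(log n) comparisons per element, but the element shifts keep the overall sort O(n^2).]

    from bisect import insort

    def binary_insertion_sort_bisect(collection: list) -> list:
        result: list = []
        for item in collection:
            insort(result, item)  # binary search, then insert in place
        return result

    print(binary_insertion_sort_bisect([0, 4, 1234, 4, 1]))  # [0, 1, 4, 4, 1234]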
From 997d56fb633e3bd726c1fac32a2d37277361d5e9 Mon Sep 17 00:00:00 2001
From: Margaret <62753112+meg-1@users.noreply.github.com>
Date: Wed, 10 May 2023 21:53:47 +0300
Subject: [PATCH 066/808] Switch case (#7995)
---
strings/string_switch_case.py | 108 ++++++++++++++++++++++++++++++++++
1 file changed, 108 insertions(+)
create mode 100644 strings/string_switch_case.py
diff --git a/strings/string_switch_case.py b/strings/string_switch_case.py
new file mode 100644
index 000000000000..9a07472dfd71
--- /dev/null
+++ b/strings/string_switch_case.py
@@ -0,0 +1,108 @@
+import re
+
+"""
+general info:
+https://en.wikipedia.org/wiki/Naming_convention_(programming)#Python_and_Ruby
+
+pascal case [ an upper Camel Case ]: https://en.wikipedia.org/wiki/Camel_case
+
+camel case: https://en.wikipedia.org/wiki/Camel_case
+
+kebab case [ can be found in general info ]:
+https://en.wikipedia.org/wiki/Naming_convention_(programming)#Python_and_Ruby
+
+snake case: https://en.wikipedia.org/wiki/Snake_case
+"""
+
+
+# assistant functions
+def split_input(str_: str) -> list:
+ """
+ >>> split_input("one two 31235three4four")
+ [['one', 'two', '31235three4four']]
+ """
+ return [char.split() for char in re.split(r"[^ a-z A-Z 0-9 \s]", str_)]
+
+
+def to_simple_case(str_: str) -> str:
+ """
+ >>> to_simple_case("one two 31235three4four")
+ 'OneTwo31235three4four'
+ """
+ string_split = split_input(str_)
+ return "".join(
+ ["".join([char.capitalize() for char in sub_str]) for sub_str in string_split]
+ )
+
+
+def to_complex_case(text: str, upper: bool, separator: str) -> str:
+ """
+ >>> to_complex_case("one two 31235three4four", True, "_")
+ 'ONE_TWO_31235THREE4FOUR'
+ >>> to_complex_case("one two 31235three4four", False, "-")
+ 'one-two-31235three4four'
+ """
+ try:
+ string_split = split_input(text)
+ if upper:
+ res_str = "".join(
+ [
+ separator.join([char.upper() for char in sub_str])
+ for sub_str in string_split
+ ]
+ )
+ else:
+ res_str = "".join(
+ [
+ separator.join([char.lower() for char in sub_str])
+ for sub_str in string_split
+ ]
+ )
+ return res_str
+ except IndexError:
+ return "not valid string"
+
+
+# main content
+def to_pascal_case(text: str) -> str:
+ """
+ >>> to_pascal_case("one two 31235three4four")
+ 'OneTwo31235three4four'
+ """
+ return to_simple_case(text)
+
+
+def to_camel_case(text: str) -> str:
+ """
+ >>> to_camel_case("one two 31235three4four")
+ 'oneTwo31235three4four'
+ """
+ try:
+ res_str = to_simple_case(text)
+ return res_str[0].lower() + res_str[1:]
+ except IndexError:
+ return "not valid string"
+
+
+def to_snake_case(text: str, upper: bool) -> str:
+ """
+ >>> to_snake_case("one two 31235three4four", True)
+ 'ONE_TWO_31235THREE4FOUR'
+ >>> to_snake_case("one two 31235three4four", False)
+ 'one_two_31235three4four'
+ """
+ return to_complex_case(text, upper, "_")
+
+
+def to_kebab_case(text: str, upper: bool) -> str:
+ """
+ >>> to_kebab_case("one two 31235three4four", True)
+ 'ONE-TWO-31235THREE4FOUR'
+ >>> to_kebab_case("one two 31235three4four", False)
+ 'one-two-31235three4four'
+ """
+ return to_complex_case(text, upper, "-")
+
+
+if __name__ == "__main__":
+ __import__("doctest").testmod()
From 6939538a41202bf05f958c9c2d7c1c20e2f87430 Mon Sep 17 00:00:00 2001
From: Margaret <62753112+meg-1@users.noreply.github.com>
Date: Wed, 10 May 2023 21:55:48 +0300
Subject: [PATCH 067/808] adding the remove digit algorithm (#6708)
---
maths/remove_digit.py | 37 +++++++++++++++++++++++++++++++++++++
1 file changed, 37 insertions(+)
create mode 100644 maths/remove_digit.py
diff --git a/maths/remove_digit.py b/maths/remove_digit.py
new file mode 100644
index 000000000000..db14ac902a6f
--- /dev/null
+++ b/maths/remove_digit.py
@@ -0,0 +1,37 @@
+def remove_digit(num: int) -> int:
+ """
+
+ returns the biggest possible result
+ that can be achieved by removing
+ one digit from the given number
+
+ >>> remove_digit(152)
+ 52
+ >>> remove_digit(6385)
+ 685
+ >>> remove_digit(-11)
+ 1
+ >>> remove_digit(2222222)
+ 222222
+ >>> remove_digit("2222222")
+ Traceback (most recent call last):
+ TypeError: only integers accepted as input
+ >>> remove_digit("string input")
+ Traceback (most recent call last):
+ TypeError: only integers accepted as input
+ """
+
+ if not isinstance(num, int):
+ raise TypeError("only integers accepted as input")
+ else:
+ num_str = str(abs(num))
+ num_transpositions = [list(num_str) for char in range(len(num_str))]
+ for index in range(len(num_str)):
+ num_transpositions[index].pop(index)
+ return max(
+ int("".join(list(transposition))) for transposition in num_transpositions
+ )
+
+
+if __name__ == "__main__":
+ __import__("doctest").testmod()
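[Equivalent slice-based formulation, illustrative and not part of the patch: try deleting each digit position directly.]

    def remove_digit_slices(num: int) -> int:
        digits = str(abs(num))
        return max(int(digits[:i] + digits[i + 1 :]) for i in range(len(digits)))

    assert remove_digit_slices(152) == 52
    assert remove_digit_slices(6385) == 685
    assert remove_digit_slices(-11) == 1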
From 793e564e1d4bd6e00b6e2f80869c5fd1fd2872b3 Mon Sep 17 00:00:00 2001
From: Pronoy Mandal
Date: Thu, 11 May 2023 00:30:59 +0530
Subject: [PATCH 068/808] Create maximum_subsequence.py (#7811)
---
DIRECTORY.md | 1 +
other/maximum_subsequence.py | 42 ++++++++++++++++++++++++++++++++++++
2 files changed, 43 insertions(+)
create mode 100644 other/maximum_subsequence.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 826bd6fd39d4..a70ad6861d6f 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -716,6 +716,7 @@
* [Lru Cache](other/lru_cache.py)
* [Magicdiamondpattern](other/magicdiamondpattern.py)
* [Maximum Subarray](other/maximum_subarray.py)
+ * [Maximum Subsequence](other/maximum_subsequence.py)
* [Nested Brackets](other/nested_brackets.py)
* [Password](other/password.py)
* [Quine](other/quine.py)
diff --git a/other/maximum_subsequence.py b/other/maximum_subsequence.py
new file mode 100644
index 000000000000..f81717596532
--- /dev/null
+++ b/other/maximum_subsequence.py
@@ -0,0 +1,42 @@
+from collections.abc import Sequence
+
+
+def max_subsequence_sum(nums: Sequence[int] | None = None) -> int:
+ """Return the maximum possible sum amongst all non - empty subsequences.
+
+ Raises:
+ ValueError: when nums is empty.
+
+ >>> max_subsequence_sum([1,2,3,4,-2])
+ 10
+ >>> max_subsequence_sum([-2, -3, -1, -4, -6])
+ -1
+ >>> max_subsequence_sum([])
+ Traceback (most recent call last):
+ ...
+ ValueError: Input sequence should not be empty
+ >>> max_subsequence_sum()
+ Traceback (most recent call last):
+ ...
+ ValueError: Input sequence should not be empty
+ """
+ if nums is None or not nums:
+ raise ValueError("Input sequence should not be empty")
+
+ ans = nums[0]
+ for i in range(1, len(nums)):
+ num = nums[i]
+ ans = max(ans, ans + num, num)
+
+ return ans
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
+
+ # Try on a sample input from the user
+ n = int(input("Enter number of elements : ").strip())
+ array = list(map(int, input("\nEnter the numbers : ").strip().split()))[:n]
+ print(max_subsequence_sum(array))
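[Cross-check, not part of the patch: for subsequences (elements optional, order preserved) the optimum is the sum of the positive values, or the largest element when none are positive; the running recurrence above agrees.]

    def max_subsequence_sum_direct(nums: list[int]) -> int:
        positives = [num for num in nums if num > 0]
        return sum(positives) if positives else max(nums)

    for nums in ([1, 2, 3, 4, -2], [-2, -3, -1, -4, -6], [5], [-7, 0, -1]):
        ans = nums[0]
        for num in nums[1:]:
            ans = max(ans, ans + num, num)  # recurrence from the patch above
        assert ans == max_subsequence_sum_direct(nums)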
From 1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44 Mon Sep 17 00:00:00 2001
From: Caeden Perelli-Harris
Date: Sun, 14 May 2023 22:03:13 +0100
Subject: [PATCH 069/808] Correct ruff failures (#8732)
* fix: Correct ruff problems
* updating DIRECTORY.md
* fix: Fix pre-commit errors
* updating DIRECTORY.md
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 6 +++++-
conversions/prefix_conversions_string.py | 4 ++--
conversions/rgb_hsv_conversion.py | 4 ++--
.../test_digital_image_processing.py | 2 +-
...ion.py => strassen_matrix_multiplication.py.BROKEN} | 2 +-
dynamic_programming/fibonacci.py | 2 +-
maths/euclidean_distance.py | 6 +++---
physics/horizontal_projectile_motion.py | 6 +++---
searches/binary_tree_traversal.py | 10 ++++------
9 files changed, 22 insertions(+), 20 deletions(-)
rename divide_and_conquer/{strassen_matrix_multiplication.py => strassen_matrix_multiplication.py.BROKEN} (99%)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index a70ad6861d6f..fc6cbaf7ff41 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -294,7 +294,6 @@
* [Mergesort](divide_and_conquer/mergesort.py)
* [Peak](divide_and_conquer/peak.py)
* [Power](divide_and_conquer/power.py)
- * [Strassen Matrix Multiplication](divide_and_conquer/strassen_matrix_multiplication.py)
## Dynamic Programming
* [Abbreviation](dynamic_programming/abbreviation.py)
@@ -632,6 +631,7 @@
* [Radians](maths/radians.py)
* [Radix2 Fft](maths/radix2_fft.py)
* [Relu](maths/relu.py)
+ * [Remove Digit](maths/remove_digit.py)
* [Runge Kutta](maths/runge_kutta.py)
* [Segmented Sieve](maths/segmented_sieve.py)
* Series
@@ -694,6 +694,8 @@
## Neural Network
* [2 Hidden Layers Neural Network](neural_network/2_hidden_layers_neural_network.py)
+ * Activation Functions
+ * [Exponential Linear Unit](neural_network/activation_functions/exponential_linear_unit.py)
* [Back Propagation Neural Network](neural_network/back_propagation_neural_network.py)
* [Convolution Neural Network](neural_network/convolution_neural_network.py)
* [Input Data](neural_network/input_data.py)
@@ -1080,6 +1082,7 @@
## Sorts
* [Bead Sort](sorts/bead_sort.py)
+ * [Binary Insertion Sort](sorts/binary_insertion_sort.py)
* [Bitonic Sort](sorts/bitonic_sort.py)
* [Bogo Sort](sorts/bogo_sort.py)
* [Bubble Sort](sorts/bubble_sort.py)
@@ -1170,6 +1173,7 @@
* [Reverse Words](strings/reverse_words.py)
* [Snake Case To Camel Pascal Case](strings/snake_case_to_camel_pascal_case.py)
* [Split](strings/split.py)
+ * [String Switch Case](strings/string_switch_case.py)
* [Text Justification](strings/text_justification.py)
* [Top K Frequent Words](strings/top_k_frequent_words.py)
* [Upper](strings/upper.py)
diff --git a/conversions/prefix_conversions_string.py b/conversions/prefix_conversions_string.py
index 3851d7c8b993..9344c9672a1f 100644
--- a/conversions/prefix_conversions_string.py
+++ b/conversions/prefix_conversions_string.py
@@ -96,7 +96,7 @@ def add_si_prefix(value: float) -> str:
for name_prefix, value_prefix in prefixes.items():
numerical_part = value / (10**value_prefix)
if numerical_part > 1:
- return f"{str(numerical_part)} {name_prefix}"
+ return f"{numerical_part!s} {name_prefix}"
return str(value)
@@ -111,7 +111,7 @@ def add_binary_prefix(value: float) -> str:
for prefix in BinaryUnit:
numerical_part = value / (2**prefix.value)
if numerical_part > 1:
- return f"{str(numerical_part)} {prefix.name}"
+ return f"{numerical_part!s} {prefix.name}"
return str(value)
diff --git a/conversions/rgb_hsv_conversion.py b/conversions/rgb_hsv_conversion.py
index 081cfe1d75e0..74b3d33e49e7 100644
--- a/conversions/rgb_hsv_conversion.py
+++ b/conversions/rgb_hsv_conversion.py
@@ -121,8 +121,8 @@ def rgb_to_hsv(red: int, green: int, blue: int) -> list[float]:
float_red = red / 255
float_green = green / 255
float_blue = blue / 255
- value = max(max(float_red, float_green), float_blue)
- chroma = value - min(min(float_red, float_green), float_blue)
+ value = max(float_red, float_green, float_blue)
+ chroma = value - min(float_red, float_green, float_blue)
saturation = 0 if value == 0 else chroma / value
if chroma == 0:
diff --git a/digital_image_processing/test_digital_image_processing.py b/digital_image_processing/test_digital_image_processing.py
index c999464ce85e..fee7ab247b55 100644
--- a/digital_image_processing/test_digital_image_processing.py
+++ b/digital_image_processing/test_digital_image_processing.py
@@ -96,7 +96,7 @@ def test_nearest_neighbour(
def test_local_binary_pattern():
- file_path: str = "digital_image_processing/image_data/lena.jpg"
+ file_path = "digital_image_processing/image_data/lena.jpg"
# Reading the image and converting it to grayscale.
image = imread(file_path, 0)
diff --git a/divide_and_conquer/strassen_matrix_multiplication.py b/divide_and_conquer/strassen_matrix_multiplication.py.BROKEN
similarity index 99%
rename from divide_and_conquer/strassen_matrix_multiplication.py
rename to divide_and_conquer/strassen_matrix_multiplication.py.BROKEN
index 371605d6d4d4..2ca91c63bf4c 100644
--- a/divide_and_conquer/strassen_matrix_multiplication.py
+++ b/divide_and_conquer/strassen_matrix_multiplication.py.BROKEN
@@ -122,7 +122,7 @@ def strassen(matrix1: list, matrix2: list) -> list:
if dimension1[0] == dimension1[1] and dimension2[0] == dimension2[1]:
return [matrix1, matrix2]
- maximum = max(max(dimension1), max(dimension2))
+ maximum = max(dimension1, dimension2)
maxim = int(math.pow(2, math.ceil(math.log2(maximum))))
new_matrix1 = matrix1
new_matrix2 = matrix2
diff --git a/dynamic_programming/fibonacci.py b/dynamic_programming/fibonacci.py
index 7ec5993ef38d..c102493aa00b 100644
--- a/dynamic_programming/fibonacci.py
+++ b/dynamic_programming/fibonacci.py
@@ -24,7 +24,7 @@ def get(self, index: int) -> list:
return self.sequence[:index]
-def main():
+def main() -> None:
print(
"Fibonacci Series Using Dynamic Programming\n",
"Enter the index of the Fibonacci number you want to calculate ",
diff --git a/maths/euclidean_distance.py b/maths/euclidean_distance.py
index 22012e92c9cf..9b29b37b0ce6 100644
--- a/maths/euclidean_distance.py
+++ b/maths/euclidean_distance.py
@@ -1,12 +1,12 @@
from __future__ import annotations
+import typing
from collections.abc import Iterable
-from typing import Union
import numpy as np
-Vector = Union[Iterable[float], Iterable[int], np.ndarray]
-VectorOut = Union[np.float64, int, float]
+Vector = typing.Union[Iterable[float], Iterable[int], np.ndarray] # noqa: UP007
+VectorOut = typing.Union[np.float64, int, float] # noqa: UP007
def euclidean_distance(vector_1: Vector, vector_2: Vector) -> VectorOut:
diff --git a/physics/horizontal_projectile_motion.py b/physics/horizontal_projectile_motion.py
index dbde3660f62f..80f85a1b7146 100644
--- a/physics/horizontal_projectile_motion.py
+++ b/physics/horizontal_projectile_motion.py
@@ -147,6 +147,6 @@ def test_motion() -> None:
# Print results
print()
print("Results: ")
- print(f"Horizontal Distance: {str(horizontal_distance(init_vel, angle))} [m]")
- print(f"Maximum Height: {str(max_height(init_vel, angle))} [m]")
- print(f"Total Time: {str(total_time(init_vel, angle))} [s]")
+ print(f"Horizontal Distance: {horizontal_distance(init_vel, angle)!s} [m]")
+ print(f"Maximum Height: {max_height(init_vel, angle)!s} [m]")
+ print(f"Total Time: {total_time(init_vel, angle)!s} [s]")
diff --git a/searches/binary_tree_traversal.py b/searches/binary_tree_traversal.py
index 76e80df25a13..6fb841af4294 100644
--- a/searches/binary_tree_traversal.py
+++ b/searches/binary_tree_traversal.py
@@ -13,11 +13,9 @@ def __init__(self, data):
self.left = None
-def build_tree():
+def build_tree() -> TreeNode:
print("\n********Press N to stop entering at any point of time********\n")
- check = input("Enter the value of the root node: ").strip().lower() or "n"
- if check == "n":
- return None
+ check = input("Enter the value of the root node: ").strip().lower()
q: queue.Queue = queue.Queue()
tree_node = TreeNode(int(check))
q.put(tree_node)
@@ -37,7 +35,7 @@ def build_tree():
right_node = TreeNode(int(check))
node_found.right = right_node
q.put(right_node)
- return None
+ raise
def pre_order(node: TreeNode) -> None:
@@ -272,7 +270,7 @@ def prompt(s: str = "", width=50, char="*") -> str:
doctest.testmod()
print(prompt("Binary Tree Traversals"))
- node = build_tree()
+ node: TreeNode = build_tree()
print(prompt("Pre Order Traversal"))
pre_order(node)
print(prompt() + "\n")
From 2a57dafce096b51b4b28d1495116e79472c8a3f4 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Mon, 15 May 2023 22:27:59 +0100
Subject: [PATCH 070/808] [pre-commit.ci] pre-commit autoupdate (#8716)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
updates:
- [github.com/charliermarsh/ruff-pre-commit: v0.0.263 → v0.0.267](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.263...v0.0.267)
- [github.com/tox-dev/pyproject-fmt: 0.11.1 → 0.11.2](https://github.com/tox-dev/pyproject-fmt/compare/0.11.1...0.11.2)
- [github.com/pre-commit/mirrors-mypy: v1.2.0 → v1.3.0](https://github.com/pre-commit/mirrors-mypy/compare/v1.2.0...v1.3.0)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
.pre-commit-config.yaml | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index accb57da35d3..6bdbc7370c9c 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,7 @@ repos:
- id: auto-walrus
- repo: https://github.com/charliermarsh/ruff-pre-commit
- rev: v0.0.263
+ rev: v0.0.267
hooks:
- id: ruff
@@ -33,7 +33,7 @@ repos:
- tomli
- repo: https://github.com/tox-dev/pyproject-fmt
- rev: "0.11.1"
+ rev: "0.11.2"
hooks:
- id: pyproject-fmt
@@ -51,7 +51,7 @@ repos:
- id: validate-pyproject
- repo: https://github.com/pre-commit/mirrors-mypy
- rev: v1.2.0
+ rev: v1.3.0
hooks:
- id: mypy
args:
From c0892a06515b8ea5030db2e8344dee2292bb10ad Mon Sep 17 00:00:00 2001
From: Maxim Smolskiy
Date: Tue, 16 May 2023 00:47:50 +0300
Subject: [PATCH 071/808] Reduce the complexity of
genetic_algorithm/basic_string.py (#8606)
---
genetic_algorithm/basic_string.py | 95 ++++++++++++++++---------------
1 file changed, 50 insertions(+), 45 deletions(-)
diff --git a/genetic_algorithm/basic_string.py b/genetic_algorithm/basic_string.py
index 45b8be651f6e..388e7219f54b 100644
--- a/genetic_algorithm/basic_string.py
+++ b/genetic_algorithm/basic_string.py
@@ -21,6 +21,54 @@
random.seed(random.randint(0, 1000))
+def evaluate(item: str, main_target: str) -> tuple[str, float]:
+ """
+ Evaluate how similar the item is to the target by counting
+ the characters that match in each position
+ >>> evaluate("Helxo Worlx", "Hello World")
+ ('Helxo Worlx', 9.0)
+ """
+ score = len([g for position, g in enumerate(item) if g == main_target[position]])
+ return (item, float(score))
+
+
+def crossover(parent_1: str, parent_2: str) -> tuple[str, str]:
+ """Slice and combine two string at a random point."""
+ random_slice = random.randint(0, len(parent_1) - 1)
+ child_1 = parent_1[:random_slice] + parent_2[random_slice:]
+ child_2 = parent_2[:random_slice] + parent_1[random_slice:]
+ return (child_1, child_2)
+
+
+def mutate(child: str, genes: list[str]) -> str:
+ """Mutate a random gene of a child with another one from the list."""
+ child_list = list(child)
+ if random.uniform(0, 1) < MUTATION_PROBABILITY:
+ child_list[random.randint(0, len(child)) - 1] = random.choice(genes)
+ return "".join(child_list)
+
+
+# Select, crossover and mutate a new population.
+def select(
+ parent_1: tuple[str, float],
+ population_score: list[tuple[str, float]],
+ genes: list[str],
+) -> list[str]:
+ """Select the second parent and generate new population"""
+ pop = []
+ # Generate more children proportionally to the fitness score.
+ child_n = int(parent_1[1] * 100) + 1
+ child_n = 10 if child_n >= 10 else child_n
+ for _ in range(child_n):
+ parent_2 = population_score[random.randint(0, N_SELECTED)][0]
+
+ child_1, child_2 = crossover(parent_1[0], parent_2)
+ # Append new string to the population list.
+ pop.append(mutate(child_1, genes))
+ pop.append(mutate(child_2, genes))
+ return pop
+
+
def basic(target: str, genes: list[str], debug: bool = True) -> tuple[int, int, str]:
"""
Verify that the target contains no genes besides the ones inside genes variable.
@@ -70,17 +118,6 @@ def basic(target: str, genes: list[str], debug: bool = True) -> tuple[int, int,
total_population += len(population)
# Random population created. Now it's time to evaluate.
- def evaluate(item: str, main_target: str = target) -> tuple[str, float]:
- """
- Evaluate how similar the item is with the target by just
- counting each char in the right position
- >>> evaluate("Helxo Worlx", Hello World)
- ["Helxo Worlx", 9]
- """
- score = len(
- [g for position, g in enumerate(item) if g == main_target[position]]
- )
- return (item, float(score))
# Adding a bit of concurrency can make everything faster,
#
@@ -94,7 +131,7 @@ def evaluate(item: str, main_target: str = target) -> tuple[str, float]:
#
# but with a simple algorithm like this, it will probably be slower.
# We just need to call evaluate for every item inside the population.
- population_score = [evaluate(item) for item in population]
+ population_score = [evaluate(item, target) for item in population]
# Check if there is a matching evolution.
population_score = sorted(population_score, key=lambda x: x[1], reverse=True)
@@ -121,41 +158,9 @@ def evaluate(item: str, main_target: str = target) -> tuple[str, float]:
(item, score / len(target)) for item, score in population_score
]
- # Select, crossover and mutate a new population.
- def select(parent_1: tuple[str, float]) -> list[str]:
- """Select the second parent and generate new population"""
- pop = []
- # Generate more children proportionally to the fitness score.
- child_n = int(parent_1[1] * 100) + 1
- child_n = 10 if child_n >= 10 else child_n
- for _ in range(child_n):
- parent_2 = population_score[ # noqa: B023
- random.randint(0, N_SELECTED)
- ][0]
-
- child_1, child_2 = crossover(parent_1[0], parent_2)
- # Append new string to the population list.
- pop.append(mutate(child_1))
- pop.append(mutate(child_2))
- return pop
-
- def crossover(parent_1: str, parent_2: str) -> tuple[str, str]:
- """Slice and combine two string at a random point."""
- random_slice = random.randint(0, len(parent_1) - 1)
- child_1 = parent_1[:random_slice] + parent_2[random_slice:]
- child_2 = parent_2[:random_slice] + parent_1[random_slice:]
- return (child_1, child_2)
-
- def mutate(child: str) -> str:
- """Mutate a random gene of a child with another one from the list."""
- child_list = list(child)
- if random.uniform(0, 1) < MUTATION_PROBABILITY:
- child_list[random.randint(0, len(child)) - 1] = random.choice(genes)
- return "".join(child_list)
-
# This is selection
for i in range(N_SELECTED):
- population.extend(select(population_score[int(i)]))
+ population.extend(select(population_score[int(i)], population_score, genes))
# Check if the population has already reached the maximum value and if so,
# break the cycle. If this check is disabled, the algorithm will take
# forever to compute large strings, but will also calculate small strings in
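Context for the refactor above: the old inner `select` needed `# noqa: B023` because a function defined inside a loop closes over loop variables by reference, not by value (flake8-bugbear's B023). Hoisting the helpers to module level and passing `population_score` and `genes` explicitly removes that pitfall. A minimal sketch of the late-binding behavior B023 warns about:

callbacks = [lambda: i for i in range(3)]
print([f() for f in callbacks])  # [2, 2, 2] - every closure sees the final i

callbacks = [lambda i=i: i for i in range(3)]
print([f() for f in callbacks])  # [0, 1, 2] - defaults bind at definition time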
From 8102424950f2d3801eda7817d7f69288fd984a63 Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Tue, 16 May 2023 17:05:55 -0700
Subject: [PATCH 072/808] `local_weighted_learning.py`: fix `mypy` errors and
more (#8073)
---
.../local_weighted_learning.py | 188 +++++++++++-------
1 file changed, 112 insertions(+), 76 deletions(-)
diff --git a/machine_learning/local_weighted_learning/local_weighted_learning.py b/machine_learning/local_weighted_learning/local_weighted_learning.py
index 6260e9ac6bfe..8dd0e55d41df 100644
--- a/machine_learning/local_weighted_learning/local_weighted_learning.py
+++ b/machine_learning/local_weighted_learning/local_weighted_learning.py
@@ -1,14 +1,55 @@
+"""
+Locally weighted linear regression, also called local regression, is a type of
+non-parametric linear regression that prioritizes data closest to a given
+prediction point. The algorithm estimates the vector of model coefficients β
+using weighted least squares regression:
+
+β = (XᵀWX)⁻¹(XᵀWy),
+
+where X is the design matrix, y is the response vector, and W is the diagonal
+weight matrix.
+
+This implementation calculates wᵢ, the weight of the ith training sample, using
+the Gaussian weight:
+
+wᵢ = exp(-‖xᵢ - x‖²/(2τ²)),
+
+where xᵢ is the ith training sample, x is the prediction point, τ is the
+"bandwidth", and ‖x‖ is the Euclidean norm (also called the 2-norm or the L²
+norm). The bandwidth τ controls how quickly the weight of a training sample
+decreases as its distance from the prediction point increases. One can think of
+the Gaussian weight as a bell curve centered around the prediction point: a
+training sample is weighted lower if it's farther from the center, and τ
+controls the spread of the bell curve.
+
+Other types of locally weighted regression such as locally estimated scatterplot
+smoothing (LOESS) typically use different weight functions.
+
+References:
+ - https://en.wikipedia.org/wiki/Local_regression
+ - https://en.wikipedia.org/wiki/Weighted_least_squares
+ - https://cs229.stanford.edu/notes2022fall/main_notes.pdf
+"""
+
import matplotlib.pyplot as plt
import numpy as np
-def weighted_matrix(
- point: np.array, training_data_x: np.array, bandwidth: float
-) -> np.array:
+def weight_matrix(point: np.ndarray, x_train: np.ndarray, tau: float) -> np.ndarray:
"""
- Calculate the weight for every point in the data set.
- point --> the x value at which we want to make predictions
- >>> weighted_matrix(
+ Calculate the weight of every point in the training data around a given
+ prediction point
+
+ Args:
+ point: x-value at which the prediction is being made
+ x_train: ndarray of x-values for training
+ tau: bandwidth value, controls how quickly the weight of training values
+ decreases as the distance from the prediction point increases
+
+ Returns:
+ m x m weight matrix around the prediction point, where m is the size of
+ the training set
+ >>> weight_matrix(
... np.array([1., 1.]),
... np.array([[16.99, 10.34], [21.01,23.68], [24.59,25.69]]),
... 0.6
@@ -17,25 +58,30 @@ def weighted_matrix(
[0.00000000e+000, 0.00000000e+000, 0.00000000e+000],
[0.00000000e+000, 0.00000000e+000, 0.00000000e+000]])
"""
- m, _ = np.shape(training_data_x) # m is the number of training samples
- weights = np.eye(m) # Initializing weights as identity matrix
-
- # calculating weights for all training examples [x(i)'s]
+ m = len(x_train) # Number of training samples
+ weights = np.eye(m) # Initialize weights as identity matrix
for j in range(m):
- diff = point - training_data_x[j]
- weights[j, j] = np.exp(diff @ diff.T / (-2.0 * bandwidth**2))
+ diff = point - x_train[j]
+ weights[j, j] = np.exp(diff @ diff.T / (-2.0 * tau**2))
+
return weights
def local_weight(
- point: np.array,
- training_data_x: np.array,
- training_data_y: np.array,
- bandwidth: float,
-) -> np.array:
+ point: np.ndarray, x_train: np.ndarray, y_train: np.ndarray, tau: float
+) -> np.ndarray:
"""
- Calculate the local weights using the weight_matrix function on training data.
- Return the weighted matrix.
+ Calculate the local weights at a given prediction point using the weight
+ matrix for that point
+
+ Args:
+ point: x-value at which the prediction is being made
+ x_train: ndarray of x-values for training
+ y_train: ndarray of y-values for training
+ tau: bandwidth value, controls how quickly the weight of training values
+ decreases as the distance from the prediction point increases
+ Returns:
+ ndarray of local weights
>>> local_weight(
... np.array([1., 1.]),
... np.array([[16.99, 10.34], [21.01,23.68], [24.59,25.69]]),
@@ -45,19 +91,28 @@ def local_weight(
array([[0.00873174],
[0.08272556]])
"""
- weight = weighted_matrix(point, training_data_x, bandwidth)
- w = np.linalg.inv(training_data_x.T @ (weight @ training_data_x)) @ (
- training_data_x.T @ weight @ training_data_y.T
+ weight_mat = weight_matrix(point, x_train, tau)
+ weight = np.linalg.inv(x_train.T @ weight_mat @ x_train) @ (
+ x_train.T @ weight_mat @ y_train.T
)
- return w
+ return weight
def local_weight_regression(
- training_data_x: np.array, training_data_y: np.array, bandwidth: float
-) -> np.array:
+ x_train: np.ndarray, y_train: np.ndarray, tau: float
+) -> np.ndarray:
"""
- Calculate predictions for each data point on axis
+ Calculate predictions for each point in the training data
+
+ Args:
+ x_train: ndarray of x-values for training
+ y_train: ndarray of y-values for training
+ tau: bandwidth value, controls how quickly the weight of training values
+ decreases as the distance from the prediction point increases
+
+ Returns:
+ ndarray of predictions
>>> local_weight_regression(
... np.array([[16.99, 10.34], [21.01, 23.68], [24.59, 25.69]]),
... np.array([[1.01, 1.66, 3.5]]),
@@ -65,77 +120,57 @@ def local_weight_regression(
... )
array([1.07173261, 1.65970737, 3.50160179])
"""
- m, _ = np.shape(training_data_x)
- ypred = np.zeros(m)
+ y_pred = np.zeros(len(x_train)) # Initialize array of predictions
+ for i, item in enumerate(x_train):
+ y_pred[i] = item @ local_weight(item, x_train, y_train, tau)
- for i, item in enumerate(training_data_x):
- ypred[i] = item @ local_weight(
- item, training_data_x, training_data_y, bandwidth
- )
-
- return ypred
+ return y_pred
def load_data(
- dataset_name: str, cola_name: str, colb_name: str
-) -> tuple[np.array, np.array, np.array, np.array]:
+ dataset_name: str, x_name: str, y_name: str
+) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
"""
Load data from seaborn and split it into x and y points
+ >>> pass # No doctests, function is for demo purposes only
"""
import seaborn as sns
data = sns.load_dataset(dataset_name)
- col_a = np.array(data[cola_name]) # total_bill
- col_b = np.array(data[colb_name]) # tip
-
- mcol_a = col_a.copy()
- mcol_b = col_b.copy()
-
- one = np.ones(np.shape(mcol_b)[0], dtype=int)
+ x_data = np.array(data[x_name])
+ y_data = np.array(data[y_name])
- # pairing elements of one and mcol_a
- training_data_x = np.column_stack((one, mcol_a))
+ one = np.ones(len(y_data))
- return training_data_x, mcol_b, col_a, col_b
+ # pairing elements of one and x_data
+ x_train = np.column_stack((one, x_data))
-
-def get_preds(training_data_x: np.array, mcol_b: np.array, tau: float) -> np.array:
- """
- Get predictions with minimum error for each training data
- >>> get_preds(
- ... np.array([[16.99, 10.34], [21.01, 23.68], [24.59, 25.69]]),
- ... np.array([[1.01, 1.66, 3.5]]),
- ... 0.6
- ... )
- array([1.07173261, 1.65970737, 3.50160179])
- """
- ypred = local_weight_regression(training_data_x, mcol_b, tau)
- return ypred
+ return x_train, x_data, y_data
def plot_preds(
- training_data_x: np.array,
- predictions: np.array,
- col_x: np.array,
- col_y: np.array,
- cola_name: str,
- colb_name: str,
-) -> plt.plot:
+ x_train: np.ndarray,
+ preds: np.ndarray,
+ x_data: np.ndarray,
+ y_data: np.ndarray,
+ x_name: str,
+ y_name: str,
+) -> None:
"""
Plot predictions and display the graph
+ >>> pass # No doctests, function is for demo purposes only
"""
- xsort = training_data_x.copy()
- xsort.sort(axis=0)
- plt.scatter(col_x, col_y, color="blue")
+ x_train_sorted = np.sort(x_train, axis=0)
+ plt.scatter(x_data, y_data, color="blue")
plt.plot(
- xsort[:, 1],
- predictions[training_data_x[:, 1].argsort(0)],
+ x_train_sorted[:, 1],
+ preds[x_train[:, 1].argsort(0)],
color="yellow",
linewidth=5,
)
plt.title("Local Weighted Regression")
- plt.xlabel(cola_name)
- plt.ylabel(colb_name)
+ plt.xlabel(x_name)
+ plt.ylabel(y_name)
plt.show()
@@ -144,6 +179,7 @@ def plot_preds(
doctest.testmod()
- training_data_x, mcol_b, col_a, col_b = load_data("tips", "total_bill", "tip")
- predictions = get_preds(training_data_x, mcol_b, 0.5)
- plot_preds(training_data_x, predictions, col_a, col_b, "total_bill", "tip")
+ # Demo with a dataset from the seaborn module
+ training_data_x, total_bill, tip = load_data("tips", "total_bill", "tip")
+ predictions = local_weight_regression(training_data_x, tip, 5)
+ plot_preds(training_data_x, predictions, total_bill, tip, "total_bill", "tip")
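For readers who want the docstring's formula in one place: the sketch below restates β = (XᵀWX)⁻¹(XᵀWy) with the Gaussian weight directly in NumPy. It mirrors `weight_matrix` and `local_weight` above, but it is an editorial condensation, not part of the patch:

import numpy as np


def lwr_predict(
    point: np.ndarray, x_train: np.ndarray, y_train: np.ndarray, tau: float
) -> float:
    # Gaussian weights: w_i = exp(-||x_i - point||^2 / (2 * tau^2))
    diff = x_train - point
    weights = np.exp(-np.sum(diff * diff, axis=1) / (2 * tau**2))
    w = np.diag(weights)
    # beta = (X^T W X)^-1 (X^T W y)
    beta = np.linalg.inv(x_train.T @ w @ x_train) @ (x_train.T @ w @ y_train)
    return float(point @ beta)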
From 3dc143f7218a1221f346c0fccb516d1199850e18 Mon Sep 17 00:00:00 2001
From: Rohan Saraogi <62804340+r0sa2@users.noreply.github.com>
Date: Wed, 17 May 2023 05:38:56 +0530
Subject: [PATCH 073/808] Added odd_sieve.py (#8740)
---
maths/odd_sieve.py | 42 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 42 insertions(+)
create mode 100644 maths/odd_sieve.py
diff --git a/maths/odd_sieve.py b/maths/odd_sieve.py
new file mode 100644
index 000000000000..60e92921a94c
--- /dev/null
+++ b/maths/odd_sieve.py
@@ -0,0 +1,42 @@
+from itertools import compress, repeat
+from math import ceil, sqrt
+
+
+def odd_sieve(num: int) -> list[int]:
+ """
+ Returns the prime numbers < `num`. The prime numbers are calculated using an
+ odd sieve implementation of the Sieve of Eratosthenes algorithm
+ (see for reference https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes).
+
+ >>> odd_sieve(2)
+ []
+ >>> odd_sieve(3)
+ [2]
+ >>> odd_sieve(10)
+ [2, 3, 5, 7]
+ >>> odd_sieve(20)
+ [2, 3, 5, 7, 11, 13, 17, 19]
+ """
+
+ if num <= 2:
+ return []
+ if num == 3:
+ return [2]
+
+ # Odd sieve for numbers in range [3, num - 1]
+ sieve = bytearray(b"\x01") * ((num >> 1) - 1)
+
+ for i in range(3, int(sqrt(num)) + 1, 2):
+ if sieve[(i >> 1) - 1]:
+ i_squared = i**2
+ sieve[(i_squared >> 1) - 1 :: i] = repeat(
+ 0, ceil((num - i_squared) / (i << 1))
+ )
+
+ return [2] + list(compress(range(3, num, 2), sieve))
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
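The index arithmetic above is compact, so a worked note may help: the bytearray stores only the odd candidates, mapping an odd number n >= 3 to index (n >> 1) - 1, and crossing off i*i, i*i + 2i, i*i + 4i, ... advances the value by 2i per step, which is exactly i positions in the array (hence the slice step of i). A small sanity check of the mapping:

# Odd n >= 3 lives at index (n >> 1) - 1 in the sieve bytearray
for n in (3, 5, 7, 9, 11):
    print(n, "->", (n >> 1) - 1)  # 3->0, 5->1, 7->2, 9->3, 11->4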
From 61cfb43d2b9246d1e2019ce7f03cb91f452ed2ba Mon Sep 17 00:00:00 2001
From: Alexander Pantyukhin
Date: Wed, 17 May 2023 04:21:16 +0400
Subject: [PATCH 074/808] Add h index (#8036)
---
DIRECTORY.md | 1 +
other/h_index.py | 71 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 72 insertions(+)
create mode 100644 other/h_index.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index fc6cbaf7ff41..46bd51ce91ea 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -712,6 +712,7 @@
* [Gauss Easter](other/gauss_easter.py)
* [Graham Scan](other/graham_scan.py)
* [Greedy](other/greedy.py)
+ * [H Index](other/h_index.py)
* [Least Recently Used](other/least_recently_used.py)
* [Lfu Cache](other/lfu_cache.py)
* [Linear Congruential Generator](other/linear_congruential_generator.py)
diff --git a/other/h_index.py b/other/h_index.py
new file mode 100644
index 000000000000..e91389675b16
--- /dev/null
+++ b/other/h_index.py
@@ -0,0 +1,71 @@
+"""
+Task:
+Given an array of integers citations where citations[i] is the number of
+citations a researcher received for their ith paper, compute the
+researcher's h-index.
+
+According to the definition of h-index on Wikipedia: A scientist has an
+index h if h of their n papers have at least h citations each, and the other
+n - h papers have no more than h citations each.
+
+If there are several possible values for h, the maximum one is taken as the
+h-index.
+
+H-Index link: https://en.wikipedia.org/wiki/H-index
+
+Implementation notes:
+Use sorting of array
+
+Leetcode link: https://leetcode.com/problems/h-index/description/
+
+n = len(citations)
+Runtime Complexity: O(n * log(n))
+Space Complexity: O(1)
+
+"""
+
+
+def h_index(citations: list[int]) -> int:
+ """
+ Return H-index of citations
+
+ >>> h_index([3, 0, 6, 1, 5])
+ 3
+ >>> h_index([1, 3, 1])
+ 1
+ >>> h_index([1, 2, 3])
+ 2
+ >>> h_index('test')
+ Traceback (most recent call last):
+ ...
+ ValueError: The citations should be a list of non-negative integers.
+ >>> h_index([1,2,'3'])
+ Traceback (most recent call last):
+ ...
+ ValueError: The citations should be a list of non-negative integers.
+ >>> h_index([1,2,-3])
+ Traceback (most recent call last):
+ ...
+ ValueError: The citations should be a list of non-negative integers.
+ """
+
+ # validate:
+ if not isinstance(citations, list) or not all(
+ isinstance(item, int) and item >= 0 for item in citations
+ ):
+ raise ValueError("The citations should be a list of non negative integers.")
+
+ citations.sort()
+ len_citations = len(citations)
+
+ for i in range(len_citations):
+ if citations[len_citations - 1 - i] <= i:
+ return i
+
+ return len_citations
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
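The loop above scans the ascending sort from the top: at step i it checks whether the (i + 1)-th most-cited paper still clears i citations. An equivalent one-pass reformulation over the descending sort, offered as a cross-check rather than a replacement:

def h_index_alt(citations: list[int]) -> int:
    # h = number of papers whose citation count is at least their
    # 1-based rank when sorted by citations, highest first
    ranked = sorted(citations, reverse=True)
    return sum(count >= rank for rank, count in enumerate(ranked, start=1))


assert h_index_alt([3, 0, 6, 1, 5]) == 3
assert h_index_alt([1, 3, 1]) == 1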
From a2783c6597a154a87f60bb5878770d2f152a1d09 Mon Sep 17 00:00:00 2001
From: Harkishan Khuva <78949167+hakiKhuva@users.noreply.github.com>
Date: Wed, 17 May 2023 05:52:24 +0530
Subject: [PATCH 075/808] Create guess_the_number_search.py (#7937)
---
other/guess_the_number_search.py | 165 +++++++++++++++++++++++++++++++
1 file changed, 165 insertions(+)
create mode 100644 other/guess_the_number_search.py
diff --git a/other/guess_the_number_search.py b/other/guess_the_number_search.py
new file mode 100644
index 000000000000..0439223f2ec9
--- /dev/null
+++ b/other/guess_the_number_search.py
@@ -0,0 +1,165 @@
+"""
+Guess the number using lower, higher and the value to find or guess.
+
+The solution works by repeatedly halving the interval between lower and
+higher until the midpoint equals the value to guess.
+
+Suppose lower is 0, higher is 1000 and the number to guess is 355.
+
+>>> guess_the_number(10, 1000, 17)
+started...
+guess the number : 17
+details : [505, 257, 133, 71, 40, 25, 17]
+
+"""
+
+
+def temp_input_value(
+ min_val: int = 10, max_val: int = 1000, option: bool = True
+) -> int:
+ """
+ Temporary input values for tests
+
+ >>> temp_input_value(option=True)
+ 10
+
+ >>> temp_input_value(option=False)
+ 1000
+
+ >>> temp_input_value(min_val=100, option=True)
+ 100
+
+ >>> temp_input_value(min_val=100, max_val=50)
+ Traceback (most recent call last):
+ ...
+ ValueError: Invalid values: min_val must be less than max_val
+
+ >>> temp_input_value("ten","fifty",1)
+ Traceback (most recent call last):
+ ...
+ AssertionError: Invalid type of value(s) specified to function!
+
+ >>> temp_input_value(min_val=-100, max_val=500)
+ -100
+
+ >>> temp_input_value(min_val=-5100, max_val=-100)
+ -5100
+ """
+ assert (
+ isinstance(min_val, int)
+ and isinstance(max_val, int)
+ and isinstance(option, bool)
+ ), "Invalid type of value(s) specified to function!"
+
+ if min_val > max_val:
+ raise ValueError("Invalid value for min_val or max_val (min_value < max_value)")
+ return min_val if option else max_val
+
+
+def get_avg(number_1: int, number_2: int) -> int:
+ """
+ Return the whole-number midpoint of two integers number_1 and number_2
+
+ >>> get_avg(10, 15)
+ 12
+
+ >>> get_avg(20, 300)
+ 160
+
+ >>> get_avg("abcd", 300)
+ Traceback (most recent call last):
+ ...
+ TypeError: can only concatenate str (not "int") to str
+
+ >>> get_avg(10.5,50.25)
+ 30
+ """
+ return int((number_1 + number_2) / 2)
+
+
+def guess_the_number(lower: int, higher: int, to_guess: int) -> None:
+ """
+ Guess the number by repeatedly halving the range between lower and
+ higher, comparing each midpoint against `to_guess` via inner helpers
+
+ >>> guess_the_number(10, 1000, 17)
+ started...
+ guess the number : 17
+ details : [505, 257, 133, 71, 40, 25, 17]
+
+ >>> guess_the_number(-10000, 10000, 7)
+ started...
+ guess the number : 7
+ details : [0, 5000, 2500, 1250, 625, 312, 156, 78, 39, 19, 9, 4, 6, 7]
+
+ >>> guess_the_number(10, 1000, "a")
+ Traceback (most recent call last):
+ ...
+ AssertionError: argument values must be type of "int"
+
+ >>> guess_the_number(10, 1000, 5)
+ Traceback (most recent call last):
+ ...
+ ValueError: guess value must be within the range of lower and higher value
+
+ >>> guess_the_number(10000, 100, 5)
+ Traceback (most recent call last):
+ ...
+ ValueError: lower value must be less than higher value
+ """
+ assert (
+ isinstance(lower, int) and isinstance(higher, int) and isinstance(to_guess, int)
+ ), 'argument values must be type of "int"'
+
+ if lower > higher:
+ raise ValueError("argument value for lower and higher must be(lower > higher)")
+
+ if not lower < to_guess < higher:
+ raise ValueError(
+ "guess value must be within the range of lower and higher value"
+ )
+
+ def answer(number: int) -> str:
+ """
+ Returns value by comparing with entered `to_guess` number
+ """
+ if number > to_guess:
+ return "high"
+ elif number < to_guess:
+ return "low"
+ else:
+ return "same"
+
+ print("started...")
+
+ last_lowest = lower
+ last_highest = higher
+
+ last_numbers = []
+
+ while True:
+ number = get_avg(last_lowest, last_highest)
+ last_numbers.append(number)
+
+ if answer(number) == "low":
+ last_lowest = number
+ elif answer(number) == "high":
+ last_highest = number
+ else:
+ break
+
+ print(f"guess the number : {last_numbers[-1]}")
+ print(f"details : {str(last_numbers)}")
+
+
+def main() -> None:
+ """
+ Entry point of the script.
+ """
+ lower = int(input("Enter lower value : ").strip())
+ higher = int(input("Enter high value : ").strip())
+ guess = int(input("Enter value to guess : ").strip())
+ guess_the_number(lower, higher, guess)
+
+
+if __name__ == "__main__":
+ main()
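What the `while True` loop above implements is plain bisection, so the doctest's `details` list can be reproduced by hand. A standalone trace under the same inputs (floor division matches `get_avg` for the non-negative sums used here):

lower, higher, target = 10, 1000, 17
probes = []
while True:
    mid = (lower + higher) // 2
    probes.append(mid)
    if mid < target:
        lower = mid
    elif mid > target:
        higher = mid
    else:
        break
print(probes)  # [505, 257, 133, 71, 40, 25, 17], as in the doctest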
From 9b3e4028c6927a17656e590e878c2a101bc4e951 Mon Sep 17 00:00:00 2001
From: Caeden Perelli-Harris
Date: Wed, 17 May 2023 07:47:23 +0100
Subject: [PATCH 076/808] Fixes broken "Create guess_the_number_search.py"
(#8746)
---
DIRECTORY.md | 2 ++
other/guess_the_number_search.py | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 46bd51ce91ea..82791cde183d 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -605,6 +605,7 @@
* [Newton Raphson](maths/newton_raphson.py)
* [Number Of Digits](maths/number_of_digits.py)
* [Numerical Integration](maths/numerical_integration.py)
+ * [Odd Sieve](maths/odd_sieve.py)
* [Perfect Cube](maths/perfect_cube.py)
* [Perfect Number](maths/perfect_number.py)
* [Perfect Square](maths/perfect_square.py)
@@ -712,6 +713,7 @@
* [Gauss Easter](other/gauss_easter.py)
* [Graham Scan](other/graham_scan.py)
* [Greedy](other/greedy.py)
+ * [Guess The Number Search](other/guess_the_number_search.py)
* [H Index](other/h_index.py)
* [Least Recently Used](other/least_recently_used.py)
* [Lfu Cache](other/lfu_cache.py)
diff --git a/other/guess_the_number_search.py b/other/guess_the_number_search.py
index 0439223f2ec9..01e8898bbb8a 100644
--- a/other/guess_the_number_search.py
+++ b/other/guess_the_number_search.py
@@ -148,7 +148,7 @@ def answer(number: int) -> str:
break
print(f"guess the number : {last_numbers[-1]}")
- print(f"details : {str(last_numbers)}")
+ print(f"details : {last_numbers!s}")
def main() -> None:
From cf5e34d4794fbba04d18c98d5d09854029c83466 Mon Sep 17 00:00:00 2001
From: Rohan Saraogi <62804340+r0sa2@users.noreply.github.com>
Date: Fri, 19 May 2023 05:18:22 +0530
Subject: [PATCH 077/808] Added is_palindrome.py (#8748)
---
maths/is_palindrome.py | 34 ++++++++++++++++++++++++++++++++++
1 file changed, 34 insertions(+)
create mode 100644 maths/is_palindrome.py
diff --git a/maths/is_palindrome.py b/maths/is_palindrome.py
new file mode 100644
index 000000000000..ba60573ab022
--- /dev/null
+++ b/maths/is_palindrome.py
@@ -0,0 +1,34 @@
+def is_palindrome(num: int) -> bool:
+ """
+ Returns whether `num` is a palindrome or not
+ (see for reference https://en.wikipedia.org/wiki/Palindromic_number).
+
+ >>> is_palindrome(-121)
+ False
+ >>> is_palindrome(0)
+ True
+ >>> is_palindrome(10)
+ False
+ >>> is_palindrome(11)
+ True
+ >>> is_palindrome(101)
+ True
+ >>> is_palindrome(120)
+ False
+ """
+ if num < 0:
+ return False
+
+ num_copy: int = num
+ rev_num: int = 0
+ while num > 0:
+ rev_num = rev_num * 10 + (num % 10)
+ num //= 10
+
+ return num_copy == rev_num
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
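The reversal loop above peels off the last digit with `num % 10` and appends it to `rev_num`; a short executable trace makes the invariant visible (an editorial annotation, not part of the patch):

num = 121
rev_num = 0
while num > 0:
    rev_num = rev_num * 10 + num % 10  # move the last digit onto rev_num
    num //= 10
    print(rev_num, num)  # 1 12, then 12 1, then 121 0
print(rev_num == 121)  # True, so 121 is a palindrome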
From edc17b60e00e704cb4109a0e6b18c6ad43234c26 Mon Sep 17 00:00:00 2001
From: Daniel Luo <103051750+DanielLuo7@users.noreply.github.com>
Date: Thu, 18 May 2023 20:40:52 -0400
Subject: [PATCH 078/808] add __main__ around print (#8747)
---
ciphers/mixed_keyword_cypher.py | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/ciphers/mixed_keyword_cypher.py b/ciphers/mixed_keyword_cypher.py
index 806004faa079..93a0e3acb7b1 100644
--- a/ciphers/mixed_keyword_cypher.py
+++ b/ciphers/mixed_keyword_cypher.py
@@ -65,4 +65,5 @@ def mixed_keyword(key: str = "college", pt: str = "UNIVERSITY") -> str:
return cypher
-print(mixed_keyword("college", "UNIVERSITY"))
+if __name__ == "__main__":
+ print(mixed_keyword("college", "UNIVERSITY"))
From ce43a8ac4ad14e1639014d374b1137906218cfe3 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Tue, 23 May 2023 05:54:30 +0200
Subject: [PATCH 079/808] [pre-commit.ci] pre-commit autoupdate (#8759)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/charliermarsh/ruff-pre-commit: v0.0.267 → v0.0.269](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.267...v0.0.269)
- [github.com/abravalheri/validate-pyproject: v0.12.2 → v0.13](https://github.com/abravalheri/validate-pyproject/compare/v0.12.2...v0.13)
* updating DIRECTORY.md
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
.pre-commit-config.yaml | 4 ++--
DIRECTORY.md | 1 +
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 6bdbc7370c9c..bd5bca8f05ab 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,7 @@ repos:
- id: auto-walrus
- repo: https://github.com/charliermarsh/ruff-pre-commit
- rev: v0.0.267
+ rev: v0.0.269
hooks:
- id: ruff
@@ -46,7 +46,7 @@ repos:
pass_filenames: false
- repo: https://github.com/abravalheri/validate-pyproject
- rev: v0.12.2
+ rev: v0.13
hooks:
- id: validate-pyproject
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 82791cde183d..3181a93f393d 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -577,6 +577,7 @@
* [Hexagonal Number](maths/hexagonal_number.py)
* [Integration By Simpson Approx](maths/integration_by_simpson_approx.py)
* [Is Ip V4 Address Valid](maths/is_ip_v4_address_valid.py)
+ * [Is Palindrome](maths/is_palindrome.py)
* [Is Square Free](maths/is_square_free.py)
* [Jaccard Similarity](maths/jaccard_similarity.py)
* [Juggler Sequence](maths/juggler_sequence.py)
From df88771905e68c0639069a92144d6b7af1d491ce Mon Sep 17 00:00:00 2001
From: Caeden Perelli-Harris
Date: Thu, 25 May 2023 06:59:15 +0100
Subject: [PATCH 080/808] Mark fetch anime and play as broken (#8763)
* updating DIRECTORY.md
* updating DIRECTORY.md
* fix: Correct ruff errors
* fix: Mark anime algorithm as broken
* updating DIRECTORY.md
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 1 -
.../{fetch_anime_and_play.py => fetch_anime_and_play.py.BROKEN} | 0
2 files changed, 1 deletion(-)
rename web_programming/{fetch_anime_and_play.py => fetch_anime_and_play.py.BROKEN} (100%)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 3181a93f393d..71bdf30b2ddb 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -1199,7 +1199,6 @@
* [Daily Horoscope](web_programming/daily_horoscope.py)
* [Download Images From Google Query](web_programming/download_images_from_google_query.py)
* [Emails From Url](web_programming/emails_from_url.py)
- * [Fetch Anime And Play](web_programming/fetch_anime_and_play.py)
* [Fetch Bbc News](web_programming/fetch_bbc_news.py)
* [Fetch Github Info](web_programming/fetch_github_info.py)
* [Fetch Jobs](web_programming/fetch_jobs.py)
diff --git a/web_programming/fetch_anime_and_play.py b/web_programming/fetch_anime_and_play.py.BROKEN
similarity index 100%
rename from web_programming/fetch_anime_and_play.py
rename to web_programming/fetch_anime_and_play.py.BROKEN
From 200429fc4739c3757180635016614b984cfd2206 Mon Sep 17 00:00:00 2001
From: Chris O <46587501+ChrisO345@users.noreply.github.com>
Date: Thu, 25 May 2023 18:04:42 +1200
Subject: [PATCH 081/808] Dual Number Automatic Differentiation (#8760)
* Added dual_number_automatic_differentiation.py
* updating DIRECTORY.md
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update maths/dual_number_automatic_differentiation.py
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
DIRECTORY.md | 1 +
.../dual_number_automatic_differentiation.py | 141 ++++++++++++++++++
2 files changed, 142 insertions(+)
create mode 100644 maths/dual_number_automatic_differentiation.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 71bdf30b2ddb..a75723369b06 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -549,6 +549,7 @@
* [Dodecahedron](maths/dodecahedron.py)
* [Double Factorial Iterative](maths/double_factorial_iterative.py)
* [Double Factorial Recursive](maths/double_factorial_recursive.py)
+ * [Dual Number Automatic Differentiation](maths/dual_number_automatic_differentiation.py)
* [Entropy](maths/entropy.py)
* [Euclidean Distance](maths/euclidean_distance.py)
* [Euclidean Gcd](maths/euclidean_gcd.py)
diff --git a/maths/dual_number_automatic_differentiation.py b/maths/dual_number_automatic_differentiation.py
new file mode 100644
index 000000000000..9aa75830c4a1
--- /dev/null
+++ b/maths/dual_number_automatic_differentiation.py
@@ -0,0 +1,141 @@
+from math import factorial
+
+"""
+https://en.wikipedia.org/wiki/Automatic_differentiation#Automatic_differentiation_using_dual_numbers
+https://blog.jliszka.org/2013/10/24/exact-numeric-nth-derivatives.html
+
+Note that this only works for basic functions f(x) where the power of x is positive.
+"""
+
+
+class Dual:
+ def __init__(self, real, rank):
+ self.real = real
+ if isinstance(rank, int):
+ self.duals = [1] * rank
+ else:
+ self.duals = rank
+
+ def __repr__(self):
+ return (
+ f"{self.real}+"
+ f"{'+'.join(str(dual)+'E'+str(n+1)for n,dual in enumerate(self.duals))}"
+ )
+
+ def reduce(self):
+ cur = self.duals.copy()
+ while cur[-1] == 0:
+ cur.pop(-1)
+ return Dual(self.real, cur)
+
+ def __add__(self, other):
+ if not isinstance(other, Dual):
+ return Dual(self.real + other, self.duals)
+ s_dual = self.duals.copy()
+ o_dual = other.duals.copy()
+ # Pad the shorter coefficient list with zeros (the additive identity);
+ # extending with ones would corrupt the untouched higher-order terms
+ if len(s_dual) > len(o_dual):
+ o_dual.extend([0] * (len(s_dual) - len(o_dual)))
+ elif len(s_dual) < len(o_dual):
+ s_dual.extend([0] * (len(o_dual) - len(s_dual)))
+ new_duals = []
+ for i in range(len(s_dual)):
+ new_duals.append(s_dual[i] + o_dual[i])
+ return Dual(self.real + other.real, new_duals)
+
+ __radd__ = __add__
+
+ def __sub__(self, other):
+ return self + other * -1
+
+ def __mul__(self, other):
+ if not isinstance(other, Dual):
+ new_duals = []
+ for i in self.duals:
+ new_duals.append(i * other)
+ return Dual(self.real * other, new_duals)
+ new_duals = [0] * (len(self.duals) + len(other.duals) + 1)
+ for i, item in enumerate(self.duals):
+ for j, jtem in enumerate(other.duals):
+ new_duals[i + j + 1] += item * jtem
+ for k in range(len(self.duals)):
+ new_duals[k] += self.duals[k] * other.real
+ for index in range(len(other.duals)):
+ new_duals[index] += other.duals[index] * self.real
+ return Dual(self.real * other.real, new_duals)
+
+ __rmul__ = __mul__
+
+ def __truediv__(self, other):
+ if not isinstance(other, Dual):
+ new_duals = []
+ for i in self.duals:
+ new_duals.append(i / other)
+ return Dual(self.real / other, new_duals)
+ raise ValueError()
+
+ def __floordiv__(self, other):
+ if not isinstance(other, Dual):
+ new_duals = []
+ for i in self.duals:
+ new_duals.append(i // other)
+ return Dual(self.real // other, new_duals)
+ raise ValueError()
+
+ def __pow__(self, n):
+ if n < 0 or isinstance(n, float):
+ raise ValueError("power must be a positive integer")
+ if n == 0:
+ return 1
+ if n == 1:
+ return self
+ x = self
+ for _ in range(n - 1):
+ x *= self
+ return x
+
+
+def differentiate(func, position, order):
+ """
+ >>> differentiate(lambda x: x**2, 2, 2)
+ 2
+ >>> differentiate(lambda x: x**2 * x**4, 9, 2)
+ 196830
+ >>> differentiate(lambda y: 0.5 * (y + 3) ** 6, 3.5, 4)
+ 7605.0
+ >>> differentiate(lambda y: y ** 2, 4, 3)
+ 0
+ >>> differentiate(8, 8, 8)
+ Traceback (most recent call last):
+ ...
+ ValueError: differentiate() requires a function as input for func
+ >>> differentiate(lambda x: x **2, "", 1)
+ Traceback (most recent call last):
+ ...
+ ValueError: differentiate() requires a float as input for position
+ >>> differentiate(lambda x: x**2, 3, "")
+ Traceback (most recent call last):
+ ...
+ ValueError: differentiate() requires an int as input for order
+ """
+ if not callable(func):
+ raise ValueError("differentiate() requires a function as input for func")
+ if not isinstance(position, (float, int)):
+ raise ValueError("differentiate() requires a float as input for position")
+ if not isinstance(order, int):
+ raise ValueError("differentiate() requires an int as input for order")
+ d = Dual(position, 1)
+ result = func(d)
+ if order == 0:
+ return result.real
+ return result.duals[order - 1] * factorial(order)
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
+
+ def f(y):
+ return y**2 * y**4
+
+ print(differentiate(f, 9, 2))
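The reason `differentiate` multiplies by factorial(order) is the truncated Taylor identity f(a + ε) = f(a) + f′(a)ε + f″(a)ε²/2! + ... with ε nilpotent: the k-th dual coefficient stores f⁽ᵏ⁾(a)/k!. A quick sanity check against x³ at x = 2, reusing the function above (editorial, not part of the patch):

# f(x) = x**3: f'(2) = 3 * 2**2 = 12 and f''(2) = 6 * 2 = 12
print(differentiate(lambda x: x * x * x, 2, 1))  # 12
print(differentiate(lambda x: x * x * x, 2, 2))  # 12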
From a6631487b0a9d6a310d8c45d211e8b7b7bd93cab Mon Sep 17 00:00:00 2001
From: Ratnesh Kumar <89133941+ratneshrt@users.noreply.github.com>
Date: Thu, 25 May 2023 16:04:11 +0530
Subject: [PATCH 082/808] Fix CI badge in the README.md (#8137)
From cfbbfd9896cc96379f7374a68ff04b245bb3527c Mon Sep 17 00:00:00 2001
From: Caeden Perelli-Harris
Date: Thu, 25 May 2023 11:56:23 +0100
Subject: [PATCH 083/808] Merge and add benchmarks to palindrome algorithms in
the strings/ directory (#8749)
* refactor: Merge and add benchmarks to palindrome
* updating DIRECTORY.md
* chore: Fix failing tests
* Update strings/palindrome.py
Co-authored-by: Christian Clauss
* Update palindrome.py
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
DIRECTORY.md | 1 -
strings/is_palindrome.py | 41 ----------------------------------------
strings/palindrome.py | 40 ++++++++++++++++++++++++++++++++++++++-
3 files changed, 39 insertions(+), 43 deletions(-)
delete mode 100644 strings/is_palindrome.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index a75723369b06..fe4baac863d0 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -1156,7 +1156,6 @@
* [Indian Phone Validator](strings/indian_phone_validator.py)
* [Is Contains Unique Chars](strings/is_contains_unique_chars.py)
* [Is Isogram](strings/is_isogram.py)
- * [Is Palindrome](strings/is_palindrome.py)
* [Is Pangram](strings/is_pangram.py)
* [Is Spain National Id](strings/is_spain_national_id.py)
* [Is Srilankan Phone Number](strings/is_srilankan_phone_number.py)
diff --git a/strings/is_palindrome.py b/strings/is_palindrome.py
deleted file mode 100644
index 406aa2e8d3c3..000000000000
--- a/strings/is_palindrome.py
+++ /dev/null
@@ -1,41 +0,0 @@
-def is_palindrome(s: str) -> bool:
- """
- Determine if the string s is a palindrome.
-
- >>> is_palindrome("A man, A plan, A canal -- Panama!")
- True
- >>> is_palindrome("Hello")
- False
- >>> is_palindrome("Able was I ere I saw Elba")
- True
- >>> is_palindrome("racecar")
- True
- >>> is_palindrome("Mr. Owl ate my metal worm?")
- True
- """
- # Since punctuation, capitalization, and spaces are often ignored while checking
- # palindromes, we first remove them from our string.
- s = "".join(character for character in s.lower() if character.isalnum())
- # return s == s[::-1] the slicing method
- # uses extra spaces we can
- # better with iteration method.
-
- end = len(s) // 2
- n = len(s)
-
- # We need to traverse till half of the length of string
- # as we can get access of the i'th last element from
- # i'th index.
- # eg: [0,1,2,3,4,5] => 4th index can be accessed
- # with the help of 1st index (i==n-i-1)
- # where n is length of string
-
- return all(s[i] == s[n - i - 1] for i in range(end))
-
-
-if __name__ == "__main__":
- s = input("Please enter a string to see if it is a palindrome: ")
- if is_palindrome(s):
- print(f"'{s}' is a palindrome.")
- else:
- print(f"'{s}' is not a palindrome.")
diff --git a/strings/palindrome.py b/strings/palindrome.py
index dd1fe316f479..bfdb3ddcf396 100644
--- a/strings/palindrome.py
+++ b/strings/palindrome.py
@@ -1,5 +1,7 @@
# Algorithms to determine if a string is palindrome
+from timeit import timeit
+
test_data = {
"MALAYALAM": True,
"String": False,
@@ -33,6 +35,25 @@ def is_palindrome(s: str) -> bool:
return True
+def is_palindrome_traversal(s: str) -> bool:
+ """
+ Return True if s is a palindrome otherwise return False.
+
+ >>> all(is_palindrome_traversal(key) is value for key, value in test_data.items())
+ True
+ """
+ end = len(s) // 2
+ n = len(s)
+
+ # We need to traverse till half of the length of string
+ # as we can get access of the i'th last element from
+ # i'th index.
+ # eg: [0,1,2,3,4,5] => 4th index can be accessed
+ # with the help of 1st index (i==n-i-1)
+ # where n is length of string
+ return all(s[i] == s[n - i - 1] for i in range(end))
+
+
def is_palindrome_recursive(s: str) -> bool:
"""
Return True if s is a palindrome otherwise return False.
@@ -40,7 +61,7 @@ def is_palindrome_recursive(s: str) -> bool:
>>> all(is_palindrome_recursive(key) is value for key, value in test_data.items())
True
"""
- if len(s) <= 1:
+ if len(s) <= 2:
return True
if s[0] == s[len(s) - 1]:
return is_palindrome_recursive(s[1:-1])
@@ -58,9 +79,26 @@ def is_palindrome_slice(s: str) -> bool:
return s == s[::-1]
+def benchmark_function(name: str) -> None:
+ stmt = f"all({name}(key) is value for key, value in test_data.items())"
+ setup = f"from __main__ import test_data, {name}"
+ number = 500000
+ result = timeit(stmt=stmt, setup=setup, number=number)
+ print(f"{name:<35} finished {number:,} runs in {result:.5f} seconds")
+
+
if __name__ == "__main__":
for key, value in test_data.items():
assert is_palindrome(key) is is_palindrome_recursive(key)
assert is_palindrome(key) is is_palindrome_slice(key)
print(f"{key:21} {value}")
print("a man a plan a canal panama")
+
+ # finished 500,000 runs in 0.46793 seconds
+ benchmark_function("is_palindrome_slice")
+ # finished 500,000 runs in 0.85234 seconds
+ benchmark_function("is_palindrome")
+ # finished 500,000 runs in 1.32028 seconds
+ benchmark_function("is_palindrome_recursive")
+ # finished 500,000 runs in 2.08679 seconds
+ benchmark_function("is_palindrome_traversal")
From a17791d022bdc942c8badabc52307c354069a7ae Mon Sep 17 00:00:00 2001
From: Juyoung Kim <61103343+JadeKim042386@users.noreply.github.com>
Date: Thu, 25 May 2023 21:54:18 +0900
Subject: [PATCH 084/808] fix: graphs/greedy_best_first typo (#8766)
#8764
---
graphs/greedy_best_first.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/graphs/greedy_best_first.py b/graphs/greedy_best_first.py
index d49e65b9d814..35f7ca9feeef 100644
--- a/graphs/greedy_best_first.py
+++ b/graphs/greedy_best_first.py
@@ -58,8 +58,8 @@ def calculate_heuristic(self) -> float:
The heuristic here is the Manhattan Distance
Could elaborate to offer more than one choice
"""
- dy = abs(self.pos_x - self.goal_x)
- dx = abs(self.pos_y - self.goal_y)
+ dx = abs(self.pos_x - self.goal_x)
+ dy = abs(self.pos_y - self.goal_y)
return dx + dy
def __lt__(self, other) -> bool:
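Worth spelling out for the fix above: because the heuristic returns dx + dy and addition is commutative, the swapped names never changed the computed Manhattan distance; the patch simply makes dx measure the x-axis gap as its name promises. The heuristic in isolation:

def manhattan_distance(pos_x: int, pos_y: int, goal_x: int, goal_y: int) -> int:
    dx = abs(pos_x - goal_x)  # horizontal distance to the goal
    dy = abs(pos_y - goal_y)  # vertical distance to the goal
    return dx + dy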
From dd3b499bfa972507759d0705b77e2e1946f42596 Mon Sep 17 00:00:00 2001
From: Christian Clauss
Date: Fri, 26 May 2023 08:50:33 +0200
Subject: [PATCH 085/808] Rename is_palindrome.py to is_int_palindrome.py
(#8768)
* Rename is_palindrome.py to is_int_palindrome.py
* updating DIRECTORY.md
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 2 +-
maths/{is_palindrome.py => is_int_palindrome.py} | 14 +++++++-------
2 files changed, 8 insertions(+), 8 deletions(-)
rename maths/{is_palindrome.py => is_int_palindrome.py} (67%)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index fe4baac863d0..11ff93c91430 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -577,8 +577,8 @@
* [Hardy Ramanujanalgo](maths/hardy_ramanujanalgo.py)
* [Hexagonal Number](maths/hexagonal_number.py)
* [Integration By Simpson Approx](maths/integration_by_simpson_approx.py)
+ * [Is Int Palindrome](maths/is_int_palindrome.py)
* [Is Ip V4 Address Valid](maths/is_ip_v4_address_valid.py)
- * [Is Palindrome](maths/is_palindrome.py)
* [Is Square Free](maths/is_square_free.py)
* [Jaccard Similarity](maths/jaccard_similarity.py)
* [Juggler Sequence](maths/juggler_sequence.py)
diff --git a/maths/is_palindrome.py b/maths/is_int_palindrome.py
similarity index 67%
rename from maths/is_palindrome.py
rename to maths/is_int_palindrome.py
index ba60573ab022..63dc9e2138e8 100644
--- a/maths/is_palindrome.py
+++ b/maths/is_int_palindrome.py
@@ -1,19 +1,19 @@
-def is_palindrome(num: int) -> bool:
+def is_int_palindrome(num: int) -> bool:
"""
Returns whether `num` is a palindrome or not
(see for reference https://en.wikipedia.org/wiki/Palindromic_number).
- >>> is_palindrome(-121)
+ >>> is_int_palindrome(-121)
False
- >>> is_palindrome(0)
+ >>> is_int_palindrome(0)
True
- >>> is_palindrome(10)
+ >>> is_int_palindrome(10)
False
- >>> is_palindrome(11)
+ >>> is_int_palindrome(11)
True
- >>> is_palindrome(101)
+ >>> is_int_palindrome(101)
True
- >>> is_palindrome(120)
+ >>> is_int_palindrome(120)
False
"""
if num < 0:
From 4b79d771cd81a820c195e62430100c416a1618ea Mon Sep 17 00:00:00 2001
From: Christian Clauss
Date: Fri, 26 May 2023 09:34:17 +0200
Subject: [PATCH 086/808] Add more ruff rules (#8767)
* Add more ruff rules
* Add more ruff rules
* pre-commit: Update ruff v0.0.269 -> v0.0.270
* Apply suggestions from code review
* Fix doctest
* Fix doctest (ignore whitespace)
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: Dhruv Manilawala
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
.pre-commit-config.yaml | 2 +-
.../jacobi_iteration_method.py | 30 ++--
arithmetic_analysis/lu_decomposition.py | 5 +-
audio_filters/iir_filter.py | 14 +-
backtracking/knight_tour.py | 3 +-
bit_manipulation/reverse_bits.py | 3 +-
ciphers/base64.py | 12 +-
ciphers/beaufort_cipher.py | 2 +-
ciphers/cryptomath_module.py | 3 +-
ciphers/enigma_machine2.py | 30 ++--
ciphers/hill_cipher.py | 7 +-
.../astronomical_length_scale_conversion.py | 6 +-
conversions/length_conversion.py | 6 +-
conversions/speed_conversions.py | 3 +-
conversions/weight_conversion.py | 3 +-
.../binary_search_tree_recursive.py | 6 +-
.../binary_tree/binary_tree_mirror.py | 3 +-
data_structures/disjoint_set/disjoint_set.py | 3 +-
.../linked_list/circular_linked_list.py | 8 +-
.../linked_list/doubly_linked_list.py | 4 +-
.../linked_list/singly_linked_list.py | 4 +-
data_structures/stacks/stack.py | 6 +-
digital_image_processing/dithering/burkes.py | 3 +-
divide_and_conquer/convex_hull.py | 8 +-
dynamic_programming/knapsack.py | 15 +-
dynamic_programming/minimum_steps_to_one.py | 3 +-
dynamic_programming/rod_cutting.py | 10 +-
dynamic_programming/viterbi.py | 17 ++-
electronics/resistor_equivalence.py | 6 +-
genetic_algorithm/basic_string.py | 8 +-
graphics/vector3_for_2d_rendering.py | 8 +-
graphs/breadth_first_search_shortest_path.py | 3 +-
linear_algebra/src/schur_complement.py | 14 +-
machine_learning/similarity_search.py | 21 +--
machine_learning/support_vector_machines.py | 3 +-
maths/3n_plus_1.py | 6 +-
maths/automorphic_number.py | 3 +-
maths/catalan_number.py | 6 +-
.../dual_number_automatic_differentiation.py | 4 +-
maths/hexagonal_number.py | 3 +-
maths/juggler_sequence.py | 6 +-
maths/liouville_lambda.py | 3 +-
maths/manhattan_distance.py | 18 +--
maths/pronic_number.py | 3 +-
maths/proth_number.py | 6 +-
maths/radix2_fft.py | 2 +-
maths/sieve_of_eratosthenes.py | 3 +-
maths/sylvester_sequence.py | 3 +-
maths/twin_prime.py | 3 +-
matrix/matrix_operation.py | 12 +-
matrix/sherman_morrison.py | 3 +-
neural_network/input_data.py | 12 +-
other/nested_brackets.py | 2 +-
other/scoring_algorithm.py | 3 +-
project_euler/problem_054/sol1.py | 6 +-
project_euler/problem_068/sol1.py | 3 +-
project_euler/problem_131/sol1.py | 5 +-
pyproject.toml | 139 +++++++++++++-----
scripts/build_directory_md.py | 2 +-
sorts/dutch_national_flag_sort.py | 5 +-
strings/barcode_validator.py | 3 +-
strings/capitalize.py | 2 +-
strings/is_spain_national_id.py | 3 +-
strings/snake_case_to_camel_pascal_case.py | 8 +-
web_programming/reddit.py | 3 +-
web_programming/search_books_by_isbn.py | 3 +-
web_programming/slack_message.py | 7 +-
67 files changed, 349 insertions(+), 223 deletions(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index bd5bca8f05ab..4c70ae219f74 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,7 @@ repos:
- id: auto-walrus
- repo: https://github.com/charliermarsh/ruff-pre-commit
- rev: v0.0.269
+ rev: v0.0.270
hooks:
- id: ruff
diff --git a/arithmetic_analysis/jacobi_iteration_method.py b/arithmetic_analysis/jacobi_iteration_method.py
index fe506a94a65d..17edf4bf4b8b 100644
--- a/arithmetic_analysis/jacobi_iteration_method.py
+++ b/arithmetic_analysis/jacobi_iteration_method.py
@@ -49,7 +49,9 @@ def jacobi_iteration_method(
>>> constant = np.array([[2], [-6]])
>>> init_val = [0.5, -0.5, -0.5]
>>> iterations = 3
- >>> jacobi_iteration_method(coefficient, constant, init_val, iterations)
+ >>> jacobi_iteration_method(
+ ... coefficient, constant, init_val, iterations
+ ... ) # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
ValueError: Coefficient and constant matrices dimensions must be nxn and nx1 but
@@ -59,7 +61,9 @@ def jacobi_iteration_method(
>>> constant = np.array([[2], [-6], [-4]])
>>> init_val = [0.5, -0.5]
>>> iterations = 3
- >>> jacobi_iteration_method(coefficient, constant, init_val, iterations)
+ >>> jacobi_iteration_method(
+ ... coefficient, constant, init_val, iterations
+ ... ) # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
ValueError: Number of initial values must be equal to number of rows in coefficient
@@ -79,24 +83,26 @@ def jacobi_iteration_method(
rows2, cols2 = constant_matrix.shape
if rows1 != cols1:
- raise ValueError(
- f"Coefficient matrix dimensions must be nxn but received {rows1}x{cols1}"
- )
+ msg = f"Coefficient matrix dimensions must be nxn but received {rows1}x{cols1}"
+ raise ValueError(msg)
if cols2 != 1:
- raise ValueError(f"Constant matrix must be nx1 but received {rows2}x{cols2}")
+ msg = f"Constant matrix must be nx1 but received {rows2}x{cols2}"
+ raise ValueError(msg)
if rows1 != rows2:
- raise ValueError(
- f"""Coefficient and constant matrices dimensions must be nxn and nx1 but
- received {rows1}x{cols1} and {rows2}x{cols2}"""
+ msg = (
+ "Coefficient and constant matrices dimensions must be nxn and nx1 but "
+ f"received {rows1}x{cols1} and {rows2}x{cols2}"
)
+ raise ValueError(msg)
if len(init_val) != rows1:
- raise ValueError(
- f"""Number of initial values must be equal to number of rows in coefficient
- matrix but received {len(init_val)} and {rows1}"""
+ msg = (
+ "Number of initial values must be equal to number of rows in coefficient "
+ f"matrix but received {len(init_val)} and {rows1}"
)
+ raise ValueError(msg)
if iterations <= 0:
raise ValueError("Iterations must be at least 1")
diff --git a/arithmetic_analysis/lu_decomposition.py b/arithmetic_analysis/lu_decomposition.py
index 941c1dadf556..eaabce5449c5 100644
--- a/arithmetic_analysis/lu_decomposition.py
+++ b/arithmetic_analysis/lu_decomposition.py
@@ -80,10 +80,11 @@ def lower_upper_decomposition(table: np.ndarray) -> tuple[np.ndarray, np.ndarray
# Ensure that table is a square array
rows, columns = np.shape(table)
if rows != columns:
- raise ValueError(
- f"'table' has to be of square shaped array but got a "
+ msg = (
+ "'table' has to be of square shaped array but got a "
f"{rows}x{columns} array:\n{table}"
)
+ raise ValueError(msg)
lower = np.zeros((rows, columns))
upper = np.zeros((rows, columns))
diff --git a/audio_filters/iir_filter.py b/audio_filters/iir_filter.py
index bd448175f6f3..f3c1ad43b001 100644
--- a/audio_filters/iir_filter.py
+++ b/audio_filters/iir_filter.py
@@ -50,16 +50,18 @@ def set_coefficients(self, a_coeffs: list[float], b_coeffs: list[float]) -> None
a_coeffs = [1.0, *a_coeffs]
if len(a_coeffs) != self.order + 1:
- raise ValueError(
- f"Expected a_coeffs to have {self.order + 1} elements for {self.order}"
- f"-order filter, got {len(a_coeffs)}"
+ msg = (
+ f"Expected a_coeffs to have {self.order + 1} elements "
+ f"for {self.order}-order filter, got {len(a_coeffs)}"
)
+ raise ValueError(msg)
if len(b_coeffs) != self.order + 1:
- raise ValueError(
- f"Expected b_coeffs to have {self.order + 1} elements for {self.order}"
- f"-order filter, got {len(a_coeffs)}"
+ msg = (
+ f"Expected b_coeffs to have {self.order + 1} elements "
+ f"for {self.order}-order filter, got {len(a_coeffs)}"
)
+ raise ValueError(msg)
self.a_coeffs = a_coeffs
self.b_coeffs = b_coeffs
diff --git a/backtracking/knight_tour.py b/backtracking/knight_tour.py
index bb650ece3f5e..cc88307b7fe8 100644
--- a/backtracking/knight_tour.py
+++ b/backtracking/knight_tour.py
@@ -91,7 +91,8 @@ def open_knight_tour(n: int) -> list[list[int]]:
return board
board[i][j] = 0
- raise ValueError(f"Open Kight Tour cannot be performed on a board of size {n}")
+ msg = f"Open Kight Tour cannot be performed on a board of size {n}"
+ raise ValueError(msg)
if __name__ == "__main__":
diff --git a/bit_manipulation/reverse_bits.py b/bit_manipulation/reverse_bits.py
index 55608ae12908..a8c77c11bfdd 100644
--- a/bit_manipulation/reverse_bits.py
+++ b/bit_manipulation/reverse_bits.py
@@ -14,10 +14,11 @@ def get_reverse_bit_string(number: int) -> str:
TypeError: operation cannot be conducted on an object of type str
"""
if not isinstance(number, int):
- raise TypeError(
+ msg = (
"operation can not be conducted on a object of type "
f"{type(number).__name__}"
)
+ raise TypeError(msg)
bit_string = ""
for _ in range(0, 32):
bit_string += str(number % 2)
diff --git a/ciphers/base64.py b/ciphers/base64.py
index 38a952acc307..2b950b1be37d 100644
--- a/ciphers/base64.py
+++ b/ciphers/base64.py
@@ -34,9 +34,8 @@ def base64_encode(data: bytes) -> bytes:
"""
# Make sure the supplied data is a bytes-like object
if not isinstance(data, bytes):
- raise TypeError(
- f"a bytes-like object is required, not '{data.__class__.__name__}'"
- )
+ msg = f"a bytes-like object is required, not '{data.__class__.__name__}'"
+ raise TypeError(msg)
binary_stream = "".join(bin(byte)[2:].zfill(8) for byte in data)
@@ -88,10 +87,11 @@ def base64_decode(encoded_data: str) -> bytes:
"""
# Make sure encoded_data is either a string or a bytes-like object
if not isinstance(encoded_data, bytes) and not isinstance(encoded_data, str):
- raise TypeError(
- "argument should be a bytes-like object or ASCII string, not "
- f"'{encoded_data.__class__.__name__}'"
+ msg = (
+ "argument should be a bytes-like object or ASCII string, "
+ f"not '{encoded_data.__class__.__name__}'"
)
+ raise TypeError(msg)
# In case encoded_data is a bytes-like object, make sure it contains only
# ASCII characters so we convert it to a string object
diff --git a/ciphers/beaufort_cipher.py b/ciphers/beaufort_cipher.py
index 8eae847a7ff7..788fc72b89c3 100644
--- a/ciphers/beaufort_cipher.py
+++ b/ciphers/beaufort_cipher.py
@@ -5,7 +5,7 @@
from string import ascii_uppercase
dict1 = {char: i for i, char in enumerate(ascii_uppercase)}
-dict2 = {i: char for i, char in enumerate(ascii_uppercase)}
+dict2 = dict(enumerate(ascii_uppercase))
# This function generates the key in
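The `dict2` change above is ruff's C416 fix: a dict comprehension that merely repackages the (index, value) pairs produced by enumerate can be spelled as a direct dict() call. The two forms are interchangeable:

from string import ascii_uppercase

by_index = {i: char for i, char in enumerate(ascii_uppercase)}  # flagged (C416)
via_dict = dict(enumerate(ascii_uppercase))
assert by_index == via_dict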
diff --git a/ciphers/cryptomath_module.py b/ciphers/cryptomath_module.py
index be8764ff38c3..6f15f7b733e6 100644
--- a/ciphers/cryptomath_module.py
+++ b/ciphers/cryptomath_module.py
@@ -6,7 +6,8 @@ def gcd(a: int, b: int) -> int:
def find_mod_inverse(a: int, m: int) -> int:
if gcd(a, m) != 1:
- raise ValueError(f"mod inverse of {a!r} and {m!r} does not exist")
+ msg = f"mod inverse of {a!r} and {m!r} does not exist"
+ raise ValueError(msg)
u1, u2, u3 = 1, 0, a
v1, v2, v3 = 0, 1, m
while v3 != 0:
diff --git a/ciphers/enigma_machine2.py b/ciphers/enigma_machine2.py
index 07d21893f192..ec0d44e4a6c6 100644
--- a/ciphers/enigma_machine2.py
+++ b/ciphers/enigma_machine2.py
@@ -87,22 +87,20 @@ def _validator(
# Checks if there are 3 unique rotors
if (unique_rotsel := len(set(rotsel))) < 3:
- raise Exception(f"Please use 3 unique rotors (not {unique_rotsel})")
+ msg = f"Please use 3 unique rotors (not {unique_rotsel})"
+ raise Exception(msg)
# Checks if rotor positions are valid
rotorpos1, rotorpos2, rotorpos3 = rotpos
if not 0 < rotorpos1 <= len(abc):
- raise ValueError(
- "First rotor position is not within range of 1..26 (" f"{rotorpos1}"
- )
+ msg = f"First rotor position is not within range of 1..26 ({rotorpos1}"
+ raise ValueError(msg)
if not 0 < rotorpos2 <= len(abc):
- raise ValueError(
- "Second rotor position is not within range of 1..26 (" f"{rotorpos2})"
- )
+ msg = f"Second rotor position is not within range of 1..26 ({rotorpos2})"
+ raise ValueError(msg)
if not 0 < rotorpos3 <= len(abc):
- raise ValueError(
- "Third rotor position is not within range of 1..26 (" f"{rotorpos3})"
- )
+ msg = f"Third rotor position is not within range of 1..26 ({rotorpos3})"
+ raise ValueError(msg)
# Validates string and returns dict
pbdict = _plugboard(pb)
@@ -130,9 +128,11 @@ def _plugboard(pbstring: str) -> dict[str, str]:
# a) is type string
# b) has even length (so pairs can be made)
if not isinstance(pbstring, str):
- raise TypeError(f"Plugboard setting isn't type string ({type(pbstring)})")
+ msg = f"Plugboard setting isn't type string ({type(pbstring)})"
+ raise TypeError(msg)
elif len(pbstring) % 2 != 0:
- raise Exception(f"Odd number of symbols ({len(pbstring)})")
+ msg = f"Odd number of symbols ({len(pbstring)})"
+ raise Exception(msg)
elif pbstring == "":
return {}
@@ -142,9 +142,11 @@ def _plugboard(pbstring: str) -> dict[str, str]:
tmppbl = set()
for i in pbstring:
if i not in abc:
- raise Exception(f"'{i}' not in list of symbols")
+ msg = f"'{i}' not in list of symbols"
+ raise Exception(msg)
elif i in tmppbl:
- raise Exception(f"Duplicate symbol ({i})")
+ msg = f"Duplicate symbol ({i})"
+ raise Exception(msg)
else:
tmppbl.add(i)
del tmppbl
diff --git a/ciphers/hill_cipher.py b/ciphers/hill_cipher.py
index f646d567b4c8..b4424e82298e 100644
--- a/ciphers/hill_cipher.py
+++ b/ciphers/hill_cipher.py
@@ -104,10 +104,11 @@ def check_determinant(self) -> None:
req_l = len(self.key_string)
if greatest_common_divisor(det, len(self.key_string)) != 1:
- raise ValueError(
- f"determinant modular {req_l} of encryption key({det}) is not co prime "
- f"w.r.t {req_l}.\nTry another key."
+ msg = (
+ f"determinant modular {req_l} of encryption key({det}) "
+ f"is not co prime w.r.t {req_l}.\nTry another key."
)
+ raise ValueError(msg)
def process_text(self, text: str) -> str:
"""
diff --git a/conversions/astronomical_length_scale_conversion.py b/conversions/astronomical_length_scale_conversion.py
index 804d82487a25..0f413644906d 100644
--- a/conversions/astronomical_length_scale_conversion.py
+++ b/conversions/astronomical_length_scale_conversion.py
@@ -77,15 +77,17 @@ def length_conversion(value: float, from_type: str, to_type: str) -> float:
to_sanitized = UNIT_SYMBOL.get(to_sanitized, to_sanitized)
if from_sanitized not in METRIC_CONVERSION:
- raise ValueError(
+ msg = (
f"Invalid 'from_type' value: {from_type!r}.\n"
f"Conversion abbreviations are: {', '.join(METRIC_CONVERSION)}"
)
+ raise ValueError(msg)
if to_sanitized not in METRIC_CONVERSION:
- raise ValueError(
+ msg = (
f"Invalid 'to_type' value: {to_type!r}.\n"
f"Conversion abbreviations are: {', '.join(METRIC_CONVERSION)}"
)
+ raise ValueError(msg)
from_exponent = METRIC_CONVERSION[from_sanitized]
to_exponent = METRIC_CONVERSION[to_sanitized]
exponent = 1
diff --git a/conversions/length_conversion.py b/conversions/length_conversion.py
index 790d9c116845..d8f39515255e 100644
--- a/conversions/length_conversion.py
+++ b/conversions/length_conversion.py
@@ -104,15 +104,17 @@ def length_conversion(value: float, from_type: str, to_type: str) -> float:
new_to = to_type.lower().rstrip("s")
new_to = TYPE_CONVERSION.get(new_to, new_to)
if new_from not in METRIC_CONVERSION:
- raise ValueError(
+ msg = (
f"Invalid 'from_type' value: {from_type!r}.\n"
f"Conversion abbreviations are: {', '.join(METRIC_CONVERSION)}"
)
+ raise ValueError(msg)
if new_to not in METRIC_CONVERSION:
- raise ValueError(
+ msg = (
f"Invalid 'to_type' value: {to_type!r}.\n"
f"Conversion abbreviations are: {', '.join(METRIC_CONVERSION)}"
)
+ raise ValueError(msg)
return value * METRIC_CONVERSION[new_from].from_ * METRIC_CONVERSION[new_to].to
diff --git a/conversions/speed_conversions.py b/conversions/speed_conversions.py
index 62da9e137bc7..ba497119d3f5 100644
--- a/conversions/speed_conversions.py
+++ b/conversions/speed_conversions.py
@@ -57,10 +57,11 @@ def convert_speed(speed: float, unit_from: str, unit_to: str) -> float:
115.078
"""
if unit_to not in speed_chart or unit_from not in speed_chart_inverse:
- raise ValueError(
+ msg = (
f"Incorrect 'from_type' or 'to_type' value: {unit_from!r}, {unit_to!r}\n"
f"Valid values are: {', '.join(speed_chart_inverse)}"
)
+ raise ValueError(msg)
return round(speed * speed_chart[unit_from] * speed_chart_inverse[unit_to], 3)
diff --git a/conversions/weight_conversion.py b/conversions/weight_conversion.py
index 5c032a497a7b..e8326e0b688f 100644
--- a/conversions/weight_conversion.py
+++ b/conversions/weight_conversion.py
@@ -299,10 +299,11 @@ def weight_conversion(from_type: str, to_type: str, value: float) -> float:
1.999999998903455
"""
if to_type not in KILOGRAM_CHART or from_type not in WEIGHT_TYPE_CHART:
- raise ValueError(
+ msg = (
f"Invalid 'from_type' or 'to_type' value: {from_type!r}, {to_type!r}\n"
f"Supported values are: {', '.join(WEIGHT_TYPE_CHART)}"
)
+ raise ValueError(msg)
return value * KILOGRAM_CHART[to_type] * WEIGHT_TYPE_CHART[from_type]
diff --git a/data_structures/binary_tree/binary_search_tree_recursive.py b/data_structures/binary_tree/binary_search_tree_recursive.py
index 97eb8e25bedd..b5b983b9ba4c 100644
--- a/data_structures/binary_tree/binary_search_tree_recursive.py
+++ b/data_structures/binary_tree/binary_search_tree_recursive.py
@@ -77,7 +77,8 @@ def _put(self, node: Node | None, label: int, parent: Node | None = None) -> Nod
elif label > node.label:
node.right = self._put(node.right, label, node)
else:
- raise Exception(f"Node with label {label} already exists")
+ msg = f"Node with label {label} already exists"
+ raise Exception(msg)
return node
@@ -100,7 +101,8 @@ def search(self, label: int) -> Node:
def _search(self, node: Node | None, label: int) -> Node:
if node is None:
- raise Exception(f"Node with label {label} does not exist")
+ msg = f"Node with label {label} does not exist"
+ raise Exception(msg)
else:
if label < node.label:
node = self._search(node.left, label)
diff --git a/data_structures/binary_tree/binary_tree_mirror.py b/data_structures/binary_tree/binary_tree_mirror.py
index 1ef950ad62d7..b8548f4ec515 100644
--- a/data_structures/binary_tree/binary_tree_mirror.py
+++ b/data_structures/binary_tree/binary_tree_mirror.py
@@ -31,7 +31,8 @@ def binary_tree_mirror(binary_tree: dict, root: int = 1) -> dict:
if not binary_tree:
raise ValueError("binary tree cannot be empty")
if root not in binary_tree:
- raise ValueError(f"root {root} is not present in the binary_tree")
+ msg = f"root {root} is not present in the binary_tree"
+ raise ValueError(msg)
binary_tree_mirror_dictionary = dict(binary_tree)
binary_tree_mirror_dict(binary_tree_mirror_dictionary, root)
return binary_tree_mirror_dictionary
diff --git a/data_structures/disjoint_set/disjoint_set.py b/data_structures/disjoint_set/disjoint_set.py
index f8500bf2c3af..12dafb2d935e 100644
--- a/data_structures/disjoint_set/disjoint_set.py
+++ b/data_structures/disjoint_set/disjoint_set.py
@@ -56,7 +56,8 @@ def find_python_set(node: Node) -> set:
for s in sets:
if node.data in s:
return s
- raise ValueError(f"{node.data} is not in {sets}")
+ msg = f"{node.data} is not in {sets}"
+ raise ValueError(msg)
def test_disjoint_set() -> None:
diff --git a/data_structures/linked_list/circular_linked_list.py b/data_structures/linked_list/circular_linked_list.py
index 9092fb29e3ff..325d91026137 100644
--- a/data_structures/linked_list/circular_linked_list.py
+++ b/data_structures/linked_list/circular_linked_list.py
@@ -94,25 +94,25 @@ def test_circular_linked_list() -> None:
try:
circular_linked_list.delete_front()
- raise AssertionError() # This should not happen
+ raise AssertionError # This should not happen
except IndexError:
assert True # This should happen
try:
circular_linked_list.delete_tail()
- raise AssertionError() # This should not happen
+ raise AssertionError # This should not happen
except IndexError:
assert True # This should happen
try:
circular_linked_list.delete_nth(-1)
- raise AssertionError()
+ raise AssertionError
except IndexError:
assert True
try:
circular_linked_list.delete_nth(0)
- raise AssertionError()
+ raise AssertionError
except IndexError:
assert True
diff --git a/data_structures/linked_list/doubly_linked_list.py b/data_structures/linked_list/doubly_linked_list.py
index 69763d12da15..1a6c48191c4e 100644
--- a/data_structures/linked_list/doubly_linked_list.py
+++ b/data_structures/linked_list/doubly_linked_list.py
@@ -198,13 +198,13 @@ def test_doubly_linked_list() -> None:
try:
linked_list.delete_head()
- raise AssertionError() # This should not happen.
+ raise AssertionError # This should not happen.
except IndexError:
assert True # This should happen.
try:
linked_list.delete_tail()
- raise AssertionError() # This should not happen.
+ raise AssertionError # This should not happen.
except IndexError:
assert True # This should happen.
diff --git a/data_structures/linked_list/singly_linked_list.py b/data_structures/linked_list/singly_linked_list.py
index a8f9e8ebb977..890e21c9b404 100644
--- a/data_structures/linked_list/singly_linked_list.py
+++ b/data_structures/linked_list/singly_linked_list.py
@@ -353,13 +353,13 @@ def test_singly_linked_list() -> None:
try:
linked_list.delete_head()
- raise AssertionError() # This should not happen.
+ raise AssertionError # This should not happen.
except IndexError:
assert True # This should happen.
try:
linked_list.delete_tail()
- raise AssertionError() # This should not happen.
+ raise AssertionError # This should not happen.
except IndexError:
assert True # This should happen.
diff --git a/data_structures/stacks/stack.py b/data_structures/stacks/stack.py
index 55d424d5018b..a14f4648a399 100644
--- a/data_structures/stacks/stack.py
+++ b/data_structures/stacks/stack.py
@@ -92,13 +92,13 @@ def test_stack() -> None:
try:
_ = stack.pop()
- raise AssertionError() # This should not happen
+ raise AssertionError # This should not happen
except StackUnderflowError:
assert True # This should happen
try:
_ = stack.peek()
- raise AssertionError() # This should not happen
+ raise AssertionError # This should not happen
except StackUnderflowError:
assert True # This should happen
@@ -118,7 +118,7 @@ def test_stack() -> None:
try:
stack.push(200)
- raise AssertionError() # This should not happen
+ raise AssertionError # This should not happen
except StackOverflowError:
assert True # This should happen
diff --git a/digital_image_processing/dithering/burkes.py b/digital_image_processing/dithering/burkes.py
index 2bf0bbe03225..0804104abe58 100644
--- a/digital_image_processing/dithering/burkes.py
+++ b/digital_image_processing/dithering/burkes.py
@@ -21,7 +21,8 @@ def __init__(self, input_img, threshold: int):
self.max_threshold = int(self.get_greyscale(255, 255, 255))
if not self.min_threshold < threshold < self.max_threshold:
- raise ValueError(f"Factor value should be from 0 to {self.max_threshold}")
+ msg = f"Factor value should be from 0 to {self.max_threshold}"
+ raise ValueError(msg)
self.input_img = input_img
self.threshold = threshold
diff --git a/divide_and_conquer/convex_hull.py b/divide_and_conquer/convex_hull.py
index 39e78be04a71..1ad933417da6 100644
--- a/divide_and_conquer/convex_hull.py
+++ b/divide_and_conquer/convex_hull.py
@@ -174,12 +174,12 @@ def _validate_input(points: list[Point] | list[list[float]]) -> list[Point]:
"""
if not hasattr(points, "__iter__"):
- raise ValueError(
- f"Expecting an iterable object but got an non-iterable type {points}"
- )
+ msg = f"Expecting an iterable object but got an non-iterable type {points}"
+ raise ValueError(msg)
if not points:
- raise ValueError(f"Expecting a list of points but got {points}")
+ msg = f"Expecting a list of points but got {points}"
+ raise ValueError(msg)
return _construct_points(points)
diff --git a/dynamic_programming/knapsack.py b/dynamic_programming/knapsack.py
index b12d30313e31..489b5ada450a 100644
--- a/dynamic_programming/knapsack.py
+++ b/dynamic_programming/knapsack.py
@@ -78,17 +78,18 @@ def knapsack_with_example_solution(w: int, wt: list, val: list):
num_items = len(wt)
if num_items != len(val):
- raise ValueError(
- "The number of weights must be the "
- "same as the number of values.\nBut "
- f"got {num_items} weights and {len(val)} values"
+ msg = (
+ "The number of weights must be the same as the number of values.\n"
+ f"But got {num_items} weights and {len(val)} values"
)
+ raise ValueError(msg)
for i in range(num_items):
if not isinstance(wt[i], int):
- raise TypeError(
- "All weights must be integers but "
- f"got weight of type {type(wt[i])} at index {i}"
+ msg = (
+ "All weights must be integers but got weight of "
+ f"type {type(wt[i])} at index {i}"
)
+ raise TypeError(msg)
optimal_val, dp_table = knapsack(w, wt, val, num_items)
example_optional_set: set = set()
diff --git a/dynamic_programming/minimum_steps_to_one.py b/dynamic_programming/minimum_steps_to_one.py
index f4eb7033dd20..8785027fbff3 100644
--- a/dynamic_programming/minimum_steps_to_one.py
+++ b/dynamic_programming/minimum_steps_to_one.py
@@ -42,7 +42,8 @@ def min_steps_to_one(number: int) -> int:
"""
if number <= 0:
- raise ValueError(f"n must be greater than 0. Got n = {number}")
+ msg = f"n must be greater than 0. Got n = {number}"
+ raise ValueError(msg)
table = [number + 1] * (number + 1)
diff --git a/dynamic_programming/rod_cutting.py b/dynamic_programming/rod_cutting.py
index 79104d8f4044..f80fa440ae86 100644
--- a/dynamic_programming/rod_cutting.py
+++ b/dynamic_programming/rod_cutting.py
@@ -177,13 +177,15 @@ def _enforce_args(n: int, prices: list):
the rod
"""
if n < 0:
- raise ValueError(f"n must be greater than or equal to 0. Got n = {n}")
+ msg = f"n must be greater than or equal to 0. Got n = {n}"
+ raise ValueError(msg)
if n > len(prices):
- raise ValueError(
- "Each integral piece of rod must have a corresponding "
- f"price. Got n = {n} but length of prices = {len(prices)}"
+ msg = (
+ "Each integral piece of rod must have a corresponding price. "
+ f"Got n = {n} but length of prices = {len(prices)}"
)
+ raise ValueError(msg)
def main():
diff --git a/dynamic_programming/viterbi.py b/dynamic_programming/viterbi.py
index 93ab845e2ae8..764d45dc2c05 100644
--- a/dynamic_programming/viterbi.py
+++ b/dynamic_programming/viterbi.py
@@ -297,11 +297,13 @@ def _validate_list(_object: Any, var_name: str) -> None:
"""
if not isinstance(_object, list):
- raise ValueError(f"{var_name} must be a list")
+ msg = f"{var_name} must be a list"
+ raise ValueError(msg)
else:
for x in _object:
if not isinstance(x, str):
- raise ValueError(f"{var_name} must be a list of strings")
+ msg = f"{var_name} must be a list of strings"
+ raise ValueError(msg)
def _validate_dicts(
@@ -384,14 +386,15 @@ def _validate_dict(
ValueError: mock_name nested dictionary all values must be float
"""
if not isinstance(_object, dict):
- raise ValueError(f"{var_name} must be a dict")
+ msg = f"{var_name} must be a dict"
+ raise ValueError(msg)
if not all(isinstance(x, str) for x in _object):
- raise ValueError(f"{var_name} all keys must be strings")
+ msg = f"{var_name} all keys must be strings"
+ raise ValueError(msg)
if not all(isinstance(x, value_type) for x in _object.values()):
nested_text = "nested dictionary " if nested else ""
- raise ValueError(
- f"{var_name} {nested_text}all values must be {value_type.__name__}"
- )
+ msg = f"{var_name} {nested_text}all values must be {value_type.__name__}"
+ raise ValueError(msg)
if __name__ == "__main__":
diff --git a/electronics/resistor_equivalence.py b/electronics/resistor_equivalence.py
index 7142f838a065..55e7f2d6b5d2 100644
--- a/electronics/resistor_equivalence.py
+++ b/electronics/resistor_equivalence.py
@@ -23,7 +23,8 @@ def resistor_parallel(resistors: list[float]) -> float:
index = 0
for resistor in resistors:
if resistor <= 0:
- raise ValueError(f"Resistor at index {index} has a negative or zero value!")
+ msg = f"Resistor at index {index} has a negative or zero value!"
+ raise ValueError(msg)
first_sum += 1 / float(resistor)
index += 1
return 1 / first_sum
@@ -47,7 +48,8 @@ def resistor_series(resistors: list[float]) -> float:
for resistor in resistors:
sum_r += resistor
if resistor < 0:
- raise ValueError(f"Resistor at index {index} has a negative value!")
+ msg = f"Resistor at index {index} has a negative value!"
+ raise ValueError(msg)
index += 1
return sum_r
diff --git a/genetic_algorithm/basic_string.py b/genetic_algorithm/basic_string.py
index 388e7219f54b..089c5c99a1ec 100644
--- a/genetic_algorithm/basic_string.py
+++ b/genetic_algorithm/basic_string.py
@@ -96,13 +96,13 @@ def basic(target: str, genes: list[str], debug: bool = True) -> tuple[int, int,
# Verify if N_POPULATION is bigger than N_SELECTED
if N_POPULATION < N_SELECTED:
- raise ValueError(f"{N_POPULATION} must be bigger than {N_SELECTED}")
+ msg = f"{N_POPULATION} must be bigger than {N_SELECTED}"
+ raise ValueError(msg)
# Verify that the target contains no genes besides the ones inside genes variable.
not_in_genes_list = sorted({c for c in target if c not in genes})
if not_in_genes_list:
- raise ValueError(
- f"{not_in_genes_list} is not in genes list, evolution cannot converge"
- )
+ msg = f"{not_in_genes_list} is not in genes list, evolution cannot converge"
+ raise ValueError(msg)
# Generate random starting population.
population = []
diff --git a/graphics/vector3_for_2d_rendering.py b/graphics/vector3_for_2d_rendering.py
index dfa22262a8d8..a332206e67b6 100644
--- a/graphics/vector3_for_2d_rendering.py
+++ b/graphics/vector3_for_2d_rendering.py
@@ -28,9 +28,8 @@ def convert_to_2d(
TypeError: Input values must either be float or int: ['1', 2, 3, 10, 10]
"""
if not all(isinstance(val, (float, int)) for val in locals().values()):
- raise TypeError(
- "Input values must either be float or int: " f"{list(locals().values())}"
- )
+ msg = f"Input values must either be float or int: {list(locals().values())}"
+ raise TypeError(msg)
projected_x = ((x * distance) / (z + distance)) * scale
projected_y = ((y * distance) / (z + distance)) * scale
return projected_x, projected_y
@@ -71,10 +70,11 @@ def rotate(
input_variables = locals()
del input_variables["axis"]
if not all(isinstance(val, (float, int)) for val in input_variables.values()):
- raise TypeError(
+ msg = (
"Input values except axis must either be float or int: "
f"{list(input_variables.values())}"
)
+ raise TypeError(msg)
angle = (angle % 360) / 450 * 180 / math.pi
if axis == "z":
new_x = x * math.cos(angle) - y * math.sin(angle)
diff --git a/graphs/breadth_first_search_shortest_path.py b/graphs/breadth_first_search_shortest_path.py
index cb21076f91d2..d489b110b3a7 100644
--- a/graphs/breadth_first_search_shortest_path.py
+++ b/graphs/breadth_first_search_shortest_path.py
@@ -73,9 +73,10 @@ def shortest_path(self, target_vertex: str) -> str:
target_vertex_parent = self.parent.get(target_vertex)
if target_vertex_parent is None:
- raise ValueError(
+ msg = (
f"No path from vertex: {self.source_vertex} to vertex: {target_vertex}"
)
+ raise ValueError(msg)
return self.shortest_path(target_vertex_parent) + f"->{target_vertex}"
diff --git a/linear_algebra/src/schur_complement.py b/linear_algebra/src/schur_complement.py
index 3a5f4443afd3..750f4de5e397 100644
--- a/linear_algebra/src/schur_complement.py
+++ b/linear_algebra/src/schur_complement.py
@@ -31,16 +31,18 @@ def schur_complement(
shape_c = np.shape(mat_c)
if shape_a[0] != shape_b[0]:
- raise ValueError(
- f"Expected the same number of rows for A and B. \
- Instead found A of size {shape_a} and B of size {shape_b}"
+ msg = (
+ "Expected the same number of rows for A and B. "
+ f"Instead found A of size {shape_a} and B of size {shape_b}"
)
+ raise ValueError(msg)
if shape_b[1] != shape_c[1]:
- raise ValueError(
- f"Expected the same number of columns for B and C. \
- Instead found B of size {shape_b} and C of size {shape_c}"
+ msg = (
+ "Expected the same number of columns for B and C. "
+ f"Instead found B of size {shape_b} and C of size {shape_c}"
)
+ raise ValueError(msg)
a_inv = pseudo_inv
if a_inv is None:
diff --git a/machine_learning/similarity_search.py b/machine_learning/similarity_search.py
index 72979181f67c..7a23ec463c8f 100644
--- a/machine_learning/similarity_search.py
+++ b/machine_learning/similarity_search.py
@@ -97,26 +97,29 @@ def similarity_search(
"""
if dataset.ndim != value_array.ndim:
- raise ValueError(
- f"Wrong input data's dimensions... dataset : {dataset.ndim}, "
- f"value_array : {value_array.ndim}"
+ msg = (
+ "Wrong input data's dimensions... "
+ f"dataset : {dataset.ndim}, value_array : {value_array.ndim}"
)
+ raise ValueError(msg)
try:
if dataset.shape[1] != value_array.shape[1]:
- raise ValueError(
- f"Wrong input data's shape... dataset : {dataset.shape[1]}, "
- f"value_array : {value_array.shape[1]}"
+ msg = (
+ "Wrong input data's shape... "
+ f"dataset : {dataset.shape[1]}, value_array : {value_array.shape[1]}"
)
+ raise ValueError(msg)
except IndexError:
if dataset.ndim != value_array.ndim:
raise TypeError("Wrong shape")
if dataset.dtype != value_array.dtype:
- raise TypeError(
- f"Input data have different datatype... dataset : {dataset.dtype}, "
- f"value_array : {value_array.dtype}"
+ msg = (
+ "Input data have different datatype... "
+ f"dataset : {dataset.dtype}, value_array : {value_array.dtype}"
)
+ raise TypeError(msg)
answer = []
diff --git a/machine_learning/support_vector_machines.py b/machine_learning/support_vector_machines.py
index df854cc850b1..24046115ebc4 100644
--- a/machine_learning/support_vector_machines.py
+++ b/machine_learning/support_vector_machines.py
@@ -74,7 +74,8 @@ def __init__(
        # sklearn: def_gamma = 1/(n_features * X.var()) (wiki)
# previously it was 1/(n_features)
else:
- raise ValueError(f"Unknown kernel: {kernel}")
+ msg = f"Unknown kernel: {kernel}"
+ raise ValueError(msg)
# kernels
def __linear(self, vector1: ndarray, vector2: ndarray) -> float:
diff --git a/maths/3n_plus_1.py b/maths/3n_plus_1.py
index 59fdec48e100..f9f6dfeb9faa 100644
--- a/maths/3n_plus_1.py
+++ b/maths/3n_plus_1.py
@@ -9,9 +9,11 @@ def n31(a: int) -> tuple[list[int], int]:
"""
if not isinstance(a, int):
- raise TypeError(f"Must be int, not {type(a).__name__}")
+ msg = f"Must be int, not {type(a).__name__}"
+ raise TypeError(msg)
if a < 1:
- raise ValueError(f"Given integer must be positive, not {a}")
+ msg = f"Given integer must be positive, not {a}"
+ raise ValueError(msg)
path = [a]
while a != 1:
diff --git a/maths/automorphic_number.py b/maths/automorphic_number.py
index 103fc7301831..8ed9375632a4 100644
--- a/maths/automorphic_number.py
+++ b/maths/automorphic_number.py
@@ -40,7 +40,8 @@ def is_automorphic_number(number: int) -> bool:
TypeError: Input value of [number=5.0] must be an integer
"""
if not isinstance(number, int):
- raise TypeError(f"Input value of [number={number}] must be an integer")
+ msg = f"Input value of [number={number}] must be an integer"
+ raise TypeError(msg)
if number < 0:
return False
number_square = number * number
diff --git a/maths/catalan_number.py b/maths/catalan_number.py
index 85607dc1eca4..20c2cfb17c06 100644
--- a/maths/catalan_number.py
+++ b/maths/catalan_number.py
@@ -31,10 +31,12 @@ def catalan(number: int) -> int:
"""
if not isinstance(number, int):
- raise TypeError(f"Input value of [number={number}] must be an integer")
+ msg = f"Input value of [number={number}] must be an integer"
+ raise TypeError(msg)
if number < 1:
- raise ValueError(f"Input value of [number={number}] must be > 0")
+ msg = f"Input value of [number={number}] must be > 0"
+ raise ValueError(msg)
current_number = 1
diff --git a/maths/dual_number_automatic_differentiation.py b/maths/dual_number_automatic_differentiation.py
index 9aa75830c4a1..f98997c8be4d 100644
--- a/maths/dual_number_automatic_differentiation.py
+++ b/maths/dual_number_automatic_differentiation.py
@@ -71,7 +71,7 @@ def __truediv__(self, other):
for i in self.duals:
new_duals.append(i / other)
return Dual(self.real / other, new_duals)
- raise ValueError()
+ raise ValueError
def __floordiv__(self, other):
if not isinstance(other, Dual):
@@ -79,7 +79,7 @@ def __floordiv__(self, other):
for i in self.duals:
new_duals.append(i // other)
return Dual(self.real // other, new_duals)
- raise ValueError()
+ raise ValueError
def __pow__(self, n):
if n < 0 or isinstance(n, float):
diff --git a/maths/hexagonal_number.py b/maths/hexagonal_number.py
index 28735c638f80..3677ab95ee00 100644
--- a/maths/hexagonal_number.py
+++ b/maths/hexagonal_number.py
@@ -36,7 +36,8 @@ def hexagonal(number: int) -> int:
TypeError: Input value of [number=11.0] must be an integer
"""
if not isinstance(number, int):
- raise TypeError(f"Input value of [number={number}] must be an integer")
+ msg = f"Input value of [number={number}] must be an integer"
+ raise TypeError(msg)
if number < 1:
raise ValueError("Input must be a positive integer")
return number * (2 * number - 1)
diff --git a/maths/juggler_sequence.py b/maths/juggler_sequence.py
index 9daba8bc0e8a..7f65d1dff925 100644
--- a/maths/juggler_sequence.py
+++ b/maths/juggler_sequence.py
@@ -40,9 +40,11 @@ def juggler_sequence(number: int) -> list[int]:
ValueError: Input value of [number=-1] must be a positive integer
"""
if not isinstance(number, int):
- raise TypeError(f"Input value of [number={number}] must be an integer")
+ msg = f"Input value of [number={number}] must be an integer"
+ raise TypeError(msg)
if number < 1:
- raise ValueError(f"Input value of [number={number}] must be a positive integer")
+ msg = f"Input value of [number={number}] must be a positive integer"
+ raise ValueError(msg)
sequence = [number]
while number != 1:
if number % 2 == 0:
diff --git a/maths/liouville_lambda.py b/maths/liouville_lambda.py
index 5993efa42d66..1ed228dd5434 100644
--- a/maths/liouville_lambda.py
+++ b/maths/liouville_lambda.py
@@ -33,7 +33,8 @@ def liouville_lambda(number: int) -> int:
TypeError: Input value of [number=11.0] must be an integer
"""
if not isinstance(number, int):
- raise TypeError(f"Input value of [number={number}] must be an integer")
+ msg = f"Input value of [number={number}] must be an integer"
+ raise TypeError(msg)
if number < 1:
raise ValueError("Input must be a positive integer")
return -1 if len(prime_factors(number)) % 2 else 1
diff --git a/maths/manhattan_distance.py b/maths/manhattan_distance.py
index 2711d4c8ccd6..413991468a49 100644
--- a/maths/manhattan_distance.py
+++ b/maths/manhattan_distance.py
@@ -15,15 +15,15 @@ def manhattan_distance(point_a: list, point_b: list) -> float:
9.0
>>> manhattan_distance([1,1], None)
Traceback (most recent call last):
- ...
+ ...
ValueError: Missing an input
>>> manhattan_distance([1,1], [2, 2, 2])
Traceback (most recent call last):
- ...
+ ...
ValueError: Both points must be in the same n-dimensional space
>>> manhattan_distance([1,"one"], [2, 2, 2])
Traceback (most recent call last):
- ...
+ ...
TypeError: Expected a list of numbers as input, found str
>>> manhattan_distance(1, [2, 2, 2])
Traceback (most recent call last):
@@ -66,14 +66,14 @@ def _validate_point(point: list[float]) -> None:
if isinstance(point, list):
for item in point:
if not isinstance(item, (int, float)):
- raise TypeError(
- f"Expected a list of numbers as input, "
- f"found {type(item).__name__}"
+ msg = (
+ "Expected a list of numbers as input, found "
+ f"{type(item).__name__}"
)
+ raise TypeError(msg)
else:
- raise TypeError(
- f"Expected a list of numbers as input, found {type(point).__name__}"
- )
+ msg = f"Expected a list of numbers as input, found {type(point).__name__}"
+ raise TypeError(msg)
else:
raise ValueError("Missing an input")
diff --git a/maths/pronic_number.py b/maths/pronic_number.py
index 8b554dbbd602..cf4d3d2eb24b 100644
--- a/maths/pronic_number.py
+++ b/maths/pronic_number.py
@@ -41,7 +41,8 @@ def is_pronic(number: int) -> bool:
TypeError: Input value of [number=6.0] must be an integer
"""
if not isinstance(number, int):
- raise TypeError(f"Input value of [number={number}] must be an integer")
+ msg = f"Input value of [number={number}] must be an integer"
+ raise TypeError(msg)
if number < 0 or number % 2 == 1:
return False
number_sqrt = int(number**0.5)
diff --git a/maths/proth_number.py b/maths/proth_number.py
index ce911473a2d2..47747ed260f7 100644
--- a/maths/proth_number.py
+++ b/maths/proth_number.py
@@ -29,10 +29,12 @@ def proth(number: int) -> int:
"""
if not isinstance(number, int):
- raise TypeError(f"Input value of [number={number}] must be an integer")
+ msg = f"Input value of [number={number}] must be an integer"
+ raise TypeError(msg)
if number < 1:
- raise ValueError(f"Input value of [number={number}] must be > 0")
+ msg = f"Input value of [number={number}] must be > 0"
+ raise ValueError(msg)
elif number == 1:
return 3
elif number == 2:
diff --git a/maths/radix2_fft.py b/maths/radix2_fft.py
index af98f24f9538..2c5cdc004d1d 100644
--- a/maths/radix2_fft.py
+++ b/maths/radix2_fft.py
@@ -167,7 +167,7 @@ def __str__(self):
f"{coef}*x^{i}" for coef, i in enumerate(self.product)
)
- return "\n".join((a, b, c))
+ return f"{a}\n{b}\n{c}"
# Unit tests
diff --git a/maths/sieve_of_eratosthenes.py b/maths/sieve_of_eratosthenes.py
index 3cd6ce0b4d9d..a0520aa5cf50 100644
--- a/maths/sieve_of_eratosthenes.py
+++ b/maths/sieve_of_eratosthenes.py
@@ -34,7 +34,8 @@ def prime_sieve(num: int) -> list[int]:
"""
if num <= 0:
- raise ValueError(f"{num}: Invalid input, please enter a positive integer.")
+ msg = f"{num}: Invalid input, please enter a positive integer."
+ raise ValueError(msg)
sieve = [True] * (num + 1)
prime = []
diff --git a/maths/sylvester_sequence.py b/maths/sylvester_sequence.py
index 114c9dd58582..607424c6a90b 100644
--- a/maths/sylvester_sequence.py
+++ b/maths/sylvester_sequence.py
@@ -31,7 +31,8 @@ def sylvester(number: int) -> int:
if number == 1:
return 2
elif number < 1:
- raise ValueError(f"The input value of [n={number}] has to be > 0")
+ msg = f"The input value of [n={number}] has to be > 0"
+ raise ValueError(msg)
else:
num = sylvester(number - 1)
lower = num - 1
diff --git a/maths/twin_prime.py b/maths/twin_prime.py
index e6ac0cc7805b..912b10b366c0 100644
--- a/maths/twin_prime.py
+++ b/maths/twin_prime.py
@@ -32,7 +32,8 @@ def twin_prime(number: int) -> int:
TypeError: Input value of [number=6.0] must be an integer
"""
if not isinstance(number, int):
- raise TypeError(f"Input value of [number={number}] must be an integer")
+ msg = f"Input value of [number={number}] must be an integer"
+ raise TypeError(msg)
if is_prime(number) and is_prime(number + 2):
return number + 2
else:
diff --git a/matrix/matrix_operation.py b/matrix/matrix_operation.py
index 576094902af4..f189f1898d33 100644
--- a/matrix/matrix_operation.py
+++ b/matrix/matrix_operation.py
@@ -70,10 +70,11 @@ def multiply(matrix_a: list[list[int]], matrix_b: list[list[int]]) -> list[list[
rows, cols = _verify_matrix_sizes(matrix_a, matrix_b)
if cols[0] != rows[1]:
- raise ValueError(
- f"Cannot multiply matrix of dimensions ({rows[0]},{cols[0]}) "
- f"and ({rows[1]},{cols[1]})"
+ msg = (
+ "Cannot multiply matrix of dimensions "
+ f"({rows[0]},{cols[0]}) and ({rows[1]},{cols[1]})"
)
+ raise ValueError(msg)
return [
[sum(m * n for m, n in zip(i, j)) for j in zip(*matrix_b)] for i in matrix_a
]
@@ -174,10 +175,11 @@ def _verify_matrix_sizes(
) -> tuple[tuple[int, int], tuple[int, int]]:
shape = _shape(matrix_a) + _shape(matrix_b)
if shape[0] != shape[3] or shape[1] != shape[2]:
- raise ValueError(
- f"operands could not be broadcast together with shape "
+ msg = (
+ "operands could not be broadcast together with shape "
f"({shape[0], shape[1]}), ({shape[2], shape[3]})"
)
+ raise ValueError(msg)
return (shape[0], shape[2]), (shape[1], shape[3])
diff --git a/matrix/sherman_morrison.py b/matrix/sherman_morrison.py
index 39eddfed81f3..256271e8a87d 100644
--- a/matrix/sherman_morrison.py
+++ b/matrix/sherman_morrison.py
@@ -173,7 +173,8 @@ def __mul__(self, another: int | float | Matrix) -> Matrix:
result[r, c] += self[r, i] * another[i, c]
return result
else:
- raise TypeError(f"Unsupported type given for another ({type(another)})")
+ msg = f"Unsupported type given for another ({type(another)})"
+ raise TypeError(msg)
def transpose(self) -> Matrix:
"""
diff --git a/neural_network/input_data.py b/neural_network/input_data.py
index 2a32f0b82c37..94c018ece9ba 100644
--- a/neural_network/input_data.py
+++ b/neural_network/input_data.py
@@ -198,10 +198,7 @@ def next_batch(self, batch_size, fake_data=False, shuffle=True):
"""Return the next `batch_size` examples from this data set."""
if fake_data:
fake_image = [1] * 784
- if self.one_hot:
- fake_label = [1] + [0] * 9
- else:
- fake_label = 0
+ fake_label = [1] + [0] * 9 if self.one_hot else 0
return (
[fake_image for _ in range(batch_size)],
[fake_label for _ in range(batch_size)],
@@ -324,10 +321,11 @@ def fake():
test_labels = _extract_labels(f, one_hot=one_hot)
if not 0 <= validation_size <= len(train_images):
- raise ValueError(
- f"Validation size should be between 0 and {len(train_images)}. "
- f"Received: {validation_size}."
+ msg = (
+ "Validation size should be between 0 and "
+ f"{len(train_images)}. Received: {validation_size}."
)
+ raise ValueError(msg)
validation_images = train_images[:validation_size]
validation_labels = train_labels[:validation_size]
diff --git a/other/nested_brackets.py b/other/nested_brackets.py
index ea48c0a5f532..19c6dd53c8b2 100644
--- a/other/nested_brackets.py
+++ b/other/nested_brackets.py
@@ -18,7 +18,7 @@ def is_balanced(s):
stack = []
open_brackets = set({"(", "[", "{"})
closed_brackets = set({")", "]", "}"})
- open_to_closed = dict({"{": "}", "[": "]", "(": ")"})
+ open_to_closed = {"{": "}", "[": "]", "(": ")"}
for i in range(len(s)):
if s[i] in open_brackets:
diff --git a/other/scoring_algorithm.py b/other/scoring_algorithm.py
index 8e04a8f30dd7..af04f432e433 100644
--- a/other/scoring_algorithm.py
+++ b/other/scoring_algorithm.py
@@ -68,7 +68,8 @@ def calculate_each_score(
# weight not 0 or 1
else:
- raise ValueError(f"Invalid weight of {weight:f} provided")
+ msg = f"Invalid weight of {weight:f} provided"
+ raise ValueError(msg)
score_lists.append(score)
diff --git a/project_euler/problem_054/sol1.py b/project_euler/problem_054/sol1.py
index 9af7aef5a716..74409f32c712 100644
--- a/project_euler/problem_054/sol1.py
+++ b/project_euler/problem_054/sol1.py
@@ -119,10 +119,12 @@ def __init__(self, hand: str) -> None:
For example: "6S 4C KC AS TH"
"""
if not isinstance(hand, str):
- raise TypeError(f"Hand should be of type 'str': {hand!r}")
+ msg = f"Hand should be of type 'str': {hand!r}"
+ raise TypeError(msg)
    # split() removes duplicate whitespace, so there is no need to strip
if len(hand.split(" ")) != 5:
- raise ValueError(f"Hand should contain only 5 cards: {hand!r}")
+ msg = f"Hand should contain only 5 cards: {hand!r}"
+ raise ValueError(msg)
self._hand = hand
self._first_pair = 0
self._second_pair = 0
diff --git a/project_euler/problem_068/sol1.py b/project_euler/problem_068/sol1.py
index 772be359f630..cf814b001d57 100644
--- a/project_euler/problem_068/sol1.py
+++ b/project_euler/problem_068/sol1.py
@@ -73,7 +73,8 @@ def solution(gon_side: int = 5) -> int:
if is_magic_gon(numbers):
return int("".join(str(n) for n in numbers))
- raise ValueError(f"Magic {gon_side}-gon ring is impossible")
+ msg = f"Magic {gon_side}-gon ring is impossible"
+ raise ValueError(msg)
def generate_gon_ring(gon_side: int, perm: list[int]) -> list[int]:
diff --git a/project_euler/problem_131/sol1.py b/project_euler/problem_131/sol1.py
index f5302aac8644..be3ea9c81ae4 100644
--- a/project_euler/problem_131/sol1.py
+++ b/project_euler/problem_131/sol1.py
@@ -26,10 +26,7 @@ def is_prime(number: int) -> bool:
False
"""
- for divisor in range(2, isqrt(number) + 1):
- if number % divisor == 0:
- return False
- return True
+ return all(number % divisor != 0 for divisor in range(2, isqrt(number) + 1))
def solution(max_prime: int = 10**6) -> int:
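
The all()-based rewrite above short-circuits on the first divisor found, just
like the explicit loop it replaces; a standalone sketch (illustrative, not
part of the patch):

    from math import isqrt

    def is_prime(number: int) -> bool:
        return all(number % divisor != 0 for divisor in range(2, isqrt(number) + 1))

    print([n for n in range(2, 20) if is_prime(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
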
diff --git a/pyproject.toml b/pyproject.toml
index 48c3fbd4009d..a526196685f5 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -17,45 +17,88 @@ ignore-words-list = "3rt,ans,crate,damon,fo,followings,hist,iff,kwanza,mater,sec
skip = "./.*,*.json,ciphers/prehistoric_men.txt,project_euler/problem_022/p022_names.txt,pyproject.toml,strings/dictionary.txt,strings/words.txt"
[tool.ruff]
-ignore = [ # `ruff rule S101` for a description of that rule
- "B904", # B904: Within an `except` clause, raise exceptions with `raise ... from err`
- "B905", # B905: `zip()` without an explicit `strict=` parameter
- "E741", # E741: Ambiguous variable name 'l'
- "G004", # G004 Logging statement uses f-string
- "N999", # N999: Invalid module name
- "PLC1901", # PLC1901: `{}` can be simplified to `{}` as an empty string is falsey
- "PLR2004", # PLR2004: Magic value used in comparison
- "PLR5501", # PLR5501: Consider using `elif` instead of `else`
- "PLW0120", # PLW0120: `else` clause on loop without a `break` statement
- "PLW060", # PLW060: Using global for `{name}` but no assignment is done -- DO NOT FIX
- "PLW2901", # PLW2901: Redefined loop variable
- "RUF00", # RUF00: Ambiguous unicode character -- DO NOT FIX
- "RUF100", # RUF100: Unused `noqa` directive
- "S101", # S101: Use of `assert` detected -- DO NOT FIX
- "S105", # S105: Possible hardcoded password: 'password'
- "S113", # S113: Probable use of requests call without timeout
- "S311", # S311: Standard pseudo-random generators are not suitable for cryptographic purposes
- "UP038", # UP038: Use `X | Y` in `{}` call instead of `(X, Y)` -- DO NOT FIX
+ignore = [ # `ruff rule S101` for a description of that rule
+ "ARG001", # Unused function argument `amount` -- FIX ME?
+ "B904", # Within an `except` clause, raise exceptions with `raise ... from err` -- FIX ME
+ "B905", # `zip()` without an explicit `strict=` parameter -- FIX ME
+ "DTZ001", # The use of `datetime.datetime()` without `tzinfo` argument is not allowed -- FIX ME
+ "DTZ005", # The use of `datetime.datetime.now()` without `tzinfo` argument is not allowed -- FIX ME
+ "E741", # Ambiguous variable name 'l' -- FIX ME
+ "EM101", # Exception must not use a string literal, assign to variable first
+ "EXE001", # Shebang is present but file is not executable" -- FIX ME
+ "G004", # Logging statement uses f-string
+ "ICN001", # `matplotlib.pyplot` should be imported as `plt` -- FIX ME
+ "INP001", # File `x/y/z.py` is part of an implicit namespace package. Add an `__init__.py`. -- FIX ME
+ "N999", # Invalid module name -- FIX ME
+ "NPY002", # Replace legacy `np.random.choice` call with `np.random.Generator` -- FIX ME
+ "PGH003", # Use specific rule codes when ignoring type issues -- FIX ME
+ "PLC1901", # `{}` can be simplified to `{}` as an empty string is falsey
+ "PLR5501", # Consider using `elif` instead of `else` -- FIX ME
+ "PLW0120", # `else` clause on loop without a `break` statement -- FIX ME
+ "PLW060", # Using global for `{name}` but no assignment is done -- DO NOT FIX
+ "PLW2901", # PLW2901: Redefined loop variable -- FIX ME
+ "RUF00", # Ambiguous unicode character and other rules
+ "RUF100", # Unused `noqa` directive -- FIX ME
+ "S101", # Use of `assert` detected -- DO NOT FIX
+ "S105", # Possible hardcoded password: 'password'
+ "S113", # Probable use of requests call without timeout -- FIX ME
+ "S311", # Standard pseudo-random generators are not suitable for cryptographic purposes -- FIX ME
+ "SIM102", # Use a single `if` statement instead of nested `if` statements -- FIX ME
+ "SLF001", # Private member accessed: `_Iterator` -- FIX ME
+ "UP038", # Use `X | Y` in `{}` call instead of `(X, Y)` -- DO NOT FIX
]
-select = [ # https://beta.ruff.rs/docs/rules
- "A", # A: builtins
- "B", # B: bugbear
- "C40", # C40: comprehensions
- "C90", # C90: mccabe code complexity
- "E", # E: pycodestyle errors
- "F", # F: pyflakes
- "G", # G: logging format
- "I", # I: isort
- "N", # N: pep8 naming
- "PL", # PL: pylint
- "PIE", # PIE: pie
- "PYI", # PYI: type hinting stub files
- "RUF", # RUF: ruff
- "S", # S: bandit
- "TID", # TID: tidy imports
- "UP", # UP: pyupgrade
- "W", # W: pycodestyle warnings
- "YTT", # YTT: year 2020
+select = [ # https://beta.ruff.rs/docs/rules
+ "A", # flake8-builtins
+ "ARG", # flake8-unused-arguments
+ "ASYNC", # flake8-async
+ "B", # flake8-bugbear
+ "BLE", # flake8-blind-except
+ "C4", # flake8-comprehensions
+ "C90", # McCabe cyclomatic complexity
+ "DTZ", # flake8-datetimez
+ "E", # pycodestyle
+ "EM", # flake8-errmsg
+ "EXE", # flake8-executable
+ "F", # Pyflakes
+ "FA", # flake8-future-annotations
+ "FLY", # flynt
+ "G", # flake8-logging-format
+ "I", # isort
+ "ICN", # flake8-import-conventions
+ "INP", # flake8-no-pep420
+ "INT", # flake8-gettext
+ "N", # pep8-naming
+ "NPY", # NumPy-specific rules
+ "PGH", # pygrep-hooks
+ "PIE", # flake8-pie
+ "PL", # Pylint
+ "PYI", # flake8-pyi
+ "RSE", # flake8-raise
+ "RUF", # Ruff-specific rules
+ "S", # flake8-bandit
+ "SIM", # flake8-simplify
+ "SLF", # flake8-self
+ "T10", # flake8-debugger
+ "TD", # flake8-todos
+ "TID", # flake8-tidy-imports
+ "UP", # pyupgrade
+ "W", # pycodestyle
+ "YTT", # flake8-2020
+ # "ANN", # flake8-annotations # FIX ME?
+ # "COM", # flake8-commas
+ # "D", # pydocstyle -- FIX ME?
+ # "DJ", # flake8-django
+ # "ERA", # eradicate -- DO NOT FIX
+ # "FBT", # flake8-boolean-trap # FIX ME
+ # "ISC", # flake8-implicit-str-concat # FIX ME
+ # "PD", # pandas-vet
+ # "PT", # flake8-pytest-style
+ # "PTH", # flake8-use-pathlib # FIX ME
+ # "Q", # flake8-quotes
+ # "RET", # flake8-return # FIX ME?
+ # "T20", # flake8-print
+ # "TCH", # flake8-type-checking
+ # "TRY", # tryceratops
]
show-source = true
target-version = "py311"
@@ -63,7 +106,27 @@ target-version = "py311"
[tool.ruff.mccabe] # DO NOT INCREASE THIS VALUE
max-complexity = 17 # default: 10
+[tool.ruff.per-file-ignores]
+"arithmetic_analysis/newton_raphson.py" = ["PGH001"]
+"audio_filters/show_response.py" = ["ARG002"]
+"data_structures/binary_tree/binary_search_tree_recursive.py" = ["BLE001"]
+"data_structures/binary_tree/treap.py" = ["SIM114"]
+"data_structures/hashing/hash_table.py" = ["ARG002"]
+"data_structures/hashing/quadratic_probing.py" = ["ARG002"]
+"data_structures/hashing/tests/test_hash_map.py" = ["BLE001"]
+"data_structures/heap/max_heap.py" = ["SIM114"]
+"graphs/minimum_spanning_tree_prims.py" = ["SIM114"]
+"hashes/enigma_machine.py" = ["BLE001"]
+"machine_learning/decision_tree.py" = ["SIM114"]
+"machine_learning/linear_discriminant_analysis.py" = ["ARG005"]
+"machine_learning/sequential_minimum_optimization.py" = ["SIM115"]
+"matrix/sherman_morrison.py" = ["SIM103", "SIM114"]
+"physics/newtons_second_law_of_motion.py" = ["BLE001"]
+"project_euler/problem_099/sol1.py" = ["SIM115"]
+"sorts/external_sort.py" = ["SIM115"]
+
[tool.ruff.pylint] # DO NOT INCREASE THESE VALUES
+allow-magic-value-types = ["float", "int", "str"]
max-args = 10 # default: 5
max-branches = 20 # default: 12
max-returns = 8 # default: 6
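
The flake8-errmsg (EM) rules selected above are what motivate the msg = ... /
raise pattern applied throughout the hunks in this patch; a minimal sketch of
the pattern (illustrative, not from the diff):

    def get_square_root(number: float) -> float:
        if number < 0:
            # EM102-compliant: bind the f-string to a variable first so the
            # traceback does not repeat a long literal on the raise line
            msg = f"number must be non-negative, not {number}"
            raise ValueError(msg)
        return number**0.5

    print(get_square_root(9.0))  # 3.0
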
diff --git a/scripts/build_directory_md.py b/scripts/build_directory_md.py
index b95be9ebc254..24bc00cd036f 100755
--- a/scripts/build_directory_md.py
+++ b/scripts/build_directory_md.py
@@ -33,7 +33,7 @@ def print_directory_md(top_dir: str = ".") -> None:
if filepath != old_path:
old_path = print_path(old_path, filepath)
indent = (filepath.count(os.sep) + 1) if filepath else 0
- url = "/".join((filepath, filename)).replace(" ", "%20")
+ url = f"{filepath}/{filename}".replace(" ", "%20")
filename = os.path.splitext(filename.replace("_", " ").title())[0]
print(f"{md_prefix(indent)} [{filename}]({url})")
diff --git a/sorts/dutch_national_flag_sort.py b/sorts/dutch_national_flag_sort.py
index 79afefa73afe..758e3a887b84 100644
--- a/sorts/dutch_national_flag_sort.py
+++ b/sorts/dutch_national_flag_sort.py
@@ -84,9 +84,8 @@ def dutch_national_flag_sort(sequence: list) -> list:
sequence[mid], sequence[high] = sequence[high], sequence[mid]
high -= 1
else:
- raise ValueError(
- f"The elements inside the sequence must contains only {colors} values"
- )
+ msg = f"The elements inside the sequence must contains only {colors} values"
+ raise ValueError(msg)
return sequence
diff --git a/strings/barcode_validator.py b/strings/barcode_validator.py
index e050cd337d74..b4f3864e2642 100644
--- a/strings/barcode_validator.py
+++ b/strings/barcode_validator.py
@@ -65,7 +65,8 @@ def get_barcode(barcode: str) -> int:
ValueError: Barcode 'dwefgiweuf' has alphabetic characters.
"""
if str(barcode).isalpha():
- raise ValueError(f"Barcode '{barcode}' has alphabetic characters.")
+ msg = f"Barcode '{barcode}' has alphabetic characters."
+ raise ValueError(msg)
elif int(barcode) < 0:
raise ValueError("The entered barcode has a negative value. Try again.")
else:
diff --git a/strings/capitalize.py b/strings/capitalize.py
index 63603aa07e2d..e7e97c2beb53 100644
--- a/strings/capitalize.py
+++ b/strings/capitalize.py
@@ -17,7 +17,7 @@ def capitalize(sentence: str) -> str:
"""
if not sentence:
return ""
- lower_to_upper = {lc: uc for lc, uc in zip(ascii_lowercase, ascii_uppercase)}
+ lower_to_upper = dict(zip(ascii_lowercase, ascii_uppercase))
return lower_to_upper.get(sentence[0], sentence[0]) + sentence[1:]
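
The same dict()-from-pairs idea appears here with zip(); a one-liner check of
the rewritten mapping and the .get() fallback it feeds (not part of the patch):

    from string import ascii_lowercase, ascii_uppercase

    lower_to_upper = dict(zip(ascii_lowercase, ascii_uppercase))
    print(lower_to_upper.get("a", "a"), lower_to_upper.get("!", "!"))  # A !
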
diff --git a/strings/is_spain_national_id.py b/strings/is_spain_national_id.py
index 67f49755f412..60d06e123aae 100644
--- a/strings/is_spain_national_id.py
+++ b/strings/is_spain_national_id.py
@@ -48,7 +48,8 @@ def is_spain_national_id(spanish_id: str) -> bool:
"""
if not isinstance(spanish_id, str):
- raise TypeError(f"Expected string as input, found {type(spanish_id).__name__}")
+ msg = f"Expected string as input, found {type(spanish_id).__name__}"
+ raise TypeError(msg)
spanish_id_clean = spanish_id.replace("-", "").upper()
if len(spanish_id_clean) != 9:
diff --git a/strings/snake_case_to_camel_pascal_case.py b/strings/snake_case_to_camel_pascal_case.py
index 28a28b517a01..8219337a63b0 100644
--- a/strings/snake_case_to_camel_pascal_case.py
+++ b/strings/snake_case_to_camel_pascal_case.py
@@ -27,11 +27,11 @@ def snake_to_camel_case(input_str: str, use_pascal: bool = False) -> str:
"""
if not isinstance(input_str, str):
- raise ValueError(f"Expected string as input, found {type(input_str)}")
+ msg = f"Expected string as input, found {type(input_str)}"
+ raise ValueError(msg)
if not isinstance(use_pascal, bool):
- raise ValueError(
- f"Expected boolean as use_pascal parameter, found {type(use_pascal)}"
- )
+ msg = f"Expected boolean as use_pascal parameter, found {type(use_pascal)}"
+ raise ValueError(msg)
words = input_str.split("_")
diff --git a/web_programming/reddit.py b/web_programming/reddit.py
index 6a31c81c34bd..5ca5f828c0fb 100644
--- a/web_programming/reddit.py
+++ b/web_programming/reddit.py
@@ -26,7 +26,8 @@ def get_subreddit_data(
"""
wanted_data = wanted_data or []
if invalid_search_terms := ", ".join(sorted(set(wanted_data) - valid_terms)):
- raise ValueError(f"Invalid search term: {invalid_search_terms}")
+ msg = f"Invalid search term: {invalid_search_terms}"
+ raise ValueError(msg)
response = requests.get(
f"https://reddit.com/r/{subreddit}/{age}.json?limit={limit}",
headers={"User-agent": "A random string"},
diff --git a/web_programming/search_books_by_isbn.py b/web_programming/search_books_by_isbn.py
index abac3c70b22e..d5d4cfe92f20 100644
--- a/web_programming/search_books_by_isbn.py
+++ b/web_programming/search_books_by_isbn.py
@@ -22,7 +22,8 @@ def get_openlibrary_data(olid: str = "isbn/0140328726") -> dict:
"""
new_olid = olid.strip().strip("/") # Remove leading/trailing whitespace & slashes
if new_olid.count("/") != 1:
- raise ValueError(f"{olid} is not a valid Open Library olid")
+ msg = f"{olid} is not a valid Open Library olid"
+ raise ValueError(msg)
return requests.get(f"https://openlibrary.org/{new_olid}.json").json()
diff --git a/web_programming/slack_message.py b/web_programming/slack_message.py
index f35aa3ca587e..5e97d6b64c75 100644
--- a/web_programming/slack_message.py
+++ b/web_programming/slack_message.py
@@ -7,10 +7,11 @@ def send_slack_message(message_body: str, slack_url: str) -> None:
headers = {"Content-Type": "application/json"}
response = requests.post(slack_url, json={"text": message_body}, headers=headers)
if response.status_code != 200:
- raise ValueError(
- f"Request to slack returned an error {response.status_code}, "
- f"the response is:\n{response.text}"
+ msg = (
+ "Request to slack returned an error "
+ f"{response.status_code}, the response is:\n{response.text}"
)
+ raise ValueError(msg)
if __name__ == "__main__":
From c93659d7ce65e3717f06333e3d049ebaa888e597 Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Mon, 29 May 2023 17:37:54 -0700
Subject: [PATCH 087/808] Fix type error in `strassen_matrix_multiplication.py`
(#8784)
* Fix type error in strassen_matrix_multiplication.py
* updating DIRECTORY.md
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 1 +
...ion.py.BROKEN => strassen_matrix_multiplication.py} | 10 ++++++----
2 files changed, 7 insertions(+), 4 deletions(-)
rename divide_and_conquer/{strassen_matrix_multiplication.py.BROKEN => strassen_matrix_multiplication.py} (97%)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 11ff93c91430..231b0e2f1d2f 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -294,6 +294,7 @@
* [Mergesort](divide_and_conquer/mergesort.py)
* [Peak](divide_and_conquer/peak.py)
* [Power](divide_and_conquer/power.py)
+ * [Strassen Matrix Multiplication](divide_and_conquer/strassen_matrix_multiplication.py)
## Dynamic Programming
* [Abbreviation](dynamic_programming/abbreviation.py)
diff --git a/divide_and_conquer/strassen_matrix_multiplication.py.BROKEN b/divide_and_conquer/strassen_matrix_multiplication.py
similarity index 97%
rename from divide_and_conquer/strassen_matrix_multiplication.py.BROKEN
rename to divide_and_conquer/strassen_matrix_multiplication.py
index 2ca91c63bf4c..cbfc7e5655db 100644
--- a/divide_and_conquer/strassen_matrix_multiplication.py.BROKEN
+++ b/divide_and_conquer/strassen_matrix_multiplication.py
@@ -112,17 +112,19 @@ def strassen(matrix1: list, matrix2: list) -> list:
[[139, 163], [121, 134], [100, 121]]
"""
if matrix_dimensions(matrix1)[1] != matrix_dimensions(matrix2)[0]:
- raise Exception(
- "Unable to multiply these matrices, please check the dimensions. \n"
- f"Matrix A:{matrix1} \nMatrix B:{matrix2}"
+ msg = (
+ "Unable to multiply these matrices, please check the dimensions.\n"
+ f"Matrix A: {matrix1}\n"
+ f"Matrix B: {matrix2}"
)
+ raise Exception(msg)
dimension1 = matrix_dimensions(matrix1)
dimension2 = matrix_dimensions(matrix2)
if dimension1[0] == dimension1[1] and dimension2[0] == dimension2[1]:
return [matrix1, matrix2]
- maximum = max(dimension1, dimension2)
+ maximum = max(*dimension1, *dimension2)
maxim = int(math.pow(2, math.ceil(math.log2(maximum))))
new_matrix1 = matrix1
new_matrix2 = matrix2
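
The max() change above is the actual type fix: max() of two dimension tuples
compares them lexicographically and returns a tuple, which math.log2() then
rejects; unpacking compares the four integers and returns a scalar. A minimal
demonstration (not part of the patch):

    import math

    dimension1, dimension2 = (3, 2), (2, 3)
    max(dimension1, dimension2)              # (3, 2) -- math.log2 raises TypeError on this
    maximum = max(*dimension1, *dimension2)  # 3
    print(int(math.pow(2, math.ceil(math.log2(maximum)))))  # 4
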
From 4a27b544303e6bab90ed57b72fa3acf3d785429e Mon Sep 17 00:00:00 2001
From: Sundaram Kumar Jha
Date: Wed, 31 May 2023 06:26:59 +0530
Subject: [PATCH 088/808] Update permutations.py (#8102)
---
data_structures/arrays/permutations.py | 26 +++++++++++++++++++++++++-
1 file changed, 25 insertions(+), 1 deletion(-)
diff --git a/data_structures/arrays/permutations.py b/data_structures/arrays/permutations.py
index eb3f26517863..4558bd8d468a 100644
--- a/data_structures/arrays/permutations.py
+++ b/data_structures/arrays/permutations.py
@@ -1,7 +1,6 @@
def permute(nums: list[int]) -> list[list[int]]:
"""
Return all permutations.
-
>>> from itertools import permutations
>>> numbers= [1,2,3]
>>> all(list(nums) in permute(numbers) for nums in permutations(numbers))
@@ -20,7 +19,32 @@ def permute(nums: list[int]) -> list[list[int]]:
return result
+def permute2(nums):
+ """
+ Return all permutations of the given list.
+
+ >>> permute2([1, 2, 3])
+ [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 2, 1], [3, 1, 2]]
+ """
+
+ def backtrack(start):
+ if start == len(nums) - 1:
+ output.append(nums[:])
+ else:
+ for i in range(start, len(nums)):
+ nums[start], nums[i] = nums[i], nums[start]
+ backtrack(start + 1)
+ nums[start], nums[i] = nums[i], nums[start] # backtrack
+
+ output = []
+ backtrack(0)
+ return output
+
+
if __name__ == "__main__":
import doctest
+    # print the permutations produced by permute2
+ res = permute2([1, 2, 3])
+ print(res)
doctest.testmod()
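
A quick sanity check that the in-place swap backtracking in permute2 agrees
with itertools.permutations up to ordering (illustrative only, assuming
permute2 above is in scope):

    from itertools import permutations

    result = permute2([1, 2, 3])
    assert sorted(result) == sorted(list(p) for p in permutations([1, 2, 3]))
    print(len(result))  # 6 == 3!
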
From e871540e37b834673f9e6650b8e2281d7d36a8c3 Mon Sep 17 00:00:00 2001
From: Rudransh Bhardwaj <115872354+rudransh61@users.noreply.github.com>
Date: Wed, 31 May 2023 20:33:02 +0530
Subject: [PATCH 089/808] Added rank of matrix in linear algebra (#8687)
* Added rank of matrix in linear algebra
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Corrected name of function
* Corrected Rank_of_Matrix.py
* Completed rank_of_matrix.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* delete to rename Rank_of_Matrix.py
* created rank_of_matrix
* added more doctests in rank_of_matrix.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fixed some issues in rank_of_matrix.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* added more doctests in rank_of_matrix.py and fixed some bugs
* Update linear_algebra/src/rank_of_matrix.py
Co-authored-by: Christian Clauss
* Update linear_algebra/src/rank_of_matrix.py
Co-authored-by: Christian Clauss
* Update linear_algebra/src/rank_of_matrix.py
Co-authored-by: Christian Clauss
* Update rank_of_matrix.py
* Update linear_algebra/src/rank_of_matrix.py
Co-authored-by: Caeden Perelli-Harris
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
Co-authored-by: Caeden Perelli-Harris
---
linear_algebra/src/rank_of_matrix.py | 89 ++++++++++++++++++++++++++++
1 file changed, 89 insertions(+)
create mode 100644 linear_algebra/src/rank_of_matrix.py
diff --git a/linear_algebra/src/rank_of_matrix.py b/linear_algebra/src/rank_of_matrix.py
new file mode 100644
index 000000000000..7ff3c1699a69
--- /dev/null
+++ b/linear_algebra/src/rank_of_matrix.py
@@ -0,0 +1,89 @@
+"""
+Calculate the rank of a matrix.
+
+See: https://en.wikipedia.org/wiki/Rank_(linear_algebra)
+"""
+
+
+def rank_of_matrix(matrix: list[list[int | float]]) -> int:
+ """
+ Finds the rank of a matrix.
+ Args:
+ matrix: The matrix as a list of lists.
+ Returns:
+ The rank of the matrix.
+ Example:
+ >>> matrix1 = [[1, 2, 3],
+ ... [4, 5, 6],
+ ... [7, 8, 9]]
+ >>> rank_of_matrix(matrix1)
+ 2
+ >>> matrix2 = [[1, 0, 0],
+ ... [0, 1, 0],
+ ... [0, 0, 0]]
+ >>> rank_of_matrix(matrix2)
+ 2
+ >>> matrix3 = [[1, 2, 3, 4],
+ ... [5, 6, 7, 8],
+ ... [9, 10, 11, 12]]
+ >>> rank_of_matrix(matrix3)
+ 2
+ >>> rank_of_matrix([[2,3,-1,-1],
+ ... [1,-1,-2,4],
+ ... [3,1,3,-2],
+ ... [6,3,0,-7]])
+ 4
+ >>> rank_of_matrix([[2,1,-3,-6],
+ ... [3,-3,1,2],
+ ... [1,1,1,2]])
+ 3
+ >>> rank_of_matrix([[2,-1,0],
+ ... [1,3,4],
+ ... [4,1,-3]])
+ 3
+ >>> rank_of_matrix([[3,2,1],
+ ... [-6,-4,-2]])
+ 1
+ >>> rank_of_matrix([[],[]])
+ 0
+ >>> rank_of_matrix([[1]])
+ 1
+ >>> rank_of_matrix([[]])
+ 0
+ """
+
+ rows = len(matrix)
+ columns = len(matrix[0])
+ rank = min(rows, columns)
+
+ for row in range(rank):
+ # Check if diagonal element is not zero
+ if matrix[row][row] != 0:
+ # Eliminate all the elements below the diagonal
+            for row_below in range(row + 1, rows):
+                multiplier = matrix[row_below][row] / matrix[row][row]
+                for i in range(row, columns):
+                    matrix[row_below][i] -= multiplier * matrix[row][i]
+ else:
+ # Find a non-zero diagonal element to swap rows
+ reduce = True
+ for i in range(row + 1, rows):
+ if matrix[i][row] != 0:
+ matrix[row], matrix[i] = matrix[i], matrix[row]
+ reduce = False
+ break
+ if reduce:
+ rank -= 1
+ for i in range(rows):
+ matrix[i][row] = matrix[i][rank]
+
+            # NB: this decrement does not replay the row; a Python for loop
+            # reassigns `row` on the next iteration
+            row -= 1
+
+ return rank
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
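
As a cross-check of the Gaussian-elimination approach above (illustrative,
assuming rank_of_matrix is in scope and NumPy is available; note the copy,
since rank_of_matrix mutates its argument):

    import numpy as np

    matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    copy = [row[:] for row in matrix]
    assert rank_of_matrix(copy) == np.linalg.matrix_rank(np.array(matrix))  # both 2
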
From 4621b0bb4f5d3fff2fa4f0e53d6cb862fe002c60 Mon Sep 17 00:00:00 2001
From: nith2001 <75632283+nith2001@users.noreply.github.com>
Date: Wed, 31 May 2023 13:06:12 -0700
Subject: [PATCH 090/808] Improved Graph Implementations (#8730)
* Improved Graph Implementations
Provides new implementations for graph_list.py and graph_matrix.py along with pytest suites for each. Fixes #8709
* Graph implementation style fixes, corrections, and refactored tests
* Helpful docs about graph implementation
* Refactored code to separate files and applied enumerate()
* Renamed files and refactored code to fail fast
* Error handling style fix
* Fixed f-string code quality issue
* Last f-string fix
* Added return types to test functions and more style fixes
* Added more function return types
* Added more function return types pt2
* Fixed error messages
---
graphs/graph_adjacency_list.py | 589 ++++++++++++++++++++++++++++++
graphs/graph_adjacency_matrix.py | 608 +++++++++++++++++++++++++++++++
graphs/graph_matrix.py | 24 --
graphs/tests/__init__.py | 0
4 files changed, 1197 insertions(+), 24 deletions(-)
create mode 100644 graphs/graph_adjacency_list.py
create mode 100644 graphs/graph_adjacency_matrix.py
delete mode 100644 graphs/graph_matrix.py
create mode 100644 graphs/tests/__init__.py
diff --git a/graphs/graph_adjacency_list.py b/graphs/graph_adjacency_list.py
new file mode 100644
index 000000000000..76f34f845860
--- /dev/null
+++ b/graphs/graph_adjacency_list.py
@@ -0,0 +1,589 @@
+#!/usr/bin/env python3
+"""
+Author: Vikram Nithyanandam
+
+Description:
+The following implementation is a robust unweighted Graph data structure
+implemented using an adjacency list. The vertices and edges of this graph can be
+efficiently initialized and modified while storing your chosen generic
+value in each vertex.
+
+Adjacency List: https://en.wikipedia.org/wiki/Adjacency_list
+
+Potential Future Ideas:
+- Add a flag to enable edge weights and allow setting them
+- Make edge weights and vertex values customizable to store whatever the client wants
+- Support multigraph functionality if the client wants it
+"""
+from __future__ import annotations
+
+import random
+import unittest
+from pprint import pformat
+from typing import Generic, TypeVar
+
+T = TypeVar("T")
+
+
+class GraphAdjacencyList(Generic[T]):
+ def __init__(
+ self, vertices: list[T], edges: list[list[T]], directed: bool = True
+ ) -> None:
+ """
+        Parameters:
+        - vertices: (list[T]) The list of vertex names the client wants to
+        pass in. May be empty.
+        - edges: (list[list[T]]) The list of edges the client wants to
+        pass in. Each edge is a 2-element list. May be empty.
+        - directed: (bool) Indicates if graph is directed or undirected.
+        Defaults to True.
+ """
+ self.adj_list: dict[T, list[T]] = {} # dictionary of lists of T
+ self.directed = directed
+
+ # Falsey checks
+ edges = edges or []
+ vertices = vertices or []
+
+ for vertex in vertices:
+ self.add_vertex(vertex)
+
+ for edge in edges:
+ if len(edge) != 2:
+ msg = f"Invalid input: {edge} is the wrong length."
+ raise ValueError(msg)
+ self.add_edge(edge[0], edge[1])
+
+ def add_vertex(self, vertex: T) -> None:
+ """
+ Adds a vertex to the graph. If the given vertex already exists,
+ a ValueError will be thrown.
+ """
+ if self.contains_vertex(vertex):
+ msg = f"Incorrect input: {vertex} is already in the graph."
+ raise ValueError(msg)
+ self.adj_list[vertex] = []
+
+ def add_edge(self, source_vertex: T, destination_vertex: T) -> None:
+ """
+ Creates an edge from source vertex to destination vertex. If any
+ given vertex doesn't exist or the edge already exists, a ValueError
+ will be thrown.
+ """
+ if not (
+ self.contains_vertex(source_vertex)
+ and self.contains_vertex(destination_vertex)
+ ):
+ msg = (
+ f"Incorrect input: Either {source_vertex} or "
+ f"{destination_vertex} does not exist"
+ )
+ raise ValueError(msg)
+ if self.contains_edge(source_vertex, destination_vertex):
+ msg = (
+ "Incorrect input: The edge already exists between "
+ f"{source_vertex} and {destination_vertex}"
+ )
+ raise ValueError(msg)
+
+ # add the destination vertex to the list associated with the source vertex
+ # and vice versa if not directed
+ self.adj_list[source_vertex].append(destination_vertex)
+ if not self.directed:
+ self.adj_list[destination_vertex].append(source_vertex)
+
+ def remove_vertex(self, vertex: T) -> None:
+ """
+ Removes the given vertex from the graph and deletes all incoming and
+ outgoing edges from the given vertex as well. If the given vertex
+ does not exist, a ValueError will be thrown.
+ """
+ if not self.contains_vertex(vertex):
+ msg = f"Incorrect input: {vertex} does not exist in this graph."
+ raise ValueError(msg)
+
+ if not self.directed:
+ # If not directed, find all neighboring vertices and delete all references
+ # of edges connecting to the given vertex
+ for neighbor in self.adj_list[vertex]:
+ self.adj_list[neighbor].remove(vertex)
+ else:
+ # If directed, search all neighbors of all vertices and delete all
+ # references of edges connecting to the given vertex
+ for edge_list in self.adj_list.values():
+ if vertex in edge_list:
+ edge_list.remove(vertex)
+
+ # Finally, delete the given vertex and all of its outgoing edge references
+ self.adj_list.pop(vertex)
+
+ def remove_edge(self, source_vertex: T, destination_vertex: T) -> None:
+ """
+ Removes the edge between the two vertices. If any given vertex
+ doesn't exist or the edge does not exist, a ValueError will be thrown.
+ """
+ if not (
+ self.contains_vertex(source_vertex)
+ and self.contains_vertex(destination_vertex)
+ ):
+ msg = (
+ f"Incorrect input: Either {source_vertex} or "
+ f"{destination_vertex} does not exist"
+ )
+ raise ValueError(msg)
+ if not self.contains_edge(source_vertex, destination_vertex):
+ msg = (
+ "Incorrect input: The edge does NOT exist between "
+ f"{source_vertex} and {destination_vertex}"
+ )
+ raise ValueError(msg)
+
+ # remove the destination vertex from the list associated with the source
+ # vertex and vice versa if not directed
+ self.adj_list[source_vertex].remove(destination_vertex)
+ if not self.directed:
+ self.adj_list[destination_vertex].remove(source_vertex)
+
+ def contains_vertex(self, vertex: T) -> bool:
+ """
+ Returns True if the graph contains the vertex, False otherwise.
+ """
+ return vertex in self.adj_list
+
+ def contains_edge(self, source_vertex: T, destination_vertex: T) -> bool:
+ """
+ Returns True if the graph contains the edge from the source_vertex to the
+ destination_vertex, False otherwise. If any given vertex doesn't exist, a
+ ValueError will be thrown.
+ """
+ if not (
+ self.contains_vertex(source_vertex)
+ and self.contains_vertex(destination_vertex)
+ ):
+ msg = (
+ f"Incorrect input: Either {source_vertex} "
+ f"or {destination_vertex} does not exist."
+ )
+ raise ValueError(msg)
+
+ return destination_vertex in self.adj_list[source_vertex]
+
+ def clear_graph(self) -> None:
+ """
+ Clears all vertices and edges.
+ """
+ self.adj_list = {}
+
+ def __repr__(self) -> str:
+ return pformat(self.adj_list)
+
+
+class TestGraphAdjacencyList(unittest.TestCase):
+ def __assert_graph_edge_exists_check(
+ self,
+ undirected_graph: GraphAdjacencyList,
+ directed_graph: GraphAdjacencyList,
+ edge: list[int],
+ ) -> None:
+ self.assertTrue(undirected_graph.contains_edge(edge[0], edge[1]))
+ self.assertTrue(undirected_graph.contains_edge(edge[1], edge[0]))
+ self.assertTrue(directed_graph.contains_edge(edge[0], edge[1]))
+
+ def __assert_graph_edge_does_not_exist_check(
+ self,
+ undirected_graph: GraphAdjacencyList,
+ directed_graph: GraphAdjacencyList,
+ edge: list[int],
+ ) -> None:
+ self.assertFalse(undirected_graph.contains_edge(edge[0], edge[1]))
+ self.assertFalse(undirected_graph.contains_edge(edge[1], edge[0]))
+ self.assertFalse(directed_graph.contains_edge(edge[0], edge[1]))
+
+ def __assert_graph_vertex_exists_check(
+ self,
+ undirected_graph: GraphAdjacencyList,
+ directed_graph: GraphAdjacencyList,
+ vertex: int,
+ ) -> None:
+ self.assertTrue(undirected_graph.contains_vertex(vertex))
+ self.assertTrue(directed_graph.contains_vertex(vertex))
+
+ def __assert_graph_vertex_does_not_exist_check(
+ self,
+ undirected_graph: GraphAdjacencyList,
+ directed_graph: GraphAdjacencyList,
+ vertex: int,
+ ) -> None:
+ self.assertFalse(undirected_graph.contains_vertex(vertex))
+ self.assertFalse(directed_graph.contains_vertex(vertex))
+
+ def __generate_random_edges(
+ self, vertices: list[int], edge_pick_count: int
+ ) -> list[list[int]]:
+ self.assertTrue(edge_pick_count <= len(vertices))
+
+ random_source_vertices: list[int] = random.sample(
+ vertices[0 : int(len(vertices) / 2)], edge_pick_count
+ )
+ random_destination_vertices: list[int] = random.sample(
+ vertices[int(len(vertices) / 2) :], edge_pick_count
+ )
+ random_edges: list[list[int]] = []
+
+ for source in random_source_vertices:
+ for dest in random_destination_vertices:
+ random_edges.append([source, dest])
+
+ return random_edges
+
+ def __generate_graphs(
+ self, vertex_count: int, min_val: int, max_val: int, edge_pick_count: int
+ ) -> tuple[GraphAdjacencyList, GraphAdjacencyList, list[int], list[list[int]]]:
+ if max_val - min_val + 1 < vertex_count:
+ raise ValueError(
+ "Will result in duplicate vertices. Either increase range "
+ "between min_val and max_val or decrease vertex count."
+ )
+
+ # generate graph input
+ random_vertices: list[int] = random.sample(
+ range(min_val, max_val + 1), vertex_count
+ )
+ random_edges: list[list[int]] = self.__generate_random_edges(
+ random_vertices, edge_pick_count
+ )
+
+ # build graphs
+ undirected_graph = GraphAdjacencyList(
+ vertices=random_vertices, edges=random_edges, directed=False
+ )
+ directed_graph = GraphAdjacencyList(
+ vertices=random_vertices, edges=random_edges, directed=True
+ )
+
+ return undirected_graph, directed_graph, random_vertices, random_edges
+
+ def test_init_check(self) -> None:
+ (
+ undirected_graph,
+ directed_graph,
+ random_vertices,
+ random_edges,
+ ) = self.__generate_graphs(20, 0, 100, 4)
+
+ # test graph initialization with vertices and edges
+ for num in random_vertices:
+ self.__assert_graph_vertex_exists_check(
+ undirected_graph, directed_graph, num
+ )
+
+ for edge in random_edges:
+ self.__assert_graph_edge_exists_check(
+ undirected_graph, directed_graph, edge
+ )
+ self.assertFalse(undirected_graph.directed)
+ self.assertTrue(directed_graph.directed)
+
+ def test_contains_vertex(self) -> None:
+ random_vertices: list[int] = random.sample(range(101), 20)
+
+ # Build graphs WITHOUT edges
+ undirected_graph = GraphAdjacencyList(
+ vertices=random_vertices, edges=[], directed=False
+ )
+ directed_graph = GraphAdjacencyList(
+ vertices=random_vertices, edges=[], directed=True
+ )
+
+ # Test contains_vertex
+ for num in range(101):
+ self.assertEqual(
+ num in random_vertices, undirected_graph.contains_vertex(num)
+ )
+ self.assertEqual(
+ num in random_vertices, directed_graph.contains_vertex(num)
+ )
+
+ def test_add_vertices(self) -> None:
+ random_vertices: list[int] = random.sample(range(101), 20)
+
+ # build empty graphs
+ undirected_graph: GraphAdjacencyList = GraphAdjacencyList(
+ vertices=[], edges=[], directed=False
+ )
+ directed_graph: GraphAdjacencyList = GraphAdjacencyList(
+ vertices=[], edges=[], directed=True
+ )
+
+ # run add_vertex
+ for num in random_vertices:
+ undirected_graph.add_vertex(num)
+
+ for num in random_vertices:
+ directed_graph.add_vertex(num)
+
+ # test add_vertex worked
+ for num in random_vertices:
+ self.__assert_graph_vertex_exists_check(
+ undirected_graph, directed_graph, num
+ )
+
+ def test_remove_vertices(self) -> None:
+ random_vertices: list[int] = random.sample(range(101), 20)
+
+ # build graphs WITHOUT edges
+ undirected_graph = GraphAdjacencyList(
+ vertices=random_vertices, edges=[], directed=False
+ )
+ directed_graph = GraphAdjacencyList(
+ vertices=random_vertices, edges=[], directed=True
+ )
+
+ # test remove_vertex worked
+ for num in random_vertices:
+ self.__assert_graph_vertex_exists_check(
+ undirected_graph, directed_graph, num
+ )
+
+ undirected_graph.remove_vertex(num)
+ directed_graph.remove_vertex(num)
+
+ self.__assert_graph_vertex_does_not_exist_check(
+ undirected_graph, directed_graph, num
+ )
+
+ def test_add_and_remove_vertices_repeatedly(self) -> None:
+ random_vertices1: list[int] = random.sample(range(51), 20)
+ random_vertices2: list[int] = random.sample(range(51, 101), 20)
+
+ # build graphs WITHOUT edges
+ undirected_graph = GraphAdjacencyList(
+ vertices=random_vertices1, edges=[], directed=False
+ )
+ directed_graph = GraphAdjacencyList(
+ vertices=random_vertices1, edges=[], directed=True
+ )
+
+ # test adding and removing vertices
+ for i, _ in enumerate(random_vertices1):
+ undirected_graph.add_vertex(random_vertices2[i])
+ directed_graph.add_vertex(random_vertices2[i])
+
+ self.__assert_graph_vertex_exists_check(
+ undirected_graph, directed_graph, random_vertices2[i]
+ )
+
+ undirected_graph.remove_vertex(random_vertices1[i])
+ directed_graph.remove_vertex(random_vertices1[i])
+
+ self.__assert_graph_vertex_does_not_exist_check(
+ undirected_graph, directed_graph, random_vertices1[i]
+ )
+
+ # remove all vertices
+ for i, _ in enumerate(random_vertices1):
+ undirected_graph.remove_vertex(random_vertices2[i])
+ directed_graph.remove_vertex(random_vertices2[i])
+
+ self.__assert_graph_vertex_does_not_exist_check(
+ undirected_graph, directed_graph, random_vertices2[i]
+ )
+
+ def test_contains_edge(self) -> None:
+ # generate graphs and graph input
+ vertex_count = 20
+ (
+ undirected_graph,
+ directed_graph,
+ random_vertices,
+ random_edges,
+ ) = self.__generate_graphs(vertex_count, 0, 100, 4)
+
+ # generate all possible edges for testing
+ all_possible_edges: list[list[int]] = []
+ for i in range(vertex_count - 1):
+ for j in range(i + 1, vertex_count):
+ all_possible_edges.append([random_vertices[i], random_vertices[j]])
+ all_possible_edges.append([random_vertices[j], random_vertices[i]])
+
+ # test contains_edge function
+ for edge in all_possible_edges:
+ if edge in random_edges:
+ self.__assert_graph_edge_exists_check(
+ undirected_graph, directed_graph, edge
+ )
+ elif [edge[1], edge[0]] in random_edges:
+                # the edge exists in the undirected graph, but its reverse
+                # may not exist in the directed graph
+ self.__assert_graph_edge_exists_check(
+ undirected_graph, directed_graph, [edge[1], edge[0]]
+ )
+ else:
+ self.__assert_graph_edge_does_not_exist_check(
+ undirected_graph, directed_graph, edge
+ )
+
+ def test_add_edge(self) -> None:
+ # generate graph input
+ random_vertices: list[int] = random.sample(range(101), 15)
+ random_edges: list[list[int]] = self.__generate_random_edges(random_vertices, 4)
+
+ # build graphs WITHOUT edges
+ undirected_graph = GraphAdjacencyList(
+ vertices=random_vertices, edges=[], directed=False
+ )
+ directed_graph = GraphAdjacencyList(
+ vertices=random_vertices, edges=[], directed=True
+ )
+
+ # run and test add_edge
+ for edge in random_edges:
+ undirected_graph.add_edge(edge[0], edge[1])
+ directed_graph.add_edge(edge[0], edge[1])
+ self.__assert_graph_edge_exists_check(
+ undirected_graph, directed_graph, edge
+ )
+
+ def test_remove_edge(self) -> None:
+ # generate graph input and graphs
+ (
+ undirected_graph,
+ directed_graph,
+ random_vertices,
+ random_edges,
+ ) = self.__generate_graphs(20, 0, 100, 4)
+
+ # run and test remove_edge
+ for edge in random_edges:
+ self.__assert_graph_edge_exists_check(
+ undirected_graph, directed_graph, edge
+ )
+ undirected_graph.remove_edge(edge[0], edge[1])
+ directed_graph.remove_edge(edge[0], edge[1])
+ self.__assert_graph_edge_does_not_exist_check(
+ undirected_graph, directed_graph, edge
+ )
+
+ def test_add_and_remove_edges_repeatedly(self) -> None:
+ (
+ undirected_graph,
+ directed_graph,
+ random_vertices,
+ random_edges,
+ ) = self.__generate_graphs(20, 0, 100, 4)
+
+ # make some more edge options!
+ more_random_edges: list[list[int]] = []
+
+ while len(more_random_edges) != len(random_edges):
+ edges: list[list[int]] = self.__generate_random_edges(random_vertices, 4)
+ for edge in edges:
+ if len(more_random_edges) == len(random_edges):
+ break
+ elif edge not in more_random_edges and edge not in random_edges:
+ more_random_edges.append(edge)
+
+ for i, _ in enumerate(random_edges):
+ undirected_graph.add_edge(more_random_edges[i][0], more_random_edges[i][1])
+ directed_graph.add_edge(more_random_edges[i][0], more_random_edges[i][1])
+
+ self.__assert_graph_edge_exists_check(
+ undirected_graph, directed_graph, more_random_edges[i]
+ )
+
+ undirected_graph.remove_edge(random_edges[i][0], random_edges[i][1])
+ directed_graph.remove_edge(random_edges[i][0], random_edges[i][1])
+
+ self.__assert_graph_edge_does_not_exist_check(
+ undirected_graph, directed_graph, random_edges[i]
+ )
+
+ def test_add_vertex_exception_check(self) -> None:
+ (
+ undirected_graph,
+ directed_graph,
+ random_vertices,
+ random_edges,
+ ) = self.__generate_graphs(20, 0, 100, 4)
+
+ for vertex in random_vertices:
+ with self.assertRaises(ValueError):
+ undirected_graph.add_vertex(vertex)
+ with self.assertRaises(ValueError):
+ directed_graph.add_vertex(vertex)
+
+ def test_remove_vertex_exception_check(self) -> None:
+ (
+ undirected_graph,
+ directed_graph,
+ random_vertices,
+ random_edges,
+ ) = self.__generate_graphs(20, 0, 100, 4)
+
+ for i in range(101):
+ if i not in random_vertices:
+ with self.assertRaises(ValueError):
+ undirected_graph.remove_vertex(i)
+ with self.assertRaises(ValueError):
+ directed_graph.remove_vertex(i)
+
+ def test_add_edge_exception_check(self) -> None:
+ (
+ undirected_graph,
+ directed_graph,
+ random_vertices,
+ random_edges,
+ ) = self.__generate_graphs(20, 0, 100, 4)
+
+ for edge in random_edges:
+ with self.assertRaises(ValueError):
+ undirected_graph.add_edge(edge[0], edge[1])
+ with self.assertRaises(ValueError):
+ directed_graph.add_edge(edge[0], edge[1])
+
+ def test_remove_edge_exception_check(self) -> None:
+ (
+ undirected_graph,
+ directed_graph,
+ random_vertices,
+ random_edges,
+ ) = self.__generate_graphs(20, 0, 100, 4)
+
+ more_random_edges: list[list[int]] = []
+
+ while len(more_random_edges) != len(random_edges):
+ edges: list[list[int]] = self.__generate_random_edges(random_vertices, 4)
+ for edge in edges:
+ if len(more_random_edges) == len(random_edges):
+ break
+ elif edge not in more_random_edges and edge not in random_edges:
+ more_random_edges.append(edge)
+
+ for edge in more_random_edges:
+ with self.assertRaises(ValueError):
+ undirected_graph.remove_edge(edge[0], edge[1])
+ with self.assertRaises(ValueError):
+ directed_graph.remove_edge(edge[0], edge[1])
+
+ def test_contains_edge_exception_check(self) -> None:
+ (
+ undirected_graph,
+ directed_graph,
+ random_vertices,
+ random_edges,
+ ) = self.__generate_graphs(20, 0, 100, 4)
+
+ for vertex in random_vertices:
+ with self.assertRaises(ValueError):
+ undirected_graph.contains_edge(vertex, 102)
+ with self.assertRaises(ValueError):
+ directed_graph.contains_edge(vertex, 102)
+
+ with self.assertRaises(ValueError):
+ undirected_graph.contains_edge(103, 102)
+ with self.assertRaises(ValueError):
+ directed_graph.contains_edge(103, 102)
+
+
+if __name__ == "__main__":
+ unittest.main()
diff --git a/graphs/graph_adjacency_matrix.py b/graphs/graph_adjacency_matrix.py
new file mode 100644
index 000000000000..4d2e02f737f9
--- /dev/null
+++ b/graphs/graph_adjacency_matrix.py
@@ -0,0 +1,608 @@
+#!/usr/bin/env python3
+"""
+Author: Vikram Nithyanandam
+
+Description:
+The following implementation is a robust unweighted Graph data structure
+implemented using an adjacency matrix. The vertices and edges of this graph can be
+efficiently initialized and modified while storing your chosen generic
+value in each vertex.
+
+Adjacency Matrix: https://mathworld.wolfram.com/AdjacencyMatrix.html
+
+Potential Future Ideas:
+- Add a flag to enable edge weights and allow setting them
+- Make edge weights and vertex values customizable to store whatever the client wants
+- Support multigraph functionality if the client wants it
+"""
+from __future__ import annotations
+
+import random
+import unittest
+from pprint import pformat
+from typing import Generic, TypeVar
+
+T = TypeVar("T")
+
+
+class GraphAdjacencyMatrix(Generic[T]):
+ def __init__(
+ self, vertices: list[T], edges: list[list[T]], directed: bool = True
+ ) -> None:
+ """
+        Parameters:
+        - vertices: (list[T]) The list of vertex names the client wants to
+        pass in. May be empty.
+        - edges: (list[list[T]]) The list of edges the client wants to
+        pass in. Each edge is a 2-element list. May be empty.
+        - directed: (bool) Indicates if graph is directed or undirected.
+        Defaults to True.
+ """
+ self.directed = directed
+ self.vertex_to_index: dict[T, int] = {}
+ self.adj_matrix: list[list[int]] = []
+
+ # Falsey checks
+ edges = edges or []
+ vertices = vertices or []
+
+ for vertex in vertices:
+ self.add_vertex(vertex)
+
+ for edge in edges:
+ if len(edge) != 2:
+ msg = f"Invalid input: {edge} must have length 2."
+ raise ValueError(msg)
+ self.add_edge(edge[0], edge[1])
+
+ def add_edge(self, source_vertex: T, destination_vertex: T) -> None:
+ """
+ Creates an edge from source vertex to destination vertex. If any
+ given vertex doesn't exist or the edge already exists, a ValueError
+ will be thrown.
+ """
+ if not (
+ self.contains_vertex(source_vertex)
+ and self.contains_vertex(destination_vertex)
+ ):
+ msg = (
+ f"Incorrect input: Either {source_vertex} or "
+ f"{destination_vertex} does not exist"
+ )
+ raise ValueError(msg)
+ if self.contains_edge(source_vertex, destination_vertex):
+ msg = (
+ "Incorrect input: The edge already exists between "
+ f"{source_vertex} and {destination_vertex}"
+ )
+ raise ValueError(msg)
+
+ # Get the indices of the corresponding vertices and set their edge value to 1.
+ u: int = self.vertex_to_index[source_vertex]
+ v: int = self.vertex_to_index[destination_vertex]
+ self.adj_matrix[u][v] = 1
+ if not self.directed:
+ self.adj_matrix[v][u] = 1
+
+ def remove_edge(self, source_vertex: T, destination_vertex: T) -> None:
+ """
+ Removes the edge between the two vertices. If any given vertex
+ doesn't exist or the edge does not exist, a ValueError will be thrown.
+ """
+ if not (
+ self.contains_vertex(source_vertex)
+ and self.contains_vertex(destination_vertex)
+ ):
+ msg = (
+ f"Incorrect input: Either {source_vertex} or "
+ f"{destination_vertex} does not exist"
+ )
+ raise ValueError(msg)
+ if not self.contains_edge(source_vertex, destination_vertex):
+ msg = (
+ "Incorrect input: The edge does NOT exist between "
+ f"{source_vertex} and {destination_vertex}"
+ )
+ raise ValueError(msg)
+
+ # Get the indices of the corresponding vertices and set their edge value to 0.
+ u: int = self.vertex_to_index[source_vertex]
+ v: int = self.vertex_to_index[destination_vertex]
+ self.adj_matrix[u][v] = 0
+ if not self.directed:
+ self.adj_matrix[v][u] = 0
+
+ def add_vertex(self, vertex: T) -> None:
+ """
+ Adds a vertex to the graph. If the given vertex already exists,
+ a ValueError will be thrown.
+ """
+ if self.contains_vertex(vertex):
+ msg = f"Incorrect input: {vertex} already exists in this graph."
+ raise ValueError(msg)
+
+ # build column for vertex
+ for row in self.adj_matrix:
+ row.append(0)
+
+ # build row for vertex and update other data structures
+ self.adj_matrix.append([0] * (len(self.adj_matrix) + 1))
+ self.vertex_to_index[vertex] = len(self.adj_matrix) - 1
+
+ def remove_vertex(self, vertex: T) -> None:
+ """
+ Removes the given vertex from the graph and deletes all incoming and
+ outgoing edges from the given vertex as well. If the given vertex
+ does not exist, a ValueError will be thrown.
+ """
+ if not self.contains_vertex(vertex):
+ msg = f"Incorrect input: {vertex} does not exist in this graph."
+ raise ValueError(msg)
+
+ # first slide up the rows by deleting the row corresponding to
+ # the vertex being deleted.
+ start_index = self.vertex_to_index[vertex]
+ self.adj_matrix.pop(start_index)
+
+ # next, slide the columns to the left by deleting the values in
+ # the column corresponding to the vertex being deleted
+ for lst in self.adj_matrix:
+ lst.pop(start_index)
+
+ # final clean up
+ self.vertex_to_index.pop(vertex)
+
+ # decrement indices for vertices shifted by the deleted vertex in the adj matrix
+ for vertex in self.vertex_to_index:
+ if self.vertex_to_index[vertex] >= start_index:
+ self.vertex_to_index[vertex] = self.vertex_to_index[vertex] - 1
+
+ def contains_vertex(self, vertex: T) -> bool:
+ """
+ Returns True if the graph contains the vertex, False otherwise.
+ """
+ return vertex in self.vertex_to_index
+
+ def contains_edge(self, source_vertex: T, destination_vertex: T) -> bool:
+ """
+ Returns True if the graph contains the edge from the source_vertex to the
+ destination_vertex, False otherwise. If any given vertex doesn't exist, a
+ ValueError will be thrown.
+ """
+ if not (
+ self.contains_vertex(source_vertex)
+ and self.contains_vertex(destination_vertex)
+ ):
+ msg = (
+ f"Incorrect input: Either {source_vertex} "
+ f"or {destination_vertex} does not exist."
+ )
+ raise ValueError(msg)
+
+ u = self.vertex_to_index[source_vertex]
+ v = self.vertex_to_index[destination_vertex]
+ return self.adj_matrix[u][v] == 1
+
+ def clear_graph(self) -> None:
+ """
+ Clears all vertices and edges.
+ """
+ self.vertex_to_index = {}
+ self.adj_matrix = []
+
+ def __repr__(self) -> str:
+ first = "Adj Matrix:\n" + pformat(self.adj_matrix)
+ second = "\nVertex to index mapping:\n" + pformat(self.vertex_to_index)
+ return first + second
+
+
+class TestGraphMatrix(unittest.TestCase):
+ def __assert_graph_edge_exists_check(
+ self,
+ undirected_graph: GraphAdjacencyMatrix,
+ directed_graph: GraphAdjacencyMatrix,
+ edge: list[int],
+ ) -> None:
+ self.assertTrue(undirected_graph.contains_edge(edge[0], edge[1]))
+ self.assertTrue(undirected_graph.contains_edge(edge[1], edge[0]))
+ self.assertTrue(directed_graph.contains_edge(edge[0], edge[1]))
+
+ def __assert_graph_edge_does_not_exist_check(
+ self,
+ undirected_graph: GraphAdjacencyMatrix,
+ directed_graph: GraphAdjacencyMatrix,
+ edge: list[int],
+ ) -> None:
+ self.assertFalse(undirected_graph.contains_edge(edge[0], edge[1]))
+ self.assertFalse(undirected_graph.contains_edge(edge[1], edge[0]))
+ self.assertFalse(directed_graph.contains_edge(edge[0], edge[1]))
+
+ def __assert_graph_vertex_exists_check(
+ self,
+ undirected_graph: GraphAdjacencyMatrix,
+ directed_graph: GraphAdjacencyMatrix,
+ vertex: int,
+ ) -> None:
+ self.assertTrue(undirected_graph.contains_vertex(vertex))
+ self.assertTrue(directed_graph.contains_vertex(vertex))
+
+ def __assert_graph_vertex_does_not_exist_check(
+ self,
+ undirected_graph: GraphAdjacencyMatrix,
+ directed_graph: GraphAdjacencyMatrix,
+ vertex: int,
+ ) -> None:
+ self.assertFalse(undirected_graph.contains_vertex(vertex))
+ self.assertFalse(directed_graph.contains_vertex(vertex))
+
+ def __generate_random_edges(
+ self, vertices: list[int], edge_pick_count: int
+ ) -> list[list[int]]:
+ self.assertTrue(edge_pick_count <= len(vertices))
+
+ random_source_vertices: list[int] = random.sample(
+ vertices[0 : int(len(vertices) / 2)], edge_pick_count
+ )
+ random_destination_vertices: list[int] = random.sample(
+ vertices[int(len(vertices) / 2) :], edge_pick_count
+ )
+ random_edges: list[list[int]] = []
+
+ for source in random_source_vertices:
+ for dest in random_destination_vertices:
+ random_edges.append([source, dest])
+
+ return random_edges
+
+ def __generate_graphs(
+ self, vertex_count: int, min_val: int, max_val: int, edge_pick_count: int
+ ) -> tuple[GraphAdjacencyMatrix, GraphAdjacencyMatrix, list[int], list[list[int]]]:
+ if max_val - min_val + 1 < vertex_count:
+ raise ValueError(
+ "Will result in duplicate vertices. Either increase "
+ "range between min_val and max_val or decrease vertex count"
+ )
+
+ # generate graph input
+ random_vertices: list[int] = random.sample(
+ range(min_val, max_val + 1), vertex_count
+ )
+ random_edges: list[list[int]] = self.__generate_random_edges(
+ random_vertices, edge_pick_count
+ )
+
+ # build graphs
+ undirected_graph = GraphAdjacencyMatrix(
+ vertices=random_vertices, edges=random_edges, directed=False
+ )
+ directed_graph = GraphAdjacencyMatrix(
+ vertices=random_vertices, edges=random_edges, directed=True
+ )
+
+ return undirected_graph, directed_graph, random_vertices, random_edges
+
+ def test_init_check(self) -> None:
+ (
+ undirected_graph,
+ directed_graph,
+ random_vertices,
+ random_edges,
+ ) = self.__generate_graphs(20, 0, 100, 4)
+
+ # test graph initialization with vertices and edges
+ for num in random_vertices:
+ self.__assert_graph_vertex_exists_check(
+ undirected_graph, directed_graph, num
+ )
+
+ for edge in random_edges:
+ self.__assert_graph_edge_exists_check(
+ undirected_graph, directed_graph, edge
+ )
+
+ self.assertFalse(undirected_graph.directed)
+ self.assertTrue(directed_graph.directed)
+
+ def test_contains_vertex(self) -> None:
+ random_vertices: list[int] = random.sample(range(101), 20)
+
+ # Build graphs WITHOUT edges
+ undirected_graph = GraphAdjacencyMatrix(
+ vertices=random_vertices, edges=[], directed=False
+ )
+ directed_graph = GraphAdjacencyMatrix(
+ vertices=random_vertices, edges=[], directed=True
+ )
+
+ # Test contains_vertex
+ for num in range(101):
+ self.assertEqual(
+ num in random_vertices, undirected_graph.contains_vertex(num)
+ )
+ self.assertEqual(
+ num in random_vertices, directed_graph.contains_vertex(num)
+ )
+
+ def test_add_vertices(self) -> None:
+ random_vertices: list[int] = random.sample(range(101), 20)
+
+ # build empty graphs
+ undirected_graph: GraphAdjacencyMatrix = GraphAdjacencyMatrix(
+ vertices=[], edges=[], directed=False
+ )
+ directed_graph: GraphAdjacencyMatrix = GraphAdjacencyMatrix(
+ vertices=[], edges=[], directed=True
+ )
+
+ # run add_vertex
+ for num in random_vertices:
+ undirected_graph.add_vertex(num)
+
+ for num in random_vertices:
+ directed_graph.add_vertex(num)
+
+ # test add_vertex worked
+ for num in random_vertices:
+ self.__assert_graph_vertex_exists_check(
+ undirected_graph, directed_graph, num
+ )
+
+ def test_remove_vertices(self) -> None:
+ random_vertices: list[int] = random.sample(range(101), 20)
+
+ # build graphs WITHOUT edges
+ undirected_graph = GraphAdjacencyMatrix(
+ vertices=random_vertices, edges=[], directed=False
+ )
+ directed_graph = GraphAdjacencyMatrix(
+ vertices=random_vertices, edges=[], directed=True
+ )
+
+ # test remove_vertex worked
+ for num in random_vertices:
+ self.__assert_graph_vertex_exists_check(
+ undirected_graph, directed_graph, num
+ )
+
+ undirected_graph.remove_vertex(num)
+ directed_graph.remove_vertex(num)
+
+ self.__assert_graph_vertex_does_not_exist_check(
+ undirected_graph, directed_graph, num
+ )
+
+ def test_add_and_remove_vertices_repeatedly(self) -> None:
+ random_vertices1: list[int] = random.sample(range(51), 20)
+ random_vertices2: list[int] = random.sample(range(51, 101), 20)
+
+ # build graphs WITHOUT edges
+ undirected_graph = GraphAdjacencyMatrix(
+ vertices=random_vertices1, edges=[], directed=False
+ )
+ directed_graph = GraphAdjacencyMatrix(
+ vertices=random_vertices1, edges=[], directed=True
+ )
+
+ # test adding and removing vertices
+ for i, _ in enumerate(random_vertices1):
+ undirected_graph.add_vertex(random_vertices2[i])
+ directed_graph.add_vertex(random_vertices2[i])
+
+ self.__assert_graph_vertex_exists_check(
+ undirected_graph, directed_graph, random_vertices2[i]
+ )
+
+ undirected_graph.remove_vertex(random_vertices1[i])
+ directed_graph.remove_vertex(random_vertices1[i])
+
+ self.__assert_graph_vertex_does_not_exist_check(
+ undirected_graph, directed_graph, random_vertices1[i]
+ )
+
+ # remove all vertices
+ for i, _ in enumerate(random_vertices1):
+ undirected_graph.remove_vertex(random_vertices2[i])
+ directed_graph.remove_vertex(random_vertices2[i])
+
+ self.__assert_graph_vertex_does_not_exist_check(
+ undirected_graph, directed_graph, random_vertices2[i]
+ )
+
+ def test_contains_edge(self) -> None:
+ # generate graphs and graph input
+ vertex_count = 20
+ (
+ undirected_graph,
+ directed_graph,
+ random_vertices,
+ random_edges,
+ ) = self.__generate_graphs(vertex_count, 0, 100, 4)
+
+ # generate all possible edges for testing
+ all_possible_edges: list[list[int]] = []
+ for i in range(vertex_count - 1):
+ for j in range(i + 1, vertex_count):
+ all_possible_edges.append([random_vertices[i], random_vertices[j]])
+ all_possible_edges.append([random_vertices[j], random_vertices[i]])
+
+ # test contains_edge function
+ for edge in all_possible_edges:
+ if edge in random_edges:
+ self.__assert_graph_edge_exists_check(
+ undirected_graph, directed_graph, edge
+ )
+ elif [edge[1], edge[0]] in random_edges:
+                # the edge exists in the undirected graph, but its reverse
+                # may not exist in the directed graph
+ self.__assert_graph_edge_exists_check(
+ undirected_graph, directed_graph, [edge[1], edge[0]]
+ )
+ else:
+ self.__assert_graph_edge_does_not_exist_check(
+ undirected_graph, directed_graph, edge
+ )
+
+ def test_add_edge(self) -> None:
+ # generate graph input
+ random_vertices: list[int] = random.sample(range(101), 15)
+ random_edges: list[list[int]] = self.__generate_random_edges(random_vertices, 4)
+
+ # build graphs WITHOUT edges
+ undirected_graph = GraphAdjacencyMatrix(
+ vertices=random_vertices, edges=[], directed=False
+ )
+ directed_graph = GraphAdjacencyMatrix(
+ vertices=random_vertices, edges=[], directed=True
+ )
+
+ # run and test add_edge
+ for edge in random_edges:
+ undirected_graph.add_edge(edge[0], edge[1])
+ directed_graph.add_edge(edge[0], edge[1])
+ self.__assert_graph_edge_exists_check(
+ undirected_graph, directed_graph, edge
+ )
+
+ def test_remove_edge(self) -> None:
+ # generate graph input and graphs
+ (
+ undirected_graph,
+ directed_graph,
+ random_vertices,
+ random_edges,
+ ) = self.__generate_graphs(20, 0, 100, 4)
+
+ # run and test remove_edge
+ for edge in random_edges:
+ self.__assert_graph_edge_exists_check(
+ undirected_graph, directed_graph, edge
+ )
+ undirected_graph.remove_edge(edge[0], edge[1])
+ directed_graph.remove_edge(edge[0], edge[1])
+ self.__assert_graph_edge_does_not_exist_check(
+ undirected_graph, directed_graph, edge
+ )
+
+ def test_add_and_remove_edges_repeatedly(self) -> None:
+ (
+ undirected_graph,
+ directed_graph,
+ random_vertices,
+ random_edges,
+ ) = self.__generate_graphs(20, 0, 100, 4)
+
+ # make some more edge options!
+ more_random_edges: list[list[int]] = []
+
+ while len(more_random_edges) != len(random_edges):
+ edges: list[list[int]] = self.__generate_random_edges(random_vertices, 4)
+ for edge in edges:
+ if len(more_random_edges) == len(random_edges):
+ break
+ elif edge not in more_random_edges and edge not in random_edges:
+ more_random_edges.append(edge)
+
+ for i, _ in enumerate(random_edges):
+ undirected_graph.add_edge(more_random_edges[i][0], more_random_edges[i][1])
+ directed_graph.add_edge(more_random_edges[i][0], more_random_edges[i][1])
+
+ self.__assert_graph_edge_exists_check(
+ undirected_graph, directed_graph, more_random_edges[i]
+ )
+
+ undirected_graph.remove_edge(random_edges[i][0], random_edges[i][1])
+ directed_graph.remove_edge(random_edges[i][0], random_edges[i][1])
+
+ self.__assert_graph_edge_does_not_exist_check(
+ undirected_graph, directed_graph, random_edges[i]
+ )
+
+ def test_add_vertex_exception_check(self) -> None:
+ (
+ undirected_graph,
+ directed_graph,
+ random_vertices,
+ random_edges,
+ ) = self.__generate_graphs(20, 0, 100, 4)
+
+ for vertex in random_vertices:
+ with self.assertRaises(ValueError):
+ undirected_graph.add_vertex(vertex)
+ with self.assertRaises(ValueError):
+ directed_graph.add_vertex(vertex)
+
+ def test_remove_vertex_exception_check(self) -> None:
+ (
+ undirected_graph,
+ directed_graph,
+ random_vertices,
+ random_edges,
+ ) = self.__generate_graphs(20, 0, 100, 4)
+
+ for i in range(101):
+ if i not in random_vertices:
+ with self.assertRaises(ValueError):
+ undirected_graph.remove_vertex(i)
+ with self.assertRaises(ValueError):
+ directed_graph.remove_vertex(i)
+
+ def test_add_edge_exception_check(self) -> None:
+ (
+ undirected_graph,
+ directed_graph,
+ random_vertices,
+ random_edges,
+ ) = self.__generate_graphs(20, 0, 100, 4)
+
+ for edge in random_edges:
+ with self.assertRaises(ValueError):
+ undirected_graph.add_edge(edge[0], edge[1])
+ with self.assertRaises(ValueError):
+ directed_graph.add_edge(edge[0], edge[1])
+
+ def test_remove_edge_exception_check(self) -> None:
+ (
+ undirected_graph,
+ directed_graph,
+ random_vertices,
+ random_edges,
+ ) = self.__generate_graphs(20, 0, 100, 4)
+
+ more_random_edges: list[list[int]] = []
+
+ while len(more_random_edges) != len(random_edges):
+ edges: list[list[int]] = self.__generate_random_edges(random_vertices, 4)
+ for edge in edges:
+ if len(more_random_edges) == len(random_edges):
+ break
+ elif edge not in more_random_edges and edge not in random_edges:
+ more_random_edges.append(edge)
+
+ for edge in more_random_edges:
+ with self.assertRaises(ValueError):
+ undirected_graph.remove_edge(edge[0], edge[1])
+ with self.assertRaises(ValueError):
+ directed_graph.remove_edge(edge[0], edge[1])
+
+ def test_contains_edge_exception_check(self) -> None:
+ (
+ undirected_graph,
+ directed_graph,
+ random_vertices,
+ random_edges,
+ ) = self.__generate_graphs(20, 0, 100, 4)
+
+ for vertex in random_vertices:
+ with self.assertRaises(ValueError):
+ undirected_graph.contains_edge(vertex, 102)
+ with self.assertRaises(ValueError):
+ directed_graph.contains_edge(vertex, 102)
+
+ with self.assertRaises(ValueError):
+ undirected_graph.contains_edge(103, 102)
+ with self.assertRaises(ValueError):
+ directed_graph.contains_edge(103, 102)
+
+
+if __name__ == "__main__":
+ unittest.main()
diff --git a/graphs/graph_matrix.py b/graphs/graph_matrix.py
deleted file mode 100644
index 4adc6c0bb93b..000000000000
--- a/graphs/graph_matrix.py
+++ /dev/null
@@ -1,24 +0,0 @@
-class Graph:
- def __init__(self, vertex):
- self.vertex = vertex
- self.graph = [[0] * vertex for i in range(vertex)]
-
- def add_edge(self, u, v):
- self.graph[u - 1][v - 1] = 1
- self.graph[v - 1][u - 1] = 1
-
- def show(self):
- for i in self.graph:
- for j in i:
- print(j, end=" ")
- print(" ")
-
-
-g = Graph(100)
-
-g.add_edge(1, 4)
-g.add_edge(4, 2)
-g.add_edge(4, 5)
-g.add_edge(2, 5)
-g.add_edge(5, 3)
-g.show()
diff --git a/graphs/tests/__init__.py b/graphs/tests/__init__.py
new file mode 100644
index 000000000000..e69de29bb2d1
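A minimal sketch exercising the two new classes side by side (assuming both modules are importable from the graphs package). The list form stores only the edges that exist, so it is compact for sparse graphs, while the matrix form stores a full V x V grid in exchange for O(1) edge lookups:

    from graphs.graph_adjacency_list import GraphAdjacencyList
    from graphs.graph_adjacency_matrix import GraphAdjacencyMatrix

    vertices = ["a", "b", "c"]
    edges = [["a", "b"], ["b", "c"]]

    # Both classes expose the same constructor and method signatures
    sparse = GraphAdjacencyList(vertices=vertices, edges=edges, directed=False)
    dense = GraphAdjacencyMatrix(vertices=vertices, edges=edges, directed=False)

    assert sparse.contains_edge("a", "b") and dense.contains_edge("b", "a")
    sparse.remove_edge("a", "b")
    assert not sparse.contains_edge("a", "b")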
From 3a9e5fa5ecea0df54ed3ffdcb74f46171199f552 Mon Sep 17 00:00:00 2001
From: Chris O <46587501+ChrisO345@users.noreply.github.com>
Date: Fri, 2 Jun 2023 17:14:25 +1200
Subject: [PATCH 091/808] Create a Simultaneous Equation Solver Algorithm
(#8773)
* Added simultaneous_linear_equation_solver.py
* Removed Augment class, replaced with recursive functions
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fixed edge cases
* Update settings.json
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
.vscode/settings.json | 5 +
maths/simultaneous_linear_equation_solver.py | 142 +++++++++++++++++++
2 files changed, 147 insertions(+)
create mode 100644 .vscode/settings.json
create mode 100644 maths/simultaneous_linear_equation_solver.py
diff --git a/.vscode/settings.json b/.vscode/settings.json
new file mode 100644
index 000000000000..ef16fa1aa7ac
--- /dev/null
+++ b/.vscode/settings.json
@@ -0,0 +1,5 @@
+{
+ "githubPullRequests.ignoredPullRequestBranches": [
+ "master"
+ ]
+}
diff --git a/maths/simultaneous_linear_equation_solver.py b/maths/simultaneous_linear_equation_solver.py
new file mode 100644
index 000000000000..1287b2002d00
--- /dev/null
+++ b/maths/simultaneous_linear_equation_solver.py
@@ -0,0 +1,142 @@
+"""
+https://en.wikipedia.org/wiki/Augmented_matrix
+
+This algorithm solves simultaneous linear equations of the form
+λ1*a + λ2*b + λ3*c + ... = γ, with each equation given as [λ1, λ2, λ3, ..., γ].
+λ and γ are individual coefficients; the number of equations equals the number
+of values per equation minus one.
+
+Note: for the algorithm to work, at least one equation must have every λ and γ non-zero.
+"""
+
+
+def simplify(current_set: list[list]) -> list[list]:
+ """
+ >>> simplify([[1, 2, 3], [4, 5, 6]])
+ [[1.0, 2.0, 3.0], [0.0, 0.75, 1.5]]
+ >>> simplify([[5, 2, 5], [5, 1, 10]])
+ [[1.0, 0.4, 1.0], [0.0, 0.2, -1.0]]
+ """
+    # Divide each row by its first term (when non-zero) so each row leads with 1
+ duplicate_set = current_set.copy()
+ for row_index, row in enumerate(duplicate_set):
+ magnitude = row[0]
+ for column_index, column in enumerate(row):
+ if magnitude == 0:
+ current_set[row_index][column_index] = column
+ continue
+ current_set[row_index][column_index] = column / magnitude
+ # Subtract to cancel term
+ first_row = current_set[0]
+ final_set = [first_row]
+ current_set = current_set[1::]
+ for row in current_set:
+ temp_row = []
+ # If first term is 0, it is already in form we want, so we preserve it
+ if row[0] == 0:
+ final_set.append(row)
+ continue
+ for column_index in range(len(row)):
+ temp_row.append(first_row[column_index] - row[column_index])
+ final_set.append(temp_row)
+ # Create next recursion iteration set
+ if len(final_set[0]) != 3:
+ current_first_row = final_set[0]
+ current_first_column = []
+ next_iteration = []
+ for row in final_set[1::]:
+ current_first_column.append(row[0])
+ next_iteration.append(row[1::])
+ resultant = simplify(next_iteration)
+ for i in range(len(resultant)):
+ resultant[i].insert(0, current_first_column[i])
+ resultant.insert(0, current_first_row)
+ final_set = resultant
+ return final_set
+
+
+def solve_simultaneous(equations: list[list]) -> list:
+ """
+ >>> solve_simultaneous([[1, 2, 3],[4, 5, 6]])
+ [-1.0, 2.0]
+ >>> solve_simultaneous([[0, -3, 1, 7],[3, 2, -1, 11],[5, 1, -2, 12]])
+ [6.4, 1.2, 10.6]
+ >>> solve_simultaneous([])
+ Traceback (most recent call last):
+ ...
+ IndexError: solve_simultaneous() requires n lists of length n+1
+ >>> solve_simultaneous([[1, 2, 3],[1, 2]])
+ Traceback (most recent call last):
+ ...
+ IndexError: solve_simultaneous() requires n lists of length n+1
+ >>> solve_simultaneous([[1, 2, 3],["a", 7, 8]])
+ Traceback (most recent call last):
+ ...
+ ValueError: solve_simultaneous() requires lists of integers
+ >>> solve_simultaneous([[0, 2, 3],[4, 0, 6]])
+ Traceback (most recent call last):
+ ...
+ ValueError: solve_simultaneous() requires at least 1 full equation
+ """
+ if len(equations) == 0:
+ raise IndexError("solve_simultaneous() requires n lists of length n+1")
+ _length = len(equations) + 1
+ if any(len(item) != _length for item in equations):
+ raise IndexError("solve_simultaneous() requires n lists of length n+1")
+ for row in equations:
+ if any(not isinstance(column, (int, float)) for column in row):
+ raise ValueError("solve_simultaneous() requires lists of integers")
+ if len(equations) == 1:
+ return [equations[0][-1] / equations[0][0]]
+ data_set = equations.copy()
+ if any(0 in row for row in data_set):
+ temp_data = data_set.copy()
+ full_row = []
+ for row_index, row in enumerate(temp_data):
+ if 0 not in row:
+ full_row = data_set.pop(row_index)
+ break
+ if not full_row:
+ raise ValueError("solve_simultaneous() requires at least 1 full equation")
+ data_set.insert(0, full_row)
+ useable_form = data_set.copy()
+ simplified = simplify(useable_form)
+ simplified = simplified[::-1]
+ solutions: list = []
+ for row in simplified:
+ current_solution = row[-1]
+ if not solutions:
+ if row[-2] == 0:
+ solutions.append(0)
+ continue
+ solutions.append(current_solution / row[-2])
+ continue
+ temp_row = row.copy()[: len(row) - 1 :]
+ while temp_row[0] == 0:
+ temp_row.pop(0)
+ if len(temp_row) == 0:
+ solutions.append(0)
+ continue
+ temp_row = temp_row[1::]
+ temp_row = temp_row[::-1]
+ for column_index, column in enumerate(temp_row):
+ current_solution -= column * solutions[column_index]
+ solutions.append(current_solution)
+ final = []
+ for item in solutions:
+ final.append(float(round(item, 5)))
+ return final[::-1]
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
+ eq = [
+ [2, 1, 1, 1, 1, 4],
+ [1, 2, 1, 1, 1, 5],
+ [1, 1, 2, 1, 1, 6],
+ [1, 1, 1, 2, 1, 7],
+ [1, 1, 1, 1, 2, 8],
+ ]
+ print(solve_simultaneous(eq))
+ print(solve_simultaneous([[4, 2]]))
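A small worked example in the row format described above (assuming solve_simultaneous is imported from maths.simultaneous_linear_equation_solver): each row [λ1, λ2, γ] encodes λ1*x + λ2*y = γ:

    from maths.simultaneous_linear_equation_solver import solve_simultaneous

    # Solve:  2x +  y = 5
    #          x - 3y = 6
    equations = [[2, 1, 5], [1, -3, 6]]
    print(solve_simultaneous(equations))  # [3.0, -1.0], i.e. x = 3, y = -1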
From 80d95fccc390d366a9f617d8628a546a7be7b2a3 Mon Sep 17 00:00:00 2001
From: Caeden Perelli-Harris
Date: Sat, 3 Jun 2023 17:16:33 +0100
Subject: [PATCH 092/808] Pytest locally fails due to API_KEY env variable
(#8738)
* fix: Pytest locally fails due to API_KEY env variable (#8737)
* chore: Fix ruff errors
---
web_programming/currency_converter.py | 19 ++++++++++++-------
1 file changed, 12 insertions(+), 7 deletions(-)
diff --git a/web_programming/currency_converter.py b/web_programming/currency_converter.py
index 69f2a2c4d421..3bbcafa8f89b 100644
--- a/web_programming/currency_converter.py
+++ b/web_programming/currency_converter.py
@@ -8,13 +8,7 @@
import requests
URL_BASE = "https://www.amdoren.com/api/currency.php"
-TESTING = os.getenv("CI", "")
-API_KEY = os.getenv("AMDOREN_API_KEY", "")
-if not API_KEY and not TESTING:
- raise KeyError(
- "API key must be provided in the 'AMDOREN_API_KEY' environment variable."
- )
# Currency and their description
list_of_currencies = """
@@ -175,20 +169,31 @@
def convert_currency(
- from_: str = "USD", to: str = "INR", amount: float = 1.0, api_key: str = API_KEY
+ from_: str = "USD", to: str = "INR", amount: float = 1.0, api_key: str = ""
) -> str:
"""https://www.amdoren.com/currency-api/"""
+    # Build the query parameters from the function arguments via locals()
params = locals()
+ # from is a reserved keyword
params["from"] = params.pop("from_")
res = requests.get(URL_BASE, params=params).json()
return str(res["amount"]) if res["error"] == 0 else res["error_message"]
if __name__ == "__main__":
+ TESTING = os.getenv("CI", "")
+ API_KEY = os.getenv("AMDOREN_API_KEY", "")
+
+ if not API_KEY and not TESTING:
+ raise KeyError(
+ "API key must be provided in the 'AMDOREN_API_KEY' environment variable."
+ )
+
print(
convert_currency(
input("Enter from currency: ").strip(),
input("Enter to currency: ").strip(),
float(input("Enter the amount: ").strip()),
+ API_KEY,
)
)
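The pattern this patch applies is worth spelling out: environment-dependent configuration is read and validated at the entry point instead of at import time, so pytest can collect and import the module even when the variable is unset. A minimal sketch of the pattern (hypothetical helper, not code from the patch):

    import os

    def require_api_key() -> str:
        # Validate lazily, at call time, rather than at module import
        api_key = os.getenv("AMDOREN_API_KEY", "")
        if not api_key and not os.getenv("CI", ""):
            raise KeyError(
                "API key must be provided in the 'AMDOREN_API_KEY' environment variable."
            )
        return api_key

    if __name__ == "__main__":
        print(f"Key loaded ({len(require_api_key())} characters)")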
From fa12b9a286bf42d250b30a772e8f226dc14953f4 Mon Sep 17 00:00:00 2001
From: ShivaDahal99 <130563462+ShivaDahal99@users.noreply.github.com>
Date: Wed, 7 Jun 2023 23:47:27 +0200
Subject: [PATCH 093/808] Speed of sound (#8803)
* Create TestShiva
* Delete TestShiva
* Add speed of sound
* Update physics/speed_of_sound.py
Co-authored-by: Christian Clauss
* Update physics/speed_of_sound.py
Co-authored-by: Christian Clauss
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update speed_of_sound.py
* Update speed_of_sound.py
---------
Co-authored-by: jlhuhn <134317018+jlhuhn@users.noreply.github.com>
Co-authored-by: Christian Clauss
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
physics/speed_of_sound.py | 52 +++++++++++++++++++++++++++++++++++++++
1 file changed, 52 insertions(+)
create mode 100644 physics/speed_of_sound.py
diff --git a/physics/speed_of_sound.py b/physics/speed_of_sound.py
new file mode 100644
index 000000000000..a4658366a36c
--- /dev/null
+++ b/physics/speed_of_sound.py
@@ -0,0 +1,52 @@
+"""
+Title : Calculating the speed of sound
+
+Description :
+    The speed of sound (c) is the distance travelled per unit time by a
+    sound wave as it propagates through an elastic medium. Its SI unit is
+    the metre per second (m/s).
+
+    Only longitudinal waves can propagate in liquids and gases, whereas in
+    solids sound also travels as transverse waves. The following algorithm
+    calculates the speed of sound in a fluid from the bulk modulus and the
+    density of the fluid.
+
+    Equation for calculating the speed of sound in a fluid:
+    c_fluid = (K_s / p)**0.5
+
+    c_fluid: speed of sound in the fluid
+    K_s: isentropic bulk modulus
+    p: density of the fluid
+
+Source : https://en.wikipedia.org/wiki/Speed_of_sound
+"""
+
+
+def speed_of_sound_in_a_fluid(density: float, bulk_modulus: float) -> float:
+ """
+    Calculates the speed of sound in a fluid from its density and its
+    isentropic bulk modulus.
+    Examples:
+    Example 1 --> Water at 20°C: bulk_modulus=2.15*10**9 Pa, density=998 kg/m³
+    Example 2 --> Mercury at 20°C: bulk_modulus=28.5*10**9 Pa, density=13600 kg/m³
+
+ >>> speed_of_sound_in_a_fluid(bulk_modulus=2.15*10**9, density=998)
+ 1467.7563207952705
+ >>> speed_of_sound_in_a_fluid(bulk_modulus=28.5*10**9, density=13600)
+ 1447.614670861731
+ """
+
+ if density <= 0:
+ raise ValueError("Impossible fluid density")
+ if bulk_modulus <= 0:
+ raise ValueError("Impossible bulk modulus")
+
+ return (bulk_modulus / density) ** 0.5
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
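Since c_fluid = (K_s / p)**0.5, the relation can be inverted to estimate a fluid's isentropic bulk modulus from a measured speed of sound. A quick sketch using approximate sea-water figures (c ≈ 1500 m/s, density ≈ 1025 kg/m³) as assumed inputs:

    # K_s = p * c**2, the inverse of c = (K_s / p) ** 0.5
    density = 1025.0  # kg/m^3, approximate for sea water
    speed = 1500.0  # m/s, approximate speed of sound in sea water
    bulk_modulus = density * speed**2
    print(f"{bulk_modulus:.3e} Pa")  # ~2.306e+09 Pa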
From 7775de0ef779a28cec7d9f28af97a89b2bc29d7e Mon Sep 17 00:00:00 2001
From: Caeden Perelli-Harris
Date: Thu, 8 Jun 2023 13:40:38 +0100
Subject: [PATCH 094/808] Create number container system algorithm (#8808)
* feat: Create number container system algorithm
* updating DIRECTORY.md
* chore: Fix failing tests
* Update other/number_container_system.py
Co-authored-by: Christian Clauss
* Update other/number_container_system.py
Co-authored-by: Christian Clauss
* Update other/number_container_system.py
Co-authored-by: Christian Clauss
* chore: Add more tests
* chore: Create binary_search_insert failing test
* type: Update typehints to accept str, list and range
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
DIRECTORY.md | 6 +-
other/number_container_system.py | 180 +++++++++++++++++++++++++++++++
2 files changed, 185 insertions(+), 1 deletion(-)
create mode 100644 other/number_container_system.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 231b0e2f1d2f..6dac4a9a5783 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -419,8 +419,9 @@
* [Frequent Pattern Graph Miner](graphs/frequent_pattern_graph_miner.py)
* [G Topological Sort](graphs/g_topological_sort.py)
* [Gale Shapley Bigraph](graphs/gale_shapley_bigraph.py)
+ * [Graph Adjacency List](graphs/graph_adjacency_list.py)
+ * [Graph Adjacency Matrix](graphs/graph_adjacency_matrix.py)
* [Graph List](graphs/graph_list.py)
- * [Graph Matrix](graphs/graph_matrix.py)
* [Graphs Floyd Warshall](graphs/graphs_floyd_warshall.py)
* [Greedy Best First](graphs/greedy_best_first.py)
* [Greedy Min Vertex Cover](graphs/greedy_min_vertex_cover.py)
@@ -479,6 +480,7 @@
* [Lib](linear_algebra/src/lib.py)
* [Polynom For Points](linear_algebra/src/polynom_for_points.py)
* [Power Iteration](linear_algebra/src/power_iteration.py)
+ * [Rank Of Matrix](linear_algebra/src/rank_of_matrix.py)
* [Rayleigh Quotient](linear_algebra/src/rayleigh_quotient.py)
* [Schur Complement](linear_algebra/src/schur_complement.py)
* [Test Linear Algebra](linear_algebra/src/test_linear_algebra.py)
@@ -651,6 +653,7 @@
* [Sigmoid Linear Unit](maths/sigmoid_linear_unit.py)
* [Signum](maths/signum.py)
* [Simpson Rule](maths/simpson_rule.py)
+ * [Simultaneous Linear Equation Solver](maths/simultaneous_linear_equation_solver.py)
* [Sin](maths/sin.py)
* [Sock Merchant](maths/sock_merchant.py)
* [Softmax](maths/softmax.py)
@@ -726,6 +729,7 @@
* [Maximum Subarray](other/maximum_subarray.py)
* [Maximum Subsequence](other/maximum_subsequence.py)
* [Nested Brackets](other/nested_brackets.py)
+ * [Number Container System](other/number_container_system.py)
* [Password](other/password.py)
* [Quine](other/quine.py)
* [Scoring Algorithm](other/scoring_algorithm.py)
diff --git a/other/number_container_system.py b/other/number_container_system.py
new file mode 100644
index 000000000000..f547bc8a229e
--- /dev/null
+++ b/other/number_container_system.py
@@ -0,0 +1,180 @@
+"""
+A number container system that uses binary search to delete and insert values into
+sorted arrays, giving O(n) write times (binary search plus list shifting) and O(1) read times.
+
+This container system holds integers at indexes.
+
+Further explained in this leetcode problem
+> https://leetcode.com/problems/minimum-cost-tree-from-leaf-values
+"""
+
+
+class NumberContainer:
+ def __init__(self) -> None:
+        # numbermap keys are numbers; each value is the list of indexes holding
+        # that number, sorted in ascending order
+        self.numbermap: dict[int, list[int]] = {}
+        # indexmap keys are indexes; each value is the number at that index
+ self.indexmap: dict[int, int] = {}
+
+ def binary_search_delete(self, array: list | str | range, item: int) -> list[int]:
+ """
+ Removes the item from the sorted array and returns
+ the new array.
+
+ >>> NumberContainer().binary_search_delete([1,2,3], 2)
+ [1, 3]
+ >>> NumberContainer().binary_search_delete([0, 0, 0], 0)
+ [0, 0]
+ >>> NumberContainer().binary_search_delete([-1, -1, -1], -1)
+ [-1, -1]
+ >>> NumberContainer().binary_search_delete([-1, 0], 0)
+ [-1]
+ >>> NumberContainer().binary_search_delete([-1, 0], -1)
+ [0]
+ >>> NumberContainer().binary_search_delete(range(7), 3)
+ [0, 1, 2, 4, 5, 6]
+ >>> NumberContainer().binary_search_delete([1.1, 2.2, 3.3], 2.2)
+ [1.1, 3.3]
+ >>> NumberContainer().binary_search_delete("abcde", "c")
+ ['a', 'b', 'd', 'e']
+ >>> NumberContainer().binary_search_delete([0, -1, 2, 4], 0)
+ Traceback (most recent call last):
+ ...
+ ValueError: Either the item is not in the array or the array was unsorted
+ >>> NumberContainer().binary_search_delete([2, 0, 4, -1, 11], -1)
+ Traceback (most recent call last):
+ ...
+ ValueError: Either the item is not in the array or the array was unsorted
+ >>> NumberContainer().binary_search_delete(125, 1)
+ Traceback (most recent call last):
+ ...
+ TypeError: binary_search_delete() only accepts either a list, range or str
+ """
+ if isinstance(array, (range, str)):
+ array = list(array)
+ elif not isinstance(array, list):
+ raise TypeError(
+ "binary_search_delete() only accepts either a list, range or str"
+ )
+
+ low = 0
+ high = len(array) - 1
+
+ while low <= high:
+ mid = (low + high) // 2
+ if array[mid] == item:
+ array.pop(mid)
+ return array
+ elif array[mid] < item:
+ low = mid + 1
+ else:
+ high = mid - 1
+ raise ValueError(
+ "Either the item is not in the array or the array was unsorted"
+ )
+
+ def binary_search_insert(self, array: list | str | range, index: int) -> list[int]:
+ """
+ Inserts the index into the sorted array
+ at the correct position.
+
+ >>> NumberContainer().binary_search_insert([1,2,3], 2)
+ [1, 2, 2, 3]
+ >>> NumberContainer().binary_search_insert([0,1,3], 2)
+ [0, 1, 2, 3]
+ >>> NumberContainer().binary_search_insert([-5, -3, 0, 0, 11, 103], 51)
+ [-5, -3, 0, 0, 11, 51, 103]
+ >>> NumberContainer().binary_search_insert([-5, -3, 0, 0, 11, 100, 103], 101)
+ [-5, -3, 0, 0, 11, 100, 101, 103]
+ >>> NumberContainer().binary_search_insert(range(10), 4)
+ [0, 1, 2, 3, 4, 4, 5, 6, 7, 8, 9]
+ >>> NumberContainer().binary_search_insert("abd", "c")
+ ['a', 'b', 'c', 'd']
+ >>> NumberContainer().binary_search_insert(131, 23)
+ Traceback (most recent call last):
+ ...
+ TypeError: binary_search_insert() only accepts either a list, range or str
+ """
+ if isinstance(array, (range, str)):
+ array = list(array)
+ elif not isinstance(array, list):
+ raise TypeError(
+ "binary_search_insert() only accepts either a list, range or str"
+ )
+
+ low = 0
+ high = len(array) - 1
+
+ while low <= high:
+ mid = (low + high) // 2
+ if array[mid] == index:
+ # If the item already exists in the array,
+ # insert it after the existing item
+ array.insert(mid + 1, index)
+ return array
+ elif array[mid] < index:
+ low = mid + 1
+ else:
+ high = mid - 1
+
+ # If the item doesn't exist in the array, insert it at the appropriate position
+ array.insert(low, index)
+ return array
+
+ def change(self, index: int, number: int) -> None:
+ """
+ Changes (sets) the index as number
+
+ >>> cont = NumberContainer()
+ >>> cont.change(0, 10)
+ >>> cont.change(0, 20)
+ >>> cont.change(-13, 20)
+ >>> cont.change(-100030, 20032903290)
+ """
+ # Remove previous index
+ if index in self.indexmap:
+ n = self.indexmap[index]
+ if len(self.numbermap[n]) == 1:
+ del self.numbermap[n]
+ else:
+ self.numbermap[n] = self.binary_search_delete(self.numbermap[n], index)
+
+ # Set new index
+ self.indexmap[index] = number
+
+ # Number not seen before or empty so insert number value
+ if number not in self.numbermap:
+ self.numbermap[number] = [index]
+
+        # Otherwise, perform a binary search insertion to place the index
+        # at the correct position in the sorted list
+ else:
+ self.numbermap[number] = self.binary_search_insert(
+ self.numbermap[number], index
+ )
+
+ def find(self, number: int) -> int:
+ """
+ Returns the smallest index where the number is.
+
+ >>> cont = NumberContainer()
+ >>> cont.find(10)
+ -1
+ >>> cont.change(0, 10)
+ >>> cont.find(10)
+ 0
+ >>> cont.change(0, 20)
+ >>> cont.find(10)
+ -1
+ >>> cont.find(20)
+ 0
+ """
+ # Simply return the 0th index (smallest) of the indexes found (or -1)
+ return self.numbermap.get(number, [-1])[0]
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
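A minimal usage sketch of the container (assuming NumberContainer is in scope): change(index, number) overwrites whatever was previously stored at the index, and find(number) returns the smallest index currently holding that number, or -1:

    cont = NumberContainer()
    cont.change(1, 10)  # index 1 -> 10
    cont.change(3, 10)  # index 3 -> 10
    print(cont.find(10))  # 1, the smallest index holding 10
    cont.change(1, 20)  # index 1 now holds 20
    print(cont.find(10))  # 3
    print(cont.find(99))  # -1, number not stored anywhere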
From 9c9da8ebf1d35ae40ac5438c05cc273f7c6d4473 Mon Sep 17 00:00:00 2001
From: Jan Wojciechowski <96974442+yanvoi@users.noreply.github.com>
Date: Fri, 9 Jun 2023 11:06:37 +0200
Subject: [PATCH 095/808] Improve readability of
ciphers/mixed_keyword_cypher.py (#8626)
* refactored the code
* the code will now pass the test
* looked more into it and fixed the logic
* made the code easier to read, added comments and fixed the logic
* got rid of redundant code + plaintext can contain chars that are not in the alphabet
* fixed the redundant conversion of ascii_uppercase to a list
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* keyword and plaintext won't have default values
* ran the ruff command
* Update linear_discriminant_analysis.py and rsa_cipher.py (#8680)
* Update rsa_cipher.py by replacing %s with {}
* Update rsa_cipher.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update linear_discriminant_analysis.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update linear_discriminant_analysis.py
* Update linear_discriminant_analysis.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update linear_discriminant_analysis.py
* Update linear_discriminant_analysis.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update linear_discriminant_analysis.py
* Update machine_learning/linear_discriminant_analysis.py
Co-authored-by: Christian Clauss
* Update linear_discriminant_analysis.py
* updated
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
* fixed some difficulties
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* added comments, made printing mapping optional, added 1 test
* shortened the line that was too long
* Update ciphers/mixed_keyword_cypher.py
Co-authored-by: Tianyi Zheng
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Rohan Anand <96521078+rohan472000@users.noreply.github.com>
Co-authored-by: Christian Clauss
Co-authored-by: Tianyi Zheng
---
ciphers/mixed_keyword_cypher.py | 100 +++++++++++++++++---------------
1 file changed, 53 insertions(+), 47 deletions(-)
diff --git a/ciphers/mixed_keyword_cypher.py b/ciphers/mixed_keyword_cypher.py
index 93a0e3acb7b1..b984808fced6 100644
--- a/ciphers/mixed_keyword_cypher.py
+++ b/ciphers/mixed_keyword_cypher.py
@@ -1,7 +1,11 @@
-def mixed_keyword(key: str = "college", pt: str = "UNIVERSITY") -> str:
- """
+from string import ascii_uppercase
+
- For key:hello
+def mixed_keyword(
+ keyword: str, plaintext: str, verbose: bool = False, alphabet: str = ascii_uppercase
+) -> str:
+ """
+ For keyword: hello
H E L O
A B C D
@@ -12,58 +16,60 @@ def mixed_keyword(key: str = "college", pt: str = "UNIVERSITY") -> str:
Y Z
and map vertically
- >>> mixed_keyword("college", "UNIVERSITY") # doctest: +NORMALIZE_WHITESPACE
+ >>> mixed_keyword("college", "UNIVERSITY", True) # doctest: +NORMALIZE_WHITESPACE
{'A': 'C', 'B': 'A', 'C': 'I', 'D': 'P', 'E': 'U', 'F': 'Z', 'G': 'O', 'H': 'B',
'I': 'J', 'J': 'Q', 'K': 'V', 'L': 'L', 'M': 'D', 'N': 'K', 'O': 'R', 'P': 'W',
'Q': 'E', 'R': 'F', 'S': 'M', 'T': 'S', 'U': 'X', 'V': 'G', 'W': 'H', 'X': 'N',
'Y': 'T', 'Z': 'Y'}
'XKJGUFMJST'
+
+ >>> mixed_keyword("college", "UNIVERSITY", False) # doctest: +NORMALIZE_WHITESPACE
+ 'XKJGUFMJST'
"""
- key = key.upper()
- pt = pt.upper()
- temp = []
- for i in key:
- if i not in temp:
- temp.append(i)
- len_temp = len(temp)
- # print(temp)
- alpha = []
- modalpha = []
- for j in range(65, 91):
- t = chr(j)
- alpha.append(t)
- if t not in temp:
- temp.append(t)
- # print(temp)
- r = int(26 / 4)
- # print(r)
- k = 0
- for _ in range(r):
- s = []
- for _ in range(len_temp):
- s.append(temp[k])
- if k >= 25:
- break
- k += 1
- modalpha.append(s)
- # print(modalpha)
- d = {}
- j = 0
- k = 0
- for j in range(len_temp):
- for m in modalpha:
- if not len(m) - 1 >= j:
- break
- d[alpha[k]] = m[j]
- if not k < 25:
+ keyword = keyword.upper()
+ plaintext = plaintext.upper()
+ alphabet_set = set(alphabet)
+
+ # create a list of unique characters in the keyword - their order matters
+ # it determines how we will map plaintext characters to the ciphertext
+ unique_chars = []
+ for char in keyword:
+ if char in alphabet_set and char not in unique_chars:
+ unique_chars.append(char)
+ # the number of those unique characters will determine the number of rows
+ num_unique_chars_in_keyword = len(unique_chars)
+
+ # create a shifted version of the alphabet
+ shifted_alphabet = unique_chars + [
+ char for char in alphabet if char not in unique_chars
+ ]
+
+ # create a modified alphabet by splitting the shifted alphabet into rows
+ modified_alphabet = [
+ shifted_alphabet[k : k + num_unique_chars_in_keyword]
+ for k in range(0, 26, num_unique_chars_in_keyword)
+ ]
+
+ # map the alphabet characters to the modified alphabet characters
+ # going 'vertically' through the modified alphabet - consider columns first
+ mapping = {}
+ letter_index = 0
+ for column in range(num_unique_chars_in_keyword):
+ for row in modified_alphabet:
+ # if current row (the last one) is too short, break out of loop
+ if len(row) <= column:
break
- k += 1
- print(d)
- cypher = ""
- for i in pt:
- cypher += d[i]
- return cypher
+
+ # map current letter to letter in modified alphabet
+ mapping[alphabet[letter_index]] = row[column]
+ letter_index += 1
+
+ if verbose:
+ print(mapping)
+ # create the encrypted text by mapping the plaintext to the modified alphabet
+ return "".join(mapping[char] if char in mapping else char for char in plaintext)
if __name__ == "__main__":
+ # example use
print(mixed_keyword("college", "UNIVERSITY"))
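
As a hedged aside (not part of the patch), the substitution built above is a
bijection on the alphabet, so decryption is just the inverted mapping. A
self-contained sketch that rebuilds the same table and reverses it:

    from string import ascii_uppercase

    def mixed_keyword_decrypt(keyword: str, ciphertext: str) -> str:
        keyword = keyword.upper()
        unique_chars = []
        for char in keyword:
            if char in ascii_uppercase and char not in unique_chars:
                unique_chars.append(char)
        shifted = unique_chars + [c for c in ascii_uppercase if c not in unique_chars]
        rows = [
            shifted[k : k + len(unique_chars)]
            for k in range(0, 26, len(unique_chars))
        ]
        # rebuild the vertical (column-first) mapping, then invert it
        mapping = {}
        letter_index = 0
        for column in range(len(unique_chars)):
            for row in rows:
                if len(row) <= column:
                    break
                mapping[ascii_uppercase[letter_index]] = row[column]
                letter_index += 1
        inverse = {v: k for k, v in mapping.items()}
        return "".join(inverse.get(c, c) for c in ciphertext.upper())

    print(mixed_keyword_decrypt("college", "XKJGUFMJST"))  # UNIVERSITY
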
From daa0c8f3d340485ce295570e6d76b38891e371bd Mon Sep 17 00:00:00 2001
From: Caeden Perelli-Harris
Date: Sat, 10 Jun 2023 13:21:49 +0100
Subject: [PATCH 096/808] Create count negative numbers in matrix algorithm
(#8813)
* updating DIRECTORY.md
* feat: Count negative numbers in sorted matrix
* updating DIRECTORY.md
* chore: Fix pre-commit
* refactor: Combine functions into iteration
* style: Reformat reference
* feat: Add timings of each implementation
* chore: Fix problems with algorithms-keeper bot
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* test: Remove doctest from benchmark function
* Update matrix/count_negative_numbers_in_sorted_matrix.py
Co-authored-by: Christian Clauss
* Update matrix/count_negative_numbers_in_sorted_matrix.py
Co-authored-by: Christian Clauss
* Update matrix/count_negative_numbers_in_sorted_matrix.py
Co-authored-by: Christian Clauss
* Update matrix/count_negative_numbers_in_sorted_matrix.py
Co-authored-by: Christian Clauss
* Update matrix/count_negative_numbers_in_sorted_matrix.py
Co-authored-by: Christian Clauss
* Update matrix/count_negative_numbers_in_sorted_matrix.py
Co-authored-by: Christian Clauss
* refactor: Use sum instead of large iteration
* refactor: Use len not sum
* Update count_negative_numbers_in_sorted_matrix.py
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
DIRECTORY.md | 2 +
...count_negative_numbers_in_sorted_matrix.py | 151 ++++++++++++++++++
2 files changed, 153 insertions(+)
create mode 100644 matrix/count_negative_numbers_in_sorted_matrix.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 6dac4a9a5783..8511c261a3d2 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -679,6 +679,7 @@
## Matrix
* [Binary Search Matrix](matrix/binary_search_matrix.py)
* [Count Islands In Matrix](matrix/count_islands_in_matrix.py)
+ * [Count Negative Numbers In Sorted Matrix](matrix/count_negative_numbers_in_sorted_matrix.py)
* [Count Paths](matrix/count_paths.py)
* [Cramers Rule 2X2](matrix/cramers_rule_2x2.py)
* [Inverse Of Matrix](matrix/inverse_of_matrix.py)
@@ -753,6 +754,7 @@
* [Potential Energy](physics/potential_energy.py)
* [Rms Speed Of Molecule](physics/rms_speed_of_molecule.py)
* [Shear Stress](physics/shear_stress.py)
+ * [Speed Of Sound](physics/speed_of_sound.py)
## Project Euler
* Problem 001
diff --git a/matrix/count_negative_numbers_in_sorted_matrix.py b/matrix/count_negative_numbers_in_sorted_matrix.py
new file mode 100644
index 000000000000..2799ff3b45fe
--- /dev/null
+++ b/matrix/count_negative_numbers_in_sorted_matrix.py
@@ -0,0 +1,151 @@
+"""
+Given a matrix of numbers in which all rows and all columns are sorted in
+decreasing order, return the number of negative numbers in the grid.
+
+Reference: https://leetcode.com/problems/count-negative-numbers-in-a-sorted-matrix
+"""
+
+
+def generate_large_matrix() -> list[list[int]]:
+ """
+ >>> generate_large_matrix() # doctest: +ELLIPSIS
+ [[1000, ..., -999], [999, ..., -1001], ..., [2, ..., -1998]]
+ """
+ return [list(range(1000 - i, -1000 - i, -1)) for i in range(1000)]
+
+
+grid = generate_large_matrix()
+test_grids = (
+ [[4, 3, 2, -1], [3, 2, 1, -1], [1, 1, -1, -2], [-1, -1, -2, -3]],
+ [[3, 2], [1, 0]],
+ [[7, 7, 6]],
+ [[7, 7, 6], [-1, -2, -3]],
+ grid,
+)
+
+
+def validate_grid(grid: list[list[int]]) -> None:
+ """
+    Validate that the rows and columns of the grid are sorted in decreasing order.
+ >>> for grid in test_grids:
+ ... validate_grid(grid)
+ """
+ assert all(row == sorted(row, reverse=True) for row in grid)
+ assert all(list(col) == sorted(col, reverse=True) for col in zip(*grid))
+
+
+def find_negative_index(array: list[int]) -> int:
+ """
+ Find the smallest negative index
+
+ >>> find_negative_index([0,0,0,0])
+ 4
+ >>> find_negative_index([4,3,2,-1])
+ 3
+ >>> find_negative_index([1,0,-1,-10])
+ 2
+ >>> find_negative_index([0,0,0,-1])
+ 3
+ >>> find_negative_index([11,8,7,-3,-5,-9])
+ 3
+ >>> find_negative_index([-1,-1,-2,-3])
+ 0
+ >>> find_negative_index([5,1,0])
+ 3
+ >>> find_negative_index([-5,-5,-5])
+ 0
+ >>> find_negative_index([0])
+ 1
+ >>> find_negative_index([])
+ 0
+ """
+ left = 0
+ right = len(array) - 1
+
+ # Edge cases such as no values or all numbers are negative.
+ if not array or array[0] < 0:
+ return 0
+
+ while right + 1 > left:
+ mid = (left + right) // 2
+ num = array[mid]
+
+        # mid is the boundary if its value is negative and the previous value is not.
+ if num < 0 and array[mid - 1] >= 0:
+ return mid
+
+ if num >= 0:
+ left = mid + 1
+ else:
+ right = mid - 1
+    # No negative numbers, so return the length of the array (one past the last index).
+ return len(array)
+
+
+def count_negatives_binary_search(grid: list[list[int]]) -> int:
+ """
+    An O(m log n) solution that uses binary search on each row to find the
+    boundary between positive and negative numbers
+
+ >>> [count_negatives_binary_search(grid) for grid in test_grids]
+ [8, 0, 0, 3, 1498500]
+ """
+ total = 0
+ bound = len(grid[0])
+
+ for i in range(len(grid)):
+ bound = find_negative_index(grid[i][:bound])
+ total += bound
+ return (len(grid) * len(grid[0])) - total
+
+
+def count_negatives_brute_force(grid: list[list[int]]) -> int:
+ """
+    This brute-force solution is O(m * n) because it visits every entry of the grid.
+
+ >>> [count_negatives_brute_force(grid) for grid in test_grids]
+ [8, 0, 0, 3, 1498500]
+ """
+ return len([number for row in grid for number in row if number < 0])
+
+
+def count_negatives_brute_force_with_break(grid: list[list[int]]) -> int:
+ """
+ Similar to the brute force solution above but uses break in order to reduce the
+ number of iterations.
+
+ >>> [count_negatives_brute_force_with_break(grid) for grid in test_grids]
+ [8, 0, 0, 3, 1498500]
+ """
+ total = 0
+ for row in grid:
+ for i, number in enumerate(row):
+ if number < 0:
+ total += len(row) - i
+ break
+ return total
+
+
+def benchmark() -> None:
+ """Benchmark our functions next to each other"""
+ from timeit import timeit
+
+ print("Running benchmarks")
+ setup = (
+ "from __main__ import count_negatives_binary_search, "
+ "count_negatives_brute_force, count_negatives_brute_force_with_break, grid"
+ )
+ for func in (
+ "count_negatives_binary_search", # took 0.7727 seconds
+ "count_negatives_brute_force_with_break", # took 4.6505 seconds
+ "count_negatives_brute_force", # took 12.8160 seconds
+ ):
+ time = timeit(f"{func}(grid=grid)", setup=setup, number=500)
+ print(f"{func}() took {time:0.4f} seconds")
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
+ benchmark()
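
Because both the rows and the columns are sorted in decreasing order, there is
also a classic O(m + n) "staircase" walk that none of the three functions above
use; a hedged sketch (my addition, same expected results assumed):

    def count_negatives_staircase(grid: list[list[int]]) -> int:
        """Walk from the top-right corner: on a negative value step left
        (everything below it in that column is also negative), on a
        non-negative value step down."""
        rows, cols = len(grid), len(grid[0])
        row, col, total = 0, cols - 1, 0
        while row < rows and col >= 0:
            if grid[row][col] < 0:
                total += rows - row  # this entry and everything below it
                col -= 1
            else:
                row += 1
        return total

    # [count_negatives_staircase(g) for g in test_grids] -> [8, 0, 0, 3, 1498500]
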
From 46379861257d43bb7140d261094bf17dc414950f Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Tue, 13 Jun 2023 00:09:33 +0200
Subject: [PATCH 097/808] [pre-commit.ci] pre-commit autoupdate (#8817)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
updates:
- [github.com/charliermarsh/ruff-pre-commit: v0.0.270 → v0.0.272](https://github.com/charliermarsh/ruff-pre-commit/compare/v0.0.270...v0.0.272)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
.pre-commit-config.yaml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 4c70ae219f74..1d4b73681108 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,7 @@ repos:
- id: auto-walrus
- repo: https://github.com/charliermarsh/ruff-pre-commit
- rev: v0.0.270
+ rev: v0.0.272
hooks:
- id: ruff
From e6f89a6b89941ffed911e96362be3611a45420e7 Mon Sep 17 00:00:00 2001
From: Ilkin Mengusoglu <113149540+imengus@users.noreply.github.com>
Date: Sun, 18 Jun 2023 17:00:02 +0100
Subject: [PATCH 098/808] Simplex algorithm (#8825)
* feat: added simplex.py
* added docstrings
* Update linear_programming/simplex.py
Co-authored-by: Caeden Perelli-Harris
* Update linear_programming/simplex.py
Co-authored-by: Caeden Perelli-Harris
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update linear_programming/simplex.py
Co-authored-by: Caeden Perelli-Harris
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* ruff fix
Co-authored by: CaedenPH
* removed README to add in separate PR
* Update linear_programming/simplex.py
Co-authored-by: Tianyi Zheng
* Update linear_programming/simplex.py
Co-authored-by: Tianyi Zheng
* fix class docstring
* add comments
---------
Co-authored-by: Caeden Perelli-Harris
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Tianyi Zheng
---
linear_programming/simplex.py | 311 ++++++++++++++++++++++++++++++++++
1 file changed, 311 insertions(+)
create mode 100644 linear_programming/simplex.py
diff --git a/linear_programming/simplex.py b/linear_programming/simplex.py
new file mode 100644
index 000000000000..ba64add40b5f
--- /dev/null
+++ b/linear_programming/simplex.py
@@ -0,0 +1,311 @@
+"""
+Python implementation of the simplex algorithm for solving linear programs in
+tabular form with
+- `>=`, `<=`, and `=` constraints and
+- each variable `x1, x2, ...>= 0`.
+
+See https://gist.github.com/imengus/f9619a568f7da5bc74eaf20169a24d98 for how to
+convert linear programs to simplex tableaus, and the steps taken in the simplex
+algorithm.
+
+Resources:
+https://en.wikipedia.org/wiki/Simplex_algorithm
+https://tinyurl.com/simplex4beginners
+"""
+from typing import Any
+
+import numpy as np
+
+
+class Tableau:
+ """Operate on simplex tableaus
+
+ >>> t = Tableau(np.array([[-1,-1,0,0,-1],[1,3,1,0,4],[3,1,0,1,4.]]), 2)
+ Traceback (most recent call last):
+ ...
+ ValueError: RHS must be > 0
+ """
+
+ def __init__(self, tableau: np.ndarray, n_vars: int) -> None:
+ # Check if RHS is negative
+ if np.any(tableau[:, -1], where=tableau[:, -1] < 0):
+ raise ValueError("RHS must be > 0")
+
+ self.tableau = tableau
+ self.n_rows, _ = tableau.shape
+
+ # Number of decision variables x1, x2, x3...
+ self.n_vars = n_vars
+
+ # Number of artificial variables to be minimised
+ self.n_art_vars = len(np.where(tableau[self.n_vars : -1] == -1)[0])
+
+ # 2 if there are >= or == constraints (nonstandard), 1 otherwise (std)
+ self.n_stages = (self.n_art_vars > 0) + 1
+
+ # Number of slack variables added to make inequalities into equalities
+ self.n_slack = self.n_rows - self.n_stages
+
+ # Objectives for each stage
+ self.objectives = ["max"]
+
+ # In two stage simplex, first minimise then maximise
+ if self.n_art_vars:
+ self.objectives.append("min")
+
+ self.col_titles = [""]
+
+ # Index of current pivot row and column
+ self.row_idx = None
+ self.col_idx = None
+
+ # Does objective row only contain (non)-negative values?
+ self.stop_iter = False
+
+ @staticmethod
+ def generate_col_titles(*args: int) -> list[str]:
+ """Generate column titles for tableau of specific dimensions
+
+ >>> Tableau.generate_col_titles(2, 3, 1)
+ ['x1', 'x2', 's1', 's2', 's3', 'a1', 'RHS']
+
+ >>> Tableau.generate_col_titles()
+ Traceback (most recent call last):
+ ...
+ ValueError: Must provide n_vars, n_slack, and n_art_vars
+ >>> Tableau.generate_col_titles(-2, 3, 1)
+ Traceback (most recent call last):
+ ...
+ ValueError: All arguments must be non-negative integers
+ """
+ if len(args) != 3:
+ raise ValueError("Must provide n_vars, n_slack, and n_art_vars")
+
+ if not all(x >= 0 and isinstance(x, int) for x in args):
+ raise ValueError("All arguments must be non-negative integers")
+
+ # decision | slack | artificial
+ string_starts = ["x", "s", "a"]
+ titles = []
+ for i in range(3):
+ for j in range(args[i]):
+ titles.append(string_starts[i] + str(j + 1))
+ titles.append("RHS")
+ return titles
+
+ def find_pivot(self, tableau: np.ndarray) -> tuple[Any, Any]:
+ """Finds the pivot row and column.
+ >>> t = Tableau(np.array([[-2,1,0,0,0], [3,1,1,0,6], [1,2,0,1,7.]]), 2)
+ >>> t.find_pivot(t.tableau)
+ (1, 0)
+ """
+ objective = self.objectives[-1]
+
+ # Find entries of highest magnitude in objective rows
+ sign = (objective == "min") - (objective == "max")
+ col_idx = np.argmax(sign * tableau[0, : self.n_vars])
+
+ # Choice is only valid if below 0 for maximise, and above for minimise
+ if sign * self.tableau[0, col_idx] <= 0:
+ self.stop_iter = True
+ return 0, 0
+
+ # Pivot row is chosen as having the lowest quotient when elements of
+ # the pivot column divide the right-hand side
+
+ # Slice excluding the objective rows
+ s = slice(self.n_stages, self.n_rows)
+
+ # RHS
+ dividend = tableau[s, -1]
+
+ # Elements of pivot column within slice
+ divisor = tableau[s, col_idx]
+
+ # Array filled with nans
+ nans = np.full(self.n_rows - self.n_stages, np.nan)
+
+        # If an element in the pivot column is greater than zero, return the
+        # quotient; otherwise nan
+ quotients = np.divide(dividend, divisor, out=nans, where=divisor > 0)
+
+ # Arg of minimum quotient excluding the nan values. n_stages is added
+ # to compensate for earlier exclusion of objective columns
+ row_idx = np.nanargmin(quotients) + self.n_stages
+ return row_idx, col_idx
+
+ def pivot(self, tableau: np.ndarray, row_idx: int, col_idx: int) -> np.ndarray:
+ """Pivots on value on the intersection of pivot row and column.
+
+ >>> t = Tableau(np.array([[-2,-3,0,0,0],[1,3,1,0,4],[3,1,0,1,4.]]), 2)
+ >>> t.pivot(t.tableau, 1, 0).tolist()
+ ... # doctest: +NORMALIZE_WHITESPACE
+ [[0.0, 3.0, 2.0, 0.0, 8.0],
+ [1.0, 3.0, 1.0, 0.0, 4.0],
+ [0.0, -8.0, -3.0, 1.0, -8.0]]
+ """
+ # Avoid changes to original tableau
+ piv_row = tableau[row_idx].copy()
+
+ piv_val = piv_row[col_idx]
+
+ # Entry becomes 1
+ piv_row *= 1 / piv_val
+
+ # Variable in pivot column becomes basic, ie the only non-zero entry
+ for idx, coeff in enumerate(tableau[:, col_idx]):
+ tableau[idx] += -coeff * piv_row
+ tableau[row_idx] = piv_row
+ return tableau
+
+ def change_stage(self, tableau: np.ndarray) -> np.ndarray:
+ """Exits first phase of the two-stage method by deleting artificial
+ rows and columns, or completes the algorithm if exiting the standard
+ case.
+
+ >>> t = Tableau(np.array([
+ ... [3, 3, -1, -1, 0, 0, 4],
+ ... [2, 1, 0, 0, 0, 0, 0.],
+ ... [1, 2, -1, 0, 1, 0, 2],
+ ... [2, 1, 0, -1, 0, 1, 2]
+ ... ]), 2)
+ >>> t.change_stage(t.tableau).tolist()
+ ... # doctest: +NORMALIZE_WHITESPACE
+ [[2.0, 1.0, 0.0, 0.0, 0.0, 0.0],
+ [1.0, 2.0, -1.0, 0.0, 1.0, 2.0],
+ [2.0, 1.0, 0.0, -1.0, 0.0, 2.0]]
+ """
+ # Objective of original objective row remains
+ self.objectives.pop()
+
+ if not self.objectives:
+ return tableau
+
+ # Slice containing ids for artificial columns
+ s = slice(-self.n_art_vars - 1, -1)
+
+ # Delete the artificial variable columns
+ tableau = np.delete(tableau, s, axis=1)
+
+ # Delete the objective row of the first stage
+ tableau = np.delete(tableau, 0, axis=0)
+
+ self.n_stages = 1
+ self.n_rows -= 1
+ self.n_art_vars = 0
+ self.stop_iter = False
+ return tableau
+
+ def run_simplex(self) -> dict[Any, Any]:
+ """Operate on tableau until objective function cannot be
+ improved further.
+
+ # Standard linear program:
+ Max: x1 + x2
+ ST: x1 + 3x2 <= 4
+ 3x1 + x2 <= 4
+ >>> Tableau(np.array([[-1,-1,0,0,0],[1,3,1,0,4],[3,1,0,1,4.]]),
+ ... 2).run_simplex()
+ {'P': 2.0, 'x1': 1.0, 'x2': 1.0}
+
+ # Optimal tableau input:
+ >>> Tableau(np.array([
+ ... [0, 0, 0.25, 0.25, 2],
+ ... [0, 1, 0.375, -0.125, 1],
+ ... [1, 0, -0.125, 0.375, 1]
+ ... ]), 2).run_simplex()
+ {'P': 2.0, 'x1': 1.0, 'x2': 1.0}
+
+ # Non-standard: >= constraints
+ Max: 2x1 + 3x2 + x3
+ ST: x1 + x2 + x3 <= 40
+ 2x1 + x2 - x3 >= 10
+ - x2 + x3 >= 10
+ >>> Tableau(np.array([
+ ... [2, 0, 0, 0, -1, -1, 0, 0, 20],
+ ... [-2, -3, -1, 0, 0, 0, 0, 0, 0],
+ ... [1, 1, 1, 1, 0, 0, 0, 0, 40],
+ ... [2, 1, -1, 0, -1, 0, 1, 0, 10],
+ ... [0, -1, 1, 0, 0, -1, 0, 1, 10.]
+ ... ]), 3).run_simplex()
+ {'P': 70.0, 'x1': 10.0, 'x2': 10.0, 'x3': 20.0}
+
+ # Non standard: minimisation and equalities
+ Min: x1 + x2
+ ST: 2x1 + x2 = 12
+ 6x1 + 5x2 = 40
+ >>> Tableau(np.array([
+ ... [8, 6, 0, -1, 0, -1, 0, 0, 52],
+ ... [1, 1, 0, 0, 0, 0, 0, 0, 0],
+ ... [2, 1, 1, 0, 0, 0, 0, 0, 12],
+ ... [2, 1, 0, -1, 0, 0, 1, 0, 12],
+ ... [6, 5, 0, 0, 1, 0, 0, 0, 40],
+ ... [6, 5, 0, 0, 0, -1, 0, 1, 40.]
+ ... ]), 2).run_simplex()
+ {'P': 7.0, 'x1': 5.0, 'x2': 2.0}
+ """
+ # Stop simplex algorithm from cycling.
+ for _ in range(100):
+ # Completion of each stage removes an objective. If both stages
+ # are complete, then no objectives are left
+ if not self.objectives:
+ self.col_titles = self.generate_col_titles(
+ self.n_vars, self.n_slack, self.n_art_vars
+ )
+
+ # Find the values of each variable at optimal solution
+ return self.interpret_tableau(self.tableau, self.col_titles)
+
+ row_idx, col_idx = self.find_pivot(self.tableau)
+
+ # If there are no more negative values in objective row
+ if self.stop_iter:
+ # Delete artificial variable columns and rows. Update attributes
+ self.tableau = self.change_stage(self.tableau)
+ else:
+ self.tableau = self.pivot(self.tableau, row_idx, col_idx)
+ return {}
+
+ def interpret_tableau(
+ self, tableau: np.ndarray, col_titles: list[str]
+ ) -> dict[str, float]:
+ """Given the final tableau, add the corresponding values of the basic
+ decision variables to the `output_dict`
+ >>> tableau = np.array([
+ ... [0,0,0.875,0.375,5],
+ ... [0,1,0.375,-0.125,1],
+ ... [1,0,-0.125,0.375,1]
+ ... ])
+ >>> t = Tableau(tableau, 2)
+ >>> t.interpret_tableau(tableau, ["x1", "x2", "s1", "s2", "RHS"])
+ {'P': 5.0, 'x1': 1.0, 'x2': 1.0}
+ """
+ # P = RHS of final tableau
+ output_dict = {"P": abs(tableau[0, -1])}
+
+ for i in range(self.n_vars):
+ # Gives ids of nonzero entries in the ith column
+ nonzero = np.nonzero(tableau[:, i])
+ n_nonzero = len(nonzero[0])
+
+ # First entry in the nonzero ids
+ nonzero_rowidx = nonzero[0][0]
+ nonzero_val = tableau[nonzero_rowidx, i]
+
+ # If there is only one nonzero value in column, which is one
+ if n_nonzero == nonzero_val == 1:
+ rhs_val = tableau[nonzero_rowidx, -1]
+ output_dict[col_titles[i]] = rhs_val
+
+ # Check for basic variables
+ for title in col_titles:
+ # Don't add RHS or slack variables to output dict
+ if title[0] not in "R-s-a":
+ output_dict.setdefault(title, 0)
+ return output_dict
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
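
To make the tableau layout concrete, here is a hedged sketch (not part of the
patch) of building the standard-form input for "Max x1 + x2 subject to
x1 + 3*x2 <= 4 and 3*x1 + x2 <= 4": the negated objective row comes first,
then one row per constraint with an identity block for the slack variables
and the RHS in the last column.

    import numpy as np

    a = np.array([[1.0, 3.0], [3.0, 1.0]])  # constraint coefficients
    b = np.array([4.0, 4.0])                # right-hand sides
    c = np.array([1.0, 1.0])                # objective coefficients

    n_constraints, n_vars = a.shape
    tableau = np.zeros((n_constraints + 1, n_vars + n_constraints + 1))
    tableau[0, :n_vars] = -c                        # objective row, negated
    tableau[1:, :n_vars] = a                        # constraint coefficients
    tableau[1:, n_vars:-1] = np.eye(n_constraints)  # slack variable block
    tableau[1:, -1] = b                             # RHS column

    # Feeding this into the class above reproduces its first doctest:
    # Tableau(tableau, n_vars).run_simplex() -> {'P': 2.0, 'x1': 1.0, 'x2': 1.0}
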
From b0f871032e78dd1d2f2214acbaae2fac88fa55b0 Mon Sep 17 00:00:00 2001
From: Frank-1998 <77809242+Frank-1998@users.noreply.github.com>
Date: Sun, 18 Jun 2023 10:30:06 -0600
Subject: [PATCH 099/808] Fix removing the root node in binary_search_tree.py
removes the whole tree (#8752)
* fix issue #8715
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
data_structures/binary_tree/binary_search_tree.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/data_structures/binary_tree/binary_search_tree.py b/data_structures/binary_tree/binary_search_tree.py
index cd88cc10e697..c72195424c7c 100644
--- a/data_structures/binary_tree/binary_search_tree.py
+++ b/data_structures/binary_tree/binary_search_tree.py
@@ -40,7 +40,7 @@ def __reassign_nodes(self, node: Node, new_children: Node | None) -> None:
else:
node.parent.left = new_children
else:
- self.root = None
+ self.root = new_children
def is_right(self, node: Node) -> bool:
if node.parent and node.parent.right:
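
A hedged repro sketch of the bug this one-liner fixes (method names assumed
from the module's public API, so kept as comments):

    # from data_structures.binary_tree.binary_search_tree import BinarySearchTree
    # t = BinarySearchTree()
    # t.insert(8, 3, 10)
    # t.remove(8)  # deleting the root
    # Before the fix, __reassign_nodes() set self.root = None here, discarding
    # the surviving subtree; after the fix, the child node becomes the new root.
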
From ea6c6056cf2215358834710bf89422310f831178 Mon Sep 17 00:00:00 2001
From: Turro <42980188+smturro2@users.noreply.github.com>
Date: Mon, 19 Jun 2023 06:46:29 -0500
Subject: [PATCH 100/808] Added apr_interest function to financial (#6025)
* Added apr_interest function to financial
* Update interest.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update financial/interest.py
* float
---------
Co-authored-by: Christian Clauss
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
financial/interest.py | 41 +++++++++++++++++++++++++++++++++++++++--
1 file changed, 39 insertions(+), 2 deletions(-)
diff --git a/financial/interest.py b/financial/interest.py
index c69c730457d9..33d02e27ccb3 100644
--- a/financial/interest.py
+++ b/financial/interest.py
@@ -4,7 +4,7 @@
def simple_interest(
- principal: float, daily_interest_rate: float, days_between_payments: int
+ principal: float, daily_interest_rate: float, days_between_payments: float
) -> float:
"""
>>> simple_interest(18000.0, 0.06, 3)
@@ -42,7 +42,7 @@ def simple_interest(
def compound_interest(
principal: float,
nominal_annual_interest_rate_percentage: float,
- number_of_compounding_periods: int,
+ number_of_compounding_periods: float,
) -> float:
"""
>>> compound_interest(10000.0, 0.05, 3)
@@ -77,6 +77,43 @@ def compound_interest(
)
+def apr_interest(
+ principal: float,
+ nominal_annual_percentage_rate: float,
+ number_of_years: float,
+) -> float:
+ """
+ >>> apr_interest(10000.0, 0.05, 3)
+ 1618.223072263547
+ >>> apr_interest(10000.0, 0.05, 1)
+ 512.6749646744732
+ >>> apr_interest(0.5, 0.05, 3)
+ 0.08091115361317736
+ >>> apr_interest(10000.0, 0.06, -4)
+ Traceback (most recent call last):
+ ...
+ ValueError: number_of_years must be > 0
+ >>> apr_interest(10000.0, -3.5, 3.0)
+ Traceback (most recent call last):
+ ...
+ ValueError: nominal_annual_percentage_rate must be >= 0
+ >>> apr_interest(-5500.0, 0.01, 5)
+ Traceback (most recent call last):
+ ...
+ ValueError: principal must be > 0
+ """
+ if number_of_years <= 0:
+ raise ValueError("number_of_years must be > 0")
+ if nominal_annual_percentage_rate < 0:
+ raise ValueError("nominal_annual_percentage_rate must be >= 0")
+ if principal <= 0:
+ raise ValueError("principal must be > 0")
+
+ return compound_interest(
+ principal, nominal_annual_percentage_rate / 365, number_of_years * 365
+ )
+
+
if __name__ == "__main__":
import doctest
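
In other words, apr_interest() is daily compounding of the annual rate; a
hedged sanity check (not part of the patch) against its first doctest:

    principal, apr, years = 10000.0, 0.05, 3
    print(principal * ((1 + apr / 365) ** (365 * years) - 1))
    # 1618.223072263547, matching apr_interest(10000.0, 0.05, 3)
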
From 0dee4a402c85981af0c2d4c53af27a69a7eb91bf Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Tue, 20 Jun 2023 15:56:14 +0200
Subject: [PATCH 101/808] [pre-commit.ci] pre-commit autoupdate (#8827)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/codespell-project/codespell: v2.2.4 → v2.2.5](https://github.com/codespell-project/codespell/compare/v2.2.4...v2.2.5)
- [github.com/tox-dev/pyproject-fmt: 0.11.2 → 0.12.0](https://github.com/tox-dev/pyproject-fmt/compare/0.11.2...0.12.0)
* updating DIRECTORY.md
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
.pre-commit-config.yaml | 4 ++--
DIRECTORY.md | 3 +++
2 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 1d4b73681108..591fd7819a5a 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -26,14 +26,14 @@ repos:
- id: black
- repo: https://github.com/codespell-project/codespell
- rev: v2.2.4
+ rev: v2.2.5
hooks:
- id: codespell
additional_dependencies:
- tomli
- repo: https://github.com/tox-dev/pyproject-fmt
- rev: "0.11.2"
+ rev: "0.12.0"
hooks:
- id: pyproject-fmt
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 8511c261a3d2..6ec8d5111176 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -486,6 +486,9 @@
* [Test Linear Algebra](linear_algebra/src/test_linear_algebra.py)
* [Transformations 2D](linear_algebra/src/transformations_2d.py)
+## Linear Programming
+ * [Simplex](linear_programming/simplex.py)
+
## Machine Learning
* [Astar](machine_learning/astar.py)
* [Data Transformations](machine_learning/data_transformations.py)
From 07e68128883b84fb7e342c6bce88863a05fbbf62 Mon Sep 17 00:00:00 2001
From: Christian Clauss
Date: Tue, 20 Jun 2023 18:03:16 +0200
Subject: [PATCH 102/808] Update .pre-commit-config.yaml (#8828)
* Update .pre-commit-config.yaml
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
pyproject.toml | 36 ++++++++++++++++++------------------
1 file changed, 18 insertions(+), 18 deletions(-)
diff --git a/pyproject.toml b/pyproject.toml
index a526196685f5..1dcce044a313 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,21 +1,3 @@
-[tool.pytest.ini_options]
-markers = [
- "mat_ops: mark a test as utilizing matrix operations.",
-]
-addopts = [
- "--durations=10",
- "--doctest-modules",
- "--showlocals",
-]
-
-[tool.coverage.report]
-omit = [".env/*"]
-sort = "Cover"
-
-[tool.codespell]
-ignore-words-list = "3rt,ans,crate,damon,fo,followings,hist,iff,kwanza,mater,secant,som,sur,tim,zar"
-skip = "./.*,*.json,ciphers/prehistoric_men.txt,project_euler/problem_022/p022_names.txt,pyproject.toml,strings/dictionary.txt,strings/words.txt"
-
[tool.ruff]
ignore = [ # `ruff rule S101` for a description of that rule
"ARG001", # Unused function argument `amount` -- FIX ME?
@@ -131,3 +113,21 @@ max-args = 10 # default: 5
max-branches = 20 # default: 12
max-returns = 8 # default: 6
max-statements = 88 # default: 50
+
+[tool.pytest.ini_options]
+markers = [
+ "mat_ops: mark a test as utilizing matrix operations.",
+]
+addopts = [
+ "--durations=10",
+ "--doctest-modules",
+ "--showlocals",
+]
+
+[tool.coverage.report]
+omit = [".env/*"]
+sort = "Cover"
+
+[tool.codespell]
+ignore-words-list = "3rt,ans,crate,damon,fo,followings,hist,iff,kwanza,mater,secant,som,sur,tim,zar"
+skip = "./.*,*.json,ciphers/prehistoric_men.txt,project_euler/problem_022/p022_names.txt,pyproject.toml,strings/dictionary.txt,strings/words.txt"
From 5b0890bd833eb85c58fae9afc4984d520e7e2ad6 Mon Sep 17 00:00:00 2001
From: "Linus M. Henkel" <86628476+linushenkel@users.noreply.github.com>
Date: Thu, 22 Jun 2023 13:49:09 +0200
Subject: [PATCH 103/808] Dijkstra algorithm with binary grid (#8802)
* Create TestShiva
* Delete TestShiva
* Implementation of the Dijkstra-Algorithm in a binary grid
* Update double_ended_queue.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update least_common_multiple.py
* Update sol1.py
* Update pyproject.toml
* Update pyproject.toml
* https://github.com/astral-sh/ruff-pre-commit v0.0.274
---------
Co-authored-by: ShivaDahal99 <130563462+ShivaDahal99@users.noreply.github.com>
Co-authored-by: jlhuhn <134317018+jlhuhn@users.noreply.github.com>
Co-authored-by: Christian Clauss
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
.pre-commit-config.yaml | 4 +-
data_structures/queue/double_ended_queue.py | 4 +-
graphs/dijkstra_binary_grid.py | 89 +++++++++++++++++++++
maths/least_common_multiple.py | 6 +-
project_euler/problem_054/sol1.py | 18 ++---
pyproject.toml | 1 +
6 files changed, 106 insertions(+), 16 deletions(-)
create mode 100644 graphs/dijkstra_binary_grid.py
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 591fd7819a5a..3d4cc4084ccf 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -15,8 +15,8 @@ repos:
hooks:
- id: auto-walrus
- - repo: https://github.com/charliermarsh/ruff-pre-commit
- rev: v0.0.272
+ - repo: https://github.com/astral-sh/ruff-pre-commit
+ rev: v0.0.274
hooks:
- id: ruff
diff --git a/data_structures/queue/double_ended_queue.py b/data_structures/queue/double_ended_queue.py
index 637b7f62fd2c..2472371b42fe 100644
--- a/data_structures/queue/double_ended_queue.py
+++ b/data_structures/queue/double_ended_queue.py
@@ -32,7 +32,7 @@ class Deque:
the number of nodes
"""
- __slots__ = ["_front", "_back", "_len"]
+ __slots__ = ("_front", "_back", "_len")
@dataclass
class _Node:
@@ -54,7 +54,7 @@ class _Iterator:
the current node of the iteration.
"""
- __slots__ = ["_cur"]
+ __slots__ = "_cur"
def __init__(self, cur: Deque._Node | None) -> None:
self._cur = cur
diff --git a/graphs/dijkstra_binary_grid.py b/graphs/dijkstra_binary_grid.py
new file mode 100644
index 000000000000..c23d8234328a
--- /dev/null
+++ b/graphs/dijkstra_binary_grid.py
@@ -0,0 +1,89 @@
+"""
+This script implements the Dijkstra algorithm on a binary grid.
+The grid consists of 0s and 1s, where 1 represents
+a walkable node and 0 represents an obstacle.
+The algorithm finds the shortest path from a start node to a destination node.
+Diagonal movement can be allowed or disallowed.
+"""
+
+from heapq import heappop, heappush
+
+import numpy as np
+
+
+def dijkstra(
+ grid: np.ndarray,
+ source: tuple[int, int],
+ destination: tuple[int, int],
+ allow_diagonal: bool,
+) -> tuple[float | int, list[tuple[int, int]]]:
+ """
+ Implements Dijkstra's algorithm on a binary grid.
+
+ Args:
+ grid (np.ndarray): A 2D numpy array representing the grid.
+ 1 represents a walkable node and 0 represents an obstacle.
+ source (Tuple[int, int]): A tuple representing the start node.
+ destination (Tuple[int, int]): A tuple representing the
+ destination node.
+ allow_diagonal (bool): A boolean determining whether
+ diagonal movements are allowed.
+
+ Returns:
+ Tuple[Union[float, int], List[Tuple[int, int]]]:
+ The shortest distance from the start node to the destination node
+ and the shortest path as a list of nodes.
+
+ >>> dijkstra(np.array([[1, 1, 1], [0, 1, 0], [0, 1, 1]]), (0, 0), (2, 2), False)
+ (4.0, [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)])
+
+ >>> dijkstra(np.array([[1, 1, 1], [0, 1, 0], [0, 1, 1]]), (0, 0), (2, 2), True)
+ (2.0, [(0, 0), (1, 1), (2, 2)])
+
+ >>> dijkstra(np.array([[1, 1, 1], [0, 0, 1], [0, 1, 1]]), (0, 0), (2, 2), False)
+ (4.0, [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)])
+ """
+ rows, cols = grid.shape
+ dx = [-1, 1, 0, 0]
+ dy = [0, 0, -1, 1]
+ if allow_diagonal:
+ dx += [-1, -1, 1, 1]
+ dy += [-1, 1, -1, 1]
+
+ queue, visited = [(0, source)], set()
+ matrix = np.full((rows, cols), np.inf)
+ matrix[source] = 0
+ predecessors = np.empty((rows, cols), dtype=object)
+ predecessors[source] = None
+
+ while queue:
+ (dist, (x, y)) = heappop(queue)
+ if (x, y) in visited:
+ continue
+ visited.add((x, y))
+
+ if (x, y) == destination:
+ path = []
+ while (x, y) != source:
+ path.append((x, y))
+ x, y = predecessors[x, y]
+ path.append(source) # add the source manually
+ path.reverse()
+ return matrix[destination], path
+
+ for i in range(len(dx)):
+ nx, ny = x + dx[i], y + dy[i]
+ if 0 <= nx < rows and 0 <= ny < cols:
+ next_node = grid[nx][ny]
+ if next_node == 1 and matrix[nx, ny] > dist + 1:
+ heappush(queue, (dist + 1, (nx, ny)))
+ matrix[nx, ny] = dist + 1
+ predecessors[nx, ny] = (x, y)
+
+ return np.inf, []
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
diff --git a/maths/least_common_multiple.py b/maths/least_common_multiple.py
index 621d93720c41..10cc63ac7990 100644
--- a/maths/least_common_multiple.py
+++ b/maths/least_common_multiple.py
@@ -67,7 +67,7 @@ def benchmark():
class TestLeastCommonMultiple(unittest.TestCase):
- test_inputs = [
+ test_inputs = (
(10, 20),
(13, 15),
(4, 31),
@@ -77,8 +77,8 @@ class TestLeastCommonMultiple(unittest.TestCase):
(12, 25),
(10, 25),
(6, 9),
- ]
- expected_results = [20, 195, 124, 210, 1462, 60, 300, 50, 18]
+ )
+ expected_results = (20, 195, 124, 210, 1462, 60, 300, 50, 18)
def test_lcm_function(self):
for i, (first_num, second_num) in enumerate(self.test_inputs):
diff --git a/project_euler/problem_054/sol1.py b/project_euler/problem_054/sol1.py
index 74409f32c712..86dfa5edd2f5 100644
--- a/project_euler/problem_054/sol1.py
+++ b/project_euler/problem_054/sol1.py
@@ -47,18 +47,18 @@
class PokerHand:
"""Create an object representing a Poker Hand based on an input of a
- string which represents the best 5 card combination from the player's hand
+ string which represents the best 5-card combination from the player's hand
and board cards.
Attributes: (read-only)
- hand: string representing the hand consisting of five cards
+ hand: a string representing the hand consisting of five cards
Methods:
compare_with(opponent): takes in player's hand (self) and
opponent's hand (opponent) and compares both hands according to
the rules of Texas Hold'em.
Returns one of 3 strings (Win, Loss, Tie) based on whether
- player's hand is better than opponent's hand.
+ player's hand is better than the opponent's hand.
hand_name(): Returns a string made up of two parts: hand name
and high card.
@@ -66,11 +66,11 @@ class PokerHand:
Supported operators:
Rich comparison operators: <, >, <=, >=, ==, !=
- Supported builtin methods and functions:
+ Supported built-in methods and functions:
list.sort(), sorted()
"""
- _HAND_NAME = [
+ _HAND_NAME = (
"High card",
"One pair",
"Two pairs",
@@ -81,10 +81,10 @@ class PokerHand:
"Four of a kind",
"Straight flush",
"Royal flush",
- ]
+ )
- _CARD_NAME = [
- "", # placeholder as lists are zero indexed
+ _CARD_NAME = (
+ "", # placeholder as tuples are zero-indexed
"One",
"Two",
"Three",
@@ -99,7 +99,7 @@ class PokerHand:
"Queen",
"King",
"Ace",
- ]
+ )
def __init__(self, hand: str) -> None:
"""
diff --git a/pyproject.toml b/pyproject.toml
index 1dcce044a313..4f21a95190da 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -103,6 +103,7 @@ max-complexity = 17 # default: 10
"machine_learning/linear_discriminant_analysis.py" = ["ARG005"]
"machine_learning/sequential_minimum_optimization.py" = ["SIM115"]
"matrix/sherman_morrison.py" = ["SIM103", "SIM114"]
+"other/l*u_cache.py" = ["RUF012"]
"physics/newtons_second_law_of_motion.py" = ["BLE001"]
"project_euler/problem_099/sol1.py" = ["SIM115"]
"sorts/external_sort.py" = ["SIM115"]
From 5ffe601c86a9b44691a4dce37480c6d904102d49 Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Thu, 22 Jun 2023 05:24:34 -0700
Subject: [PATCH 104/808] Fix `mypy` errors in `maths/sigmoid_linear_unit.py`
(#8786)
* updating DIRECTORY.md
* Fix mypy errors in sigmoid_linear_unit.py
* updating DIRECTORY.md
* updating DIRECTORY.md
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
maths/sigmoid_linear_unit.py | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/maths/sigmoid_linear_unit.py b/maths/sigmoid_linear_unit.py
index a8ada10dd8ec..0ee09bf82d38 100644
--- a/maths/sigmoid_linear_unit.py
+++ b/maths/sigmoid_linear_unit.py
@@ -17,7 +17,7 @@
import numpy as np
-def sigmoid(vector: np.array) -> np.array:
+def sigmoid(vector: np.ndarray) -> np.ndarray:
"""
Mathematical function sigmoid takes a vector x of K real numbers as input and
returns 1/ (1 + e^-x).
@@ -29,17 +29,15 @@ def sigmoid(vector: np.array) -> np.array:
return 1 / (1 + np.exp(-vector))
-def sigmoid_linear_unit(vector: np.array) -> np.array:
+def sigmoid_linear_unit(vector: np.ndarray) -> np.ndarray:
"""
Implements the Sigmoid Linear Unit (SiLU) or swish function
Parameters:
- vector (np.array): A numpy array consisting of real
- values.
+ vector (np.ndarray): A numpy array consisting of real values
Returns:
- swish_vec (np.array): The input numpy array, after applying
- swish.
+ swish_vec (np.ndarray): The input numpy array, after applying swish
Examples:
>>> sigmoid_linear_unit(np.array([-1.0, 1.0, 2.0]))
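
The reason for this change, with a hedged illustration (not part of the
patch): np.array is a factory function, not a class, so mypy rejects it in
annotations, while np.ndarray is the actual array type.

    import numpy as np

    print(callable(np.array))                       # True -- it's a function
    print(isinstance(np.array([1.0]), np.ndarray))  # True -- instances are ndarray
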
From f54a9668103e560f20b50559fb54ac38a74d1fe8 Mon Sep 17 00:00:00 2001
From: Jan-Lukas Huhn <134317018+jlhuhn@users.noreply.github.com>
Date: Thu, 22 Jun 2023 14:31:48 +0200
Subject: [PATCH 105/808] Energy conversions (#8801)
* Create TestShiva
* Delete TestShiva
* Create energy_conversions.py
* Update conversions/energy_conversions.py
Co-authored-by: Caeden Perelli-Harris
---------
Co-authored-by: ShivaDahal99 <130563462+ShivaDahal99@users.noreply.github.com>
Co-authored-by: Caeden Perelli-Harris
---
conversions/energy_conversions.py | 114 ++++++++++++++++++++++++++++++
1 file changed, 114 insertions(+)
create mode 100644 conversions/energy_conversions.py
diff --git a/conversions/energy_conversions.py b/conversions/energy_conversions.py
new file mode 100644
index 000000000000..51de6b313928
--- /dev/null
+++ b/conversions/energy_conversions.py
@@ -0,0 +1,114 @@
+"""
+Conversion of energy units.
+
+Available units: joule, kilojoule, megajoule, gigajoule,\
+ wattsecond, watthour, kilowatthour, newtonmeter, calorie_nutr,\
+ kilocalorie_nutr, electronvolt, britishthermalunit_it, footpound
+
+USAGE :
+-> Import this file into your project.
+-> Use the function energy_conversion() for conversion of energy units.
+-> Parameters :
+ -> from_type : From which type you want to convert
+ -> to_type : To which type you want to convert
+ -> value : the value which you want to convert
+
+REFERENCES :
+-> Wikipedia reference: https://en.wikipedia.org/wiki/Units_of_energy
+-> Wikipedia reference: https://en.wikipedia.org/wiki/Joule
+-> Wikipedia reference: https://en.wikipedia.org/wiki/Kilowatt-hour
+-> Wikipedia reference: https://en.wikipedia.org/wiki/Newton-metre
+-> Wikipedia reference: https://en.wikipedia.org/wiki/Calorie
+-> Wikipedia reference: https://en.wikipedia.org/wiki/Electronvolt
+-> Wikipedia reference: https://en.wikipedia.org/wiki/British_thermal_unit
+-> Wikipedia reference: https://en.wikipedia.org/wiki/Foot-pound_(energy)
+-> Unit converter reference: https://www.unitconverters.net/energy-converter.html
+"""
+
+ENERGY_CONVERSION: dict[str, float] = {
+ "joule": 1.0,
+ "kilojoule": 1_000,
+ "megajoule": 1_000_000,
+ "gigajoule": 1_000_000_000,
+ "wattsecond": 1.0,
+ "watthour": 3_600,
+ "kilowatthour": 3_600_000,
+ "newtonmeter": 1.0,
+ "calorie_nutr": 4_186.8,
+ "kilocalorie_nutr": 4_186_800.00,
+ "electronvolt": 1.602_176_634e-19,
+ "britishthermalunit_it": 1_055.055_85,
+ "footpound": 1.355_818,
+}
+
+
+def energy_conversion(from_type: str, to_type: str, value: float) -> float:
+ """
+ Conversion of energy units.
+ >>> energy_conversion("joule", "joule", 1)
+ 1.0
+ >>> energy_conversion("joule", "kilojoule", 1)
+ 0.001
+ >>> energy_conversion("joule", "megajoule", 1)
+ 1e-06
+ >>> energy_conversion("joule", "gigajoule", 1)
+ 1e-09
+ >>> energy_conversion("joule", "wattsecond", 1)
+ 1.0
+ >>> energy_conversion("joule", "watthour", 1)
+ 0.0002777777777777778
+ >>> energy_conversion("joule", "kilowatthour", 1)
+ 2.7777777777777776e-07
+ >>> energy_conversion("joule", "newtonmeter", 1)
+ 1.0
+ >>> energy_conversion("joule", "calorie_nutr", 1)
+ 0.00023884589662749592
+ >>> energy_conversion("joule", "kilocalorie_nutr", 1)
+ 2.388458966274959e-07
+ >>> energy_conversion("joule", "electronvolt", 1)
+ 6.241509074460763e+18
+ >>> energy_conversion("joule", "britishthermalunit_it", 1)
+ 0.0009478171226670134
+ >>> energy_conversion("joule", "footpound", 1)
+ 0.7375621211696556
+ >>> energy_conversion("joule", "megajoule", 1000)
+ 0.001
+ >>> energy_conversion("calorie_nutr", "kilocalorie_nutr", 1000)
+ 1.0
+ >>> energy_conversion("kilowatthour", "joule", 10)
+ 36000000.0
+ >>> energy_conversion("britishthermalunit_it", "footpound", 1)
+ 778.1692306784539
+ >>> energy_conversion("watthour", "joule", "a") # doctest: +ELLIPSIS
+ Traceback (most recent call last):
+ ...
+ TypeError: unsupported operand type(s) for /: 'str' and 'float'
+ >>> energy_conversion("wrongunit", "joule", 1) # doctest: +ELLIPSIS
+ Traceback (most recent call last):
+ ...
+ ValueError: Incorrect 'from_type' or 'to_type' value: 'wrongunit', 'joule'
+ Valid values are: joule, ... footpound
+ >>> energy_conversion("joule", "wrongunit", 1) # doctest: +ELLIPSIS
+ Traceback (most recent call last):
+ ...
+ ValueError: Incorrect 'from_type' or 'to_type' value: 'joule', 'wrongunit'
+ Valid values are: joule, ... footpound
+ >>> energy_conversion("123", "abc", 1) # doctest: +ELLIPSIS
+ Traceback (most recent call last):
+ ...
+ ValueError: Incorrect 'from_type' or 'to_type' value: '123', 'abc'
+ Valid values are: joule, ... footpound
+ """
+ if to_type not in ENERGY_CONVERSION or from_type not in ENERGY_CONVERSION:
+ msg = (
+ f"Incorrect 'from_type' or 'to_type' value: {from_type!r}, {to_type!r}\n"
+ f"Valid values are: {', '.join(ENERGY_CONVERSION)}"
+ )
+ raise ValueError(msg)
+ return value * ENERGY_CONVERSION[from_type] / ENERGY_CONVERSION[to_type]
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
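
A hedged round-trip check (not part of the patch; import path assumed):

    from conversions.energy_conversions import energy_conversion

    joules = energy_conversion("kilowatthour", "joule", 1.5)
    print(joules)  # 5400000.0
    print(energy_conversion("joule", "kilowatthour", joules))  # 1.5
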
From 331585f3f866e210e23d11700b09a8770a1c2490 Mon Sep 17 00:00:00 2001
From: Himanshu Tomar
Date: Fri, 23 Jun 2023 13:56:05 +0530
Subject: [PATCH 106/808] Algorithm: Calculating Product Sum from a Special
Array with Nested Structures (#8761)
* Added minimum waiting time problem solution using greedy algorithm
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* ruff --fix
* Add type hints
* Added two more doc test
* Removed unnecessary comments
* updated type hints
* Updated the code as per the code review
* Added recursive algo to calculate product sum from an array
* Added recursive algo to calculate product sum from an array
* Update doc string
* Added doctest for product_sum function
* Updated the code and added more doctests
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Added more test coverage for product_sum method
* Update product_sum.py
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
DIRECTORY.md | 1 +
data_structures/arrays/product_sum.py | 98 +++++++++++++++++++++++++++
2 files changed, 99 insertions(+)
create mode 100644 data_structures/arrays/product_sum.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 6ec8d5111176..83389dab1f56 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -166,6 +166,7 @@
* Arrays
* [Permutations](data_structures/arrays/permutations.py)
* [Prefix Sum](data_structures/arrays/prefix_sum.py)
+ * [Product Sum Array](data_structures/arrays/product_sum.py)
* Binary Tree
* [Avl Tree](data_structures/binary_tree/avl_tree.py)
* [Basic Binary Tree](data_structures/binary_tree/basic_binary_tree.py)
diff --git a/data_structures/arrays/product_sum.py b/data_structures/arrays/product_sum.py
new file mode 100644
index 000000000000..4fb906f369ab
--- /dev/null
+++ b/data_structures/arrays/product_sum.py
@@ -0,0 +1,98 @@
+"""
+Calculate the Product Sum from a Special Array.
+reference: https://dev.to/sfrasica/algorithms-product-sum-from-an-array-dc6
+
+Python doctests can be run with the following command:
+python -m doctest -v product_sum.py
+
+Calculate the product sum of a "special" array which can contain integers or nested
+arrays. The product sum is obtained by adding all elements and multiplying by their
+respective depths.
+
+For example, in the array [x, y], the product sum is (x + y). In the array [x, [y, z]],
+the product sum is x + 2 * (y + z). In the array [x, [y, [z]]],
+the product sum is x + 2 * (y + 3z).
+
+Example Input:
+[5, 2, [7, -1], 3, [6, [-13, 8], 4]]
+Output: 12
+
+"""
+
+
+def product_sum(arr: list[int | list], depth: int) -> int:
+ """
+ Recursively calculates the product sum of an array.
+
+ The product sum of an array is defined as the sum of its elements multiplied by
+ their respective depths. If an element is a list, its product sum is calculated
+ recursively by multiplying the sum of its elements with its depth plus one.
+
+ Args:
+ arr: The array of integers and nested lists.
+ depth: The current depth level.
+
+ Returns:
+ int: The product sum of the array.
+
+ Examples:
+ >>> product_sum([1, 2, 3], 1)
+ 6
+ >>> product_sum([-1, 2, [-3, 4]], 2)
+ 8
+ >>> product_sum([1, 2, 3], -1)
+ -6
+ >>> product_sum([1, 2, 3], 0)
+ 0
+ >>> product_sum([1, 2, 3], 7)
+ 42
+ >>> product_sum((1, 2, 3), 7)
+ 42
+ >>> product_sum({1, 2, 3}, 7)
+ 42
+ >>> product_sum([1, -1], 1)
+ 0
+ >>> product_sum([1, -2], 1)
+ -1
+ >>> product_sum([-3.5, [1, [0.5]]], 1)
+ 1.5
+
+ """
+ total_sum = 0
+ for ele in arr:
+ total_sum += product_sum(ele, depth + 1) if isinstance(ele, list) else ele
+ return total_sum * depth
+
+
+def product_sum_array(array: list[int | list]) -> int:
+ """
+ Calculates the product sum of an array.
+
+ Args:
+ array (List[Union[int, List]]): The array of integers and nested lists.
+
+ Returns:
+ int: The product sum of the array.
+
+ Examples:
+ >>> product_sum_array([1, 2, 3])
+ 6
+ >>> product_sum_array([1, [2, 3]])
+ 11
+ >>> product_sum_array([1, [2, [3, 4]]])
+ 47
+ >>> product_sum_array([0])
+ 0
+ >>> product_sum_array([-3.5, [1, [0.5]]])
+ 1.5
+ >>> product_sum_array([1, -2])
+ -1
+
+ """
+ return product_sum(array, 1)
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
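
A worked check of the module docstring's example (hedged: import path
assumed), expanding the depth-weighted sum by hand:

    from data_structures.arrays.product_sum import product_sum_array

    special = [5, 2, [7, -1], 3, [6, [-13, 8], 4]]
    by_hand = (5 + 2 + 3) + 2 * (7 - 1) + 2 * ((6 + 4) + 3 * (-13 + 8))
    print(product_sum_array(special), by_hand)  # 12 12
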
From 267a8b72f97762383e7c313ed20df859115e2815 Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Fri, 23 Jun 2023 06:56:58 -0700
Subject: [PATCH 107/808] Clarify how to add issue numbers in PR template and
CONTRIBUTING.md (#8833)
* updating DIRECTORY.md
* Clarify wording in PR template
* Clarify CONTRIBUTING.md wording about adding issue numbers
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Add suggested change from review to CONTRIBUTING.md
Co-authored-by: Christian Clauss
* Incorporate review edit to CONTRIBUTING.md
Co-authored-by: Christian Clauss
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
.github/pull_request_template.md | 2 +-
CONTRIBUTING.md | 7 ++++++-
DIRECTORY.md | 2 ++
3 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md
index b3ba8baf9c34..1f9797fae038 100644
--- a/.github/pull_request_template.md
+++ b/.github/pull_request_template.md
@@ -17,4 +17,4 @@
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
-* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
+* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 2bb0c2e39eee..618cca868d83 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -25,7 +25,12 @@ We appreciate any contribution, from fixing a grammar mistake in a comment to im
Your contribution will be tested by our [automated testing on GitHub Actions](https://github.com/TheAlgorithms/Python/actions) to save time and mental energy. After you have submitted your pull request, you should see the GitHub Actions tests start to run at the bottom of your submission page. If those tests fail, then click on the ___details___ button and try to read through the GitHub Actions output to understand the failure. If you do not understand, please leave a comment on your submission page and a community member will try to help.
-Please help us keep our issue list small by adding fixes: #{$ISSUE_NO} to the commit message of pull requests that resolve open issues. GitHub will use this tag to auto-close the issue when the PR is merged.
+Please help us keep our issue list small by adding `Fixes #{$ISSUE_NUMBER}` to the description of pull requests that resolve open issues.
+For example, if your pull request fixes issue #10, then please add the following to its description:
+```
+Fixes #10
+```
+GitHub will use this tag to [auto-close the issue](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue) if and when the PR is merged.
#### What is an Algorithm?
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 83389dab1f56..1414aacf95f7 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -146,6 +146,7 @@
* [Decimal To Binary Recursion](conversions/decimal_to_binary_recursion.py)
* [Decimal To Hexadecimal](conversions/decimal_to_hexadecimal.py)
* [Decimal To Octal](conversions/decimal_to_octal.py)
+ * [Energy Conversions](conversions/energy_conversions.py)
* [Excel Title To Column](conversions/excel_title_to_column.py)
* [Hex To Bin](conversions/hex_to_bin.py)
* [Hexadecimal To Decimal](conversions/hexadecimal_to_decimal.py)
@@ -411,6 +412,7 @@
* [Dijkstra 2](graphs/dijkstra_2.py)
* [Dijkstra Algorithm](graphs/dijkstra_algorithm.py)
* [Dijkstra Alternate](graphs/dijkstra_alternate.py)
+ * [Dijkstra Binary Grid](graphs/dijkstra_binary_grid.py)
* [Dinic](graphs/dinic.py)
* [Directed And Undirected (Weighted) Graph](graphs/directed_and_undirected_(weighted)_graph.py)
* [Edmonds Karp Multiple Source And Sink](graphs/edmonds_karp_multiple_source_and_sink.py)
From 3bfa89dacf877b1d7a62b14f82d54e8de99a838e Mon Sep 17 00:00:00 2001
From: Christian Clauss
Date: Sun, 25 Jun 2023 18:28:01 +0200
Subject: [PATCH 108/808] GitHub Actions build: Add more tests (#8837)
* GitHub Actions build: Add more tests
Re-enable some tests that were disabled in #6591.
Fixes #8818
* updating DIRECTORY.md
* TODO: Re-enable quantum tests
* fails: pytest quantum/bb84.py quantum/q_fourier_transform.py
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
.github/workflows/build.yml | 7 +++----
DIRECTORY.md | 2 +-
2 files changed, 4 insertions(+), 5 deletions(-)
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 6b9cc890b6af..5229edaf8659 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -22,11 +22,10 @@ jobs:
python -m pip install --upgrade pip setuptools six wheel
python -m pip install pytest-cov -r requirements.txt
- name: Run tests
- # See: #6591 for re-enabling tests on Python v3.11
+ # TODO: #8818 Re-enable quantum tests
run: pytest
- --ignore=computer_vision/cnn_classification.py
- --ignore=machine_learning/lstm/lstm_prediction.py
- --ignore=quantum/
+ --ignore=quantum/bb84.py
+ --ignore=quantum/q_fourier_transform.py
--ignore=project_euler/
--ignore=scripts/validate_solutions.py
--cov-report=term-missing:skip-covered
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 1414aacf95f7..0c21b9537fc1 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -167,7 +167,7 @@
* Arrays
* [Permutations](data_structures/arrays/permutations.py)
* [Prefix Sum](data_structures/arrays/prefix_sum.py)
- * [Product Sum Array](data_structures/arrays/product_sum.py)
+ * [Product Sum](data_structures/arrays/product_sum.py)
* Binary Tree
* [Avl Tree](data_structures/binary_tree/avl_tree.py)
* [Basic Binary Tree](data_structures/binary_tree/basic_binary_tree.py)
From d764eec655c1c51f5ef3490d27ea72430191a000 Mon Sep 17 00:00:00 2001
From: Christian Clauss
Date: Mon, 26 Jun 2023 05:24:50 +0200
Subject: [PATCH 109/808] Fix failing pytest quantum/bb84.py (#8838)
* Fix failing pytest quantum/bb84.py
* Update bb84.py test results to match current qiskit
---
.github/workflows/build.yml | 1 -
quantum/bb84.py | 4 ++--
2 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 5229edaf8659..fc8cb636979e 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -24,7 +24,6 @@ jobs:
- name: Run tests
# TODO: #8818 Re-enable quantum tests
run: pytest
- --ignore=quantum/bb84.py
--ignore=quantum/q_fourier_transform.py
--ignore=project_euler/
--ignore=scripts/validate_solutions.py
diff --git a/quantum/bb84.py b/quantum/bb84.py
index 60d64371fe63..e90a11c2aef3 100644
--- a/quantum/bb84.py
+++ b/quantum/bb84.py
@@ -64,10 +64,10 @@ def bb84(key_len: int = 8, seed: int | None = None) -> str:
key: The key generated using BB84 protocol.
>>> bb84(16, seed=0)
- '1101101100010000'
+ '0111110111010010'
>>> bb84(8, seed=0)
- '01011011'
+ '10110001'
"""
# Set up the random number generator.
rng = np.random.default_rng(seed=seed)
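Editor's note: the doctest values above are pinned to the installed qiskit version, not to the numpy seed. A seeded `np.random.default_rng` stream is reproducible for a fixed numpy release, so when qiskit changes how it samples measurements, the protocol output shifts even though the seed is unchanged. A minimal sketch, assuming only that numpy is installed:
```python
# Two generators built from the same seed produce identical streams,
# so the doctest churn above traces to qiskit, not to this RNG.
import numpy as np

rng_a = np.random.default_rng(seed=0)
rng_b = np.random.default_rng(seed=0)
assert (rng_a.integers(2, size=8) == rng_b.integers(2, size=8)).all()
```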
From 62dcbea943e8cc4ea4d83eff115c4e6f6a4808af Mon Sep 17 00:00:00 2001
From: duongoku
Date: Mon, 26 Jun 2023 14:39:18 +0700
Subject: [PATCH 110/808] Add power sum problem (#8832)
* Add powersum problem
* Add doctest
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Add more doctests
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Add more doctests
* Improve parameter name
* Fix line too long
* Remove global variables
* Apply suggestions from code review
* Apply suggestions from code review
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
backtracking/power_sum.py | 93 +++++++++++++++++++++++++++++++++++++++
1 file changed, 93 insertions(+)
create mode 100644 backtracking/power_sum.py
diff --git a/backtracking/power_sum.py b/backtracking/power_sum.py
new file mode 100644
index 000000000000..fcf1429f8570
--- /dev/null
+++ b/backtracking/power_sum.py
@@ -0,0 +1,93 @@
+"""
+Problem source: https://www.hackerrank.com/challenges/the-power-sum/problem
+Find the number of ways that a given integer X can be expressed as the sum
+of the Nth powers of unique natural numbers. For example, if X=13 and N=2,
+we have to find all combinations of unique squares adding up to 13.
+The only solution is 2^2+3^2. Constraints: 1<=X<=1000, 2<=N<=10.
+"""
+
+from math import pow
+
+
+def backtrack(
+ needed_sum: int,
+ power: int,
+ current_number: int,
+ current_sum: int,
+ solutions_count: int,
+) -> tuple[int, int]:
+ """
+ >>> backtrack(13, 2, 1, 0, 0)
+ (0, 1)
+ >>> backtrack(100, 2, 1, 0, 0)
+ (0, 3)
+ >>> backtrack(100, 3, 1, 0, 0)
+ (0, 1)
+ >>> backtrack(800, 2, 1, 0, 0)
+ (0, 561)
+ >>> backtrack(1000, 10, 1, 0, 0)
+ (0, 0)
+ >>> backtrack(400, 2, 1, 0, 0)
+ (0, 55)
+ >>> backtrack(50, 1, 1, 0, 0)
+ (0, 3658)
+ """
+ if current_sum == needed_sum:
+ # If the sum of the powers is equal to needed_sum, then we have a solution.
+ solutions_count += 1
+ return current_sum, solutions_count
+
+ i_to_n = int(pow(current_number, power))
+ if current_sum + i_to_n <= needed_sum:
+ # If the sum of the powers is less than needed_sum, then continue adding powers.
+ current_sum += i_to_n
+ current_sum, solutions_count = backtrack(
+ needed_sum, power, current_number + 1, current_sum, solutions_count
+ )
+ current_sum -= i_to_n
+ if i_to_n < needed_sum:
+ # If the power of i is less than needed_sum, then try the next number.
+ current_sum, solutions_count = backtrack(
+ needed_sum, power, current_number + 1, current_sum, solutions_count
+ )
+ return current_sum, solutions_count
+
+
+def solve(needed_sum: int, power: int) -> int:
+ """
+ >>> solve(13, 2)
+ 1
+ >>> solve(100, 2)
+ 3
+ >>> solve(100, 3)
+ 1
+ >>> solve(800, 2)
+ 561
+ >>> solve(1000, 10)
+ 0
+ >>> solve(400, 2)
+ 55
+ >>> solve(50, 1)
+ Traceback (most recent call last):
+ ...
+ ValueError: Invalid input
+ needed_sum must be between 1 and 1000, power between 2 and 10.
+ >>> solve(-10, 5)
+ Traceback (most recent call last):
+ ...
+ ValueError: Invalid input
+ needed_sum must be between 1 and 1000, power between 2 and 10.
+ """
+ if not (1 <= needed_sum <= 1000 and 2 <= power <= 10):
+ raise ValueError(
+ "Invalid input\n"
+ "needed_sum must be between 1 and 1000, power between 2 and 10."
+ )
+
+ return backtrack(needed_sum, power, 1, 0, 0)[1] # Return the solutions_count
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
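Editor's note: a quick usage sketch of the new module; it assumes `backtracking/power_sum.py` is importable from the working directory and simply exercises the doctested cases.
```python
# Hypothetical usage of backtracking/power_sum.py once it is on sys.path.
from power_sum import solve

print(solve(13, 2))   # 1 -> the only decomposition is 2**2 + 3**2
print(solve(100, 2))  # 3 -> e.g. 10**2, 6**2 + 8**2, 1 + 9 + 16 + 25 + 49
```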
From 69f20033e55ae62c337e2fb2146aea5fabf3e5a0 Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Mon, 26 Jun 2023 02:15:31 -0700
Subject: [PATCH 111/808] Remove duplicate implementation of Collatz sequence
(#8836)
* updating DIRECTORY.md
* Remove duplicate implementation of Collatz sequence
* updating DIRECTORY.md
* Add suggestions from PR review
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 1 -
maths/3n_plus_1.py | 151 --------------------------------------
maths/collatz_sequence.py | 69 +++++++++++------
3 files changed, 46 insertions(+), 175 deletions(-)
delete mode 100644 maths/3n_plus_1.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 0c21b9537fc1..1e0e450bca2b 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -522,7 +522,6 @@
* [Xgboost Regressor](machine_learning/xgboost_regressor.py)
## Maths
- * [3N Plus 1](maths/3n_plus_1.py)
* [Abs](maths/abs.py)
* [Add](maths/add.py)
* [Addition Without Arithmetic](maths/addition_without_arithmetic.py)
diff --git a/maths/3n_plus_1.py b/maths/3n_plus_1.py
deleted file mode 100644
index f9f6dfeb9faa..000000000000
--- a/maths/3n_plus_1.py
+++ /dev/null
@@ -1,151 +0,0 @@
-from __future__ import annotations
-
-
-def n31(a: int) -> tuple[list[int], int]:
- """
- Returns the Collatz sequence and its length of any positive integer.
- >>> n31(4)
- ([4, 2, 1], 3)
- """
-
- if not isinstance(a, int):
- msg = f"Must be int, not {type(a).__name__}"
- raise TypeError(msg)
- if a < 1:
- msg = f"Given integer must be positive, not {a}"
- raise ValueError(msg)
-
- path = [a]
- while a != 1:
- if a % 2 == 0:
- a //= 2
- else:
- a = 3 * a + 1
- path.append(a)
- return path, len(path)
-
-
-def test_n31():
- """
- >>> test_n31()
- """
- assert n31(4) == ([4, 2, 1], 3)
- assert n31(11) == ([11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1], 15)
- assert n31(31) == (
- [
- 31,
- 94,
- 47,
- 142,
- 71,
- 214,
- 107,
- 322,
- 161,
- 484,
- 242,
- 121,
- 364,
- 182,
- 91,
- 274,
- 137,
- 412,
- 206,
- 103,
- 310,
- 155,
- 466,
- 233,
- 700,
- 350,
- 175,
- 526,
- 263,
- 790,
- 395,
- 1186,
- 593,
- 1780,
- 890,
- 445,
- 1336,
- 668,
- 334,
- 167,
- 502,
- 251,
- 754,
- 377,
- 1132,
- 566,
- 283,
- 850,
- 425,
- 1276,
- 638,
- 319,
- 958,
- 479,
- 1438,
- 719,
- 2158,
- 1079,
- 3238,
- 1619,
- 4858,
- 2429,
- 7288,
- 3644,
- 1822,
- 911,
- 2734,
- 1367,
- 4102,
- 2051,
- 6154,
- 3077,
- 9232,
- 4616,
- 2308,
- 1154,
- 577,
- 1732,
- 866,
- 433,
- 1300,
- 650,
- 325,
- 976,
- 488,
- 244,
- 122,
- 61,
- 184,
- 92,
- 46,
- 23,
- 70,
- 35,
- 106,
- 53,
- 160,
- 80,
- 40,
- 20,
- 10,
- 5,
- 16,
- 8,
- 4,
- 2,
- 1,
- ],
- 107,
- )
-
-
-if __name__ == "__main__":
- num = 4
- path, length = n31(num)
- print(f"The Collatz sequence of {num} took {length} steps. \nPath: {path}")
diff --git a/maths/collatz_sequence.py b/maths/collatz_sequence.py
index 7b3636de69f4..4f3aa5582731 100644
--- a/maths/collatz_sequence.py
+++ b/maths/collatz_sequence.py
@@ -1,43 +1,66 @@
+"""
+The Collatz conjecture is a famous unsolved problem in mathematics. Given a starting
+positive integer, define the following sequence:
+- If the current term n is even, then the next term is n/2.
+- If the current term n is odd, then the next term is 3n + 1.
+The conjecture claims that this sequence will always reach 1 for any starting number.
+
+Other names for this problem include the 3n + 1 problem, the Ulam conjecture, Kakutani's
+problem, the Thwaites conjecture, Hasse's algorithm, the Syracuse problem, and the
+hailstone sequence.
+
+Reference: https://en.wikipedia.org/wiki/Collatz_conjecture
+"""
+
from __future__ import annotations
+from collections.abc import Generator
-def collatz_sequence(n: int) -> list[int]:
+
+def collatz_sequence(n: int) -> Generator[int, None, None]:
"""
- Collatz conjecture: start with any positive integer n. The next term is
- obtained as follows:
- If n term is even, the next term is: n / 2 .
- If n is odd, the next term is: 3 * n + 1.
-
- The conjecture states the sequence will always reach 1 for any starting value n.
- Example:
- >>> collatz_sequence(2.1)
+ Generate the Collatz sequence starting at n.
+ >>> tuple(collatz_sequence(2.1))
Traceback (most recent call last):
...
- Exception: Sequence only defined for natural numbers
- >>> collatz_sequence(0)
+ Exception: Sequence only defined for positive integers
+ >>> tuple(collatz_sequence(0))
Traceback (most recent call last):
...
- Exception: Sequence only defined for natural numbers
- >>> collatz_sequence(43) # doctest: +NORMALIZE_WHITESPACE
- [43, 130, 65, 196, 98, 49, 148, 74, 37, 112, 56, 28, 14, 7,
- 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
+ Exception: Sequence only defined for positive integers
+ >>> tuple(collatz_sequence(4))
+ (4, 2, 1)
+ >>> tuple(collatz_sequence(11))
+ (11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1)
+ >>> tuple(collatz_sequence(31)) # doctest: +NORMALIZE_WHITESPACE
+ (31, 94, 47, 142, 71, 214, 107, 322, 161, 484, 242, 121, 364, 182, 91, 274, 137,
+ 412, 206, 103, 310, 155, 466, 233, 700, 350, 175, 526, 263, 790, 395, 1186, 593,
+ 1780, 890, 445, 1336, 668, 334, 167, 502, 251, 754, 377, 1132, 566, 283, 850, 425,
+ 1276, 638, 319, 958, 479, 1438, 719, 2158, 1079, 3238, 1619, 4858, 2429, 7288, 3644,
+ 1822, 911, 2734, 1367, 4102, 2051, 6154, 3077, 9232, 4616, 2308, 1154, 577, 1732,
+ 866, 433, 1300, 650, 325, 976, 488, 244, 122, 61, 184, 92, 46, 23, 70, 35, 106, 53,
+ 160, 80, 40, 20, 10, 5, 16, 8, 4, 2, 1)
+ >>> tuple(collatz_sequence(43)) # doctest: +NORMALIZE_WHITESPACE
+ (43, 130, 65, 196, 98, 49, 148, 74, 37, 112, 56, 28, 14, 7, 22, 11, 34, 17, 52, 26,
+ 13, 40, 20, 10, 5, 16, 8, 4, 2, 1)
"""
-
if not isinstance(n, int) or n < 1:
- raise Exception("Sequence only defined for natural numbers")
+ raise Exception("Sequence only defined for positive integers")
- sequence = [n]
+ yield n
while n != 1:
- n = 3 * n + 1 if n & 1 else n // 2
- sequence.append(n)
- return sequence
+ if n % 2 == 0:
+ n //= 2
+ else:
+ n = 3 * n + 1
+ yield n
def main():
n = 43
- sequence = collatz_sequence(n)
+ sequence = tuple(collatz_sequence(n))
print(sequence)
- print(f"collatz sequence from {n} took {len(sequence)} steps.")
+ print(f"Collatz sequence from {n} took {len(sequence)} steps.")
if __name__ == "__main__":
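Editor's note: since `collatz_sequence` is now a lazy generator, callers either materialize it (`tuple`, `list`) or consume it incrementally. A minimal sketch, assuming the module is importable as `collatz_sequence`:
```python
# Count the terms without storing the whole sequence in memory.
from collatz_sequence import collatz_sequence  # hypothetical import path

terms = sum(1 for _ in collatz_sequence(27))
print(terms)  # 112 terms from 27 down to 1, including the start value
```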
From 929d3d9219020d2978d5560e3b931df69a6f2d50 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Tue, 27 Jun 2023 07:23:54 +0200
Subject: [PATCH 112/808] [pre-commit.ci] pre-commit autoupdate (#8842)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.274 → v0.0.275](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.274...v0.0.275)
- [github.com/tox-dev/pyproject-fmt: 0.12.0 → 0.12.1](https://github.com/tox-dev/pyproject-fmt/compare/0.12.0...0.12.1)
- [github.com/pre-commit/mirrors-mypy: v1.3.0 → v1.4.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.3.0...v1.4.1)
* updating DIRECTORY.md
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
.pre-commit-config.yaml | 6 +++---
DIRECTORY.md | 1 +
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 3d4cc4084ccf..1d92d2ff31c1 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,7 @@ repos:
- id: auto-walrus
- repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.0.274
+ rev: v0.0.275
hooks:
- id: ruff
@@ -33,7 +33,7 @@ repos:
- tomli
- repo: https://github.com/tox-dev/pyproject-fmt
- rev: "0.12.0"
+ rev: "0.12.1"
hooks:
- id: pyproject-fmt
@@ -51,7 +51,7 @@ repos:
- id: validate-pyproject
- repo: https://github.com/pre-commit/mirrors-mypy
- rev: v1.3.0
+ rev: v1.4.1
hooks:
- id: mypy
args:
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 1e0e450bca2b..d25d665ef28b 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -29,6 +29,7 @@
* [Minmax](backtracking/minmax.py)
* [N Queens](backtracking/n_queens.py)
* [N Queens Math](backtracking/n_queens_math.py)
+ * [Power Sum](backtracking/power_sum.py)
* [Rat In Maze](backtracking/rat_in_maze.py)
* [Sudoku](backtracking/sudoku.py)
* [Sum Of Subsets](backtracking/sum_of_subsets.py)
From c9ee6ed1887fadd25c1c43c31ed55a99b2be5f24 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Tue, 4 Jul 2023 00:20:35 +0200
Subject: [PATCH 113/808] [pre-commit.ci] pre-commit autoupdate (#8853)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.275 → v0.0.276](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.275...v0.0.276)
* Update double_ended_queue.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update double_ended_queue.py
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
.pre-commit-config.yaml | 2 +-
data_structures/queue/double_ended_queue.py | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 1d92d2ff31c1..42ebeed14fa9 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,7 @@ repos:
- id: auto-walrus
- repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.0.275
+ rev: v0.0.276
hooks:
- id: ruff
diff --git a/data_structures/queue/double_ended_queue.py b/data_structures/queue/double_ended_queue.py
index 2472371b42fe..44dc863b9a4e 100644
--- a/data_structures/queue/double_ended_queue.py
+++ b/data_structures/queue/double_ended_queue.py
@@ -54,7 +54,7 @@ class _Iterator:
the current node of the iteration.
"""
- __slots__ = "_cur"
+ __slots__ = ("_cur",)
def __init__(self, cur: Deque._Node | None) -> None:
self._cur = cur
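Editor's note: `__slots__ = "_cur"` is legal Python (a bare string declares a single slot), so this change is about clarity rather than behavior: the one-element tuple makes the "sequence of slot names" intent unmistakable. A minimal sketch:
```python
# Both spellings declare exactly one slot named "_cur"; the tuple form
# is preferred because it reads as a sequence of names.
class WithString:
    __slots__ = "_cur"

class WithTuple:
    __slots__ = ("_cur",)

w = WithTuple()
w._cur = 1
try:
    w.other = 2  # slotted classes have no __dict__, so this raises
except AttributeError as exc:
    print(exc)
```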
From a0eec90466beeb3b6ce0f7afd905f96454e9b14c Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Tue, 11 Jul 2023 02:44:12 -0700
Subject: [PATCH 114/808] Consolidate duplicate implementations of max subarray
(#8849)
* Remove max subarray sum duplicate implementations
* updating DIRECTORY.md
* Rename max_sum_contiguous_subsequence.py
* Fix typo in dynamic_programming/max_subarray_sum.py
* Remove duplicate divide and conquer max subarray
* updating DIRECTORY.md
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 8 +-
divide_and_conquer/max_subarray.py | 112 ++++++++++++++++++
divide_and_conquer/max_subarray_sum.py | 78 ------------
dynamic_programming/max_sub_array.py | 93 ---------------
dynamic_programming/max_subarray_sum.py | 60 ++++++++++
.../max_sum_contiguous_subsequence.py | 20 ----
maths/kadanes.py | 63 ----------
maths/largest_subarray_sum.py | 21 ----
other/maximum_subarray.py | 32 -----
9 files changed, 174 insertions(+), 313 deletions(-)
create mode 100644 divide_and_conquer/max_subarray.py
delete mode 100644 divide_and_conquer/max_subarray_sum.py
delete mode 100644 dynamic_programming/max_sub_array.py
create mode 100644 dynamic_programming/max_subarray_sum.py
delete mode 100644 dynamic_programming/max_sum_contiguous_subsequence.py
delete mode 100644 maths/kadanes.py
delete mode 100644 maths/largest_subarray_sum.py
delete mode 100644 other/maximum_subarray.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index d25d665ef28b..77938f45011b 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -293,7 +293,7 @@
* [Inversions](divide_and_conquer/inversions.py)
* [Kth Order Statistic](divide_and_conquer/kth_order_statistic.py)
* [Max Difference Pair](divide_and_conquer/max_difference_pair.py)
- * [Max Subarray Sum](divide_and_conquer/max_subarray_sum.py)
+ * [Max Subarray](divide_and_conquer/max_subarray.py)
* [Mergesort](divide_and_conquer/mergesort.py)
* [Peak](divide_and_conquer/peak.py)
* [Power](divide_and_conquer/power.py)
@@ -324,8 +324,7 @@
* [Matrix Chain Order](dynamic_programming/matrix_chain_order.py)
* [Max Non Adjacent Sum](dynamic_programming/max_non_adjacent_sum.py)
* [Max Product Subarray](dynamic_programming/max_product_subarray.py)
- * [Max Sub Array](dynamic_programming/max_sub_array.py)
- * [Max Sum Contiguous Subsequence](dynamic_programming/max_sum_contiguous_subsequence.py)
+ * [Max Subarray Sum](dynamic_programming/max_subarray_sum.py)
* [Min Distance Up Bottom](dynamic_programming/min_distance_up_bottom.py)
* [Minimum Coin Change](dynamic_programming/minimum_coin_change.py)
* [Minimum Cost Path](dynamic_programming/minimum_cost_path.py)
@@ -591,12 +590,10 @@
* [Is Square Free](maths/is_square_free.py)
* [Jaccard Similarity](maths/jaccard_similarity.py)
* [Juggler Sequence](maths/juggler_sequence.py)
- * [Kadanes](maths/kadanes.py)
* [Karatsuba](maths/karatsuba.py)
* [Krishnamurthy Number](maths/krishnamurthy_number.py)
* [Kth Lexicographic Permutation](maths/kth_lexicographic_permutation.py)
* [Largest Of Very Large Numbers](maths/largest_of_very_large_numbers.py)
- * [Largest Subarray Sum](maths/largest_subarray_sum.py)
* [Least Common Multiple](maths/least_common_multiple.py)
* [Line Length](maths/line_length.py)
* [Liouville Lambda](maths/liouville_lambda.py)
@@ -733,7 +730,6 @@
* [Linear Congruential Generator](other/linear_congruential_generator.py)
* [Lru Cache](other/lru_cache.py)
* [Magicdiamondpattern](other/magicdiamondpattern.py)
- * [Maximum Subarray](other/maximum_subarray.py)
* [Maximum Subsequence](other/maximum_subsequence.py)
* [Nested Brackets](other/nested_brackets.py)
* [Number Container System](other/number_container_system.py)
diff --git a/divide_and_conquer/max_subarray.py b/divide_and_conquer/max_subarray.py
new file mode 100644
index 000000000000..851ef621a24c
--- /dev/null
+++ b/divide_and_conquer/max_subarray.py
@@ -0,0 +1,112 @@
+"""
+The maximum subarray problem is the task of finding the contiguous subarray that has the
+maximum sum within a given array of numbers. For example, given the array
+[-2, 1, -3, 4, -1, 2, 1, -5, 4], the contiguous subarray with the maximum sum is
+[4, -1, 2, 1], which has a sum of 6.
+
+This divide-and-conquer algorithm finds the maximum subarray in O(n log n) time.
+"""
+from __future__ import annotations
+
+import time
+from collections.abc import Sequence
+from random import randint
+
+from matplotlib import pyplot as plt
+
+
+def max_subarray(
+ arr: Sequence[float], low: int, high: int
+) -> tuple[int | None, int | None, float]:
+ """
+ Solves the maximum subarray problem using divide and conquer.
+ :param arr: the given array of numbers
+ :param low: the start index
+ :param high: the end index
+ :return: the start index of the maximum subarray, the end index of the
+ maximum subarray, and the maximum subarray sum
+
+ >>> nums = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
+ >>> max_subarray(nums, 0, len(nums) - 1)
+ (3, 6, 6)
+ >>> nums = [2, 8, 9]
+ >>> max_subarray(nums, 0, len(nums) - 1)
+ (0, 2, 19)
+ >>> nums = [0, 0]
+ >>> max_subarray(nums, 0, len(nums) - 1)
+ (0, 0, 0)
+ >>> nums = [-1.0, 0.0, 1.0]
+ >>> max_subarray(nums, 0, len(nums) - 1)
+ (2, 2, 1.0)
+ >>> nums = [-2, -3, -1, -4, -6]
+ >>> max_subarray(nums, 0, len(nums) - 1)
+ (2, 2, -1)
+ >>> max_subarray([], 0, 0)
+ (None, None, 0)
+ """
+ if not arr:
+ return None, None, 0
+ if low == high:
+ return low, high, arr[low]
+
+ mid = (low + high) // 2
+ left_low, left_high, left_sum = max_subarray(arr, low, mid)
+ right_low, right_high, right_sum = max_subarray(arr, mid + 1, high)
+ cross_left, cross_right, cross_sum = max_cross_sum(arr, low, mid, high)
+ if left_sum >= right_sum and left_sum >= cross_sum:
+ return left_low, left_high, left_sum
+ elif right_sum >= left_sum and right_sum >= cross_sum:
+ return right_low, right_high, right_sum
+ return cross_left, cross_right, cross_sum
+
+
+def max_cross_sum(
+ arr: Sequence[float], low: int, mid: int, high: int
+) -> tuple[int, int, float]:
+ left_sum, max_left = float("-inf"), -1
+ right_sum, max_right = float("-inf"), -1
+
+ summ: int | float = 0
+ for i in range(mid, low - 1, -1):
+ summ += arr[i]
+ if summ > left_sum:
+ left_sum = summ
+ max_left = i
+
+ summ = 0
+ for i in range(mid + 1, high + 1):
+ summ += arr[i]
+ if summ > right_sum:
+ right_sum = summ
+ max_right = i
+
+ return max_left, max_right, (left_sum + right_sum)
+
+
+def time_max_subarray(input_size: int) -> float:
+ arr = [randint(1, input_size) for _ in range(input_size)]
+ start = time.time()
+ max_subarray(arr, 0, input_size - 1)
+ end = time.time()
+ return end - start
+
+
+def plot_runtimes() -> None:
+ input_sizes = [10, 100, 1000, 10000, 50000, 100000, 200000, 300000, 400000, 500000]
+ runtimes = [time_max_subarray(input_size) for input_size in input_sizes]
+ print("No of Inputs\t\tTime Taken")
+ for input_size, runtime in zip(input_sizes, runtimes):
+ print(input_size, "\t\t", runtime)
+ plt.plot(input_sizes, runtimes)
+ plt.xlabel("Number of Inputs")
+ plt.ylabel("Time taken in seconds")
+ plt.show()
+
+
+if __name__ == "__main__":
+ """
+ A random simulation of this algorithm.
+ """
+ from doctest import testmod
+
+ testmod()
diff --git a/divide_and_conquer/max_subarray_sum.py b/divide_and_conquer/max_subarray_sum.py
deleted file mode 100644
index f23e81719025..000000000000
--- a/divide_and_conquer/max_subarray_sum.py
+++ /dev/null
@@ -1,78 +0,0 @@
-"""
-Given a array of length n, max_subarray_sum() finds
-the maximum of sum of contiguous sub-array using divide and conquer method.
-
-Time complexity : O(n log n)
-
-Ref : INTRODUCTION TO ALGORITHMS THIRD EDITION
-(section : 4, sub-section : 4.1, page : 70)
-
-"""
-
-
-def max_sum_from_start(array):
- """This function finds the maximum contiguous sum of array from 0 index
-
- Parameters :
- array (list[int]) : given array
-
- Returns :
- max_sum (int) : maximum contiguous sum of array from 0 index
-
- """
- array_sum = 0
- max_sum = float("-inf")
- for num in array:
- array_sum += num
- if array_sum > max_sum:
- max_sum = array_sum
- return max_sum
-
-
-def max_cross_array_sum(array, left, mid, right):
- """This function finds the maximum contiguous sum of left and right arrays
-
- Parameters :
- array, left, mid, right (list[int], int, int, int)
-
- Returns :
- (int) : maximum of sum of contiguous sum of left and right arrays
-
- """
-
- max_sum_of_left = max_sum_from_start(array[left : mid + 1][::-1])
- max_sum_of_right = max_sum_from_start(array[mid + 1 : right + 1])
- return max_sum_of_left + max_sum_of_right
-
-
-def max_subarray_sum(array, left, right):
- """Maximum contiguous sub-array sum, using divide and conquer method
-
- Parameters :
- array, left, right (list[int], int, int) :
- given array, current left index and current right index
-
- Returns :
- int : maximum of sum of contiguous sub-array
-
- """
-
- # base case: array has only one element
- if left == right:
- return array[right]
-
- # Recursion
- mid = (left + right) // 2
- left_half_sum = max_subarray_sum(array, left, mid)
- right_half_sum = max_subarray_sum(array, mid + 1, right)
- cross_sum = max_cross_array_sum(array, left, mid, right)
- return max(left_half_sum, right_half_sum, cross_sum)
-
-
-if __name__ == "__main__":
- array = [-2, -5, 6, -2, -3, 1, 5, -6]
- array_length = len(array)
- print(
- "Maximum sum of contiguous subarray:",
- max_subarray_sum(array, 0, array_length - 1),
- )
diff --git a/dynamic_programming/max_sub_array.py b/dynamic_programming/max_sub_array.py
deleted file mode 100644
index 07717fba4172..000000000000
--- a/dynamic_programming/max_sub_array.py
+++ /dev/null
@@ -1,93 +0,0 @@
-"""
-author : Mayank Kumar Jha (mk9440)
-"""
-from __future__ import annotations
-
-
-def find_max_sub_array(a, low, high):
- if low == high:
- return low, high, a[low]
- else:
- mid = (low + high) // 2
- left_low, left_high, left_sum = find_max_sub_array(a, low, mid)
- right_low, right_high, right_sum = find_max_sub_array(a, mid + 1, high)
- cross_left, cross_right, cross_sum = find_max_cross_sum(a, low, mid, high)
- if left_sum >= right_sum and left_sum >= cross_sum:
- return left_low, left_high, left_sum
- elif right_sum >= left_sum and right_sum >= cross_sum:
- return right_low, right_high, right_sum
- else:
- return cross_left, cross_right, cross_sum
-
-
-def find_max_cross_sum(a, low, mid, high):
- left_sum, max_left = -999999999, -1
- right_sum, max_right = -999999999, -1
- summ = 0
- for i in range(mid, low - 1, -1):
- summ += a[i]
- if summ > left_sum:
- left_sum = summ
- max_left = i
- summ = 0
- for i in range(mid + 1, high + 1):
- summ += a[i]
- if summ > right_sum:
- right_sum = summ
- max_right = i
- return max_left, max_right, (left_sum + right_sum)
-
-
-def max_sub_array(nums: list[int]) -> int:
- """
- Finds the contiguous subarray which has the largest sum and return its sum.
-
- >>> max_sub_array([-2, 1, -3, 4, -1, 2, 1, -5, 4])
- 6
-
- An empty (sub)array has sum 0.
- >>> max_sub_array([])
- 0
-
- If all elements are negative, the largest subarray would be the empty array,
- having the sum 0.
- >>> max_sub_array([-1, -2, -3])
- 0
- >>> max_sub_array([5, -2, -3])
- 5
- >>> max_sub_array([31, -41, 59, 26, -53, 58, 97, -93, -23, 84])
- 187
- """
- best = 0
- current = 0
- for i in nums:
- current += i
- current = max(current, 0)
- best = max(best, current)
- return best
-
-
-if __name__ == "__main__":
- """
- A random simulation of this algorithm.
- """
- import time
- from random import randint
-
- from matplotlib import pyplot as plt
-
- inputs = [10, 100, 1000, 10000, 50000, 100000, 200000, 300000, 400000, 500000]
- tim = []
- for i in inputs:
- li = [randint(1, i) for j in range(i)]
- strt = time.time()
- (find_max_sub_array(li, 0, len(li) - 1))
- end = time.time()
- tim.append(end - strt)
- print("No of Inputs Time Taken")
- for i in range(len(inputs)):
- print(inputs[i], "\t\t", tim[i])
- plt.plot(inputs, tim)
- plt.xlabel("Number of Inputs")
- plt.ylabel("Time taken in seconds ")
- plt.show()
diff --git a/dynamic_programming/max_subarray_sum.py b/dynamic_programming/max_subarray_sum.py
new file mode 100644
index 000000000000..c76943472b97
--- /dev/null
+++ b/dynamic_programming/max_subarray_sum.py
@@ -0,0 +1,60 @@
+"""
+The maximum subarray sum problem is the task of finding the maximum sum that can be
+obtained from a contiguous subarray within a given array of numbers. For example, given
+the array [-2, 1, -3, 4, -1, 2, 1, -5, 4], the contiguous subarray with the maximum sum
+is [4, -1, 2, 1], so the maximum subarray sum is 6.
+
+Kadane's algorithm is a simple dynamic programming algorithm that solves the maximum
+subarray sum problem in O(n) time and O(1) space.
+
+Reference: https://en.wikipedia.org/wiki/Maximum_subarray_problem
+"""
+from collections.abc import Sequence
+
+
+def max_subarray_sum(
+ arr: Sequence[float], allow_empty_subarrays: bool = False
+) -> float:
+ """
+ Solves the maximum subarray sum problem using Kadane's algorithm.
+ :param arr: the given array of numbers
+ :param allow_empty_subarrays: if True, then the algorithm considers empty subarrays
+
+ >>> max_subarray_sum([2, 8, 9])
+ 19
+ >>> max_subarray_sum([0, 0])
+ 0
+ >>> max_subarray_sum([-1.0, 0.0, 1.0])
+ 1.0
+ >>> max_subarray_sum([1, 2, 3, 4, -2])
+ 10
+ >>> max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4])
+ 6
+ >>> max_subarray_sum([2, 3, -9, 8, -2])
+ 8
+ >>> max_subarray_sum([-2, -3, -1, -4, -6])
+ -1
+ >>> max_subarray_sum([-2, -3, -1, -4, -6], allow_empty_subarrays=True)
+ 0
+ >>> max_subarray_sum([])
+ 0
+ """
+ if not arr:
+ return 0
+
+ max_sum = 0 if allow_empty_subarrays else float("-inf")
+ curr_sum = 0.0
+ for num in arr:
+ curr_sum = max(0 if allow_empty_subarrays else num, curr_sum + num)
+ max_sum = max(max_sum, curr_sum)
+
+ return max_sum
+
+
+if __name__ == "__main__":
+ from doctest import testmod
+
+ testmod()
+
+ nums = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
+ print(f"{max_subarray_sum(nums) = }")
diff --git a/dynamic_programming/max_sum_contiguous_subsequence.py b/dynamic_programming/max_sum_contiguous_subsequence.py
deleted file mode 100644
index bac592370c5d..000000000000
--- a/dynamic_programming/max_sum_contiguous_subsequence.py
+++ /dev/null
@@ -1,20 +0,0 @@
-def max_subarray_sum(nums: list) -> int:
- """
- >>> max_subarray_sum([6 , 9, -1, 3, -7, -5, 10])
- 17
- """
- if not nums:
- return 0
- n = len(nums)
-
- res, s, s_pre = nums[0], nums[0], nums[0]
- for i in range(1, n):
- s = max(nums[i], s_pre + nums[i])
- s_pre = s
- res = max(res, s)
- return res
-
-
-if __name__ == "__main__":
- nums = [6, 9, -1, 3, -7, -5, 10]
- print(max_subarray_sum(nums))
diff --git a/maths/kadanes.py b/maths/kadanes.py
deleted file mode 100644
index c2ea53a6cc84..000000000000
--- a/maths/kadanes.py
+++ /dev/null
@@ -1,63 +0,0 @@
-"""
-Kadane's algorithm to get maximum subarray sum
-https://medium.com/@rsinghal757/kadanes-algorithm-dynamic-programming-how-and-why-does-it-work-3fd8849ed73d
-https://en.wikipedia.org/wiki/Maximum_subarray_problem
-"""
-test_data: tuple = ([-2, -8, -9], [2, 8, 9], [-1, 0, 1], [0, 0], [])
-
-
-def negative_exist(arr: list) -> int:
- """
- >>> negative_exist([-2,-8,-9])
- -2
- >>> [negative_exist(arr) for arr in test_data]
- [-2, 0, 0, 0, 0]
- """
- arr = arr or [0]
- max_number = arr[0]
- for i in arr:
- if i >= 0:
- return 0
- elif max_number <= i:
- max_number = i
- return max_number
-
-
-def kadanes(arr: list) -> int:
- """
- If negative_exist() returns 0 than this function will execute
- else it will return the value return by negative_exist function
-
- For example: arr = [2, 3, -9, 8, -2]
- Initially we set value of max_sum to 0 and max_till_element to 0 than when
- max_sum is less than max_till particular element it will assign that value to
- max_sum and when value of max_till_sum is less than 0 it will assign 0 to i
- and after that whole process, return the max_sum
- So the output for above arr is 8
-
- >>> kadanes([2, 3, -9, 8, -2])
- 8
- >>> [kadanes(arr) for arr in test_data]
- [-2, 19, 1, 0, 0]
- """
- max_sum = negative_exist(arr)
- if max_sum < 0:
- return max_sum
-
- max_sum = 0
- max_till_element = 0
-
- for i in arr:
- max_till_element += i
- max_sum = max(max_sum, max_till_element)
- max_till_element = max(max_till_element, 0)
- return max_sum
-
-
-if __name__ == "__main__":
- try:
- print("Enter integer values sepatated by spaces")
- arr = [int(x) for x in input().split()]
- print(f"Maximum subarray sum of {arr} is {kadanes(arr)}")
- except ValueError:
- print("Please enter integer values.")
diff --git a/maths/largest_subarray_sum.py b/maths/largest_subarray_sum.py
deleted file mode 100644
index 90f92c7127bf..000000000000
--- a/maths/largest_subarray_sum.py
+++ /dev/null
@@ -1,21 +0,0 @@
-from sys import maxsize
-
-
-def max_sub_array_sum(a: list, size: int = 0):
- """
- >>> max_sub_array_sum([-13, -3, -25, -20, -3, -16, -23, -12, -5, -22, -15, -4, -7])
- -3
- """
- size = size or len(a)
- max_so_far = -maxsize - 1
- max_ending_here = 0
- for i in range(0, size):
- max_ending_here = max_ending_here + a[i]
- max_so_far = max(max_so_far, max_ending_here)
- max_ending_here = max(max_ending_here, 0)
- return max_so_far
-
-
-if __name__ == "__main__":
- a = [-13, -3, -25, -20, 1, -16, -23, -12, -5, -22, -15, -4, -7]
- print(("Maximum contiguous sum is", max_sub_array_sum(a, len(a))))
diff --git a/other/maximum_subarray.py b/other/maximum_subarray.py
deleted file mode 100644
index 1c8c8cabcd2d..000000000000
--- a/other/maximum_subarray.py
+++ /dev/null
@@ -1,32 +0,0 @@
-from collections.abc import Sequence
-
-
-def max_subarray_sum(nums: Sequence[int]) -> int:
- """Return the maximum possible sum amongst all non - empty subarrays.
-
- Raises:
- ValueError: when nums is empty.
-
- >>> max_subarray_sum([1,2,3,4,-2])
- 10
- >>> max_subarray_sum([-2,1,-3,4,-1,2,1,-5,4])
- 6
- """
- if not nums:
- raise ValueError("Input sequence should not be empty")
-
- curr_max = ans = nums[0]
- nums_len = len(nums)
-
- for i in range(1, nums_len):
- num = nums[i]
- curr_max = max(curr_max + num, num)
- ans = max(curr_max, ans)
-
- return ans
-
-
-if __name__ == "__main__":
- n = int(input("Enter number of elements : ").strip())
- array = list(map(int, input("\nEnter the numbers : ").strip().split()))[:n]
- print(max_subarray_sum(array))
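Editor's note: after the consolidation, the repository keeps one divide-and-conquer version and one Kadane version. A cross-check sketch, assuming both packages are importable (the import paths are hypothetical):
```python
# The O(n log n) divide-and-conquer result and the O(n) Kadane result
# agree on the maximum sum for non-empty input.
from divide_and_conquer.max_subarray import max_subarray
from dynamic_programming.max_subarray_sum import max_subarray_sum

nums = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
assert max_subarray(nums, 0, len(nums) - 1)[2] == max_subarray_sum(nums) == 6
```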
From 44b1bcc7c7e0f15385530bf54c59ad4eb86fef0b Mon Sep 17 00:00:00 2001
From: Caeden Perelli-Harris
Date: Tue, 11 Jul 2023 10:51:21 +0100
Subject: [PATCH 115/808] Fix failing tests from ruff/newton_raphson (ignore
S307 "possibly insecure function") (#8862)
* chore: Fix failing tests (ignore S307 "possibly insecure function")
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix: Move noqa back to right line
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
arithmetic_analysis/newton_raphson.py | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arithmetic_analysis/newton_raphson.py b/arithmetic_analysis/newton_raphson.py
index aee2f07e5743..1b90ad4177f6 100644
--- a/arithmetic_analysis/newton_raphson.py
+++ b/arithmetic_analysis/newton_raphson.py
@@ -25,9 +25,11 @@ def newton_raphson(
"""
x = a
while True:
- x = Decimal(x) - (Decimal(eval(func)) / Decimal(eval(str(diff(func)))))
+ x = Decimal(x) - (
+ Decimal(eval(func)) / Decimal(eval(str(diff(func)))) # noqa: S307
+ )
# This number dictates the accuracy of the answer
- if abs(eval(func)) < precision:
+ if abs(eval(func)) < precision: # noqa: S307
return float(x)
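Editor's note: the `noqa: S307` markers accept `eval` on a user-supplied expression string. An eval-free alternative is to parse the expression once with sympy and compile it with `lambdify`; a sketch under that assumption (this is not the repository's implementation):
```python
# Newton-Raphson without eval(): parse once, then call compiled functions.
from sympy import diff, lambdify, symbols, sympify

def newton_raphson_safe(expr: str, x0: float, precision: float = 1e-10) -> float:
    x_sym = symbols("x")
    parsed = sympify(expr)                     # parses instead of eval()-ing
    f = lambdify(x_sym, parsed)
    f_prime = lambdify(x_sym, diff(parsed, x_sym))
    x = x0
    while abs(f(x)) >= precision:
        x -= f(x) / f_prime(x)
    return float(x)

print(newton_raphson_safe("x**2 - 2", 1.0))   # ~1.4142135623730951
```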
From f614ed72170011d2d439f7901e1c8daa7deac8c4 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Tue, 11 Jul 2023 11:55:32 +0200
Subject: [PATCH 116/808] [pre-commit.ci] pre-commit autoupdate (#8860)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.276 → v0.0.277](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.276...v0.0.277)
- [github.com/tox-dev/pyproject-fmt: 0.12.1 → 0.13.0](https://github.com/tox-dev/pyproject-fmt/compare/0.12.1...0.13.0)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
.pre-commit-config.yaml | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 42ebeed14fa9..bf30703bdffc 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,7 @@ repos:
- id: auto-walrus
- repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.0.276
+ rev: v0.0.277
hooks:
- id: ruff
@@ -33,7 +33,7 @@ repos:
- tomli
- repo: https://github.com/tox-dev/pyproject-fmt
- rev: "0.12.1"
+ rev: "0.13.0"
hooks:
- id: pyproject-fmt
From 5aefc00f0f1c692ce772ddbc616d7cd91233236b Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Tue, 18 Jul 2023 09:58:22 +0530
Subject: [PATCH 117/808] [pre-commit.ci] pre-commit autoupdate (#8872)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.277 → v0.0.278](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.277...v0.0.278)
- [github.com/psf/black: 23.3.0 → 23.7.0](https://github.com/psf/black/compare/23.3.0...23.7.0)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
.pre-commit-config.yaml | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index bf30703bdffc..13b955dd374f 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,12 +16,12 @@ repos:
- id: auto-walrus
- repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.0.277
+ rev: v0.0.278
hooks:
- id: ruff
- repo: https://github.com/psf/black
- rev: 23.3.0
+ rev: 23.7.0
hooks:
- id: black
From 93fb169627ea9fe43436a312fdfa751818808180 Mon Sep 17 00:00:00 2001
From: Caeden Perelli-Harris
Date: Sat, 22 Jul 2023 13:05:10 +0300
Subject: [PATCH 118/808] [Upgrade Ruff] Fix all errors raised from ruff
(#8879)
* chore: Fix tests
* chore: Fix failing ruff
* chore: Fix ruff errors
* chore: Fix ruff errors
* chore: Fix ruff errors
* chore: Fix ruff errors
* chore: Fix ruff errors
* chore: Fix ruff errors
* chore: Fix ruff errors
* chore: Fix ruff errors
* chore: Fix ruff errors
* chore: Fix ruff errors
* chore: Fix ruff errors
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* chore: Fix ruff errors
* chore: Fix ruff errors
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update cellular_automata/game_of_life.py
Co-authored-by: Christian Clauss
* chore: Update ruff version in pre-commit
* chore: Fix ruff errors
* Update edmonds_karp_multiple_source_and_sink.py
* Update factorial.py
* Update primelib.py
* Update min_cost_string_conversion.py
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
.pre-commit-config.yaml | 2 +-
cellular_automata/game_of_life.py | 2 +-
data_structures/binary_tree/red_black_tree.py | 2 +-
data_structures/trie/radix_tree.py | 4 ++--
divide_and_conquer/convex_hull.py | 2 +-
...directed_and_undirected_(weighted)_graph.py | 18 +++++++++---------
.../edmonds_karp_multiple_source_and_sink.py | 2 +-
maths/factorial.py | 2 +-
maths/primelib.py | 2 +-
other/davisb_putnamb_logemannb_loveland.py | 2 +-
project_euler/problem_009/sol3.py | 16 ++++++++++------
quantum/ripple_adder_classic.py | 2 +-
strings/min_cost_string_conversion.py | 2 +-
web_programming/convert_number_to_words.py | 4 +---
14 files changed, 32 insertions(+), 30 deletions(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 13b955dd374f..5adf12cc70c5 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,7 @@ repos:
- id: auto-walrus
- repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.0.278
+ rev: v0.0.280
hooks:
- id: ruff
diff --git a/cellular_automata/game_of_life.py b/cellular_automata/game_of_life.py
index 3382af7b5db6..b69afdce03eb 100644
--- a/cellular_automata/game_of_life.py
+++ b/cellular_automata/game_of_life.py
@@ -98,7 +98,7 @@ def __judge_point(pt: bool, neighbours: list[list[bool]]) -> bool:
if pt:
if alive < 2:
state = False
- elif alive == 2 or alive == 3:
+ elif alive in {2, 3}:
state = True
elif alive > 3:
state = False
diff --git a/data_structures/binary_tree/red_black_tree.py b/data_structures/binary_tree/red_black_tree.py
index 3ebc8d63939b..4ebe0e927ca0 100644
--- a/data_structures/binary_tree/red_black_tree.py
+++ b/data_structures/binary_tree/red_black_tree.py
@@ -152,7 +152,7 @@ def _insert_repair(self) -> None:
self.grandparent.color = 1
self.grandparent._insert_repair()
- def remove(self, label: int) -> RedBlackTree:
+ def remove(self, label: int) -> RedBlackTree: # noqa: PLR0912
"""Remove label from this tree."""
if self.label == label:
if self.left and self.right:
diff --git a/data_structures/trie/radix_tree.py b/data_structures/trie/radix_tree.py
index 66890346ec2b..cf2f25c29f13 100644
--- a/data_structures/trie/radix_tree.py
+++ b/data_structures/trie/radix_tree.py
@@ -156,7 +156,7 @@ def delete(self, word: str) -> bool:
del self.nodes[word[0]]
# We merge the current node with its only child
if len(self.nodes) == 1 and not self.is_leaf:
- merging_node = list(self.nodes.values())[0]
+ merging_node = next(iter(self.nodes.values()))
self.is_leaf = merging_node.is_leaf
self.prefix += merging_node.prefix
self.nodes = merging_node.nodes
@@ -165,7 +165,7 @@ def delete(self, word: str) -> bool:
incoming_node.is_leaf = False
# If there is 1 edge, we merge it with its child
else:
- merging_node = list(incoming_node.nodes.values())[0]
+ merging_node = next(iter(incoming_node.nodes.values()))
incoming_node.is_leaf = merging_node.is_leaf
incoming_node.prefix += merging_node.prefix
incoming_node.nodes = merging_node.nodes
diff --git a/divide_and_conquer/convex_hull.py b/divide_and_conquer/convex_hull.py
index 1ad933417da6..1d1bf301def5 100644
--- a/divide_and_conquer/convex_hull.py
+++ b/divide_and_conquer/convex_hull.py
@@ -266,7 +266,7 @@ def convex_hull_bf(points: list[Point]) -> list[Point]:
points_left_of_ij = points_right_of_ij = False
ij_part_of_convex_hull = True
for k in range(n):
- if k != i and k != j:
+ if k not in {i, j}:
det_k = _det(points[i], points[j], points[k])
if det_k > 0:
diff --git a/graphs/directed_and_undirected_(weighted)_graph.py b/graphs/directed_and_undirected_(weighted)_graph.py
index b29485031083..8ca645fdace8 100644
--- a/graphs/directed_and_undirected_(weighted)_graph.py
+++ b/graphs/directed_and_undirected_(weighted)_graph.py
@@ -39,7 +39,7 @@ def dfs(self, s=-2, d=-1):
stack = []
visited = []
if s == -2:
- s = list(self.graph)[0]
+ s = next(iter(self.graph))
stack.append(s)
visited.append(s)
ss = s
@@ -87,7 +87,7 @@ def bfs(self, s=-2):
d = deque()
visited = []
if s == -2:
- s = list(self.graph)[0]
+ s = next(iter(self.graph))
d.append(s)
visited.append(s)
while d:
@@ -114,7 +114,7 @@ def topological_sort(self, s=-2):
stack = []
visited = []
if s == -2:
- s = list(self.graph)[0]
+ s = next(iter(self.graph))
stack.append(s)
visited.append(s)
ss = s
@@ -146,7 +146,7 @@ def topological_sort(self, s=-2):
def cycle_nodes(self):
stack = []
visited = []
- s = list(self.graph)[0]
+ s = next(iter(self.graph))
stack.append(s)
visited.append(s)
parent = -2
@@ -199,7 +199,7 @@ def cycle_nodes(self):
def has_cycle(self):
stack = []
visited = []
- s = list(self.graph)[0]
+ s = next(iter(self.graph))
stack.append(s)
visited.append(s)
parent = -2
@@ -305,7 +305,7 @@ def dfs(self, s=-2, d=-1):
stack = []
visited = []
if s == -2:
- s = list(self.graph)[0]
+ s = next(iter(self.graph))
stack.append(s)
visited.append(s)
ss = s
@@ -353,7 +353,7 @@ def bfs(self, s=-2):
d = deque()
visited = []
if s == -2:
- s = list(self.graph)[0]
+ s = next(iter(self.graph))
d.append(s)
visited.append(s)
while d:
@@ -371,7 +371,7 @@ def degree(self, u):
def cycle_nodes(self):
stack = []
visited = []
- s = list(self.graph)[0]
+ s = next(iter(self.graph))
stack.append(s)
visited.append(s)
parent = -2
@@ -424,7 +424,7 @@ def cycle_nodes(self):
def has_cycle(self):
stack = []
visited = []
- s = list(self.graph)[0]
+ s = next(iter(self.graph))
stack.append(s)
visited.append(s)
parent = -2
diff --git a/graphs/edmonds_karp_multiple_source_and_sink.py b/graphs/edmonds_karp_multiple_source_and_sink.py
index d0610804109f..5c774f4b812b 100644
--- a/graphs/edmonds_karp_multiple_source_and_sink.py
+++ b/graphs/edmonds_karp_multiple_source_and_sink.py
@@ -113,7 +113,7 @@ def _algorithm(self):
vertices_list = [
i
for i in range(self.verticies_count)
- if i != self.source_index and i != self.sink_index
+ if i not in {self.source_index, self.sink_index}
]
# move through list
diff --git a/maths/factorial.py b/maths/factorial.py
index bbf0efc011d8..18cacdef9b1f 100644
--- a/maths/factorial.py
+++ b/maths/factorial.py
@@ -55,7 +55,7 @@ def factorial_recursive(n: int) -> int:
raise ValueError("factorial() only accepts integral values")
if n < 0:
raise ValueError("factorial() not defined for negative values")
- return 1 if n == 0 or n == 1 else n * factorial(n - 1)
+ return 1 if n in {0, 1} else n * factorial(n - 1)
if __name__ == "__main__":
diff --git a/maths/primelib.py b/maths/primelib.py
index 81d5737063f0..28b5aee9dcc8 100644
--- a/maths/primelib.py
+++ b/maths/primelib.py
@@ -154,7 +154,7 @@ def prime_factorization(number):
quotient = number
- if number == 0 or number == 1:
+ if number in {0, 1}:
ans.append(number)
# if 'number' not prime then builds the prime factorization of 'number'
diff --git a/other/davisb_putnamb_logemannb_loveland.py b/other/davisb_putnamb_logemannb_loveland.py
index a1bea5b3992e..f5fb103ba528 100644
--- a/other/davisb_putnamb_logemannb_loveland.py
+++ b/other/davisb_putnamb_logemannb_loveland.py
@@ -253,7 +253,7 @@ def find_unit_clauses(
unit_symbols = []
for clause in clauses:
if len(clause) == 1:
- unit_symbols.append(list(clause.literals.keys())[0])
+ unit_symbols.append(next(iter(clause.literals.keys())))
else:
f_count, n_count = 0, 0
for literal, value in clause.literals.items():
diff --git a/project_euler/problem_009/sol3.py b/project_euler/problem_009/sol3.py
index d299f821d4f6..37340d3063bb 100644
--- a/project_euler/problem_009/sol3.py
+++ b/project_euler/problem_009/sol3.py
@@ -28,12 +28,16 @@ def solution() -> int:
31875000
"""
- return [
- a * b * (1000 - a - b)
- for a in range(1, 999)
- for b in range(a, 999)
- if (a * a + b * b == (1000 - a - b) ** 2)
- ][0]
+ return next(
+ iter(
+ [
+ a * b * (1000 - a - b)
+ for a in range(1, 999)
+ for b in range(a, 999)
+ if (a * a + b * b == (1000 - a - b) ** 2)
+ ]
+ )
+ )
if __name__ == "__main__":
diff --git a/quantum/ripple_adder_classic.py b/quantum/ripple_adder_classic.py
index b604395bc583..2284141ccac2 100644
--- a/quantum/ripple_adder_classic.py
+++ b/quantum/ripple_adder_classic.py
@@ -107,7 +107,7 @@ def ripple_adder(
res = qiskit.execute(circuit, backend, shots=1).result()
# The result is in binary. Convert it back to int
- return int(list(res.get_counts())[0], 2)
+ return int(next(iter(res.get_counts())), 2)
if __name__ == "__main__":
diff --git a/strings/min_cost_string_conversion.py b/strings/min_cost_string_conversion.py
index 089c2532f900..0fad0b88c370 100644
--- a/strings/min_cost_string_conversion.py
+++ b/strings/min_cost_string_conversion.py
@@ -61,7 +61,7 @@ def assemble_transformation(ops: list[list[str]], i: int, j: int) -> list[str]:
if i == 0 and j == 0:
return []
else:
- if ops[i][j][0] == "C" or ops[i][j][0] == "R":
+ if ops[i][j][0] in {"C", "R"}:
seq = assemble_transformation(ops, i - 1, j - 1)
seq.append(ops[i][j])
return seq
diff --git a/web_programming/convert_number_to_words.py b/web_programming/convert_number_to_words.py
index 1e293df9660c..dac9e3e38e7c 100644
--- a/web_programming/convert_number_to_words.py
+++ b/web_programming/convert_number_to_words.py
@@ -90,9 +90,7 @@ def convert(number: int) -> str:
else:
addition = ""
if counter in placevalue:
- if current == 0 and ((temp_num % 100) // 10) == 0:
- addition = ""
- else:
+ if current != 0 and ((temp_num % 100) // 10) != 0:
addition = placevalue[counter]
if ((temp_num % 100) // 10) == 1:
words = teens[current] + addition + words
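Editor's note: two idioms recur throughout this commit: `next(iter(x))` replaces `list(x)[0]`, and set membership replaces chained `==` comparisons. A minimal sketch of why ruff prefers them:
```python
# next(iter(...)) fetches the first element without building a full list;
# set membership collapses chained equality checks into one test.
d = {"a": 1, "b": 2}
assert next(iter(d)) == list(d)[0] == "a"   # same result, O(1) vs O(n)

alive = 3
assert (alive == 2 or alive == 3) == (alive in {2, 3})
```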
From f7531d9874e0dd3682bf0ed7ae408927e1fae472 Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Sat, 22 Jul 2023 03:11:04 -0700
Subject: [PATCH 119/808] Add note in `CONTRIBUTING.md` about not asking to be
assigned to issues (#8871)
* Add note in CONTRIBUTING.md about not asking to be assigned to issues
Add a paragraph to CONTRIBUTING.md explicitly asking contributors to not ask to be assigned to issues
* Update CONTRIBUTING.md
* Update CONTRIBUTING.md
---------
Co-authored-by: Christian Clauss
---
CONTRIBUTING.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 618cca868d83..4a1bb652738f 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -25,6 +25,8 @@ We appreciate any contribution, from fixing a grammar mistake in a comment to im
Your contribution will be tested by our [automated testing on GitHub Actions](https://github.com/TheAlgorithms/Python/actions) to save time and mental energy. After you have submitted your pull request, you should see the GitHub Actions tests start to run at the bottom of your submission page. If those tests fail, then click on the ___details___ button and try to read through the GitHub Actions output to understand the failure. If you do not understand, please leave a comment on your submission page and a community member will try to help.
+If you are interested in resolving an [open issue](https://github.com/TheAlgorithms/Python/issues), simply make a pull request with your proposed fix. __We do not assign issues in this repo__ so please do not ask for permission to work on an issue.
+
Please help us keep our issue list small by adding `Fixes #{$ISSUE_NUMBER}` to the description of pull requests that resolve open issues.
For example, if your pull request fixes issue #10, then please add the following to its description:
```
From 9e08c7726dee5b18585a76e54c71922ca96c0b3a Mon Sep 17 00:00:00 2001
From: Caeden Perelli-Harris
Date: Sat, 22 Jul 2023 13:34:19 +0300
Subject: [PATCH 120/808] Small docstring time complexity fix in
number_container _system (#8875)
* fix: Write time is O(log n) not O(n log n)
* chore: Update pre-commit ruff version
* revert: Undo previous commit
---
other/number_container_system.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/other/number_container_system.py b/other/number_container_system.py
index f547bc8a229e..6c95dd0a3544 100644
--- a/other/number_container_system.py
+++ b/other/number_container_system.py
@@ -1,6 +1,6 @@
"""
A number container system that uses binary search to delete and insert values into
-arrays with O(n logn) write times and O(1) read times.
+arrays with O(log n) write times and O(1) read times.
This container system holds integers at indexes.
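Editor's note: the "binary search" the docstring refers to is the standard bisect pattern; a minimal sketch with the standard library. Locating the slot takes O(log n) comparisons, while `list.insert` still moves trailing elements when it writes:
```python
import bisect

indexes = [2, 5, 9]
bisect.insort(indexes, 7)  # binary search for the slot, then insert
print(indexes)             # [2, 5, 7, 9]
```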
From a03b739d23b59890b59d2d2288ebaa56e3be47ce Mon Sep 17 00:00:00 2001
From: Sangmin Jeon
Date: Mon, 24 Jul 2023 18:29:05 +0900
Subject: [PATCH 121/808] Fix `radix_tree.py` insertion fail in ["*X", "*XX"]
cases (#8870)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* Fix insertion fail in ["*X", "*XX"] cases
Consider a word, and a copy of that word, but with the last letter repeating twice. (e.g., ["ABC", "ABCC"])
When adding the second word's last letter, the code only compares the previous word's prefix (the last letter of the word already in the Radix Tree: 'C') with the letter to be added (the last letter of the word we're currently adding: 'C'). So it wrongly passes the "Case 1" check, marks the current node as a leaf node when it already was one, then returns when there's still one more letter to add.
The issue arises because `prefix` includes the letter of the node itself. (e.g., `nodes: {'C' : RadixNode()}, is_leaf: True, prefix: 'C'`) It can be easily fixed by simply adding the `is_leaf` check, asking if there are more letters to be added.
- Test Case: `"A AA AAA AAAA"`
- Fixed correct output:
```
Words: ['A', 'AA', 'AAA', 'AAAA']
Tree:
- A (leaf)
-- A (leaf)
--- A (leaf)
---- A (leaf)
```
- Current incorrect output:
```
Words: ['A', 'AA', 'AAA', 'AAAA']
Tree:
- A (leaf)
-- AA (leaf)
--- A (leaf)
```
*N.B.* This passed test cases for [Croatian Open Competition in Informatics 2012/2013 Contest #3 Task 5 HERKABE](https://hsin.hr/coci/archive/2012_2013/)
* Add a doctest for previous fix
* improve doctest readability
---
data_structures/trie/radix_tree.py | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/data_structures/trie/radix_tree.py b/data_structures/trie/radix_tree.py
index cf2f25c29f13..fadc50cb49a7 100644
--- a/data_structures/trie/radix_tree.py
+++ b/data_structures/trie/radix_tree.py
@@ -54,10 +54,17 @@ def insert(self, word: str) -> None:
word (str): word to insert
>>> RadixNode("myprefix").insert("mystring")
+
+ >>> root = RadixNode()
+ >>> root.insert_many(['myprefix', 'myprefixA', 'myprefixAA'])
+ >>> root.print_tree()
+ - myprefix (leaf)
+ -- A (leaf)
+ --- A (leaf)
"""
# Case 1: If the word is the prefix of the node
# Solution: We set the current node as leaf
- if self.prefix == word:
+ if self.prefix == word and not self.is_leaf:
self.is_leaf = True
# Case 2: The node has no edges that have a prefix to the word
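Editor's note: a reproduction sketch of the reported failure mode (the import path is hypothetical); before the fix, the second insert returned early and the longer word was lost:
```python
from data_structures.trie.radix_tree import RadixNode

root = RadixNode()
root.insert_many(["ABC", "ABCC"])  # a word plus the word with its last letter doubled
assert root.find("ABC") and root.find("ABCC")  # both are found after the fix
```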
From b77e6adf3abba674eb83ab7c0182bd6c89c08891 Mon Sep 17 00:00:00 2001
From: HManiac74 <63391783+HManiac74@users.noreply.github.com>
Date: Tue, 25 Jul 2023 22:23:20 +0200
Subject: [PATCH 122/808] Add Docker devcontainer configuration files (#8887)
* Added Docker container configuration files
* Update Dockerfile
Copy and install requirements
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Updated Docker devcontainer configuration
* Update requirements.txt
* Update Dockerfile
* Update Dockerfile
* Update .devcontainer/devcontainer.json
Co-authored-by: Christian Clauss
* Update Dockerfile
* Update Dockerfile. Add linebreak
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
.devcontainer/Dockerfile | 6 +++++
.devcontainer/devcontainer.json | 42 +++++++++++++++++++++++++++++++++
2 files changed, 48 insertions(+)
create mode 100644 .devcontainer/Dockerfile
create mode 100644 .devcontainer/devcontainer.json
diff --git a/.devcontainer/Dockerfile b/.devcontainer/Dockerfile
new file mode 100644
index 000000000000..27b25c09b1c9
--- /dev/null
+++ b/.devcontainer/Dockerfile
@@ -0,0 +1,6 @@
+# https://github.com/microsoft/vscode-dev-containers/blob/main/containers/python-3/README.md
+ARG VARIANT=3.11-bookworm
+FROM mcr.microsoft.com/vscode/devcontainers/python:${VARIANT}
+COPY requirements.txt /tmp/pip-tmp/
+RUN python3 -m pip install --upgrade pip \
+ && python3 -m pip install --no-cache-dir ruff -r /tmp/pip-tmp/requirements.txt
diff --git a/.devcontainer/devcontainer.json b/.devcontainer/devcontainer.json
new file mode 100644
index 000000000000..c5a855b2550c
--- /dev/null
+++ b/.devcontainer/devcontainer.json
@@ -0,0 +1,42 @@
+{
+ "name": "Python 3",
+ "build": {
+ "dockerfile": "Dockerfile",
+ "context": "..",
+ "args": {
+ // Update 'VARIANT' to pick a Python version: 3, 3.10, 3.9, 3.8, 3.7, 3.6
+ // Append -bullseye or -buster to pin to an OS version.
+ // Use -bullseye variants locally on arm64/Apple Silicon.
+ "VARIANT": "3.11-bookworm",
+ }
+ },
+
+ // Configure tool-specific properties.
+ "customizations": {
+ // Configure properties specific to VS Code.
+ "vscode": {
+ // Set *default* container specific settings.json values on container create.
+ "settings": {
+ "python.defaultInterpreterPath": "/usr/local/bin/python",
+ "python.linting.enabled": true,
+ "python.formatting.blackPath": "/usr/local/py-utils/bin/black",
+ "python.linting.mypyPath": "/usr/local/py-utils/bin/mypy"
+ },
+
+ // Add the IDs of extensions you want installed when the container is created.
+ "extensions": [
+ "ms-python.python",
+ "ms-python.vscode-pylance"
+ ]
+ }
+ },
+
+ // Use 'forwardPorts' to make a list of ports inside the container available locally.
+ // "forwardPorts": [],
+
+ // Use 'postCreateCommand' to run commands after the container is created.
+ // "postCreateCommand": "pip3 install --user -r requirements.txt",
+
+ // Comment out to connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
+ "remoteUser": "vscode"
+}
From dbaff345724040b270b3097cb02759f36ce0ef46 Mon Sep 17 00:00:00 2001
From: Christian Clauss
Date: Fri, 28 Jul 2023 18:53:09 +0200
Subject: [PATCH 123/808] Fix ruff rules ISC flake8-implicit-str-concat (#8892)
---
ciphers/diffie_hellman.py | 244 ++++++++++++-------------
compression/burrows_wheeler.py | 2 +-
neural_network/input_data.py | 4 +-
pyproject.toml | 2 +-
strings/is_srilankan_phone_number.py | 4 +-
web_programming/world_covid19_stats.py | 5 +-
6 files changed, 128 insertions(+), 133 deletions(-)
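Editor's note: the ISC rules favor implicit concatenation because adjacent string literals are fused into a single constant at compile time, so the runtime `+` operations the old code performed were pure overhead. A minimal sketch:
```python
# Adjacent literals are folded by the compiler into one constant string.
prime_prefix = (
    "FFFFFFFFFFFFFFFF"
    "C90FDAA22168C234"  # no runtime "+" happens here
)
assert prime_prefix == "FFFFFFFFFFFFFFFF" + "C90FDAA22168C234"
```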
diff --git a/ciphers/diffie_hellman.py b/ciphers/diffie_hellman.py
index cd40a6b9c3b3..aec7fb3eaf17 100644
--- a/ciphers/diffie_hellman.py
+++ b/ciphers/diffie_hellman.py
@@ -10,13 +10,13 @@
5: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
- + "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
- + "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
- + "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
- + "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
- + "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
- + "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
- + "670C354E4ABC9804F1746C08CA237327FFFFFFFFFFFFFFFF",
+ "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
+ "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
+ "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
+ "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
+ "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
+ "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA237327FFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,
@@ -25,16 +25,16 @@
14: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
- + "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
- + "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
- + "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
- + "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
- + "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
- + "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
- + "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
- + "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
- + "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
- + "15728E5A8AACAA68FFFFFFFFFFFFFFFF",
+ "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
+ "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
+ "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
+ "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
+ "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
+ "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
+ "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
+ "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
+ "15728E5A8AACAA68FFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,
@@ -43,21 +43,21 @@
15: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
- + "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
- + "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
- + "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
- + "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
- + "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
- + "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
- + "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
- + "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
- + "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
- + "15728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64"
- + "ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7"
- + "ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6B"
- + "F12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
- + "BBE117577A615D6C770988C0BAD946E208E24FA074E5AB31"
- + "43DB5BFCE0FD108E4B82D120A93AD2CAFFFFFFFFFFFFFFFF",
+ "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
+ "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
+ "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
+ "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
+ "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
+ "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
+ "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
+ "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
+ "15728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64"
+ "ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7"
+ "ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6B"
+ "F12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
+ "BBE117577A615D6C770988C0BAD946E208E24FA074E5AB31"
+ "43DB5BFCE0FD108E4B82D120A93AD2CAFFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,
@@ -66,27 +66,27 @@
16: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
- + "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
- + "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
- + "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
- + "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
- + "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
- + "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
- + "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
- + "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
- + "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
- + "15728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64"
- + "ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7"
- + "ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6B"
- + "F12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
- + "BBE117577A615D6C770988C0BAD946E208E24FA074E5AB31"
- + "43DB5BFCE0FD108E4B82D120A92108011A723C12A787E6D7"
- + "88719A10BDBA5B2699C327186AF4E23C1A946834B6150BDA"
- + "2583E9CA2AD44CE8DBBBC2DB04DE8EF92E8EFC141FBECAA6"
- + "287C59474E6BC05D99B2964FA090C3A2233BA186515BE7ED"
- + "1F612970CEE2D7AFB81BDD762170481CD0069127D5B05AA9"
- + "93B4EA988D8FDDC186FFB7DC90A6C08F4DF435C934063199"
- + "FFFFFFFFFFFFFFFF",
+ "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
+ "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
+ "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
+ "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
+ "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
+ "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
+ "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
+ "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
+ "15728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64"
+ "ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7"
+ "ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6B"
+ "F12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
+ "BBE117577A615D6C770988C0BAD946E208E24FA074E5AB31"
+ "43DB5BFCE0FD108E4B82D120A92108011A723C12A787E6D7"
+ "88719A10BDBA5B2699C327186AF4E23C1A946834B6150BDA"
+ "2583E9CA2AD44CE8DBBBC2DB04DE8EF92E8EFC141FBECAA6"
+ "287C59474E6BC05D99B2964FA090C3A2233BA186515BE7ED"
+ "1F612970CEE2D7AFB81BDD762170481CD0069127D5B05AA9"
+ "93B4EA988D8FDDC186FFB7DC90A6C08F4DF435C934063199"
+ "FFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,
@@ -95,33 +95,33 @@
17: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E08"
- + "8A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B"
- + "302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9"
- + "A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE6"
- + "49286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8"
- + "FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D"
- + "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C"
- + "180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF695581718"
- + "3995497CEA956AE515D2261898FA051015728E5A8AAAC42DAD33170D"
- + "04507A33A85521ABDF1CBA64ECFB850458DBEF0A8AEA71575D060C7D"
- + "B3970F85A6E1E4C7ABF5AE8CDB0933D71E8C94E04A25619DCEE3D226"
- + "1AD2EE6BF12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
- + "BBE117577A615D6C770988C0BAD946E208E24FA074E5AB3143DB5BFC"
- + "E0FD108E4B82D120A92108011A723C12A787E6D788719A10BDBA5B26"
- + "99C327186AF4E23C1A946834B6150BDA2583E9CA2AD44CE8DBBBC2DB"
- + "04DE8EF92E8EFC141FBECAA6287C59474E6BC05D99B2964FA090C3A2"
- + "233BA186515BE7ED1F612970CEE2D7AFB81BDD762170481CD0069127"
- + "D5B05AA993B4EA988D8FDDC186FFB7DC90A6C08F4DF435C934028492"
- + "36C3FAB4D27C7026C1D4DCB2602646DEC9751E763DBA37BDF8FF9406"
- + "AD9E530EE5DB382F413001AEB06A53ED9027D831179727B0865A8918"
- + "DA3EDBEBCF9B14ED44CE6CBACED4BB1BDB7F1447E6CC254B33205151"
- + "2BD7AF426FB8F401378CD2BF5983CA01C64B92ECF032EA15D1721D03"
- + "F482D7CE6E74FEF6D55E702F46980C82B5A84031900B1C9E59E7C97F"
- + "BEC7E8F323A97A7E36CC88BE0F1D45B7FF585AC54BD407B22B4154AA"
- + "CC8F6D7EBF48E1D814CC5ED20F8037E0A79715EEF29BE32806A1D58B"
- + "B7C5DA76F550AA3D8A1FBFF0EB19CCB1A313D55CDA56C9EC2EF29632"
- + "387FE8D76E3C0468043E8F663F4860EE12BF2D5B0B7474D6E694F91E"
- + "6DCC4024FFFFFFFFFFFFFFFF",
+ "8A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B"
+ "302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9"
+ "A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE6"
+ "49286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8"
+ "FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C"
+ "180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF695581718"
+ "3995497CEA956AE515D2261898FA051015728E5A8AAAC42DAD33170D"
+ "04507A33A85521ABDF1CBA64ECFB850458DBEF0A8AEA71575D060C7D"
+ "B3970F85A6E1E4C7ABF5AE8CDB0933D71E8C94E04A25619DCEE3D226"
+ "1AD2EE6BF12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
+ "BBE117577A615D6C770988C0BAD946E208E24FA074E5AB3143DB5BFC"
+ "E0FD108E4B82D120A92108011A723C12A787E6D788719A10BDBA5B26"
+ "99C327186AF4E23C1A946834B6150BDA2583E9CA2AD44CE8DBBBC2DB"
+ "04DE8EF92E8EFC141FBECAA6287C59474E6BC05D99B2964FA090C3A2"
+ "233BA186515BE7ED1F612970CEE2D7AFB81BDD762170481CD0069127"
+ "D5B05AA993B4EA988D8FDDC186FFB7DC90A6C08F4DF435C934028492"
+ "36C3FAB4D27C7026C1D4DCB2602646DEC9751E763DBA37BDF8FF9406"
+ "AD9E530EE5DB382F413001AEB06A53ED9027D831179727B0865A8918"
+ "DA3EDBEBCF9B14ED44CE6CBACED4BB1BDB7F1447E6CC254B33205151"
+ "2BD7AF426FB8F401378CD2BF5983CA01C64B92ECF032EA15D1721D03"
+ "F482D7CE6E74FEF6D55E702F46980C82B5A84031900B1C9E59E7C97F"
+ "BEC7E8F323A97A7E36CC88BE0F1D45B7FF585AC54BD407B22B4154AA"
+ "CC8F6D7EBF48E1D814CC5ED20F8037E0A79715EEF29BE32806A1D58B"
+ "B7C5DA76F550AA3D8A1FBFF0EB19CCB1A313D55CDA56C9EC2EF29632"
+ "387FE8D76E3C0468043E8F663F4860EE12BF2D5B0B7474D6E694F91E"
+ "6DCC4024FFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,
@@ -130,48 +130,48 @@
18: {
"prime": int(
"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
- + "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
- + "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
- + "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
- + "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
- + "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
- + "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
- + "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
- + "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
- + "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
- + "15728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64"
- + "ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7"
- + "ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6B"
- + "F12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
- + "BBE117577A615D6C770988C0BAD946E208E24FA074E5AB31"
- + "43DB5BFCE0FD108E4B82D120A92108011A723C12A787E6D7"
- + "88719A10BDBA5B2699C327186AF4E23C1A946834B6150BDA"
- + "2583E9CA2AD44CE8DBBBC2DB04DE8EF92E8EFC141FBECAA6"
- + "287C59474E6BC05D99B2964FA090C3A2233BA186515BE7ED"
- + "1F612970CEE2D7AFB81BDD762170481CD0069127D5B05AA9"
- + "93B4EA988D8FDDC186FFB7DC90A6C08F4DF435C934028492"
- + "36C3FAB4D27C7026C1D4DCB2602646DEC9751E763DBA37BD"
- + "F8FF9406AD9E530EE5DB382F413001AEB06A53ED9027D831"
- + "179727B0865A8918DA3EDBEBCF9B14ED44CE6CBACED4BB1B"
- + "DB7F1447E6CC254B332051512BD7AF426FB8F401378CD2BF"
- + "5983CA01C64B92ECF032EA15D1721D03F482D7CE6E74FEF6"
- + "D55E702F46980C82B5A84031900B1C9E59E7C97FBEC7E8F3"
- + "23A97A7E36CC88BE0F1D45B7FF585AC54BD407B22B4154AA"
- + "CC8F6D7EBF48E1D814CC5ED20F8037E0A79715EEF29BE328"
- + "06A1D58BB7C5DA76F550AA3D8A1FBFF0EB19CCB1A313D55C"
- + "DA56C9EC2EF29632387FE8D76E3C0468043E8F663F4860EE"
- + "12BF2D5B0B7474D6E694F91E6DBE115974A3926F12FEE5E4"
- + "38777CB6A932DF8CD8BEC4D073B931BA3BC832B68D9DD300"
- + "741FA7BF8AFC47ED2576F6936BA424663AAB639C5AE4F568"
- + "3423B4742BF1C978238F16CBE39D652DE3FDB8BEFC848AD9"
- + "22222E04A4037C0713EB57A81A23F0C73473FC646CEA306B"
- + "4BCBC8862F8385DDFA9D4B7FA2C087E879683303ED5BDD3A"
- + "062B3CF5B3A278A66D2A13F83F44F82DDF310EE074AB6A36"
- + "4597E899A0255DC164F31CC50846851DF9AB48195DED7EA1"
- + "B1D510BD7EE74D73FAF36BC31ECFA268359046F4EB879F92"
- + "4009438B481C6CD7889A002ED5EE382BC9190DA6FC026E47"
- + "9558E4475677E9AA9E3050E2765694DFC81F56E880B96E71"
- + "60C980DD98EDD3DFFFFFFFFFFFFFFFFF",
+ "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
+ "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
+ "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
+ "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
+ "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
+ "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
+ "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
+ "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
+ "15728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64"
+ "ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7"
+ "ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6B"
+ "F12FFA06D98A0864D87602733EC86A64521F2B18177B200C"
+ "BBE117577A615D6C770988C0BAD946E208E24FA074E5AB31"
+ "43DB5BFCE0FD108E4B82D120A92108011A723C12A787E6D7"
+ "88719A10BDBA5B2699C327186AF4E23C1A946834B6150BDA"
+ "2583E9CA2AD44CE8DBBBC2DB04DE8EF92E8EFC141FBECAA6"
+ "287C59474E6BC05D99B2964FA090C3A2233BA186515BE7ED"
+ "1F612970CEE2D7AFB81BDD762170481CD0069127D5B05AA9"
+ "93B4EA988D8FDDC186FFB7DC90A6C08F4DF435C934028492"
+ "36C3FAB4D27C7026C1D4DCB2602646DEC9751E763DBA37BD"
+ "F8FF9406AD9E530EE5DB382F413001AEB06A53ED9027D831"
+ "179727B0865A8918DA3EDBEBCF9B14ED44CE6CBACED4BB1B"
+ "DB7F1447E6CC254B332051512BD7AF426FB8F401378CD2BF"
+ "5983CA01C64B92ECF032EA15D1721D03F482D7CE6E74FEF6"
+ "D55E702F46980C82B5A84031900B1C9E59E7C97FBEC7E8F3"
+ "23A97A7E36CC88BE0F1D45B7FF585AC54BD407B22B4154AA"
+ "CC8F6D7EBF48E1D814CC5ED20F8037E0A79715EEF29BE328"
+ "06A1D58BB7C5DA76F550AA3D8A1FBFF0EB19CCB1A313D55C"
+ "DA56C9EC2EF29632387FE8D76E3C0468043E8F663F4860EE"
+ "12BF2D5B0B7474D6E694F91E6DBE115974A3926F12FEE5E4"
+ "38777CB6A932DF8CD8BEC4D073B931BA3BC832B68D9DD300"
+ "741FA7BF8AFC47ED2576F6936BA424663AAB639C5AE4F568"
+ "3423B4742BF1C978238F16CBE39D652DE3FDB8BEFC848AD9"
+ "22222E04A4037C0713EB57A81A23F0C73473FC646CEA306B"
+ "4BCBC8862F8385DDFA9D4B7FA2C087E879683303ED5BDD3A"
+ "062B3CF5B3A278A66D2A13F83F44F82DDF310EE074AB6A36"
+ "4597E899A0255DC164F31CC50846851DF9AB48195DED7EA1"
+ "B1D510BD7EE74D73FAF36BC31ECFA268359046F4EB879F92"
+ "4009438B481C6CD7889A002ED5EE382BC9190DA6FC026E47"
+ "9558E4475677E9AA9E3050E2765694DFC81F56E880B96E71"
+ "60C980DD98EDD3DFFFFFFFFFFFFFFFFF",
base=16,
),
"generator": 2,
diff --git a/compression/burrows_wheeler.py b/compression/burrows_wheeler.py
index 0916b8a654d2..52bb045d9398 100644
--- a/compression/burrows_wheeler.py
+++ b/compression/burrows_wheeler.py
@@ -150,7 +150,7 @@ def reverse_bwt(bwt_string: str, idx_original_string: int) -> str:
raise ValueError("The parameter idx_original_string must not be lower than 0.")
if idx_original_string >= len(bwt_string):
raise ValueError(
- "The parameter idx_original_string must be lower than" " len(bwt_string)."
+ "The parameter idx_original_string must be lower than len(bwt_string)."
)
ordered_rotations = [""] * len(bwt_string)
diff --git a/neural_network/input_data.py b/neural_network/input_data.py
index 94c018ece9ba..a58e64907e45 100644
--- a/neural_network/input_data.py
+++ b/neural_network/input_data.py
@@ -263,9 +263,7 @@ def _maybe_download(filename, work_directory, source_url):
return filepath
-@deprecated(
- None, "Please use alternatives such as:" " tensorflow_datasets.load('mnist')"
-)
+@deprecated(None, "Please use alternatives such as: tensorflow_datasets.load('mnist')")
def read_data_sets(
train_dir,
fake_data=False,
diff --git a/pyproject.toml b/pyproject.toml
index 4f21a95190da..f9091fb8578d 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -49,6 +49,7 @@ select = [ # https://beta.ruff.rs/docs/rules
"ICN", # flake8-import-conventions
"INP", # flake8-no-pep420
"INT", # flake8-gettext
+ "ISC", # flake8-implicit-str-concat
"N", # pep8-naming
"NPY", # NumPy-specific rules
"PGH", # pygrep-hooks
@@ -72,7 +73,6 @@ select = [ # https://beta.ruff.rs/docs/rules
# "DJ", # flake8-django
# "ERA", # eradicate -- DO NOT FIX
# "FBT", # flake8-boolean-trap # FIX ME
- # "ISC", # flake8-implicit-str-concat # FIX ME
# "PD", # pandas-vet
# "PT", # flake8-pytest-style
# "PTH", # flake8-use-pathlib # FIX ME
diff --git a/strings/is_srilankan_phone_number.py b/strings/is_srilankan_phone_number.py
index 7bded93f7f1d..6456f85e1a3d 100644
--- a/strings/is_srilankan_phone_number.py
+++ b/strings/is_srilankan_phone_number.py
@@ -22,9 +22,7 @@ def is_sri_lankan_phone_number(phone: str) -> bool:
False
"""
- pattern = re.compile(
- r"^(?:0|94|\+94|0{2}94)" r"7(0|1|2|4|5|6|7|8)" r"(-| |)" r"\d{7}$"
- )
+ pattern = re.compile(r"^(?:0|94|\+94|0{2}94)7(0|1|2|4|5|6|7|8)(-| |)\d{7}$")
return bool(re.search(pattern, phone))
diff --git a/web_programming/world_covid19_stats.py b/web_programming/world_covid19_stats.py
index 1dd1ff6d188e..ca81abdc4ce9 100644
--- a/web_programming/world_covid19_stats.py
+++ b/web_programming/world_covid19_stats.py
@@ -22,6 +22,5 @@ def world_covid19_stats(url: str = "https://www.worldometers.info/coronavirus")
if __name__ == "__main__":
- print("\033[1m" + "COVID-19 Status of the World" + "\033[0m\n")
- for key, value in world_covid19_stats().items():
- print(f"{key}\n{value}\n")
+ print("\033[1m COVID-19 Status of the World \033[0m\n")
+ print("\n".join(f"{key}\n{value}" for key, value in world_covid19_stats().items()))
From 46454e204cc587d1ef044e4b1a11050c30aab4f6 Mon Sep 17 00:00:00 2001
From: Christian Clauss
Date: Fri, 28 Jul 2023 18:54:45 +0200
Subject: [PATCH 124/808] [skip-ci] In .devcontainer/Dockerfile: pipx install
pre-commit ruff (#8893)
[skip-ci] In .devcontainer/Dockerfile: pipx install pre-commit ruff
---
.devcontainer/Dockerfile | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/.devcontainer/Dockerfile b/.devcontainer/Dockerfile
index 27b25c09b1c9..b5a5347c66b0 100644
--- a/.devcontainer/Dockerfile
+++ b/.devcontainer/Dockerfile
@@ -3,4 +3,6 @@ ARG VARIANT=3.11-bookworm
FROM mcr.microsoft.com/vscode/devcontainers/python:${VARIANT}
COPY requirements.txt /tmp/pip-tmp/
RUN python3 -m pip install --upgrade pip \
-    && python3 -m pip install --no-cache-dir ruff -r /tmp/pip-tmp/requirements.txt
+    && python3 -m pip install --no-cache-dir -r /tmp/pip-tmp/requirements.txt \
+ && pipx install pre-commit ruff \
+ && pre-commit install
From 4a83e3f0b1b2a3b414134c3498e57c0fea3b9fcf Mon Sep 17 00:00:00 2001
From: Caeden Perelli-Harris
Date: Fri, 28 Jul 2023 21:12:31 +0300
Subject: [PATCH 125/808] Fix failing build due to missing requirement (#8900)
* feat(cellular_automata): Create wa-tor algorithm
* updating DIRECTORY.md
* chore(quality): Implement algo-keeper bot changes
* build: Fix broken ci
* git rm cellular_automata/wa_tor.py
* updating DIRECTORY.md
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
requirements.txt | 1 +
1 file changed, 1 insertion(+)
diff --git a/requirements.txt b/requirements.txt
index acfbc823e77f..2702523d542e 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -9,6 +9,7 @@ pandas
pillow
projectq
qiskit
+qiskit-aer
requests
rich
scikit-fuzzy
From e406801f9e3967ff0533dfe8cb98a3249db48d33 Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Fri, 28 Jul 2023 11:17:46 -0700
Subject: [PATCH 126/808] Reimplement polynomial_regression.py (#8889)
* Reimplement polynomial_regression.py
Rename machine_learning/polymonial_regression.py to
machine_learning/polynomial_regression.py
Reimplement machine_learning/polynomial_regression.py using numpy
because the original implementation was just a how-to on doing
polynomial regression using sklearn
Add detailed function documentation, doctests, and algorithm
explanation
* updating DIRECTORY.md
* Fix matrix formatting in docstrings
* Try to fix failing doctest
* Debugging failing doctest
* Fix failing doctest attempt 2
* Remove unnecessary return value descriptions in docstrings
* Readd placeholder doctest for main function
* Fix typo in algorithm description
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 2 +-
machine_learning/polymonial_regression.py | 44 -----
machine_learning/polynomial_regression.py | 213 ++++++++++++++++++++++
3 files changed, 214 insertions(+), 45 deletions(-)
delete mode 100644 machine_learning/polymonial_regression.py
create mode 100644 machine_learning/polynomial_regression.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 77938f45011b..133a1ab019d8 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -511,7 +511,7 @@
* Lstm
* [Lstm Prediction](machine_learning/lstm/lstm_prediction.py)
* [Multilayer Perceptron Classifier](machine_learning/multilayer_perceptron_classifier.py)
- * [Polymonial Regression](machine_learning/polymonial_regression.py)
+ * [Polynomial Regression](machine_learning/polynomial_regression.py)
* [Scoring Functions](machine_learning/scoring_functions.py)
* [Self Organizing Map](machine_learning/self_organizing_map.py)
* [Sequential Minimum Optimization](machine_learning/sequential_minimum_optimization.py)
diff --git a/machine_learning/polymonial_regression.py b/machine_learning/polymonial_regression.py
deleted file mode 100644
index 487fb814526f..000000000000
--- a/machine_learning/polymonial_regression.py
+++ /dev/null
@@ -1,44 +0,0 @@
-import pandas as pd
-from matplotlib import pyplot as plt
-from sklearn.linear_model import LinearRegression
-
-# Splitting the dataset into the Training set and Test set
-from sklearn.model_selection import train_test_split
-
-# Fitting Polynomial Regression to the dataset
-from sklearn.preprocessing import PolynomialFeatures
-
-# Importing the dataset
-dataset = pd.read_csv(
- "https://s3.us-west-2.amazonaws.com/public.gamelab.fun/dataset/"
- "position_salaries.csv"
-)
-X = dataset.iloc[:, 1:2].values
-y = dataset.iloc[:, 2].values
-
-
-X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
-
-
-poly_reg = PolynomialFeatures(degree=4)
-X_poly = poly_reg.fit_transform(X)
-pol_reg = LinearRegression()
-pol_reg.fit(X_poly, y)
-
-
-# Visualizing the Polymonial Regression results
-def viz_polymonial():
- plt.scatter(X, y, color="red")
- plt.plot(X, pol_reg.predict(poly_reg.fit_transform(X)), color="blue")
- plt.title("Truth or Bluff (Linear Regression)")
- plt.xlabel("Position level")
- plt.ylabel("Salary")
- plt.show()
-
-
-if __name__ == "__main__":
- viz_polymonial()
-
- # Predicting a new result with Polymonial Regression
- pol_reg.predict(poly_reg.fit_transform([[5.5]]))
- # output should be 132148.43750003
diff --git a/machine_learning/polynomial_regression.py b/machine_learning/polynomial_regression.py
new file mode 100644
index 000000000000..5bafea96f41e
--- /dev/null
+++ b/machine_learning/polynomial_regression.py
@@ -0,0 +1,213 @@
+"""
+Polynomial regression is a type of regression analysis that models the relationship
+between a predictor x and the response y as an mth-degree polynomial:
+
+y = β₀ + β₁x + β₂x² + ... + βₘxᵐ + ε
+
+By treating x, x², ..., xᵐ as distinct variables, we see that polynomial regression is a
+special case of multiple linear regression. Therefore, we can use ordinary least squares
+(OLS) estimation to estimate the vector of model parameters β = (β₀, β₁, β₂, ..., βₘ)
+for polynomial regression:
+
+β = (XᵀX)⁻¹Xᵀy = X⁺y
+
+where X is the design matrix, y is the response vector, and X⁺ denotes the Moore–Penrose
+pseudoinverse of X. In the case of polynomial regression, the design matrix is
+
+ |1 x₁ x₁² ⋯ x₁ᵐ|
+X = |1 x₂ x₂² ⋯ x₂ᵐ|
+ |⋮ ⋮ ⋮ ⋱ ⋮ |
+ |1 xₙ xₙ² ⋯ xₙᵐ|
+
+In OLS estimation, inverting XᵀX to compute X⁺ can be very numerically unstable. This
+implementation sidesteps this need to invert XᵀX by computing X⁺ using singular value
+decomposition (SVD):
+
+β = VΣ⁺Uᵀy
+
+where UΣVᵀ is an SVD of X.
+
+References:
+ - https://en.wikipedia.org/wiki/Polynomial_regression
+ - https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse
+ - https://en.wikipedia.org/wiki/Numerical_methods_for_linear_least_squares
+ - https://en.wikipedia.org/wiki/Singular_value_decomposition
+"""
+
+import matplotlib.pyplot as plt
+import numpy as np
+
+
+class PolynomialRegression:
+ __slots__ = "degree", "params"
+
+ def __init__(self, degree: int) -> None:
+ """
+ @raises ValueError: if the polynomial degree is negative
+ """
+ if degree < 0:
+ raise ValueError("Polynomial degree must be non-negative")
+
+ self.degree = degree
+ self.params = None
+
+ @staticmethod
+ def _design_matrix(data: np.ndarray, degree: int) -> np.ndarray:
+ """
+ Constructs a polynomial regression design matrix for the given input data. For
+ input data x = (x₁, x₂, ..., xₙ) and polynomial degree m, the design matrix is
+ the Vandermonde matrix
+
+ |1 x₁ x₁² ⋯ x₁ᵐ|
+ X = |1 x₂ x₂² ⋯ x₂ᵐ|
+ |⋮ ⋮ ⋮ ⋱ ⋮ |
+ |1 xₙ xₙ² ⋯ xₙᵐ|
+
+ Reference: https://en.wikipedia.org/wiki/Vandermonde_matrix
+
+ @param data: the input predictor values x, either for model fitting or for
+ prediction
+ @param degree: the polynomial degree m
+ @returns: the Vandermonde matrix X (see above)
+ @raises ValueError: if input data is not N x 1
+
+ >>> x = np.array([0, 1, 2])
+ >>> PolynomialRegression._design_matrix(x, degree=0)
+ array([[1],
+ [1],
+ [1]])
+ >>> PolynomialRegression._design_matrix(x, degree=1)
+ array([[1, 0],
+ [1, 1],
+ [1, 2]])
+ >>> PolynomialRegression._design_matrix(x, degree=2)
+ array([[1, 0, 0],
+ [1, 1, 1],
+ [1, 2, 4]])
+ >>> PolynomialRegression._design_matrix(x, degree=3)
+ array([[1, 0, 0, 0],
+ [1, 1, 1, 1],
+ [1, 2, 4, 8]])
+ >>> PolynomialRegression._design_matrix(np.array([[0, 0], [0 , 0]]), degree=3)
+ Traceback (most recent call last):
+ ...
+ ValueError: Data must have dimensions N x 1
+ """
+ rows, *remaining = data.shape
+ if remaining:
+ raise ValueError("Data must have dimensions N x 1")
+
+ return np.vander(data, N=degree + 1, increasing=True)
+
+ def fit(self, x_train: np.ndarray, y_train: np.ndarray) -> None:
+ """
+ Computes the polynomial regression model parameters using ordinary least squares
+ (OLS) estimation:
+
+ β = (XᵀX)⁻¹Xᵀy = X⁺y
+
+ where X⁺ denotes the Moore–Penrose pseudoinverse of the design matrix X. This
+ function computes X⁺ using singular value decomposition (SVD).
+
+ References:
+ - https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse
+ - https://en.wikipedia.org/wiki/Singular_value_decomposition
+ - https://en.wikipedia.org/wiki/Multicollinearity
+
+ @param x_train: the predictor values x for model fitting
+ @param y_train: the response values y for model fitting
+ @raises ArithmeticError: if X isn't full rank, then XᵀX is singular and β
+ doesn't exist
+
+ >>> x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
+ >>> y = x**3 - 2 * x**2 + 3 * x - 5
+ >>> poly_reg = PolynomialRegression(degree=3)
+ >>> poly_reg.fit(x, y)
+ >>> poly_reg.params
+ array([-5., 3., -2., 1.])
+ >>> poly_reg = PolynomialRegression(degree=20)
+ >>> poly_reg.fit(x, y)
+ Traceback (most recent call last):
+ ...
+ ArithmeticError: Design matrix is not full rank, can't compute coefficients
+
+ Make sure errors don't grow too large:
+ >>> coefs = np.array([-250, 50, -2, 36, 20, -12, 10, 2, -1, -15, 1])
+ >>> y = PolynomialRegression._design_matrix(x, len(coefs) - 1) @ coefs
+ >>> poly_reg = PolynomialRegression(degree=len(coefs) - 1)
+ >>> poly_reg.fit(x, y)
+ >>> np.allclose(poly_reg.params, coefs, atol=10e-3)
+ True
+ """
+ X = PolynomialRegression._design_matrix(x_train, self.degree) # noqa: N806
+ _, cols = X.shape
+ if np.linalg.matrix_rank(X) < cols:
+ raise ArithmeticError(
+ "Design matrix is not full rank, can't compute coefficients"
+ )
+
+ # np.linalg.pinv() computes the Moore–Penrose pseudoinverse using SVD
+ self.params = np.linalg.pinv(X) @ y_train
+
+ def predict(self, data: np.ndarray) -> np.ndarray:
+ """
+ Computes the predicted response values y for the given input data by
+ constructing the design matrix X and evaluating y = Xβ.
+
+ @param data: the predictor values x for prediction
+ @returns: the predicted response values y = Xβ
+ @raises ArithmeticError: if this function is called before the model
+ parameters are fit
+
+ >>> x = np.array([0, 1, 2, 3, 4])
+ >>> y = x**3 - 2 * x**2 + 3 * x - 5
+ >>> poly_reg = PolynomialRegression(degree=3)
+ >>> poly_reg.fit(x, y)
+ >>> poly_reg.predict(np.array([-1]))
+ array([-11.])
+ >>> poly_reg.predict(np.array([-2]))
+ array([-27.])
+ >>> poly_reg.predict(np.array([6]))
+ array([157.])
+ >>> PolynomialRegression(degree=3).predict(x)
+ Traceback (most recent call last):
+ ...
+ ArithmeticError: Predictor hasn't been fit yet
+ """
+ if self.params is None:
+ raise ArithmeticError("Predictor hasn't been fit yet")
+
+ return PolynomialRegression._design_matrix(data, self.degree) @ self.params
+
+
+def main() -> None:
+ """
+ Fit a polynomial regression model to predict fuel efficiency using seaborn's mpg
+ dataset
+
+ >>> pass # Placeholder, function is only for demo purposes
+ """
+ import seaborn as sns
+
+ mpg_data = sns.load_dataset("mpg")
+
+ poly_reg = PolynomialRegression(degree=2)
+ poly_reg.fit(mpg_data.weight, mpg_data.mpg)
+
+ weight_sorted = np.sort(mpg_data.weight)
+ predictions = poly_reg.predict(weight_sorted)
+
+ plt.scatter(mpg_data.weight, mpg_data.mpg, color="gray", alpha=0.5)
+ plt.plot(weight_sorted, predictions, color="red", linewidth=3)
+ plt.title("Predicting Fuel Efficiency Using Polynomial Regression")
+ plt.xlabel("Weight (lbs)")
+ plt.ylabel("Fuel Efficiency (mpg)")
+ plt.show()
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
+
+ main()
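As a sanity check of the approach described in the module docstring (our own example, not part of the patch): fitting a known quadratic with np.vander and np.linalg.pinv recovers the coefficients exactly, the same pseudoinverse-via-SVD route that fit() takes.

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 1 + 2 * x + 3 * x**2  # known coefficients beta = (1, 2, 3)

X = np.vander(x, N=3, increasing=True)  # design matrix with columns 1, x, x^2
beta = np.linalg.pinv(X) @ y            # X+ computed via SVD, as in fit()

assert np.allclose(beta, [1.0, 2.0, 3.0])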
From a0b642cfe58c215b8ead3f2a40655e144e07aacc Mon Sep 17 00:00:00 2001
From: Alex Bernhardt <54606095+FatAnorexic@users.noreply.github.com>
Date: Fri, 28 Jul 2023 14:30:05 -0400
Subject: [PATCH 127/808] Physics/basic orbital capture (#8857)
* Added file basic_orbital_capture
* updating DIRECTORY.md
* added second source
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fixed spelling errors
* accepted changes
* updating DIRECTORY.md
* corrected spelling error
* Added file basic_orbital_capture
* added second source
* fixed spelling errors
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* applied changes
* reviewed and checked file
* added doctest
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* removed redundant constant
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* added scipy imports
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* added doctests to capture_radii and scipy const
* fixed conflicts
* finalizing file. Added tests
* Update physics/basic_orbital_capture.py
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
DIRECTORY.md | 1 +
physics/basic_orbital_capture.py | 178 +++++++++++++++++++++++++++++++
2 files changed, 179 insertions(+)
create mode 100644 physics/basic_orbital_capture.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 133a1ab019d8..29514579ceb0 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -741,6 +741,7 @@
## Physics
* [Archimedes Principle](physics/archimedes_principle.py)
+ * [Basic Orbital Capture](physics/basic_orbital_capture.py)
* [Casimir Effect](physics/casimir_effect.py)
* [Centripetal Force](physics/centripetal_force.py)
* [Grahams Law](physics/grahams_law.py)
diff --git a/physics/basic_orbital_capture.py b/physics/basic_orbital_capture.py
new file mode 100644
index 000000000000..eeb45e60240c
--- /dev/null
+++ b/physics/basic_orbital_capture.py
@@ -0,0 +1,178 @@
+from math import pow, sqrt
+
+from scipy.constants import G, c, pi
+
+"""
+These two functions return the radius of impact for a target object
+of mass M and radius R, as well as its effective cross-sectional area σ (sigma).
+That is to say, any projectile with velocity v passing within σ will impact the
+target object of mass M. The derivation is given at the bottom
+of this file.
+
+The derivation shows that a projectile does not need to aim directly at the target
+body in order to hit it, as R_capture>R_target. Astronomers refer to the effective
+cross section for capture as σ=π*R_capture**2.
+
+This algorithm does not account for an N-body problem.
+
+"""
+
+
+def capture_radii(
+ target_body_radius: float, target_body_mass: float, projectile_velocity: float
+) -> float:
+ """
+ Input Params:
+ -------------
+ target_body_radius: Radius of the central body SI units: meters | m
+ target_body_mass: Mass of the central body SI units: kilograms | kg
+ projectile_velocity: Velocity of object moving toward central body
+ SI units: meters/second | m/s
+ Returns:
+ --------
+ >>> capture_radii(6.957e8, 1.99e30, 25000.0)
+ 17209590691.0
+ >>> capture_radii(-6.957e8, 1.99e30, 25000.0)
+ Traceback (most recent call last):
+ ...
+ ValueError: Radius cannot be less than 0
+ >>> capture_radii(6.957e8, -1.99e30, 25000.0)
+ Traceback (most recent call last):
+ ...
+ ValueError: Mass cannot be less than 0
+ >>> capture_radii(6.957e8, 1.99e30, c+1)
+ Traceback (most recent call last):
+ ...
+ ValueError: Cannot go beyond speed of light
+
+ Returned SI units:
+ ------------------
+ meters | m
+ """
+
+ if target_body_mass < 0:
+ raise ValueError("Mass cannot be less than 0")
+ if target_body_radius < 0:
+ raise ValueError("Radius cannot be less than 0")
+ if projectile_velocity > c:
+ raise ValueError("Cannot go beyond speed of light")
+
+ escape_velocity_squared = (2 * G * target_body_mass) / target_body_radius
+ capture_radius = target_body_radius * sqrt(
+ 1 + escape_velocity_squared / pow(projectile_velocity, 2)
+ )
+ return round(capture_radius, 0)
+
+
+def capture_area(capture_radius: float) -> float:
+ """
+ Input Param:
+ ------------
+ capture_radius: The radius of orbital capture and impact for a central body of
+ mass M and a projectile moving towards it with velocity v
+ SI units: meters | m
+ Returns:
+ --------
+ >>> capture_area(17209590691)
+ 9.304455331329126e+20
+ >>> capture_area(-1)
+ Traceback (most recent call last):
+ ...
+ ValueError: Cannot have a capture radius less than 0
+
+ Returned SI units:
+ ------------------
+ meters*meters | m**2
+ """
+
+ if capture_radius < 0:
+ raise ValueError("Cannot have a capture radius less than 0")
+ sigma = pi * pow(capture_radius, 2)
+ return round(sigma, 0)
+
+
+if __name__ == "__main__":
+ from doctest import testmod
+
+ testmod()
+
+"""
+Derivation:
+
+Let: Mt=target mass, Rt=target radius, v=projectile_velocity,
+ r_0=radius of projectile at instant 0 to CM of target
+ v_p=v at closest approach,
+ r_p=radius from projectile to target CM at closest approach,
+ R_capture= radius of impact for projectile with velocity v
+
+(1) At time=0, the energy of the projectile falling from infinity| E=K+U=0.5*m*(v**2)+0
+
+ E_initial=0.5*m*(v**2)
+
+(2) At time=0, the angular momentum of the projectile relative to the target CM|
+ L_initial=m*r_0*v*sin(Θ)->m*r_0*v*(R_capture/r_0)->m*v*R_capture
+
+ L_i=m*v*R_capture
+
+(3) The energy of the projectile at closest approach will be its kinetic energy
+    at closest approach plus gravitational potential energy (-(G*Mt*m)/r_p)|
+    E_p=K_p+U_p-> E_p=0.5*m*(v_p**2)-(G*Mt*m)/r_p
+
+    E_p=0.5*m*(v_p**2)-(G*Mt*m)/r_p
+
+(4) The angular momentum of the projectile relative to the target at closest
+ approach will be L_p=m*r_p*v_p*sin(Θ), however relative to the target Θ=90°
+ sin(90°)=1|
+
+ L_p=m*r_p*v_p
+(5) Using conservation of angular momentum and energy, we can write a quadratic
+ equation that solves for r_p|
+
+ (a)
+ Ei=Ep-> 0.5*m*(v**2)=0.5*m*(v_p**2)-(G*Mt*m)/r_p-> v**2=v_p**2-(2*G*Mt)/r_p
+
+ (b)
+ Li=Lp-> m*v*R_capture=m*r_p*v_p-> v*R_capture=r_p*v_p-> v_p=(v*R_capture)/r_p
+
+    (c) Substituting b into a|
+ v**2=((v*R_capture)/r_p)**2-(2*G*Mt)/r_p->
+
+ v**2-(v**2)*(R_c**2)/(r_p**2)+(2*G*Mt)/r_p=0->
+
+ (v**2)*(r_p**2)+2*G*Mt*r_p-(v**2)*(R_c**2)=0
+
+    (d) Using the quadratic formula, we'll solve for r_p, then rearrange to solve for
+ R_capture
+
+ r_p=(-2*G*Mt ± sqrt(4*G^2*Mt^2+ 4(v^4*R_c^2)))/(2*v^2)->
+
+    r_p=(-G*Mt ± sqrt(G^2*Mt^2+v^4*R_c^2))/v^2->
+
+ r_p<0 is something we can ignore, as it has no physical meaning for our purposes.->
+
+ r_p=(-G*Mt)/v^2 + sqrt(G^2*Mt^2/v^4 + R_c^2)
+
+    (e) We are trying to solve for R_c. We are looking for impact, so we want r_p=Rt
+
+ Rt + G*Mt/v^2 = sqrt(G^2*Mt^2/v^4 + R_c^2)->
+
+ (Rt + G*Mt/v^2)^2 = G^2*Mt^2/v^4 + R_c^2->
+
+ Rt^2 + 2*G*Mt*Rt/v^2 + G^2*Mt^2/v^4 = G^2*Mt^2/v^4 + R_c^2->
+
+ Rt**2 + 2*G*Mt*Rt/v**2 = R_c**2->
+
+ Rt**2 * (1 + 2*G*Mt/Rt *1/v**2) = R_c**2->
+
+    escape velocity: v_esc=sqrt(2*G*Mt/Rt), so v_esc**2=2*G*Mt/Rt->
+
+ Rt**2 * (1 + v_esc**2/v**2) = R_c**2->
+
+(6)
+ R_capture = Rt * sqrt(1 + v_esc**2/v**2)
+
+Source: Problem Set 3 #8 c.Fall_2017|Honors Astronomy|Professor Rachel Bezanson
+
+Source #2: http://www.nssc.ac.cn/wxzygx/weixin/201607/P020160718380095698873.pdf
+ 8.8 Planetary Rendezvous: Pg.368
+"""
From 0ef930697632a1f05dbbd956c4ccab0473025f5b Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Fri, 28 Jul 2023 13:08:40 -0700
Subject: [PATCH 128/808] Disable quantum/quantum_random.py (attempt 2) (#8902)
* Disable quantum/quantum_random.py
Temporarily disable quantum/quantum_random.py because it produces an illegal instruction error that causes all builds to fail
* updating DIRECTORY.md
* Disable quantum/quantum_random.py attempt 2
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 1 -
quantum/{quantum_random.py => quantum_random.py.DISABLED.txt} | 0
2 files changed, 1 deletion(-)
rename quantum/{quantum_random.py => quantum_random.py.DISABLED.txt} (100%)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 29514579ceb0..af150b12984b 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -1063,7 +1063,6 @@
* [Q Fourier Transform](quantum/q_fourier_transform.py)
* [Q Full Adder](quantum/q_full_adder.py)
* [Quantum Entanglement](quantum/quantum_entanglement.py)
- * [Quantum Random](quantum/quantum_random.py)
* [Quantum Teleportation](quantum/quantum_teleportation.py)
* [Ripple Adder Classic](quantum/ripple_adder_classic.py)
* [Single Qubit Measure](quantum/single_qubit_measure.py)
diff --git a/quantum/quantum_random.py b/quantum/quantum_random.py.DISABLED.txt
similarity index 100%
rename from quantum/quantum_random.py
rename to quantum/quantum_random.py.DISABLED.txt
From 2cfef0913a36e967d828881386ae78457cf65f33 Mon Sep 17 00:00:00 2001
From: Colin Leroy-Mira
Date: Sat, 29 Jul 2023 19:03:43 +0200
Subject: [PATCH 129/808] Fix greyscale computation and inverted coords (#8905)
* Fix greyscale computation and inverted coords
* Fix test
* Add test cases
* Add reference to the greyscaling formula
---------
Co-authored-by: Colin Leroy-Mira
---
digital_image_processing/dithering/burkes.py | 17 +++++++++++++----
1 file changed, 13 insertions(+), 4 deletions(-)
diff --git a/digital_image_processing/dithering/burkes.py b/digital_image_processing/dithering/burkes.py
index 0804104abe58..35aedc16d404 100644
--- a/digital_image_processing/dithering/burkes.py
+++ b/digital_image_processing/dithering/burkes.py
@@ -39,9 +39,18 @@ def __init__(self, input_img, threshold: int):
def get_greyscale(cls, blue: int, green: int, red: int) -> float:
"""
>>> Burkes.get_greyscale(3, 4, 5)
- 3.753
+ 4.185
+ >>> Burkes.get_greyscale(0, 0, 0)
+ 0.0
+ >>> Burkes.get_greyscale(255, 255, 255)
+ 255.0
"""
- return 0.114 * blue + 0.587 * green + 0.2126 * red
+ """
+ Formula from https://en.wikipedia.org/wiki/HSL_and_HSV
+        cf. the Lightness section and Fig. 13c.
+        We use the first of the four possible formulas.
+ """
+ return 0.114 * blue + 0.587 * green + 0.299 * red
def process(self) -> None:
for y in range(self.height):
@@ -49,10 +58,10 @@ def process(self) -> None:
greyscale = int(self.get_greyscale(*self.input_img[y][x]))
if self.threshold > greyscale + self.error_table[y][x]:
self.output_img[y][x] = (0, 0, 0)
- current_error = greyscale + self.error_table[x][y]
+ current_error = greyscale + self.error_table[y][x]
else:
self.output_img[y][x] = (255, 255, 255)
- current_error = greyscale + self.error_table[x][y] - 255
+ current_error = greyscale + self.error_table[y][x] - 255
"""
Burkes error propagation (`*` is current pixel):
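The corrected weights are the classic BT.601-style luma coefficients (0.299 R + 0.587 G + 0.114 B), which sum to 1.0, so a pure white pixel maps to 255.0. A standalone check against the updated doctest values (our own snippet):

def get_greyscale(blue: int, green: int, red: int) -> float:
    return 0.114 * blue + 0.587 * green + 0.299 * red

assert get_greyscale(3, 4, 5) == 4.185
assert get_greyscale(255, 255, 255) == 255.0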
From d31750adece86ebf39a09dd3adb2039098f58586 Mon Sep 17 00:00:00 2001
From: Yatharth Mathur <31852880+yatharthmathur@users.noreply.github.com>
Date: Sun, 30 Jul 2023 02:27:45 -0700
Subject: [PATCH 130/808] Pythonic implementation of LRU Cache (#4630)
* Added a more pythonic implementation of LRU_Cache.[#4628]
* Added test cases and doctest
* Fixed doc tests
* Added more tests in doctests and fixed return types; fixes [#4628]
* better doctests
* added doctests to main()
* Added dutch_national_flag.py in sorts, fixing [#4636]
* Delete dutch_national_flag.py
incorrect commit
* Update lru_cache_pythonic.py
* Remove pontification
---------
Co-authored-by: Christian Clauss
---
other/lru_cache_pythonic.py | 113 ++++++++++++++++++++++++++++++++++++
1 file changed, 113 insertions(+)
create mode 100644 other/lru_cache_pythonic.py
diff --git a/other/lru_cache_pythonic.py b/other/lru_cache_pythonic.py
new file mode 100644
index 000000000000..425691ef18cf
--- /dev/null
+++ b/other/lru_cache_pythonic.py
@@ -0,0 +1,113 @@
+"""
+This implementation of an LRU cache uses the built-in Python dictionary (dict), which from
+Python 3.6 onward maintains the insertion order of keys and ensures O(1) operations on
+insert, delete and access. https://docs.python.org/3/library/stdtypes.html#typesmapping
+"""
+from typing import Any, Hashable
+
+
+class LRUCache(dict):
+ def __init__(self, capacity: int) -> None:
+ """
+ Initialize an LRU Cache with given capacity.
+ capacity : int -> the capacity of the LRU Cache
+ >>> cache = LRUCache(2)
+ >>> cache
+ {}
+ """
+ self.remaining: int = capacity
+
+ def get(self, key: Hashable) -> Any:
+ """
+ This method returns the value associated with the key.
+ key : A hashable object that is mapped to a value in the LRU cache.
+ return -> Any object that has been stored as a value in the LRU cache.
+
+ >>> cache = LRUCache(2)
+ >>> cache.put(1,1)
+ >>> cache.get(1)
+ 1
+ >>> cache.get(2)
+ Traceback (most recent call last):
+ ...
+ KeyError: '2 not found.'
+ """
+ if key not in self:
+ raise KeyError(f"{key} not found.")
+ val = self.pop(key) # Pop the key-value and re-insert to maintain the order
+ self[key] = val
+ return val
+
+ def put(self, key: Hashable, value: Any) -> None:
+ """
+ This method puts the value associated with the key provided in the LRU cache.
+ key : A hashable object that is mapped to a value in the LRU cache.
+ value: Any object that is to be associated with the key in the LRU cache.
+ >>> cache = LRUCache(2)
+ >>> cache.put(3,3)
+ >>> cache
+ {3: 3}
+ >>> cache.put(2,2)
+ >>> cache
+ {3: 3, 2: 2}
+ """
+ # To pop the last value inside of the LRU cache
+ if key in self:
+ self.pop(key)
+ self[key] = value
+ return
+
+ if self.remaining > 0:
+ self.remaining -= 1
+ # To pop the least recently used item from the dictionary
+ else:
+ self.pop(next(iter(self)))
+ self[key] = value
+
+
+def main() -> None:
+ """Example test case with LRU_Cache of size 2
+ >>> main()
+ 1
+ Key=2 not found in cache
+ Key=1 not found in cache
+ 3
+ 4
+ """
+ cache = LRUCache(2) # Creates an LRU cache with size 2
+ cache.put(1, 1) # cache = {1:1}
+ cache.put(2, 2) # cache = {1:1, 2:2}
+ try:
+ print(cache.get(1)) # Prints 1
+ except KeyError:
+ print("Key not found in cache")
+ cache.put(
+ 3, 3
+ ) # cache = {1:1, 3:3} key=2 is evicted because it wasn't used recently
+ try:
+ print(cache.get(2))
+ except KeyError:
+ print("Key=2 not found in cache") # Prints key not found
+ cache.put(
+ 4, 4
+ ) # cache = {4:4, 3:3} key=1 is evicted because it wasn't used recently
+ try:
+ print(cache.get(1))
+ except KeyError:
+ print("Key=1 not found in cache") # Prints key not found
+ try:
+ print(cache.get(3)) # Prints value 3
+ except KeyError:
+ print("Key not found in cache")
+
+ try:
+ print(cache.get(4)) # Prints value 4
+ except KeyError:
+ print("Key not found in cache")
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
+ main()
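For comparison, the standard library already covers this ground: functools.lru_cache memoizes functions, and collections.OrderedDict makes the recency bookkeeping explicit via move_to_end() and popitem(last=False). A minimal OrderedDict sketch (our own, not from the patch):

from collections import OrderedDict


class LRU:
    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self.data: OrderedDict = OrderedDict()

    def get(self, key):
        self.data.move_to_end(key)  # mark as most recently used (KeyError if absent)
        return self.data[key]

    def put(self, key, value) -> None:
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used entry


cache = LRU(2)
cache.put(1, 1)
cache.put(2, 2)
assert cache.get(1) == 1
cache.put(3, 3)  # evicts key 2, since key 1 was just used
assert list(cache.data) == [1, 3]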
From 8b831cb60003443c9967ac8a33df4151dc883484 Mon Sep 17 00:00:00 2001
From: Bazif Rasool <45148731+Bazifrasool@users.noreply.github.com>
Date: Sun, 30 Jul 2023 20:30:58 +0530
Subject: [PATCH 131/808] Added Altitude Pressure equation (#8909)
* Added Altitude Pressure equation
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Removed trailing whitespaces
* Removed pylint
* Fix lru_cache_pythonic.py
* Fixed spellings
* Fix again lru_cache_pythonic.py
* Update .vscode/settings.json
Co-authored-by: Christian Clauss
* Third fix lru_cache_pythonic.py
* Update .vscode/settings.json
Co-authored-by: Christian Clauss
* 4th fix lru_cache_pythonic.py
* Update physics/altitude_pressure.py
Co-authored-by: Christian Clauss
* lru_cache_pythonic.py: def get(self, key: Any, /) -> Any | None:
* Delete lru_cache_pythonic.py
* Added positive and negative pressure test cases
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
other/lru_cache_pythonic.py | 113 -----------------------------------
physics/altitude_pressure.py | 52 ++++++++++++++++
2 files changed, 52 insertions(+), 113 deletions(-)
delete mode 100644 other/lru_cache_pythonic.py
create mode 100644 physics/altitude_pressure.py
diff --git a/other/lru_cache_pythonic.py b/other/lru_cache_pythonic.py
deleted file mode 100644
index 425691ef18cf..000000000000
--- a/other/lru_cache_pythonic.py
+++ /dev/null
@@ -1,113 +0,0 @@
-"""
-This implementation of an LRU cache uses the built-in Python dictionary (dict), which from
-Python 3.6 onward maintains the insertion order of keys and ensures O(1) operations on
-insert, delete and access. https://docs.python.org/3/library/stdtypes.html#typesmapping
-"""
-from typing import Any, Hashable
-
-
-class LRUCache(dict):
- def __init__(self, capacity: int) -> None:
- """
- Initialize an LRU Cache with given capacity.
- capacity : int -> the capacity of the LRU Cache
- >>> cache = LRUCache(2)
- >>> cache
- {}
- """
- self.remaining: int = capacity
-
- def get(self, key: Hashable) -> Any:
- """
- This method returns the value associated with the key.
- key : A hashable object that is mapped to a value in the LRU cache.
- return -> Any object that has been stored as a value in the LRU cache.
-
- >>> cache = LRUCache(2)
- >>> cache.put(1,1)
- >>> cache.get(1)
- 1
- >>> cache.get(2)
- Traceback (most recent call last):
- ...
- KeyError: '2 not found.'
- """
- if key not in self:
- raise KeyError(f"{key} not found.")
- val = self.pop(key) # Pop the key-value and re-insert to maintain the order
- self[key] = val
- return val
-
- def put(self, key: Hashable, value: Any) -> None:
- """
- This method puts the value associated with the key provided in the LRU cache.
- key : A hashable object that is mapped to a value in the LRU cache.
- value: Any object that is to be associated with the key in the LRU cache.
- >>> cache = LRUCache(2)
- >>> cache.put(3,3)
- >>> cache
- {3: 3}
- >>> cache.put(2,2)
- >>> cache
- {3: 3, 2: 2}
- """
- # To pop the last value inside of the LRU cache
- if key in self:
- self.pop(key)
- self[key] = value
- return
-
- if self.remaining > 0:
- self.remaining -= 1
- # To pop the least recently used item from the dictionary
- else:
- self.pop(next(iter(self)))
- self[key] = value
-
-
-def main() -> None:
- """Example test case with LRU_Cache of size 2
- >>> main()
- 1
- Key=2 not found in cache
- Key=1 not found in cache
- 3
- 4
- """
- cache = LRUCache(2) # Creates an LRU cache with size 2
- cache.put(1, 1) # cache = {1:1}
- cache.put(2, 2) # cache = {1:1, 2:2}
- try:
- print(cache.get(1)) # Prints 1
- except KeyError:
- print("Key not found in cache")
- cache.put(
- 3, 3
- ) # cache = {1:1, 3:3} key=2 is evicted because it wasn't used recently
- try:
- print(cache.get(2))
- except KeyError:
- print("Key=2 not found in cache") # Prints key not found
- cache.put(
- 4, 4
- ) # cache = {4:4, 3:3} key=1 is evicted because it wasn't used recently
- try:
- print(cache.get(1))
- except KeyError:
- print("Key=1 not found in cache") # Prints key not found
- try:
- print(cache.get(3)) # Prints value 3
- except KeyError:
- print("Key not found in cache")
-
- try:
- print(cache.get(4)) # Prints value 4
- except KeyError:
- print("Key not found in cache")
-
-
-if __name__ == "__main__":
- import doctest
-
- doctest.testmod()
- main()
diff --git a/physics/altitude_pressure.py b/physics/altitude_pressure.py
new file mode 100644
index 000000000000..65307d223fa7
--- /dev/null
+++ b/physics/altitude_pressure.py
@@ -0,0 +1,52 @@
+"""
+Title : Calculate altitude from pressure
+
+Description :
+    The algorithm below approximates the altitude using the barometric formula
+
+
+"""
+
+
+def get_altitude_at_pressure(pressure: float) -> float:
+ """
+    This method calculates the altitude from pressure with respect to
+    sea-level pressure as the reference. Pressure is in pascals.
+ https://en.wikipedia.org/wiki/Pressure_altitude
+ https://community.bosch-sensortec.com/t5/Question-and-answers/How-to-calculate-the-altitude-from-the-pressure-sensor-data/qaq-p/5702
+
+ H = 44330 * [1 - (P/p0)^(1/5.255) ]
+
+ Where :
+ H = altitude (m)
+ P = measured pressure
+ p0 = reference pressure at sea level 101325 Pa
+
+ Examples:
+ >>> get_altitude_at_pressure(pressure=100_000)
+ 105.47836610778828
+ >>> get_altitude_at_pressure(pressure=101_325)
+ 0.0
+ >>> get_altitude_at_pressure(pressure=80_000)
+ 1855.873388064995
+ >>> get_altitude_at_pressure(pressure=201_325)
+ Traceback (most recent call last):
+ ...
+ ValueError: Value Higher than Pressure at Sea Level !
+ >>> get_altitude_at_pressure(pressure=-80_000)
+ Traceback (most recent call last):
+ ...
+ ValueError: Atmospheric Pressure can not be negative !
+ """
+
+ if pressure > 101325:
+ raise ValueError("Value Higher than Pressure at Sea Level !")
+ if pressure < 0:
+ raise ValueError("Atmospheric Pressure can not be negative !")
+ return 44_330 * (1 - (pressure / 101_325) ** (1 / 5.5255))
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
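One thing worth flagging: the exponent in the code (1/5.5255) differs from the 1/5.255 in the docstring formula, and the doctest values match the code's constant. Inverting the same relation gives pressure from altitude; a sketch with our own helper name, keeping the code's constant so the round trip is exact:

def get_pressure_at_altitude(altitude: float) -> float:
    """Pressure in pascals at `altitude` metres, with sea level as reference."""
    return 101_325 * (1 - altitude / 44_330) ** 5.5255


assert get_pressure_at_altitude(0.0) == 101_325.0
altitude = 1855.873388064995  # the doctest result for 80_000 Pa
assert abs(get_pressure_at_altitude(altitude) - 80_000) < 1e-6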
From d4f2873e39f041513aa9f5c287ec9b46e2236dad Mon Sep 17 00:00:00 2001
From: AmirSoroush
Date: Mon, 31 Jul 2023 03:54:15 +0300
Subject: [PATCH 132/808] add reverse_inorder traversal to
binary_tree_traversals.py (#8726)
* add reverse_inorder traversal to binary_tree_traversals.py
* Apply suggestions from code review
Co-authored-by: Tianyi Zheng
---------
Co-authored-by: Tianyi Zheng
---
.../binary_tree/binary_tree_traversals.py | 22 ++++++++++++++-----
1 file changed, 16 insertions(+), 6 deletions(-)
diff --git a/data_structures/binary_tree/binary_tree_traversals.py b/data_structures/binary_tree/binary_tree_traversals.py
index 71a895e76ce4..2afb7604f9c6 100644
--- a/data_structures/binary_tree/binary_tree_traversals.py
+++ b/data_structures/binary_tree/binary_tree_traversals.py
@@ -58,6 +58,19 @@ def inorder(root: Node | None) -> list[int]:
return [*inorder(root.left), root.data, *inorder(root.right)] if root else []
+def reverse_inorder(root: Node | None) -> list[int]:
+ """
+ Reverse in-order traversal visits right subtree, root node, left subtree.
+ >>> reverse_inorder(make_tree())
+ [3, 1, 5, 2, 4]
+ """
+ return (
+ [*reverse_inorder(root.right), root.data, *reverse_inorder(root.left)]
+ if root
+ else []
+ )
+
+
def height(root: Node | None) -> int:
"""
Recursive function for calculating the height of the binary tree.
@@ -161,15 +174,12 @@ def zigzag(root: Node | None) -> Sequence[Node | None] | list[Any]:
def main() -> None: # Main function for testing.
- """
- Create binary tree.
- """
+ # Create binary tree.
root = make_tree()
- """
- All Traversals of the binary are as follows:
- """
+    # All traversals of the binary tree are as follows:
print(f"In-order Traversal: {inorder(root)}")
+ print(f"Reverse In-order Traversal: {reverse_inorder(root)}")
print(f"Pre-order Traversal: {preorder(root)}")
print(f"Post-order Traversal: {postorder(root)}", "\n")
From 4710e51deb2dc07e32884391a36d40e08398e6be Mon Sep 17 00:00:00 2001
From: David Leal
Date: Sun, 30 Jul 2023 19:15:30 -0600
Subject: [PATCH 133/808] chore: use newest Discord invite link (#8696)
* updating DIRECTORY.md
* chore: use newest Discord invite link
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
README.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/README.md b/README.md
index bf6e0ed3cf75..d8eba4e016fa 100644
--- a/README.md
+++ b/README.md
@@ -13,7 +13,7 @@
-
+
@@ -42,7 +42,7 @@ Read through our [Contribution Guidelines](CONTRIBUTING.md) before you contribut
## Community Channels
-We are on [Discord](https://discord.gg/c7MnfGFGa6) and [Gitter](https://gitter.im/TheAlgorithms/community)! Community channels are a great way for you to ask questions and get help. Please join us!
+We are on [Discord](https://the-algorithms.com/discord) and [Gitter](https://gitter.im/TheAlgorithms/community)! Community channels are a great way for you to ask questions and get help. Please join us!
## List of Algorithms
From 8cce9cf066396bb220515c03849fbc1a16d800d0 Mon Sep 17 00:00:00 2001
From: Almas Bekbayev <121730304+bekbayev@users.noreply.github.com>
Date: Mon, 31 Jul 2023 07:32:05 +0600
Subject: [PATCH 134/808] Fix linear_search docstring return value (#8644)
---
searches/linear_search.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/searches/linear_search.py b/searches/linear_search.py
index 777080d14e36..ba6e81d6bae4 100644
--- a/searches/linear_search.py
+++ b/searches/linear_search.py
@@ -15,7 +15,7 @@ def linear_search(sequence: list, target: int) -> int:
    :param sequence: a collection with comparable items (sorted order is not
        required for linear search)
:param target: item value to search
- :return: index of found item or None if item is not found
+ :return: index of found item or -1 if item is not found
Examples:
>>> linear_search([0, 5, 7, 10, 15], 0)
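The contract the corrected docstring describes, condensed into a sketch of the repo function (not a verbatim copy):

def linear_search(sequence: list, target: int) -> int:
    for index, item in enumerate(sequence):
        if item == target:
            return index
    return -1  # not found, as the docstring now states


assert linear_search([0, 5, 7, 10, 15], 0) == 0
assert linear_search([0, 5, 7, 10, 15], 6) == -1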
From 384c407a265ac44d15eecdd339bb154147cda4f8 Mon Sep 17 00:00:00 2001
From: AmirSoroush
Date: Mon, 31 Jul 2023 05:07:35 +0300
Subject: [PATCH 135/808] Enhance the implementation of Queue using list
(#8608)
* enhance the implementation of queue using list
* enhance readability of queue_on_list.py
* rename 'queue_on_list' to 'queue_by_list' to match the class name
---
data_structures/queue/queue_by_list.py | 141 +++++++++++++++++++++++++
data_structures/queue/queue_on_list.py | 52 ---------
2 files changed, 141 insertions(+), 52 deletions(-)
create mode 100644 data_structures/queue/queue_by_list.py
delete mode 100644 data_structures/queue/queue_on_list.py
diff --git a/data_structures/queue/queue_by_list.py b/data_structures/queue/queue_by_list.py
new file mode 100644
index 000000000000..4b05be9fd08e
--- /dev/null
+++ b/data_structures/queue/queue_by_list.py
@@ -0,0 +1,141 @@
+"""Queue represented by a Python list"""
+
+from collections.abc import Iterable
+from typing import Generic, TypeVar
+
+_T = TypeVar("_T")
+
+
+class QueueByList(Generic[_T]):
+ def __init__(self, iterable: Iterable[_T] | None = None) -> None:
+ """
+ >>> QueueByList()
+ Queue(())
+ >>> QueueByList([10, 20, 30])
+ Queue((10, 20, 30))
+ >>> QueueByList((i**2 for i in range(1, 4)))
+ Queue((1, 4, 9))
+ """
+ self.entries: list[_T] = list(iterable or [])
+
+ def __len__(self) -> int:
+ """
+ >>> len(QueueByList())
+ 0
+ >>> from string import ascii_lowercase
+ >>> len(QueueByList(ascii_lowercase))
+ 26
+ >>> queue = QueueByList()
+ >>> for i in range(1, 11):
+ ... queue.put(i)
+ >>> len(queue)
+ 10
+ >>> for i in range(2):
+ ... queue.get()
+ 1
+ 2
+ >>> len(queue)
+ 8
+ """
+
+ return len(self.entries)
+
+ def __repr__(self) -> str:
+ """
+ >>> queue = QueueByList()
+ >>> queue
+ Queue(())
+ >>> str(queue)
+ 'Queue(())'
+ >>> queue.put(10)
+ >>> queue
+ Queue((10,))
+ >>> queue.put(20)
+ >>> queue.put(30)
+ >>> queue
+ Queue((10, 20, 30))
+ """
+
+ return f"Queue({tuple(self.entries)})"
+
+ def put(self, item: _T) -> None:
+ """Put `item` to the Queue
+
+ >>> queue = QueueByList()
+ >>> queue.put(10)
+ >>> queue.put(20)
+ >>> len(queue)
+ 2
+ >>> queue
+ Queue((10, 20))
+ """
+
+ self.entries.append(item)
+
+ def get(self) -> _T:
+ """
+ Get `item` from the Queue
+
+ >>> queue = QueueByList((10, 20, 30))
+ >>> queue.get()
+ 10
+ >>> queue.put(40)
+ >>> queue.get()
+ 20
+ >>> queue.get()
+ 30
+ >>> len(queue)
+ 1
+ >>> queue.get()
+ 40
+ >>> queue.get()
+ Traceback (most recent call last):
+ ...
+ IndexError: Queue is empty
+ """
+
+ if not self.entries:
+ raise IndexError("Queue is empty")
+ return self.entries.pop(0)
+
+ def rotate(self, rotation: int) -> None:
+ """Rotate the items of the Queue `rotation` times
+
+ >>> queue = QueueByList([10, 20, 30, 40])
+ >>> queue
+ Queue((10, 20, 30, 40))
+ >>> queue.rotate(1)
+ >>> queue
+ Queue((20, 30, 40, 10))
+ >>> queue.rotate(2)
+ >>> queue
+ Queue((40, 10, 20, 30))
+ """
+
+ put = self.entries.append
+ get = self.entries.pop
+
+ for _ in range(rotation):
+ put(get(0))
+
+ def get_front(self) -> _T:
+ """Get the front item from the Queue
+
+ >>> queue = QueueByList((10, 20, 30))
+ >>> queue.get_front()
+ 10
+ >>> queue
+ Queue((10, 20, 30))
+ >>> queue.get()
+ 10
+ >>> queue.get_front()
+ 20
+ """
+
+ return self.entries[0]
+
+
+if __name__ == "__main__":
+ from doctest import testmod
+
+ testmod()
diff --git a/data_structures/queue/queue_on_list.py b/data_structures/queue/queue_on_list.py
deleted file mode 100644
index 71fca6b2f5f4..000000000000
--- a/data_structures/queue/queue_on_list.py
+++ /dev/null
@@ -1,52 +0,0 @@
-"""Queue represented by a Python list"""
-
-
-class Queue:
- def __init__(self):
- self.entries = []
- self.length = 0
- self.front = 0
-
- def __str__(self):
- printed = "<" + str(self.entries)[1:-1] + ">"
- return printed
-
- """Enqueues {@code item}
- @param item
- item to enqueue"""
-
- def put(self, item):
- self.entries.append(item)
- self.length = self.length + 1
-
- """Dequeues {@code item}
- @requirement: |self.length| > 0
- @return dequeued
- item that was dequeued"""
-
- def get(self):
- self.length = self.length - 1
- dequeued = self.entries[self.front]
- # self.front-=1
- # self.entries = self.entries[self.front:]
- self.entries = self.entries[1:]
- return dequeued
-
- """Rotates the queue {@code rotation} times
- @param rotation
- number of times to rotate queue"""
-
- def rotate(self, rotation):
- for _ in range(rotation):
- self.put(self.get())
-
- """Enqueues {@code item}
- @return item at front of self.entries"""
-
- def get_front(self):
- return self.entries[0]
-
- """Returns the length of this.entries"""
-
- def size(self):
- return self.length
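The replacement class keeps its entries in a plain Python list, so every get() pops index 0 and shifts all remaining elements, an O(n) operation per dequeue. collections.deque gives O(1) pops at both ends plus a built-in rotate; the QueueByDeque sketch below is hypothetical, included only for comparison with this patch:

from collections import deque


class QueueByDeque:
    """Same put/get/rotate surface as QueueByList, but O(1) dequeues."""

    def __init__(self, iterable=None) -> None:
        self.entries = deque(iterable or [])

    def put(self, item) -> None:
        self.entries.append(item)  # enqueue at the right end

    def get(self):
        if not self.entries:
            raise IndexError("Queue is empty")
        return self.entries.popleft()  # O(1), unlike list.pop(0)

    def rotate(self, rotation: int) -> None:
        self.entries.rotate(-rotation)  # negative rotates toward the front


queue = QueueByDeque([10, 20, 30])
queue.rotate(1)
assert list(queue.entries) == [20, 30, 10]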
From 629eb86ce0d30dd6031fa482f4a477ac3df345ab Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Sun, 30 Jul 2023 22:23:23 -0700
Subject: [PATCH 136/808] Fix merge conflicts to merge change from #5080
(#8911)
* Input for user to choose their Collatz sequence
Now the user can tell the algorithm which number to start the Collatz sequence from.
* updating DIRECTORY.md
---------
Co-authored-by: Hugo Folloni
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 1 +
maths/collatz_sequence.py | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index af150b12984b..aa9bd313b898 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -740,6 +740,7 @@
* [Tower Of Hanoi](other/tower_of_hanoi.py)
## Physics
+ * [Altitude Pressure](physics/altitude_pressure.py)
* [Archimedes Principle](physics/archimedes_principle.py)
* [Basic Orbital Capture](physics/basic_orbital_capture.py)
* [Casimir Effect](physics/casimir_effect.py)
diff --git a/maths/collatz_sequence.py b/maths/collatz_sequence.py
index 4f3aa5582731..b47017146a1e 100644
--- a/maths/collatz_sequence.py
+++ b/maths/collatz_sequence.py
@@ -57,7 +57,7 @@ def collatz_sequence(n: int) -> Generator[int, None, None]:
def main():
- n = 43
+ n = int(input("Your number: "))
sequence = tuple(collatz_sequence(n))
print(sequence)
print(f"Collatz sequence from {n} took {len(sequence)} steps.")
From 0b0214c42f563e7af885058c0e3a32d292f7f1da Mon Sep 17 00:00:00 2001
From: roger-sato
Date: Tue, 1 Aug 2023 03:46:30 +0900
Subject: [PATCH 137/808] Handle empty input case in Segment Tree build process
(#8718)
---
data_structures/binary_tree/segment_tree.py | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/data_structures/binary_tree/segment_tree.py b/data_structures/binary_tree/segment_tree.py
index b0580386954a..5f822407d8cb 100644
--- a/data_structures/binary_tree/segment_tree.py
+++ b/data_structures/binary_tree/segment_tree.py
@@ -7,7 +7,8 @@ def __init__(self, a):
self.st = [0] * (
4 * self.N
) # approximate the overall size of segment tree with array N
- self.build(1, 0, self.N - 1)
+ if self.N:
+ self.build(1, 0, self.N - 1)
def left(self, idx):
return idx * 2
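The one-line guard matters because an empty input makes self.N zero, so the old constructor called build(1, 0, -1) and recursed on an inverted range until it indexed past the st array. A minimal sum-based sketch of the pattern (the real file's build may combine children differently; its shape here is an assumption):

class MiniSegmentTree:
    def __init__(self, a: list[int]) -> None:
        self.a = a
        self.n = len(a)
        self.st = [0] * (4 * self.n)  # 4*n safely bounds the tree size
        if self.n:  # nothing to build for an empty array
            self.build(1, 0, self.n - 1)

    def build(self, idx: int, left: int, right: int) -> None:
        if left == right:  # leaf: store the array element itself
            self.st[idx] = self.a[left]
            return
        mid = (left + right) // 2
        self.build(2 * idx, left, mid)
        self.build(2 * idx + 1, mid + 1, right)
        self.st[idx] = self.st[2 * idx] + self.st[2 * idx + 1]


MiniSegmentTree([])  # with the guard, the empty case constructs cleanly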
From 90a8e6e0d210a5c526c8f485fa825e1649d217e2 Mon Sep 17 00:00:00 2001
From: Dylan Buchi
Date: Mon, 31 Jul 2023 15:50:00 -0300
Subject: [PATCH 138/808] Update `sorts/bubble_sort.py` (#5802)
* Add missing type annotations in bubble_sort.py
* Refactor bubble_sort function
---
sorts/bubble_sort.py | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/sorts/bubble_sort.py b/sorts/bubble_sort.py
index aef2da272bd0..7da4362a5b97 100644
--- a/sorts/bubble_sort.py
+++ b/sorts/bubble_sort.py
@@ -1,4 +1,7 @@
-def bubble_sort(collection):
+from typing import Any
+
+
+def bubble_sort(collection: list[Any]) -> list[Any]:
"""Pure implementation of bubble sort algorithm in Python
:param collection: some mutable ordered collection with heterogeneous
@@ -28,9 +31,9 @@ def bubble_sort(collection):
True
"""
length = len(collection)
- for i in range(length - 1):
+ for i in reversed(range(length)):
swapped = False
- for j in range(length - 1 - i):
+ for j in range(i):
if collection[j] > collection[j + 1]:
swapped = True
collection[j], collection[j + 1] = collection[j + 1], collection[j]
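In the refactored loop, i counts down over the still-unsorted prefix, so range(i) naturally skips the tail elements already bubbled into place. A self-contained version, with the early-exit break assumed from the unchanged part of the file:

from typing import Any


def bubble_sort(collection: list[Any]) -> list[Any]:
    length = len(collection)
    for i in reversed(range(length)):
        swapped = False
        for j in range(i):  # elements after index i are already in place
            if collection[j] > collection[j + 1]:
                swapped = True
                collection[j], collection[j + 1] = collection[j + 1], collection[j]
        if not swapped:  # a pass with no swaps means the list is sorted
            break
    return collection


assert bubble_sort([5, 2, 4, 1, 3]) == [1, 2, 3, 4, 5]
assert bubble_sort([]) == []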
From 5cf34d901e32b65425103309bbad0068b1851238 Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Mon, 31 Jul 2023 13:53:26 -0700
Subject: [PATCH 139/808] Ruff fixes (#8913)
* updating DIRECTORY.md
* Fix ruff error in eulerian_path_and_circuit_for_undirected_graph.py
* Fix ruff error in newtons_second_law_of_motion.py
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 2 +-
graphs/eulerian_path_and_circuit_for_undirected_graph.py | 2 +-
physics/newtons_second_law_of_motion.py | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index aa9bd313b898..fdcf0ceedf1f 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -236,8 +236,8 @@
* [Double Ended Queue](data_structures/queue/double_ended_queue.py)
* [Linked Queue](data_structures/queue/linked_queue.py)
* [Priority Queue Using List](data_structures/queue/priority_queue_using_list.py)
+ * [Queue By List](data_structures/queue/queue_by_list.py)
* [Queue By Two Stacks](data_structures/queue/queue_by_two_stacks.py)
- * [Queue On List](data_structures/queue/queue_on_list.py)
* [Queue On Pseudo Stack](data_structures/queue/queue_on_pseudo_stack.py)
* Stacks
* [Balanced Parentheses](data_structures/stacks/balanced_parentheses.py)
diff --git a/graphs/eulerian_path_and_circuit_for_undirected_graph.py b/graphs/eulerian_path_and_circuit_for_undirected_graph.py
index 6c43c5d3e6e3..6b4ea8e21e8b 100644
--- a/graphs/eulerian_path_and_circuit_for_undirected_graph.py
+++ b/graphs/eulerian_path_and_circuit_for_undirected_graph.py
@@ -20,7 +20,7 @@ def check_circuit_or_path(graph, max_node):
odd_degree_nodes = 0
odd_node = -1
for i in range(max_node):
- if i not in graph.keys():
+ if i not in graph:
continue
if len(graph[i]) % 2 == 1:
odd_degree_nodes += 1
diff --git a/physics/newtons_second_law_of_motion.py b/physics/newtons_second_law_of_motion.py
index cb53f8f6571f..53fab6ce78b9 100644
--- a/physics/newtons_second_law_of_motion.py
+++ b/physics/newtons_second_law_of_motion.py
@@ -60,7 +60,7 @@ def newtons_second_law_of_motion(mass: float, acceleration: float) -> float:
>>> newtons_second_law_of_motion(2.0, 1)
2.0
"""
- force = float()
+ force = 0.0
try:
force = mass * acceleration
except Exception:
From f8fe72dc378232107100acc1924fef31b1198124 Mon Sep 17 00:00:00 2001
From: "Minha, Jeong"
Date: Tue, 1 Aug 2023 06:24:12 +0900
Subject: [PATCH 140/808] Update game_of_life.py (#4921)
* Update game_of_life.py
Fix a docstring error and delete the needless next_gen_canvas local-variable cleanup code.
* Update cellular_automata/game_of_life.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: Tianyi Zheng
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
cellular_automata/game_of_life.py | 12 +++++-------
1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/cellular_automata/game_of_life.py b/cellular_automata/game_of_life.py
index b69afdce03eb..d691a2b73af0 100644
--- a/cellular_automata/game_of_life.py
+++ b/cellular_automata/game_of_life.py
@@ -10,7 +10,7 @@
- 3.5
Usage:
- - $python3 game_o_life
+ - $python3 game_of_life
Game-Of-Life Rules:
@@ -52,7 +52,8 @@ def seed(canvas: list[list[bool]]) -> None:
def run(canvas: list[list[bool]]) -> list[list[bool]]:
- """This function runs the rules of game through all points, and changes their
+ """
+ This function runs the rules of game through all points, and changes their
status accordingly.(in the same canvas)
@Args:
--
@@ -60,7 +61,7 @@ def run(canvas: list[list[bool]]) -> list[list[bool]]:
@returns:
--
- None
+ canvas of population after one step
"""
current_canvas = np.array(canvas)
next_gen_canvas = np.array(create_canvas(current_canvas.shape[0]))
@@ -70,10 +71,7 @@ def run(canvas: list[list[bool]]) -> list[list[bool]]:
pt, current_canvas[r - 1 : r + 2, c - 1 : c + 2]
)
- current_canvas = next_gen_canvas
- del next_gen_canvas # cleaning memory as we move on.
- return_canvas: list[list[bool]] = current_canvas.tolist()
- return return_canvas
+ return next_gen_canvas.tolist()
def __judge_point(pt: bool, neighbours: list[list[bool]]) -> bool:
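The rewritten run() treats the canvas functionally: it fills a fresh array and returns it, instead of rebinding current_canvas and manually del-ing the local. A minimal sketch of that step pattern, where judge stands in for the module's __judge_point rule (interior cells only, for brevity):

import numpy as np


def step(canvas: np.ndarray, judge) -> list[list[bool]]:
    """Return the next generation; the input canvas is left untouched."""
    next_gen = np.zeros_like(canvas)
    for r in range(1, canvas.shape[0] - 1):
        for c in range(1, canvas.shape[1] - 1):
            neighbours = canvas[r - 1 : r + 2, c - 1 : c + 2]  # 3x3 window
            next_gen[r][c] = judge(canvas[r][c], neighbours)
    return next_gen.tolist()  # callers receive plain Python lists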
From f7c5e55609afa1e4e7ae2ee3f442bbd5d0b43b8a Mon Sep 17 00:00:00 2001
From: Jan Wojciechowski <96974442+yanvoi@users.noreply.github.com>
Date: Tue, 1 Aug 2023 05:02:49 +0200
Subject: [PATCH 141/808] Window closing fix (#8625)
* The window will now remain open after the fractal finishes drawing, and will only close when you click it.
* Update fractals/sierpinski_triangle.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: Tianyi Zheng
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
fractals/sierpinski_triangle.py | 1 +
1 file changed, 1 insertion(+)
diff --git a/fractals/sierpinski_triangle.py b/fractals/sierpinski_triangle.py
index c28ec00b27fe..45f7ab84cfff 100644
--- a/fractals/sierpinski_triangle.py
+++ b/fractals/sierpinski_triangle.py
@@ -82,3 +82,4 @@ def triangle(
vertices = [(-175, -125), (0, 175), (175, -125)] # vertices of triangle
triangle(vertices[0], vertices[1], vertices[2], int(sys.argv[1]))
+ turtle.Screen().exitonclick()
From c9a7234a954dd280dc8192ae77a564e647d013d4 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Tue, 1 Aug 2023 09:26:23 +0530
Subject: [PATCH 142/808] [pre-commit.ci] pre-commit autoupdate (#8914)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.280 → v0.0.281](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.280...v0.0.281)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
.pre-commit-config.yaml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 5adf12cc70c5..e158bd8d6879 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,7 @@ repos:
- id: auto-walrus
- repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.0.280
+ rev: v0.0.281
hooks:
- id: ruff
From ce218c57f1f494cfca69bc01ba660c97385e5330 Mon Sep 17 00:00:00 2001
From: AmirSoroush
Date: Tue, 1 Aug 2023 21:23:34 +0300
Subject: [PATCH 143/808] fixes #8673; Add operator's associativity check for stacks/infix_to_p… (#8674)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* fixes #8673; Add operator's associativity check for stacks/infix_to_postfix_conversion.py
* fix ruff N806 in stacks/infix_to_postfix_conversion.py
* Update data_structures/stacks/infix_to_postfix_conversion.py
Co-authored-by: Tianyi Zheng
* Update data_structures/stacks/infix_to_postfix_conversion.py
Co-authored-by: Tianyi Zheng
---------
Co-authored-by: Tianyi Zheng
---
.../stacks/infix_to_postfix_conversion.py | 50 +++++++++++++++++--
1 file changed, 47 insertions(+), 3 deletions(-)
diff --git a/data_structures/stacks/infix_to_postfix_conversion.py b/data_structures/stacks/infix_to_postfix_conversion.py
index 9017443091cf..e697061937c9 100644
--- a/data_structures/stacks/infix_to_postfix_conversion.py
+++ b/data_structures/stacks/infix_to_postfix_conversion.py
@@ -4,9 +4,26 @@
https://en.wikipedia.org/wiki/Shunting-yard_algorithm
"""
+from typing import Literal
+
from .balanced_parentheses import balanced_parentheses
from .stack import Stack
+PRECEDENCES: dict[str, int] = {
+ "+": 1,
+ "-": 1,
+ "*": 2,
+ "/": 2,
+ "^": 3,
+}
+ASSOCIATIVITIES: dict[str, Literal["LR", "RL"]] = {
+ "+": "LR",
+ "-": "LR",
+ "*": "LR",
+ "/": "LR",
+ "^": "RL",
+}
+
def precedence(char: str) -> int:
"""
@@ -14,7 +31,15 @@ def precedence(char: str) -> int:
order of operation.
https://en.wikipedia.org/wiki/Order_of_operations
"""
- return {"+": 1, "-": 1, "*": 2, "/": 2, "^": 3}.get(char, -1)
+ return PRECEDENCES.get(char, -1)
+
+
+def associativity(char: str) -> Literal["LR", "RL"]:
+ """
+ Return the associativity of the operator `char`.
+ https://en.wikipedia.org/wiki/Operator_associativity
+ """
+ return ASSOCIATIVITIES[char]
def infix_to_postfix(expression_str: str) -> str:
@@ -35,6 +60,8 @@ def infix_to_postfix(expression_str: str) -> str:
'a b c * + d e * f + g * +'
>>> infix_to_postfix("x^y/(5*z)+2")
'x y ^ 5 z * / 2 +'
+ >>> infix_to_postfix("2^3^2")
+ '2 3 2 ^ ^'
"""
if not balanced_parentheses(expression_str):
raise ValueError("Mismatched parentheses")
@@ -50,9 +77,26 @@ def infix_to_postfix(expression_str: str) -> str:
postfix.append(stack.pop())
stack.pop()
else:
- while not stack.is_empty() and precedence(char) <= precedence(stack.peek()):
+ while True:
+ if stack.is_empty():
+ stack.push(char)
+ break
+
+ char_precedence = precedence(char)
+ tos_precedence = precedence(stack.peek())
+
+ if char_precedence > tos_precedence:
+ stack.push(char)
+ break
+ if char_precedence < tos_precedence:
+ postfix.append(stack.pop())
+ continue
+ # Precedences are equal
+ if associativity(char) == "RL":
+ stack.push(char)
+ break
postfix.append(stack.pop())
- stack.push(char)
+
while not stack.is_empty():
postfix.append(stack.pop())
return " ".join(postfix)
From db6bd4b17f471d4def7aa441f1da43bb6a0f18ae Mon Sep 17 00:00:00 2001
From: Dipankar Mitra <50228537+Mitra-babu@users.noreply.github.com>
Date: Mon, 7 Aug 2023 17:17:42 +0530
Subject: [PATCH 144/808] IQR function is added (#8851)
* tanh function has been added
* tanh function has been added
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* tanh function is added
* tanh function is added
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* tanh function added
* tanh function added
* tanh function is added
* Apply suggestions from code review
* ELU activation function is added
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* elu activation is added
* ELU activation is added
* Update maths/elu_activation.py
Co-authored-by: Christian Clauss
* Exponential_linear_unit activation is added
* Exponential_linear_unit activation is added
* SiLU activation is added
* SiLU activation is added
* mish added
* mish activation is added
* inter_quartile_range function is added
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Mish activation function is added
* Mish activation is added
* mish activation added
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* mish activation added
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* inter quartile range (IQR) function is added
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* IQR function is added
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* code optimized in IQR function
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* interquartile_range function is added
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update maths/interquartile_range.py
Co-authored-by: Christian Clauss
* Changes on interquartile_range
* numpy removed from interquartile_range
* Fixes from code review
* Update interquartile_range.py
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
maths/interquartile_range.py | 66 ++++++++++++++++++++++++++++++++++++
1 file changed, 66 insertions(+)
create mode 100644 maths/interquartile_range.py
diff --git a/maths/interquartile_range.py b/maths/interquartile_range.py
new file mode 100644
index 000000000000..d4d72e73ef49
--- /dev/null
+++ b/maths/interquartile_range.py
@@ -0,0 +1,66 @@
+"""
+An implementation of interquartile range (IQR) which is a measure of statistical
+dispersion, which is the spread of the data.
+
+The function takes the list of numeric values as input and returns the IQR.
+
+Script inspired by this Wikipedia article:
+https://en.wikipedia.org/wiki/Interquartile_range
+"""
+from __future__ import annotations
+
+
+def find_median(nums: list[int | float]) -> float:
+ """
+ This is the implementation of the median.
+ :param nums: The list of numeric nums
+ :return: Median of the list
+ >>> find_median(nums=([1, 2, 2, 3, 4]))
+ 2
+ >>> find_median(nums=([1, 2, 2, 3, 4, 4]))
+ 2.5
+ >>> find_median(nums=([-1, 2, 0, 3, 4, -4]))
+ 1.5
+ >>> find_median(nums=([1.1, 2.2, 2, 3.3, 4.4, 4]))
+ 2.65
+ """
+ div, mod = divmod(len(nums), 2)
+ if mod:
+ return nums[div]
+ return (nums[div] + nums[(div) - 1]) / 2
+
+
+def interquartile_range(nums: list[int | float]) -> float:
+ """
+ Return the interquartile range for a list of numeric values.
+ :param nums: The list of numeric values.
+ :return: interquartile range
+
+ >>> interquartile_range(nums=[4, 1, 2, 3, 2])
+ 2.0
+ >>> interquartile_range(nums = [-2, -7, -10, 9, 8, 4, -67, 45])
+ 17.0
+ >>> interquartile_range(nums = [-2.1, -7.1, -10.1, 9.1, 8.1, 4.1, -67.1, 45.1])
+ 17.2
+ >>> interquartile_range(nums = [0, 0, 0, 0, 0])
+ 0.0
+ >>> interquartile_range(nums=[])
+ Traceback (most recent call last):
+ ...
+ ValueError: The list is empty. Provide a non-empty list.
+ """
+ if not nums:
+ raise ValueError("The list is empty. Provide a non-empty list.")
+ nums.sort()
+ length = len(nums)
+ div, mod = divmod(length, 2)
+ q1 = find_median(nums[:div])
+ half_length = sum((div, mod))
+ q3 = find_median(nums[half_length:length])
+ return q3 - q1
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
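A worked check of the first doctest: sorting [4, 1, 2, 3, 2] gives [1, 2, 2, 3, 4]; with length 5, divmod(5, 2) is (2, 1), so Q1 is the median of the lower half [1, 2] (1.5) and Q3 the median of the upper half [3, 4] (3.5), giving IQR 2.0. The same split cross-checked with the standard library:

from statistics import median

nums = sorted([4, 1, 2, 3, 2])   # [1, 2, 2, 3, 4]
div, mod = divmod(len(nums), 2)  # (2, 1)
q1 = median(nums[:div])          # median([1, 2]) == 1.5
q3 = median(nums[div + mod :])   # median([3, 4]) == 3.5
assert q3 - q1 == 2.0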
From ac62cdb94fe2478fd809d9ec91e3b85304a5ac6d Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Mon, 7 Aug 2023 19:52:39 -0400
Subject: [PATCH 145/808] [pre-commit.ci] pre-commit autoupdate (#8930)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.281 → v0.0.282](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.281...v0.0.282)
* updating DIRECTORY.md
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
.pre-commit-config.yaml | 2 +-
DIRECTORY.md | 1 +
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index e158bd8d6879..da6762123b04 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,7 @@ repos:
- id: auto-walrus
- repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.0.281
+ rev: v0.0.282
hooks:
- id: ruff
diff --git a/DIRECTORY.md b/DIRECTORY.md
index fdcf0ceedf1f..e6a1ff356143 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -585,6 +585,7 @@
* [Hardy Ramanujanalgo](maths/hardy_ramanujanalgo.py)
* [Hexagonal Number](maths/hexagonal_number.py)
* [Integration By Simpson Approx](maths/integration_by_simpson_approx.py)
+ * [Interquartile Range](maths/interquartile_range.py)
* [Is Int Palindrome](maths/is_int_palindrome.py)
* [Is Ip V4 Address Valid](maths/is_ip_v4_address_valid.py)
* [Is Square Free](maths/is_square_free.py)
From 842d03fb2ab7d83e4d4081c248d71e89bb520809 Mon Sep 17 00:00:00 2001
From: AmirSoroush
Date: Wed, 9 Aug 2023 00:47:09 +0300
Subject: [PATCH 146/808] improvements to jump_search.py (#8932)
* improvements to jump_search.py
* add more tests to jump_search.py
---
searches/jump_search.py | 45 +++++++++++++++++++++++++++++------------
1 file changed, 32 insertions(+), 13 deletions(-)
diff --git a/searches/jump_search.py b/searches/jump_search.py
index 31a9656c55fe..3bc3c37809a1 100644
--- a/searches/jump_search.py
+++ b/searches/jump_search.py
@@ -4,14 +4,28 @@
until the element compared is bigger than the one searched.
It will then perform a linear search until it matches the wanted number.
If not found, it returns -1.
+
+https://en.wikipedia.org/wiki/Jump_search
"""
import math
+from collections.abc import Sequence
+from typing import Any, Protocol, TypeVar
+
+
+class Comparable(Protocol):
+ def __lt__(self, other: Any, /) -> bool:
+ ...
+
+T = TypeVar("T", bound=Comparable)
-def jump_search(arr: list, x: int) -> int:
+
+def jump_search(arr: Sequence[T], item: T) -> int:
"""
- Pure Python implementation of the jump search algorithm.
+ Python implementation of the jump search algorithm.
+ Return the index if the `item` is found, otherwise return -1.
+
Examples:
>>> jump_search([0, 1, 2, 3, 4, 5], 3)
3
@@ -21,31 +35,36 @@ def jump_search(arr: list, x: int) -> int:
-1
>>> jump_search([0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610], 55)
10
+ >>> jump_search(["aa", "bb", "cc", "dd", "ee", "ff"], "ee")
+ 4
"""
- n = len(arr)
- step = int(math.floor(math.sqrt(n)))
+ arr_size = len(arr)
+ block_size = int(math.sqrt(arr_size))
+
prev = 0
- while arr[min(step, n) - 1] < x:
+ step = block_size
+ while arr[min(step, arr_size) - 1] < item:
prev = step
- step += int(math.floor(math.sqrt(n)))
- if prev >= n:
+ step += block_size
+ if prev >= arr_size:
return -1
- while arr[prev] < x:
- prev = prev + 1
- if prev == min(step, n):
+ while arr[prev] < item:
+ prev += 1
+ if prev == min(step, arr_size):
return -1
- if arr[prev] == x:
+ if arr[prev] == item:
return prev
return -1
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
- arr = [int(item) for item in user_input.split(",")]
+ array = [int(item) for item in user_input.split(",")]
x = int(input("Enter the number to be searched:\n"))
- res = jump_search(arr, x)
+
+ res = jump_search(array, x)
if res == -1:
print("Number not found!")
else:
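To make the renamed variables concrete: with 16 elements, block_size is int(sqrt(16)) == 4, so the first loop probes arr[3], arr[7], arr[11] until a block end reaches the item, and the second loop scans linearly from prev. A standalone trace of the doctest's search for 55:

import math

arr = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610]
item = 55
block_size = int(math.sqrt(len(arr)))  # 4

prev, step = 0, block_size
while arr[min(step, len(arr)) - 1] < item:  # jump one block at a time
    prev, step = step, step + block_size
# arr[11] == 89 >= 55, so the jumps stop with prev == 8
while arr[prev] < item:  # linear scan inside the block
    prev += 1
assert prev == 10 and arr[prev] == 55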
From ae0fc85401efd9816193a06e554a66600cc09a97 Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Wed, 9 Aug 2023 00:55:30 -0700
Subject: [PATCH 147/808] Fix ruff errors (#8936)
* Fix ruff errors
Renamed neural_network/input_data.py to neural_network/input_data.py_tf
because it should be left out of the directory for the following
reasons:
1. Its sole purpose is to be used by neural_network/gan.py_tf, which is
itself left out of the directory because of issues with TensorFlow.
2. It was taken directly from TensorFlow's codebase and is actually
already deprecated. If/when neural_network/gan.py_tf is eventually
re-added back to the directory, its implementation should be changed
to not use neural_network/input_data.py anyway.
* updating DIRECTORY.md
* Change input_data.py_tf file extension
Change input_data.py_tf file extension because algorithms-keeper bot is being picky about it
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 1 -
conversions/length_conversion.py | 30 +++++++++-----
conversions/pressure_conversions.py | 28 ++++++++-----
conversions/volume_conversions.py | 40 +++++++++++--------
.../binary_tree/distribute_coins.py | 10 +++--
electronics/electric_power.py | 22 +++++-----
graphs/bi_directional_dijkstra.py | 4 +-
maths/area_under_curve.py | 6 +--
maths/decimal_to_fraction.py | 2 +-
maths/line_length.py | 6 +--
maths/numerical_integration.py | 6 +--
.../single_indeterminate_operations.py | 4 +-
maths/series/geometric_series.py | 10 ++---
maths/series/p_series.py | 2 +-
maths/volume.py | 2 +-
matrix/matrix_class.py | 4 +-
matrix/matrix_operation.py | 6 +--
matrix/searching_in_sorted_matrix.py | 4 +-
matrix/sherman_morrison.py | 16 ++++----
...t_data.py => input_data.py.DEPRECATED.txt} | 0
web_programming/covid_stats_via_xpath.py | 12 ++++--
21 files changed, 121 insertions(+), 94 deletions(-)
rename neural_network/{input_data.py => input_data.py.DEPRECATED.txt} (100%)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index e6a1ff356143..5578c1c9a6dd 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -710,7 +710,6 @@
* [Exponential Linear Unit](neural_network/activation_functions/exponential_linear_unit.py)
* [Back Propagation Neural Network](neural_network/back_propagation_neural_network.py)
* [Convolution Neural Network](neural_network/convolution_neural_network.py)
- * [Input Data](neural_network/input_data.py)
* [Perceptron](neural_network/perceptron.py)
* [Simple Neural Network](neural_network/simple_neural_network.py)
diff --git a/conversions/length_conversion.py b/conversions/length_conversion.py
index d8f39515255e..07fa93a198c7 100644
--- a/conversions/length_conversion.py
+++ b/conversions/length_conversion.py
@@ -22,9 +22,13 @@
-> Wikipedia reference: https://en.wikipedia.org/wiki/Millimeter
"""
-from collections import namedtuple
+from typing import NamedTuple
+
+
+class FromTo(NamedTuple):
+ from_factor: float
+ to_factor: float
-from_to = namedtuple("from_to", "from_ to")
TYPE_CONVERSION = {
"millimeter": "mm",
@@ -40,14 +44,14 @@
}
METRIC_CONVERSION = {
- "mm": from_to(0.001, 1000),
- "cm": from_to(0.01, 100),
- "m": from_to(1, 1),
- "km": from_to(1000, 0.001),
- "in": from_to(0.0254, 39.3701),
- "ft": from_to(0.3048, 3.28084),
- "yd": from_to(0.9144, 1.09361),
- "mi": from_to(1609.34, 0.000621371),
+ "mm": FromTo(0.001, 1000),
+ "cm": FromTo(0.01, 100),
+ "m": FromTo(1, 1),
+ "km": FromTo(1000, 0.001),
+ "in": FromTo(0.0254, 39.3701),
+ "ft": FromTo(0.3048, 3.28084),
+ "yd": FromTo(0.9144, 1.09361),
+ "mi": FromTo(1609.34, 0.000621371),
}
@@ -115,7 +119,11 @@ def length_conversion(value: float, from_type: str, to_type: str) -> float:
f"Conversion abbreviations are: {', '.join(METRIC_CONVERSION)}"
)
raise ValueError(msg)
- return value * METRIC_CONVERSION[new_from].from_ * METRIC_CONVERSION[new_to].to
+ return (
+ value
+ * METRIC_CONVERSION[new_from].from_factor
+ * METRIC_CONVERSION[new_to].to_factor
+ )
if __name__ == "__main__":
diff --git a/conversions/pressure_conversions.py b/conversions/pressure_conversions.py
index e0cd18d234ba..fe78b1382677 100644
--- a/conversions/pressure_conversions.py
+++ b/conversions/pressure_conversions.py
@@ -19,19 +19,23 @@
-> https://www.unitconverters.net/pressure-converter.html
"""
-from collections import namedtuple
+from typing import NamedTuple
+
+
+class FromTo(NamedTuple):
+ from_factor: float
+ to_factor: float
-from_to = namedtuple("from_to", "from_ to")
PRESSURE_CONVERSION = {
- "atm": from_to(1, 1),
- "pascal": from_to(0.0000098, 101325),
- "bar": from_to(0.986923, 1.01325),
- "kilopascal": from_to(0.00986923, 101.325),
- "megapascal": from_to(9.86923, 0.101325),
- "psi": from_to(0.068046, 14.6959),
- "inHg": from_to(0.0334211, 29.9213),
- "torr": from_to(0.00131579, 760),
+ "atm": FromTo(1, 1),
+ "pascal": FromTo(0.0000098, 101325),
+ "bar": FromTo(0.986923, 1.01325),
+ "kilopascal": FromTo(0.00986923, 101.325),
+ "megapascal": FromTo(9.86923, 0.101325),
+ "psi": FromTo(0.068046, 14.6959),
+ "inHg": FromTo(0.0334211, 29.9213),
+ "torr": FromTo(0.00131579, 760),
}
@@ -71,7 +75,9 @@ def pressure_conversion(value: float, from_type: str, to_type: str) -> float:
+ ", ".join(PRESSURE_CONVERSION)
)
return (
- value * PRESSURE_CONVERSION[from_type].from_ * PRESSURE_CONVERSION[to_type].to
+ value
+ * PRESSURE_CONVERSION[from_type].from_factor
+ * PRESSURE_CONVERSION[to_type].to_factor
)
diff --git a/conversions/volume_conversions.py b/conversions/volume_conversions.py
index 44d29009120c..cb240380534b 100644
--- a/conversions/volume_conversions.py
+++ b/conversions/volume_conversions.py
@@ -18,35 +18,39 @@
-> Wikipedia reference: https://en.wikipedia.org/wiki/Cup_(unit)
"""
-from collections import namedtuple
+from typing import NamedTuple
+
+
+class FromTo(NamedTuple):
+ from_factor: float
+ to_factor: float
-from_to = namedtuple("from_to", "from_ to")
METRIC_CONVERSION = {
- "cubicmeter": from_to(1, 1),
- "litre": from_to(0.001, 1000),
- "kilolitre": from_to(1, 1),
- "gallon": from_to(0.00454, 264.172),
- "cubicyard": from_to(0.76455, 1.30795),
- "cubicfoot": from_to(0.028, 35.3147),
- "cup": from_to(0.000236588, 4226.75),
+ "cubic meter": FromTo(1, 1),
+ "litre": FromTo(0.001, 1000),
+ "kilolitre": FromTo(1, 1),
+ "gallon": FromTo(0.00454, 264.172),
+ "cubic yard": FromTo(0.76455, 1.30795),
+ "cubic foot": FromTo(0.028, 35.3147),
+ "cup": FromTo(0.000236588, 4226.75),
}
def volume_conversion(value: float, from_type: str, to_type: str) -> float:
"""
Conversion between volume units.
- >>> volume_conversion(4, "cubicmeter", "litre")
+ >>> volume_conversion(4, "cubic meter", "litre")
4000
>>> volume_conversion(1, "litre", "gallon")
0.264172
- >>> volume_conversion(1, "kilolitre", "cubicmeter")
+ >>> volume_conversion(1, "kilolitre", "cubic meter")
1
- >>> volume_conversion(3, "gallon", "cubicyard")
+ >>> volume_conversion(3, "gallon", "cubic yard")
0.017814279
- >>> volume_conversion(2, "cubicyard", "litre")
+ >>> volume_conversion(2, "cubic yard", "litre")
1529.1
- >>> volume_conversion(4, "cubicfoot", "cup")
+ >>> volume_conversion(4, "cubic foot", "cup")
473.396
>>> volume_conversion(1, "cup", "kilolitre")
0.000236588
@@ -54,7 +58,7 @@ def volume_conversion(value: float, from_type: str, to_type: str) -> float:
Traceback (most recent call last):
...
ValueError: Invalid 'from_type' value: 'wrongUnit' Supported values are:
- cubicmeter, litre, kilolitre, gallon, cubicyard, cubicfoot, cup
+ cubic meter, litre, kilolitre, gallon, cubic yard, cubic foot, cup
"""
if from_type not in METRIC_CONVERSION:
raise ValueError(
@@ -66,7 +70,11 @@ def volume_conversion(value: float, from_type: str, to_type: str) -> float:
f"Invalid 'to_type' value: {to_type!r}. Supported values are:\n"
+ ", ".join(METRIC_CONVERSION)
)
- return value * METRIC_CONVERSION[from_type].from_ * METRIC_CONVERSION[to_type].to
+ return (
+ value
+ * METRIC_CONVERSION[from_type].from_factor
+ * METRIC_CONVERSION[to_type].to_factor
+ )
if __name__ == "__main__":
diff --git a/data_structures/binary_tree/distribute_coins.py b/data_structures/binary_tree/distribute_coins.py
index ea02afc2cea6..5712604cb87c 100644
--- a/data_structures/binary_tree/distribute_coins.py
+++ b/data_structures/binary_tree/distribute_coins.py
@@ -39,8 +39,8 @@
from __future__ import annotations
-from collections import namedtuple
from dataclasses import dataclass
+from typing import NamedTuple
@dataclass
@@ -50,7 +50,9 @@ class TreeNode:
right: TreeNode | None = None
-CoinsDistribResult = namedtuple("CoinsDistribResult", "moves excess")
+class CoinsDistribResult(NamedTuple):
+ moves: int
+ excess: int
def distribute_coins(root: TreeNode | None) -> int:
@@ -79,7 +81,7 @@ def distribute_coins(root: TreeNode | None) -> int:
# Validation
def count_nodes(node: TreeNode | None) -> int:
"""
- >>> count_nodes(None):
+ >>> count_nodes(None)
0
"""
if node is None:
@@ -89,7 +91,7 @@ def count_nodes(node: TreeNode | None) -> int:
def count_coins(node: TreeNode | None) -> int:
"""
- >>> count_coins(None):
+ >>> count_coins(None)
0
"""
if node is None:
diff --git a/electronics/electric_power.py b/electronics/electric_power.py
index e59795601791..8b92e320ace3 100644
--- a/electronics/electric_power.py
+++ b/electronics/electric_power.py
@@ -1,7 +1,12 @@
# https://en.m.wikipedia.org/wiki/Electric_power
from __future__ import annotations
-from collections import namedtuple
+from typing import NamedTuple
+
+
+class Result(NamedTuple):
+ name: str
+ value: float
def electric_power(voltage: float, current: float, power: float) -> tuple:
@@ -10,11 +15,11 @@ def electric_power(voltage: float, current: float, power: float) -> tuple:
fundamental value of electrical system.
examples are below:
>>> electric_power(voltage=0, current=2, power=5)
- result(name='voltage', value=2.5)
+ Result(name='voltage', value=2.5)
>>> electric_power(voltage=2, current=2, power=0)
- result(name='power', value=4.0)
+ Result(name='power', value=4.0)
>>> electric_power(voltage=-2, current=3, power=0)
- result(name='power', value=6.0)
+ Result(name='power', value=6.0)
>>> electric_power(voltage=2, current=4, power=2)
Traceback (most recent call last):
...
@@ -28,9 +33,8 @@ def electric_power(voltage: float, current: float, power: float) -> tuple:
...
ValueError: Power cannot be negative in any electrical/electronics system
>>> electric_power(voltage=2.2, current=2.2, power=0)
- result(name='power', value=4.84)
+ Result(name='power', value=4.84)
"""
- result = namedtuple("result", "name value")
if (voltage, current, power).count(0) != 1:
raise ValueError("Only one argument must be 0")
elif power < 0:
@@ -38,11 +42,11 @@ def electric_power(voltage: float, current: float, power: float) -> tuple:
"Power cannot be negative in any electrical/electronics system"
)
elif voltage == 0:
- return result("voltage", power / current)
+ return Result("voltage", power / current)
elif current == 0:
- return result("current", power / voltage)
+ return Result("current", power / voltage)
elif power == 0:
- return result("power", float(round(abs(voltage * current), 2)))
+ return Result("power", float(round(abs(voltage * current), 2)))
else:
raise ValueError("Exactly one argument must be 0")
diff --git a/graphs/bi_directional_dijkstra.py b/graphs/bi_directional_dijkstra.py
index a4489026be80..529a235db625 100644
--- a/graphs/bi_directional_dijkstra.py
+++ b/graphs/bi_directional_dijkstra.py
@@ -26,8 +26,8 @@ def pass_and_relaxation(
cst_bwd: dict,
queue: PriorityQueue,
parent: dict,
- shortest_distance: float | int,
-) -> float | int:
+ shortest_distance: float,
+) -> float:
for nxt, d in graph[v]:
if nxt in visited_forward:
continue
diff --git a/maths/area_under_curve.py b/maths/area_under_curve.py
index b557b2029657..0da6546b2e36 100644
--- a/maths/area_under_curve.py
+++ b/maths/area_under_curve.py
@@ -7,9 +7,9 @@
def trapezoidal_area(
- fnc: Callable[[int | float], int | float],
- x_start: int | float,
- x_end: int | float,
+ fnc: Callable[[float], float],
+ x_start: float,
+ x_end: float,
steps: int = 100,
) -> float:
"""
diff --git a/maths/decimal_to_fraction.py b/maths/decimal_to_fraction.py
index 9462bafe0171..2aa8e3c3dfd6 100644
--- a/maths/decimal_to_fraction.py
+++ b/maths/decimal_to_fraction.py
@@ -1,4 +1,4 @@
-def decimal_to_fraction(decimal: int | float | str) -> tuple[int, int]:
+def decimal_to_fraction(decimal: float | str) -> tuple[int, int]:
"""
Return a decimal number in its simplest fraction form
>>> decimal_to_fraction(2)
diff --git a/maths/line_length.py b/maths/line_length.py
index b810f2d9ad1f..ed2efc31e96e 100644
--- a/maths/line_length.py
+++ b/maths/line_length.py
@@ -5,9 +5,9 @@
def line_length(
- fnc: Callable[[int | float], int | float],
- x_start: int | float,
- x_end: int | float,
+ fnc: Callable[[float], float],
+ x_start: float,
+ x_end: float,
steps: int = 100,
) -> float:
"""
diff --git a/maths/numerical_integration.py b/maths/numerical_integration.py
index f2d65f89e390..4ac562644a07 100644
--- a/maths/numerical_integration.py
+++ b/maths/numerical_integration.py
@@ -7,9 +7,9 @@
def trapezoidal_area(
- fnc: Callable[[int | float], int | float],
- x_start: int | float,
- x_end: int | float,
+ fnc: Callable[[float], float],
+ x_start: float,
+ x_end: float,
steps: int = 100,
) -> float:
"""
diff --git a/maths/polynomials/single_indeterminate_operations.py b/maths/polynomials/single_indeterminate_operations.py
index 8bafdb591793..e31e6caa3988 100644
--- a/maths/polynomials/single_indeterminate_operations.py
+++ b/maths/polynomials/single_indeterminate_operations.py
@@ -87,7 +87,7 @@ def __mul__(self, polynomial_2: Polynomial) -> Polynomial:
return Polynomial(self.degree + polynomial_2.degree, coefficients)
- def evaluate(self, substitution: int | float) -> int | float:
+ def evaluate(self, substitution: float) -> float:
"""
Evaluates the polynomial at x.
>>> p = Polynomial(2, [1, 2, 3])
@@ -144,7 +144,7 @@ def derivative(self) -> Polynomial:
coefficients[i] = self.coefficients[i + 1] * (i + 1)
return Polynomial(self.degree - 1, coefficients)
- def integral(self, constant: int | float = 0) -> Polynomial:
+ def integral(self, constant: float = 0) -> Polynomial:
"""
Returns the integral of the polynomial.
>>> p = Polynomial(2, [1, 2, 3])
diff --git a/maths/series/geometric_series.py b/maths/series/geometric_series.py
index 90c9fe77b733..b8d6a86206be 100644
--- a/maths/series/geometric_series.py
+++ b/maths/series/geometric_series.py
@@ -14,10 +14,10 @@
def geometric_series(
- nth_term: float | int,
- start_term_a: float | int,
- common_ratio_r: float | int,
-) -> list[float | int]:
+ nth_term: float,
+ start_term_a: float,
+ common_ratio_r: float,
+) -> list[float]:
"""
Pure Python implementation of Geometric Series algorithm
@@ -48,7 +48,7 @@ def geometric_series(
"""
if not all((nth_term, start_term_a, common_ratio_r)):
return []
- series: list[float | int] = []
+ series: list[float] = []
power = 1
multiple = common_ratio_r
for _ in range(int(nth_term)):
diff --git a/maths/series/p_series.py b/maths/series/p_series.py
index 34fa3f2399af..a091a6f3fecf 100644
--- a/maths/series/p_series.py
+++ b/maths/series/p_series.py
@@ -13,7 +13,7 @@
from __future__ import annotations
-def p_series(nth_term: int | float | str, power: int | float | str) -> list[str]:
+def p_series(nth_term: float | str, power: float | str) -> list[str]:
"""
Pure Python implementation of P-Series algorithm
:return: The P-Series starting from 1 to last (nth) term
diff --git a/maths/volume.py b/maths/volume.py
index 1da4584c893e..721974e68b66 100644
--- a/maths/volume.py
+++ b/maths/volume.py
@@ -8,7 +8,7 @@
from math import pi, pow
-def vol_cube(side_length: int | float) -> float:
+def vol_cube(side_length: float) -> float:
"""
Calculate the Volume of a Cube.
>>> vol_cube(1)
diff --git a/matrix/matrix_class.py b/matrix/matrix_class.py
index a73e8b92a286..a5940a38e836 100644
--- a/matrix/matrix_class.py
+++ b/matrix/matrix_class.py
@@ -141,7 +141,7 @@ def num_columns(self) -> int:
@property
def order(self) -> tuple[int, int]:
- return (self.num_rows, self.num_columns)
+ return self.num_rows, self.num_columns
@property
def is_square(self) -> bool:
@@ -315,7 +315,7 @@ def __sub__(self, other: Matrix) -> Matrix:
]
)
- def __mul__(self, other: Matrix | int | float) -> Matrix:
+ def __mul__(self, other: Matrix | float) -> Matrix:
if isinstance(other, (int, float)):
return Matrix(
[[int(element * other) for element in row] for row in self.rows]
diff --git a/matrix/matrix_operation.py b/matrix/matrix_operation.py
index f189f1898d33..d63e758f1838 100644
--- a/matrix/matrix_operation.py
+++ b/matrix/matrix_operation.py
@@ -47,7 +47,7 @@ def subtract(matrix_a: list[list[int]], matrix_b: list[list[int]]) -> list[list[
raise TypeError("Expected a matrix, got int/list instead")
-def scalar_multiply(matrix: list[list[int]], n: int | float) -> list[list[float]]:
+def scalar_multiply(matrix: list[list[int]], n: float) -> list[list[float]]:
"""
>>> scalar_multiply([[1,2],[3,4]],5)
[[5, 10], [15, 20]]
@@ -189,9 +189,7 @@ def main() -> None:
matrix_c = [[11, 12, 13, 14], [21, 22, 23, 24], [31, 32, 33, 34], [41, 42, 43, 44]]
matrix_d = [[3, 0, 2], [2, 0, -2], [0, 1, 1]]
print(f"Add Operation, {add(matrix_a, matrix_b) = } \n")
- print(
- f"Multiply Operation, {multiply(matrix_a, matrix_b) = } \n",
- )
+ print(f"Multiply Operation, {multiply(matrix_a, matrix_b) = } \n")
print(f"Identity: {identity(5)}\n")
print(f"Minor of {matrix_c} = {minor(matrix_c, 1, 2)} \n")
print(f"Determinant of {matrix_b} = {determinant(matrix_b)} \n")
diff --git a/matrix/searching_in_sorted_matrix.py b/matrix/searching_in_sorted_matrix.py
index ddca3b1ce781..f55cc71d6f3a 100644
--- a/matrix/searching_in_sorted_matrix.py
+++ b/matrix/searching_in_sorted_matrix.py
@@ -1,9 +1,7 @@
from __future__ import annotations
-def search_in_a_sorted_matrix(
- mat: list[list[int]], m: int, n: int, key: int | float
-) -> None:
+def search_in_a_sorted_matrix(mat: list[list[int]], m: int, n: int, key: float) -> None:
"""
>>> search_in_a_sorted_matrix(
... [[2, 5, 7], [4, 8, 13], [9, 11, 15], [12, 17, 20]], 3, 3, 5)
diff --git a/matrix/sherman_morrison.py b/matrix/sherman_morrison.py
index 256271e8a87d..b6e50f70fdcf 100644
--- a/matrix/sherman_morrison.py
+++ b/matrix/sherman_morrison.py
@@ -22,7 +22,7 @@ def __init__(self, row: int, column: int, default_value: float = 0) -> None:
"""
self.row, self.column = row, column
- self.array = [[default_value for c in range(column)] for r in range(row)]
+ self.array = [[default_value for _ in range(column)] for _ in range(row)]
def __str__(self) -> str:
"""
@@ -54,15 +54,15 @@ def single_line(row_vector: list[float]) -> str:
def __repr__(self) -> str:
return str(self)
- def validate_indicies(self, loc: tuple[int, int]) -> bool:
+ def validate_indices(self, loc: tuple[int, int]) -> bool:
"""
Check if given indices are valid to pick element from matrix.
Example:
>>> a = Matrix(2, 6, 0)
- >>> a.validate_indicies((2, 7))
+ >>> a.validate_indices((2, 7))
False
- >>> a.validate_indicies((0, 0))
+ >>> a.validate_indices((0, 0))
True
"""
if not (isinstance(loc, (list, tuple)) and len(loc) == 2):
@@ -81,7 +81,7 @@ def __getitem__(self, loc: tuple[int, int]) -> Any:
>>> a[1, 0]
7
"""
- assert self.validate_indicies(loc)
+ assert self.validate_indices(loc)
return self.array[loc[0]][loc[1]]
def __setitem__(self, loc: tuple[int, int], value: float) -> None:
@@ -96,7 +96,7 @@ def __setitem__(self, loc: tuple[int, int], value: float) -> None:
[ 1, 1, 1]
[ 1, 1, 51]
"""
- assert self.validate_indicies(loc)
+ assert self.validate_indices(loc)
self.array[loc[0]][loc[1]] = value
def __add__(self, another: Matrix) -> Matrix:
@@ -145,7 +145,7 @@ def __neg__(self) -> Matrix:
def __sub__(self, another: Matrix) -> Matrix:
return self + (-another)
- def __mul__(self, another: int | float | Matrix) -> Matrix:
+ def __mul__(self, another: float | Matrix) -> Matrix:
"""
Return self * another.
@@ -233,7 +233,7 @@ def sherman_morrison(self, u: Matrix, v: Matrix) -> Any:
v_t = v.transpose()
numerator_factor = (v_t * self * u)[0, 0] + 1
if numerator_factor == 0:
- return None # It's not invertable
+ return None # It's not invertible
return self - ((self * u) * (v_t * self) * (1.0 / numerator_factor))
diff --git a/neural_network/input_data.py b/neural_network/input_data.py.DEPRECATED.txt
similarity index 100%
rename from neural_network/input_data.py
rename to neural_network/input_data.py.DEPRECATED.txt
diff --git a/web_programming/covid_stats_via_xpath.py b/web_programming/covid_stats_via_xpath.py
index 85ea5d940d85..a95130badad9 100644
--- a/web_programming/covid_stats_via_xpath.py
+++ b/web_programming/covid_stats_via_xpath.py
@@ -4,17 +4,21 @@
more convenient to use in Python web projects (e.g. Django or Flask-based)
"""
-from collections import namedtuple
+from typing import NamedTuple
import requests
from lxml import html # type: ignore
-covid_data = namedtuple("covid_data", "cases deaths recovered")
+class CovidData(NamedTuple):
+ cases: int
+ deaths: int
+ recovered: int
-def covid_stats(url: str = "https://www.worldometers.info/coronavirus/") -> covid_data:
+
+def covid_stats(url: str = "https://www.worldometers.info/coronavirus/") -> CovidData:
xpath_str = '//div[@class = "maincounter-number"]/span/text()'
- return covid_data(*html.fromstring(requests.get(url).content).xpath(xpath_str))
+ return CovidData(*html.fromstring(requests.get(url).content).xpath(xpath_str))
fmt = """Total COVID-19 cases in the world: {}
From c39b7eadbd4d81dda5e7ffe4c169d670483f0113 Mon Sep 17 00:00:00 2001
From: Suman <66205793+Suman2023@users.noreply.github.com>
Date: Sun, 13 Aug 2023 03:28:37 +0530
Subject: [PATCH 148/808] updated the URL and HTML tags for scraping yahoo
finance (#8942)
* updated the url and tags for yahoo finance
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* updated to return the error text
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
web_programming/current_stock_price.py | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/web_programming/current_stock_price.py b/web_programming/current_stock_price.py
index df44da4ef351..0c06354d8998 100644
--- a/web_programming/current_stock_price.py
+++ b/web_programming/current_stock_price.py
@@ -3,12 +3,18 @@
def stock_price(symbol: str = "AAPL") -> str:
- url = f"https://in.finance.yahoo.com/quote/{symbol}?s={symbol}"
- soup = BeautifulSoup(requests.get(url).text, "html.parser")
- class_ = "My(6px) Pos(r) smartphone_Mt(6px)"
- return soup.find("div", class_=class_).find("span").text
+ url = f"https://finance.yahoo.com/quote/{symbol}?p={symbol}"
+ yahoo_finance_source = requests.get(url, headers={"USER-AGENT": "Mozilla/5.0"}).text
+ soup = BeautifulSoup(yahoo_finance_source, "html.parser")
+ specific_fin_streamer_tag = soup.find("fin-streamer", {"data-test": "qsp-price"})
+ if specific_fin_streamer_tag:
+ text = specific_fin_streamer_tag.get_text()
+ return text
+ return "No tag with the specified data-test attribute found."
+
+# Search for the symbol at https://finance.yahoo.com/lookup
if __name__ == "__main__":
for symbol in "AAPL AMZN IBM GOOG MSFT ORCL".split():
print(f"Current {symbol:<4} stock price is {stock_price(symbol):>8}")
From 4f2a346c277076ce1d69578ef52a9766e5040176 Mon Sep 17 00:00:00 2001
From: Maxim Smolskiy
Date: Sun, 13 Aug 2023 13:05:42 +0300
Subject: [PATCH 149/808] Reduce the complexity of
linear_algebra/src/polynom_for_points.py (#8605)
* Reduce the complexity of linear_algebra/src/polynom_for_points.py
* updating DIRECTORY.md
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix
* Fix review issues
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
linear_algebra/src/polynom_for_points.py | 57 ++++++++----------------
1 file changed, 19 insertions(+), 38 deletions(-)
diff --git a/linear_algebra/src/polynom_for_points.py b/linear_algebra/src/polynom_for_points.py
index f5e3db0cbb13..a9a9a8117c18 100644
--- a/linear_algebra/src/polynom_for_points.py
+++ b/linear_algebra/src/polynom_for_points.py
@@ -43,62 +43,43 @@ def points_to_polynomial(coordinates: list[list[int]]) -> str:
x = len(coordinates)
- count_of_line = 0
- matrix: list[list[float]] = []
# put the x and x to the power values in a matrix
- while count_of_line < x:
- count_in_line = 0
- a = coordinates[count_of_line][0]
- count_line: list[float] = []
- while count_in_line < x:
- count_line.append(a ** (x - (count_in_line + 1)))
- count_in_line += 1
- matrix.append(count_line)
- count_of_line += 1
+ matrix: list[list[float]] = [
+ [
+ coordinates[count_of_line][0] ** (x - (count_in_line + 1))
+ for count_in_line in range(x)
+ ]
+ for count_of_line in range(x)
+ ]
- count_of_line = 0
# put the y values into a vector
- vector: list[float] = []
- while count_of_line < x:
- vector.append(coordinates[count_of_line][1])
- count_of_line += 1
+ vector: list[float] = [coordinates[count_of_line][1] for count_of_line in range(x)]
- count = 0
-
- while count < x:
- zahlen = 0
- while zahlen < x:
- if count == zahlen:
- zahlen += 1
- if zahlen == x:
- break
- bruch = matrix[zahlen][count] / matrix[count][count]
+ for count in range(x):
+ for number in range(x):
+ if count == number:
+ continue
+ fraction = matrix[number][count] / matrix[count][count]
for counting_columns, item in enumerate(matrix[count]):
# manipulating all the values in the matrix
- matrix[zahlen][counting_columns] -= item * bruch
+ matrix[number][counting_columns] -= item * fraction
# manipulating the values in the vector
- vector[zahlen] -= vector[count] * bruch
- zahlen += 1
- count += 1
+ vector[number] -= vector[count] * fraction
- count = 0
# make solutions
- solution: list[str] = []
- while count < x:
- solution.append(str(vector[count] / matrix[count][count]))
- count += 1
+ solution: list[str] = [
+ str(vector[count] / matrix[count][count]) for count in range(x)
+ ]
- count = 0
solved = "f(x)="
- while count < x:
+ for count in range(x):
remove_e: list[str] = solution[count].split("E")
if len(remove_e) > 1:
solution[count] = f"{remove_e[0]}*10^{remove_e[1]}"
solved += f"x^{x - (count + 1)}*{solution[count]}"
if count + 1 != x:
solved += "+"
- count += 1
return solved
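The refactor replaces the hand-rolled counter while-loops with comprehensions and skips the diagonal with continue instead of fiddly index bookkeeping, but the elimination itself is unchanged. Its core, as a standalone Gauss-Jordan sketch reusing the patch's variable names (no pivoting, so a zero on the diagonal would still divide by zero):

def gauss_jordan(matrix: list[list[float]], vector: list[float]) -> list[float]:
    """Solve matrix @ x == vector by zeroing every off-diagonal entry."""
    x = len(vector)
    for count in range(x):
        for number in range(x):
            if count == number:  # never eliminate a row against itself
                continue
            fraction = matrix[number][count] / matrix[count][count]
            for counting_columns, item in enumerate(matrix[count]):
                matrix[number][counting_columns] -= item * fraction
            vector[number] -= vector[count] * fraction
    return [vector[count] / matrix[count][count] for count in range(x)]


# x + y == 3 and x - y == 1  ->  x == 2.0, y == 1.0
assert gauss_jordan([[1.0, 1.0], [1.0, -1.0]], [3.0, 1.0]) == [2.0, 1.0]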
From 9d86d4edaa754af06e0da9cac4a717f3765db7f4 Mon Sep 17 00:00:00 2001
From: Caeden Perelli-Harris
Date: Mon, 14 Aug 2023 01:58:17 +0100
Subject: [PATCH 150/808] Create wa-tor algorithm (#8899)
* feat(cellular_automata): Create wa-tor algorithm
* updating DIRECTORY.md
* chore(quality): Implement algo-keeper bot changes
* Update cellular_automata/wa_tor.py
Co-authored-by: Christian Clauss
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* refactor(repr): Return repr as python object
* Update cellular_automata/wa_tor.py
Co-authored-by: Tianyi Zheng
* Update cellular_automata/wa_tor.py
Co-authored-by: Tianyi Zheng
* Update cellular_automata/wa_tor.py
Co-authored-by: Tianyi Zheng
* Update cellular_automata/wa_tor.py
Co-authored-by: Tianyi Zheng
* Update cellular_automata/wa_tor.py
Co-authored-by: Tianyi Zheng
* Update cellular_automata/wa_tor.py
Co-authored-by: Tianyi Zheng
* Update cellular_automata/wa_tor.py
Co-authored-by: Tianyi Zheng
* Update cellular_automata/wa_tor.py
Co-authored-by: Tianyi Zheng
* Update cellular_automata/wa_tor.py
Co-authored-by: Tianyi Zheng
* Update cellular_automata/wa_tor.py
Co-authored-by: Tianyi Zheng
* Update cellular_automata/wa_tor.py
Co-authored-by: Tianyi Zheng
* refactor(display): Rename display_visually to visualise
* refactor(wa-tor): Use double for loop
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* chore(wa-tor): Implement suggestions from code review
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: Christian Clauss
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Tianyi Zheng
---
DIRECTORY.md | 1 +
cellular_automata/wa_tor.py | 550 ++++++++++++++++++++++++++++++++++++
2 files changed, 551 insertions(+)
create mode 100644 cellular_automata/wa_tor.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 5578c1c9a6dd..cdcd1a8ae8cc 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -74,6 +74,7 @@
* [Game Of Life](cellular_automata/game_of_life.py)
* [Nagel Schrekenberg](cellular_automata/nagel_schrekenberg.py)
* [One Dimensional](cellular_automata/one_dimensional.py)
+ * [Wa Tor](cellular_automata/wa_tor.py)
## Ciphers
* [A1Z26](ciphers/a1z26.py)
diff --git a/cellular_automata/wa_tor.py b/cellular_automata/wa_tor.py
new file mode 100644
index 000000000000..e423d1595bdb
--- /dev/null
+++ b/cellular_automata/wa_tor.py
@@ -0,0 +1,550 @@
+"""
+Wa-Tor algorithm (1984)
+
+@ https://en.wikipedia.org/wiki/Wa-Tor
+@ https://beltoforion.de/en/wator/
+@ https://beltoforion.de/en/wator/images/wator_medium.webm
+
+This solution aims to completely remove any systematic approach
+to the Wa-Tor planet, and utilise fully random methods.
+
+The constants are a working set that allows the Wa-Tor planet
+to result in one of the three possible results.
+"""
+
+from collections.abc import Callable
+from random import randint, shuffle
+from time import sleep
+from typing import Literal
+
+WIDTH = 50 # Width of the Wa-Tor planet
+HEIGHT = 50 # Height of the Wa-Tor planet
+
+PREY_INITIAL_COUNT = 30 # The initial number of prey entities
+PREY_REPRODUCTION_TIME = 5 # The chronons before reproducing
+
+PREDATOR_INITIAL_COUNT = 50 # The initial number of predator entities
+# The initial energy value of predator entities
+PREDATOR_INITIAL_ENERGY_VALUE = 15
+# The energy value provided when consuming prey
+PREDATOR_FOOD_VALUE = 5
+PREDATOR_REPRODUCTION_TIME = 20 # The chronons before reproducing
+
+MAX_ENTITIES = 500 # The max number of organisms on the board
+# The number of entities to delete from the unbalanced side
+DELETE_UNBALANCED_ENTITIES = 50
+
+
+class Entity:
+ """
+ Represents an entity (either prey or predator).
+
+ >>> e = Entity(True, coords=(0, 0))
+ >>> e.prey
+ True
+ >>> e.coords
+ (0, 0)
+ >>> e.alive
+ True
+ """
+
+ def __init__(self, prey: bool, coords: tuple[int, int]) -> None:
+ self.prey = prey
+ # The (row, col) pos of the entity
+ self.coords = coords
+
+ self.remaining_reproduction_time = (
+ PREY_REPRODUCTION_TIME if prey else PREDATOR_REPRODUCTION_TIME
+ )
+ self.energy_value = None if prey is True else PREDATOR_INITIAL_ENERGY_VALUE
+ self.alive = True
+
+ def reset_reproduction_time(self) -> None:
+ """
+ >>> e = Entity(True, coords=(0, 0))
+ >>> e.reset_reproduction_time()
+ >>> e.remaining_reproduction_time == PREY_REPRODUCTION_TIME
+ True
+ >>> e = Entity(False, coords=(0, 0))
+ >>> e.reset_reproduction_time()
+ >>> e.remaining_reproduction_time == PREDATOR_REPRODUCTION_TIME
+ True
+ """
+ self.remaining_reproduction_time = (
+ PREY_REPRODUCTION_TIME if self.prey is True else PREDATOR_REPRODUCTION_TIME
+ )
+
+ def __repr__(self) -> str:
+ """
+ >>> Entity(prey=True, coords=(1, 1))
+ Entity(prey=True, coords=(1, 1), remaining_reproduction_time=5)
+ >>> Entity(prey=False, coords=(2, 1)) # doctest: +NORMALIZE_WHITESPACE
+ Entity(prey=False, coords=(2, 1),
+ remaining_reproduction_time=20, energy_value=15)
+ """
+ repr_ = (
+ f"Entity(prey={self.prey}, coords={self.coords}, "
+ f"remaining_reproduction_time={self.remaining_reproduction_time}"
+ )
+ if self.energy_value is not None:
+ repr_ += f", energy_value={self.energy_value}"
+ return f"{repr_})"
+
+
+class WaTor:
+ """
+ Represents the main Wa-Tor algorithm.
+
+ :attr time_passed: A function that is called every time
+ time passes (a chronon) in order to visually display
+ the new Wa-Tor planet. The time_passed function can block
+ using time.sleep to slow the algorithm progression.
+
+ >>> wt = WaTor(10, 15)
+ >>> wt.width
+ 10
+ >>> wt.height
+ 15
+ >>> len(wt.planet)
+ 15
+ >>> len(wt.planet[0])
+ 10
+ >>> len(wt.get_entities()) == PREDATOR_INITIAL_COUNT + PREY_INITIAL_COUNT
+ True
+ """
+
+ time_passed: Callable[["WaTor", int], None] | None
+
+ def __init__(self, width: int, height: int) -> None:
+ self.width = width
+ self.height = height
+ self.time_passed = None
+
+ self.planet: list[list[Entity | None]] = [[None] * width for _ in range(height)]
+
+ # Populate planet with predators and prey randomly
+ for _ in range(PREY_INITIAL_COUNT):
+ self.add_entity(prey=True)
+ for _ in range(PREDATOR_INITIAL_COUNT):
+ self.add_entity(prey=False)
+ self.set_planet(self.planet)
+
+ def set_planet(self, planet: list[list[Entity | None]]) -> None:
+ """
+ Ease of access for testing
+
+ >>> wt = WaTor(WIDTH, HEIGHT)
+ >>> planet = [
+ ... [None, None, None],
+ ... [None, Entity(True, coords=(1, 1)), None]
+ ... ]
+ >>> wt.set_planet(planet)
+ >>> wt.planet == planet
+ True
+ >>> wt.width
+ 3
+ >>> wt.height
+ 2
+ """
+ self.planet = planet
+ self.width = len(planet[0])
+ self.height = len(planet)
+
+ def add_entity(self, prey: bool) -> None:
+ """
+ Adds an entity, making sure the entity does
+ not override another entity
+
+ >>> wt = WaTor(WIDTH, HEIGHT)
+ >>> wt.set_planet([[None, None], [None, None]])
+ >>> wt.add_entity(True)
+ >>> len(wt.get_entities())
+ 1
+ >>> wt.add_entity(False)
+ >>> len(wt.get_entities())
+ 2
+ """
+ while True:
+ row, col = randint(0, self.height - 1), randint(0, self.width - 1)
+ if self.planet[row][col] is None:
+ self.planet[row][col] = Entity(prey=prey, coords=(row, col))
+ return
+
+ def get_entities(self) -> list[Entity]:
+ """
+ Returns a list of all the entities within the planet.
+
+ >>> wt = WaTor(WIDTH, HEIGHT)
+ >>> len(wt.get_entities()) == PREDATOR_INITIAL_COUNT + PREY_INITIAL_COUNT
+ True
+ """
+ return [entity for column in self.planet for entity in column if entity]
+
+ def balance_predators_and_prey(self) -> None:
+ """
+ Balances predators and preys so that prey
+ can not dominate the predators, blocking up
+ space for them to reproduce.
+
+ >>> wt = WaTor(WIDTH, HEIGHT)
+ >>> for i in range(2000):
+ ... row, col = i // HEIGHT, i % WIDTH
+ ... wt.planet[row][col] = Entity(True, coords=(row, col))
+ >>> entities = len(wt.get_entities())
+ >>> wt.balance_predators_and_prey()
+ >>> len(wt.get_entities()) == entities
+ False
+ """
+ entities = self.get_entities()
+ shuffle(entities)
+
+ if len(entities) >= MAX_ENTITIES - MAX_ENTITIES / 10:
+ prey = [entity for entity in entities if entity.prey]
+ predators = [entity for entity in entities if not entity.prey]
+
+ prey_count, predator_count = len(prey), len(predators)
+
+ entities_to_purge = (
+ prey[:DELETE_UNBALANCED_ENTITIES]
+ if prey_count > predator_count
+ else predators[:DELETE_UNBALANCED_ENTITIES]
+ )
+ for entity in entities_to_purge:
+ self.planet[entity.coords[0]][entity.coords[1]] = None
+
+ def get_surrounding_prey(self, entity: Entity) -> list[Entity]:
+ """
+ Returns all the prey entities around (N, S, E, W) a predator entity.
+
+    Subtly different from the unoccupied-square scan in move_and_reproduce.
+
+ >>> wt = WaTor(WIDTH, HEIGHT)
+ >>> wt.set_planet([
+ ... [None, Entity(True, (0, 1)), None],
+ ... [None, Entity(False, (1, 1)), None],
+ ... [None, Entity(True, (2, 1)), None]])
+ >>> wt.get_surrounding_prey(
+ ... Entity(False, (1, 1))) # doctest: +NORMALIZE_WHITESPACE
+ [Entity(prey=True, coords=(0, 1), remaining_reproduction_time=5),
+ Entity(prey=True, coords=(2, 1), remaining_reproduction_time=5)]
+ >>> wt.set_planet([[Entity(False, (0, 0))]])
+ >>> wt.get_surrounding_prey(Entity(False, (0, 0)))
+ []
+ >>> wt.set_planet([
+ ... [Entity(True, (0, 0)), Entity(False, (1, 0)), Entity(False, (2, 0))],
+ ... [None, Entity(False, (1, 1)), Entity(True, (2, 1))],
+ ... [None, None, None]])
+ >>> wt.get_surrounding_prey(Entity(False, (1, 0)))
+ [Entity(prey=True, coords=(0, 0), remaining_reproduction_time=5)]
+ """
+ row, col = entity.coords
+ adjacent: list[tuple[int, int]] = [
+ (row - 1, col), # North
+ (row + 1, col), # South
+ (row, col - 1), # West
+ (row, col + 1), # East
+ ]
+
+ return [
+ ent
+ for r, c in adjacent
+ if 0 <= r < self.height
+ and 0 <= c < self.width
+ and (ent := self.planet[r][c]) is not None
+ and ent.prey
+ ]
+
+ def move_and_reproduce(
+ self, entity: Entity, direction_orders: list[Literal["N", "E", "S", "W"]]
+ ) -> None:
+ """
+    Attempts to move to an unoccupied neighbouring square
+    in any of the four directions (North, South, East, West).
+    If the move was successful and the remaining_reproduction_time is
+    equal to 0, then a new prey or predator can also be created
+    in the previous square.
+
+    :param direction_orders: Ordered list (like a priority queue) giving
+        the order in which to attempt moves. Removes any systematic
+        bias from checking neighbouring squares in a fixed order.
+
+ >>> planet = [
+ ... [None, None, None],
+ ... [None, Entity(True, coords=(1, 1)), None],
+ ... [None, None, None]
+ ... ]
+ >>> wt = WaTor(WIDTH, HEIGHT)
+ >>> wt.set_planet(planet)
+ >>> wt.move_and_reproduce(Entity(True, coords=(1, 1)), direction_orders=["N"])
+ >>> wt.planet # doctest: +NORMALIZE_WHITESPACE
+ [[None, Entity(prey=True, coords=(0, 1), remaining_reproduction_time=4), None],
+ [None, None, None],
+ [None, None, None]]
+ >>> wt.planet[0][0] = Entity(True, coords=(0, 0))
+ >>> wt.move_and_reproduce(Entity(True, coords=(0, 1)),
+ ... direction_orders=["N", "W", "E", "S"])
+ >>> wt.planet # doctest: +NORMALIZE_WHITESPACE
+ [[Entity(prey=True, coords=(0, 0), remaining_reproduction_time=5), None,
+ Entity(prey=True, coords=(0, 2), remaining_reproduction_time=4)],
+ [None, None, None],
+ [None, None, None]]
+ >>> wt.planet[0][1] = wt.planet[0][2]
+ >>> wt.planet[0][2] = None
+ >>> wt.move_and_reproduce(Entity(True, coords=(0, 1)),
+ ... direction_orders=["N", "W", "S", "E"])
+ >>> wt.planet # doctest: +NORMALIZE_WHITESPACE
+ [[Entity(prey=True, coords=(0, 0), remaining_reproduction_time=5), None, None],
+ [None, Entity(prey=True, coords=(1, 1), remaining_reproduction_time=4), None],
+ [None, None, None]]
+
+ >>> wt = WaTor(WIDTH, HEIGHT)
+ >>> reproducable_entity = Entity(False, coords=(0, 1))
+ >>> reproducable_entity.remaining_reproduction_time = 0
+ >>> wt.planet = [[None, reproducable_entity]]
+ >>> wt.move_and_reproduce(reproducable_entity,
+ ... direction_orders=["N", "W", "S", "E"])
+ >>> wt.planet # doctest: +NORMALIZE_WHITESPACE
+ [[Entity(prey=False, coords=(0, 0),
+ remaining_reproduction_time=20, energy_value=15),
+ Entity(prey=False, coords=(0, 1), remaining_reproduction_time=20,
+ energy_value=15)]]
+ """
+ row, col = coords = entity.coords
+
+ adjacent_squares: dict[Literal["N", "E", "S", "W"], tuple[int, int]] = {
+ "N": (row - 1, col), # North
+ "S": (row + 1, col), # South
+ "W": (row, col - 1), # West
+ "E": (row, col + 1), # East
+ }
+    # Order the adjacent squares by the requested direction priority
+ adjacent: list[tuple[int, int]] = []
+ for order in direction_orders:
+ adjacent.append(adjacent_squares[order])
+
+ for r, c in adjacent:
+ if (
+ 0 <= r < self.height
+ and 0 <= c < self.width
+ and self.planet[r][c] is None
+ ):
+ # Move entity to empty adjacent square
+ self.planet[r][c] = entity
+ self.planet[row][col] = None
+ entity.coords = (r, c)
+ break
+
+    # (2.) See if it is possible to reproduce in the previous square
+ if coords != entity.coords and entity.remaining_reproduction_time <= 0:
+    # Check if the number of entities on the planet is below the max limit
+ if len(self.get_entities()) < MAX_ENTITIES:
+ # Reproduce in previous square
+ self.planet[row][col] = Entity(prey=entity.prey, coords=coords)
+ entity.reset_reproduction_time()
+ else:
+ entity.remaining_reproduction_time -= 1
+
+ def perform_prey_actions(
+ self, entity: Entity, direction_orders: list[Literal["N", "E", "S", "W"]]
+ ) -> None:
+ """
+ Performs the actions for a prey entity
+
+ For prey the rules are:
+ 1. At each chronon, a prey moves randomly to one of the adjacent unoccupied
+ squares. If there are no free squares, no movement takes place.
+ 2. Once a prey has survived a certain number of chronons it may reproduce.
+ This is done as it moves to a neighbouring square,
+ leaving behind a new prey in its old position.
+ Its reproduction time is also reset to zero.
+
+ >>> wt = WaTor(WIDTH, HEIGHT)
+ >>> reproducable_entity = Entity(True, coords=(0, 1))
+ >>> reproducable_entity.remaining_reproduction_time = 0
+ >>> wt.planet = [[None, reproducable_entity]]
+ >>> wt.perform_prey_actions(reproducable_entity,
+ ... direction_orders=["N", "W", "S", "E"])
+ >>> wt.planet # doctest: +NORMALIZE_WHITESPACE
+ [[Entity(prey=True, coords=(0, 0), remaining_reproduction_time=5),
+ Entity(prey=True, coords=(0, 1), remaining_reproduction_time=5)]]
+ """
+ self.move_and_reproduce(entity, direction_orders)
+
+ def perform_predator_actions(
+ self,
+ entity: Entity,
+ occupied_by_prey_coords: tuple[int, int] | None,
+ direction_orders: list[Literal["N", "E", "S", "W"]],
+ ) -> None:
+ """
+ Performs the actions for a predator entity
+
+ :param occupied_by_prey_coords: Move to this location if there is prey there
+
+ For predators the rules are:
+ 1. At each chronon, a predator moves randomly to an adjacent square occupied
+ by a prey. If there is none, the predator moves to a random adjacent
+ unoccupied square. If there are no free squares, no movement takes place.
+ 2. At each chronon, each predator is deprived of a unit of energy.
+ 3. Upon reaching zero energy, a predator dies.
+ 4. If a predator moves to a square occupied by a prey,
+ it eats the prey and earns a certain amount of energy.
+ 5. Once a predator has survived a certain number of chronons
+ it may reproduce in exactly the same way as the prey.
+
+ >>> wt = WaTor(WIDTH, HEIGHT)
+ >>> wt.set_planet([[Entity(True, coords=(0, 0)), Entity(False, coords=(0, 1))]])
+ >>> wt.perform_predator_actions(Entity(False, coords=(0, 1)), (0, 0), [])
+ >>> wt.planet # doctest: +NORMALIZE_WHITESPACE
+ [[Entity(prey=False, coords=(0, 0),
+ remaining_reproduction_time=20, energy_value=19), None]]
+ """
+ assert entity.energy_value is not None # [type checking]
+
+ # (3.) If the entity has 0 energy, it will die
+ if entity.energy_value == 0:
+ self.planet[entity.coords[0]][entity.coords[1]] = None
+ return
+
+ # (1.) Move to entity if possible
+ if occupied_by_prey_coords is not None:
+ # Kill the prey
+ prey = self.planet[occupied_by_prey_coords[0]][occupied_by_prey_coords[1]]
+ assert prey is not None
+ prey.alive = False
+
+ # Move onto prey
+ self.planet[occupied_by_prey_coords[0]][occupied_by_prey_coords[1]] = entity
+ self.planet[entity.coords[0]][entity.coords[1]] = None
+
+ entity.coords = occupied_by_prey_coords
+ # (4.) Eats the prey and earns energy
+ entity.energy_value += PREDATOR_FOOD_VALUE
+ else:
+ # (5.) If it has survived the certain number of chronons it will also
+ # reproduce in this function
+ self.move_and_reproduce(entity, direction_orders)
+
+ # (2.) Each chronon, the predator is deprived of a unit of energy
+ entity.energy_value -= 1
+
+ def run(self, *, iteration_count: int) -> None:
+ """
+ Emulate time passing by looping iteration_count times
+
+ >>> wt = WaTor(WIDTH, HEIGHT)
+ >>> wt.run(iteration_count=PREDATOR_INITIAL_ENERGY_VALUE - 1)
+ >>> len(list(filter(lambda entity: entity.prey is False,
+ ... wt.get_entities()))) >= PREDATOR_INITIAL_COUNT
+ True
+ """
+ for iter_num in range(iteration_count):
+    # Generate a list of all entities in order to randomly
+    # pop one entity at a time, simulating true randomness.
+    # This removes the systematic bias of iterating
+    # through the grid width by height.
+ all_entities = self.get_entities()
+
+ for __ in range(len(all_entities)):
+ entity = all_entities.pop(randint(0, len(all_entities) - 1))
+ if entity.alive is False:
+ continue
+
+ directions: list[Literal["N", "E", "S", "W"]] = ["N", "E", "S", "W"]
+ shuffle(directions) # Randomly shuffle directions
+
+ if entity.prey:
+ self.perform_prey_actions(entity, directions)
+ else:
+ # Create list of surrounding prey
+ surrounding_prey = self.get_surrounding_prey(entity)
+ surrounding_prey_coords = None
+
+ if surrounding_prey:
+ # Again, randomly shuffle directions
+ shuffle(surrounding_prey)
+ surrounding_prey_coords = surrounding_prey[0].coords
+
+ self.perform_predator_actions(
+ entity, surrounding_prey_coords, directions
+ )
+
+ # Balance out the predators and prey
+ self.balance_predators_and_prey()
+
+ if self.time_passed is not None:
+ # Call time_passed function for Wa-Tor planet
+ # visualisation in a terminal or a graph.
+ self.time_passed(self, iter_num)
+
+
+def visualise(wt: WaTor, iter_number: int, *, colour: bool = True) -> None:
+ """
+    Visually displays the Wa-Tor planet using
+    ANSI escape codes in the terminal to clear and re-print
+    the Wa-Tor planet at intervals.
+
+    Uses ANSI colour codes to colourfully display
+    the predators and prey.
+
+ (0x60f197) Prey = #
+    (0xffff0f) Predator = x
+
+ >>> wt = WaTor(30, 30)
+ >>> wt.set_planet([
+ ... [Entity(True, coords=(0, 0)), Entity(False, coords=(0, 1)), None],
+ ... [Entity(False, coords=(1, 0)), None, Entity(False, coords=(1, 2))],
+ ... [None, Entity(True, coords=(2, 1)), None]
+ ... ])
+ >>> visualise(wt, 0, colour=False) # doctest: +NORMALIZE_WHITESPACE
+ # x .
+ x . x
+ . # .
+
+ Iteration: 0 | Prey count: 2 | Predator count: 3 |
+ """
+ if colour:
+ __import__("os").system("")
+ print("\x1b[0;0H\x1b[2J\x1b[?25l")
+
+ reprint = "\x1b[0;0H" if colour else ""
+ ansi_colour_end = "\x1b[0m " if colour else " "
+
+ planet = wt.planet
+ output = ""
+
+ # Iterate over every entity in the planet
+ for row in planet:
+ for entity in row:
+ if entity is None:
+ output += " . "
+ else:
+ if colour is True:
+ output += (
+ "\x1b[38;2;96;241;151m"
+ if entity.prey
+ else "\x1b[38;2;255;255;15m"
+ )
+ output += f" {'#' if entity.prey else 'x'}{ansi_colour_end}"
+
+ output += "\n"
+
+ entities = wt.get_entities()
+ prey_count = sum(entity.prey for entity in entities)
+
+ print(
+ f"{output}\n Iteration: {iter_number} | Prey count: {prey_count} | "
+ f"Predator count: {len(entities) - prey_count} | {reprint}"
+ )
+    # Briefly block the thread so the visualisation is watchable
+ sleep(0.05)
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
+
+ wt = WaTor(WIDTH, HEIGHT)
+ wt.time_passed = visualise
+ wt.run(iteration_count=100_000)
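
A minimal sketch of driving the simulation above headlessly, assuming the patched file is importable as wa_tor (the module path and the population_logger name are illustrative, not part of the patch):

    from wa_tor import WaTor

    def population_logger(wt: WaTor, iter_number: int) -> None:
        # Tally prey versus predators after every chronon
        entities = wt.get_entities()
        prey_count = sum(entity.prey for entity in entities)
        print(f"{iter_number}: prey={prey_count}, predators={len(entities) - prey_count}")

    simulation = WaTor(32, 16)
    simulation.time_passed = population_logger
    simulation.run(iteration_count=50)
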
From f24ab2c60dabb11c37667c5899c39713e84fc871 Mon Sep 17 00:00:00 2001
From: Amir Hosseini <19665344+itsamirhn@users.noreply.github.com>
Date: Mon, 14 Aug 2023 09:07:41 +0330
Subject: [PATCH 151/808] Add: Two Regex match algorithm (Recursive & DP)
(#6321)
* Add recursive solution to regex_match.py
* Add dp solution to regex_match.py
* Add link to regex_match.py
* Minor edit
* Minor change
* Minor change
* Update dynamic_programming/regex_match.py
Co-authored-by: Tianyi Zheng
* Update dynamic_programming/regex_match.py
Co-authored-by: Tianyi Zheng
* Fix ruff formatting in if statements
* Update dynamic_programming/regex_match.py
Co-authored-by: Tianyi Zheng
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: Tianyi Zheng
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
dynamic_programming/regex_match.py | 97 ++++++++++++++++++++++++++++++
1 file changed, 97 insertions(+)
create mode 100644 dynamic_programming/regex_match.py
diff --git a/dynamic_programming/regex_match.py b/dynamic_programming/regex_match.py
new file mode 100644
index 000000000000..200a882831c0
--- /dev/null
+++ b/dynamic_programming/regex_match.py
@@ -0,0 +1,97 @@
+"""
+Regex matching: check whether a text matches a pattern.
+Pattern:
+ '.' Matches any single character.
+ '*' Matches zero or more of the preceding element.
+More info:
+ https://medium.com/trick-the-interviwer/regular-expression-matching-9972eb74c03
+"""
+
+
+def recursive_match(text: str, pattern: str) -> bool:
+ """
+ Recursive matching algorithm.
+
+ Time complexity: O(2 ^ (|text| + |pattern|))
+ Space complexity: Recursion depth is O(|text| + |pattern|).
+
+ :param text: Text to match.
+ :param pattern: Pattern to match.
+ :return: True if text matches pattern, False otherwise.
+
+ >>> recursive_match('abc', 'a.c')
+ True
+ >>> recursive_match('abc', 'af*.c')
+ True
+ >>> recursive_match('abc', 'a.c*')
+ True
+ >>> recursive_match('abc', 'a.c*d')
+ False
+ >>> recursive_match('aa', '.*')
+ True
+ """
+ if not pattern:
+ return not text
+
+ if not text:
+ return pattern[-1] == "*" and recursive_match(text, pattern[:-2])
+
+ if text[-1] == pattern[-1] or pattern[-1] == ".":
+ return recursive_match(text[:-1], pattern[:-1])
+
+    if pattern[-1] == "*":
+        # "*" matches zero occurrences, or consumes text[-1] when it matches pattern[-2]
+        return recursive_match(text, pattern[:-2]) or (
+            pattern[-2] in (text[-1], ".") and recursive_match(text[:-1], pattern)
+        )
+
+ return False
+
+
+def dp_match(text: str, pattern: str) -> bool:
+ """
+ Dynamic programming matching algorithm.
+
+ Time complexity: O(|text| * |pattern|)
+ Space complexity: O(|text| * |pattern|)
+
+ :param text: Text to match.
+ :param pattern: Pattern to match.
+ :return: True if text matches pattern, False otherwise.
+
+ >>> dp_match('abc', 'a.c')
+ True
+ >>> dp_match('abc', 'af*.c')
+ True
+ >>> dp_match('abc', 'a.c*')
+ True
+ >>> dp_match('abc', 'a.c*d')
+ False
+ >>> dp_match('aa', '.*')
+ True
+ """
+ m = len(text)
+ n = len(pattern)
+ dp = [[False for _ in range(n + 1)] for _ in range(m + 1)]
+ dp[0][0] = True
+
+ for j in range(1, n + 1):
+ dp[0][j] = pattern[j - 1] == "*" and dp[0][j - 2]
+
+ for i in range(1, m + 1):
+ for j in range(1, n + 1):
+ if pattern[j - 1] in {".", text[i - 1]}:
+ dp[i][j] = dp[i - 1][j - 1]
+ elif pattern[j - 1] == "*":
+ dp[i][j] = dp[i][j - 2]
+ if pattern[j - 2] in {".", text[i - 1]}:
+ dp[i][j] |= dp[i - 1][j]
+ else:
+ dp[i][j] = False
+
+ return dp[m][n]
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
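
A quick way to sanity-check both matchers is to compare them with Python's re.fullmatch on anchored patterns; this harness is illustrative and assumes the module path created by this patch:

    import re

    from dynamic_programming.regex_match import dp_match, recursive_match

    for text, pattern in [("abc", "a.c"), ("aab", "c*a*b"), ("ab", ".*c"), ("a", "b*")]:
        expected = re.fullmatch(pattern, text) is not None
        assert recursive_match(text, pattern) == expected
        assert dp_match(text, pattern) == expected
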
From 02d89bde679488e97cdb077c511b3dbfb660e2b8 Mon Sep 17 00:00:00 2001
From: Ajinkya Chikhale <86607732+ajinkyac03@users.noreply.github.com>
Date: Mon, 14 Aug 2023 12:42:42 +0530
Subject: [PATCH 152/808] Added implementation for Tribonacci sequence using dp
(#6356)
* Added implementation for Tribonacci sequence using dp
* Updated parameter name
* Apply suggestions from code review
---------
Co-authored-by: Tianyi Zheng
---
dynamic_programming/tribonacci.py | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
create mode 100644 dynamic_programming/tribonacci.py
diff --git a/dynamic_programming/tribonacci.py b/dynamic_programming/tribonacci.py
new file mode 100644
index 000000000000..58e15da918e2
--- /dev/null
+++ b/dynamic_programming/tribonacci.py
@@ -0,0 +1,24 @@
+# Tribonacci sequence using Dynamic Programming
+
+
+def tribonacci(num: int) -> list[int]:
+ """
+    Given a number num, return the first num Tribonacci numbers.
+ >>> tribonacci(5)
+ [0, 0, 1, 1, 2]
+ >>> tribonacci(8)
+ [0, 0, 1, 1, 2, 4, 7, 13]
+ """
+    dp = [0] * max(num, 3)  # pad so that num < 3 does not raise an IndexError below
+    dp[2] = 1
+
+ for i in range(3, num):
+ dp[i] = dp[i - 1] + dp[i - 2] + dp[i - 3]
+
+    return dp[:num]
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
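
For contrast, the num-th term alone can be computed in O(1) extra space by rolling the last three terms; a sketch, with tribonacci_nth being a hypothetical helper that is not part of the patch:

    def tribonacci_nth(num: int) -> int:
        # Keep only the last three terms (T(0)=0, T(1)=0, T(2)=1)
        if num < 3:
            return (0, 0, 1)[num]
        a, b, c = 0, 0, 1
        for _ in range(num - 2):
            a, b, c = b, c, a + b + c
        return c

    assert tribonacci_nth(7) == 13  # matches tribonacci(8)[-1] from the patch
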
From c290dd6a433b43b242336d49d227f5e25bbb76de Mon Sep 17 00:00:00 2001
From: Adithya Awati
Date: Mon, 14 Aug 2023 12:46:24 +0530
Subject: [PATCH 153/808] Update run.py in machine_learning/forecasting (#8957)
* Fixed reading CSV file, added type check for data_safety_checker function
* Formatted run.py
* updating DIRECTORY.md
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 1 +
machine_learning/forecasting/ex_data.csv | 2 +-
machine_learning/forecasting/run.py | 35 ++++++++++++------------
3 files changed, 19 insertions(+), 19 deletions(-)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index cdcd1a8ae8cc..3a244ca6caaf 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -336,6 +336,7 @@
* [Minimum Tickets Cost](dynamic_programming/minimum_tickets_cost.py)
* [Optimal Binary Search Tree](dynamic_programming/optimal_binary_search_tree.py)
* [Palindrome Partitioning](dynamic_programming/palindrome_partitioning.py)
+ * [Regex Match](dynamic_programming/regex_match.py)
* [Rod Cutting](dynamic_programming/rod_cutting.py)
* [Subset Generation](dynamic_programming/subset_generation.py)
* [Sum Of Subset](dynamic_programming/sum_of_subset.py)
diff --git a/machine_learning/forecasting/ex_data.csv b/machine_learning/forecasting/ex_data.csv
index 1c429e649755..e6e73c4a1ca4 100644
--- a/machine_learning/forecasting/ex_data.csv
+++ b/machine_learning/forecasting/ex_data.csv
@@ -1,4 +1,4 @@
-total_user,total_events,days
+total_users,total_events,days
18231,0.0,1
22621,1.0,2
15675,0.0,3
diff --git a/machine_learning/forecasting/run.py b/machine_learning/forecasting/run.py
index 0909b76d8907..88c4a537b302 100644
--- a/machine_learning/forecasting/run.py
+++ b/machine_learning/forecasting/run.py
@@ -1,6 +1,6 @@
"""
this is code for forecasting
-but i modified it and used it for safety checker of data
+but I modified it and used it for safety checker of data
for ex: you have an online shop and for some reason some data are
missing (you receive less data than you expected),
then we can use it
@@ -102,6 +102,10 @@ def data_safety_checker(list_vote: list, actual_result: float) -> bool:
"""
safe = 0
not_safe = 0
+
+ if not isinstance(actual_result, float):
+ raise TypeError("Actual result should be float. Value passed is a list")
+
for i in list_vote:
if i > actual_result:
safe = not_safe + 1
@@ -114,16 +118,11 @@ def data_safety_checker(list_vote: list, actual_result: float) -> bool:
if __name__ == "__main__":
- # data_input_df = pd.read_csv("ex_data.csv", header=None)
- data_input = [[18231, 0.0, 1], [22621, 1.0, 2], [15675, 0.0, 3], [23583, 1.0, 4]]
- data_input_df = pd.DataFrame(
- data_input, columns=["total_user", "total_even", "days"]
- )
-
"""
data column = total user in a day, how much online event held in one day,
what day is that(sunday-saturday)
"""
+ data_input_df = pd.read_csv("ex_data.csv")
# start normalization
normalize_df = Normalizer().fit_transform(data_input_df.values)
@@ -138,23 +137,23 @@ def data_safety_checker(list_vote: list, actual_result: float) -> bool:
x_test = x[len(x) - 1 :]
# for linear regression & sarimax
- trn_date = total_date[: len(total_date) - 1]
- trn_user = total_user[: len(total_user) - 1]
- trn_match = total_match[: len(total_match) - 1]
+ train_date = total_date[: len(total_date) - 1]
+ train_user = total_user[: len(total_user) - 1]
+ train_match = total_match[: len(total_match) - 1]
- tst_date = total_date[len(total_date) - 1 :]
- tst_user = total_user[len(total_user) - 1 :]
- tst_match = total_match[len(total_match) - 1 :]
+ test_date = total_date[len(total_date) - 1 :]
+ test_user = total_user[len(total_user) - 1 :]
+ test_match = total_match[len(total_match) - 1 :]
# voting system with forecasting
res_vote = [
linear_regression_prediction(
- trn_date, trn_user, trn_match, tst_date, tst_match
+ train_date, train_user, train_match, test_date, test_match
),
- sarimax_predictor(trn_user, trn_match, tst_match),
- support_vector_regressor(x_train, x_test, trn_user),
+ sarimax_predictor(train_user, train_match, test_match),
+ support_vector_regressor(x_train, x_test, train_user),
]
# check the safety of today's data
- not_str = "" if data_safety_checker(res_vote, tst_user) else "not "
- print("Today's data is {not_str}safe.")
+ not_str = "" if data_safety_checker(res_vote, test_user[0]) else "not "
+ print(f"Today's data is {not_str}safe.")
From 4b7ecb6a8134379481dd3d5035cb99a627930462 Mon Sep 17 00:00:00 2001
From: Caeden Perelli-Harris
Date: Mon, 14 Aug 2023 09:28:52 +0100
Subject: [PATCH 154/808] Create is valid email address algorithm (#8907)
* feat(strings): Create is valid email address
* updating DIRECTORY.md
* feat(strings): Create is_valid_email_address algorithm
* chore(is_valid_email_address): Implement changes from code review
* Update strings/is_valid_email_address.py
Co-authored-by: Tianyi Zheng
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* chore(is_valid_email_address): Fix ruff error
* Update strings/is_valid_email_address.py
Co-authored-by: Tianyi Zheng
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: Tianyi Zheng
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
DIRECTORY.md | 1 +
strings/is_valid_email_address.py | 117 ++++++++++++++++++++++++++++++
2 files changed, 118 insertions(+)
create mode 100644 strings/is_valid_email_address.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 3a244ca6caaf..14152e4abd04 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -1171,6 +1171,7 @@
* [Is Pangram](strings/is_pangram.py)
* [Is Spain National Id](strings/is_spain_national_id.py)
* [Is Srilankan Phone Number](strings/is_srilankan_phone_number.py)
+ * [Is Valid Email Address](strings/is_valid_email_address.py)
* [Jaro Winkler](strings/jaro_winkler.py)
* [Join](strings/join.py)
* [Knuth Morris Pratt](strings/knuth_morris_pratt.py)
diff --git a/strings/is_valid_email_address.py b/strings/is_valid_email_address.py
new file mode 100644
index 000000000000..205394f81297
--- /dev/null
+++ b/strings/is_valid_email_address.py
@@ -0,0 +1,117 @@
+"""
+Implements an email address validation algorithm
+
+@ https://en.wikipedia.org/wiki/Email_address
+"""
+
+import string
+
+email_tests: tuple[tuple[str, bool], ...] = (
+ ("simple@example.com", True),
+ ("very.common@example.com", True),
+ ("disposable.style.email.with+symbol@example.com", True),
+ ("other-email-with-hyphen@and.subdomains.example.com", True),
+ ("fully-qualified-domain@example.com", True),
+ ("user.name+tag+sorting@example.com", True),
+ ("x@example.com", True),
+ ("example-indeed@strange-example.com", True),
+ ("test/test@test.com", True),
+ (
+ "123456789012345678901234567890123456789012345678901234567890123@example.com",
+ True,
+ ),
+ ("admin@mailserver1", True),
+ ("example@s.example", True),
+ ("Abc.example.com", False),
+ ("A@b@c@example.com", False),
+ ("abc@example..com", False),
+ ("a(c)d,e:f;gi[j\\k]l@example.com", False),
+ (
+ "12345678901234567890123456789012345678901234567890123456789012345@example.com",
+ False,
+ ),
+ ("i.like.underscores@but_its_not_allowed_in_this_part", False),
+ ("", False),
+)
+
+# The maximum number of octets that the local part and the domain part
+# may contain (every allowed character here is ASCII: one character, one octet)
+MAX_LOCAL_PART_OCTETS = 64
+MAX_DOMAIN_OCTETS = 255
+
+
+def is_valid_email_address(email: str) -> bool:
+ """
+ Returns True if the passed email address is valid.
+
+ The local part of the email precedes the singular @ symbol and
+ is associated with a display-name. For example, "john.smith"
+ The domain is stricter than the local part and follows the @ symbol.
+
+ Global email checks:
+ 1. There can only be one @ symbol in the email address. Technically if the
+ @ symbol is quoted in the local-part, then it is valid, however this
+ implementation ignores "" for now.
+ (See https://en.wikipedia.org/wiki/Email_address#:~:text=If%20quoted,)
+    2. The local-part and the domain are limited to a certain number of octets. Since
+       every allowed character here is ASCII, each character occupies one octet, so
+       we can just check the length of the string.
+ Checks for the local-part:
+ 3. The local-part may contain: upper and lowercase latin letters, digits 0 to 9,
+ and printable characters (!#$%&'*+-/=?^_`{|}~)
+ 4. The local-part may also contain a "." in any place that is not the first or
+ last character, and may not have more than one "." consecutively.
+
+ Checks for the domain:
+ 5. The domain may contain: upper and lowercase latin letters and digits 0 to 9
+ 6. Hyphen "-", provided that it is not the first or last character
+ 7. The domain may also contain a "." in any place that is not the first or
+ last character, and may not have more than one "." consecutively.
+
+ >>> for email, valid in email_tests:
+ ... assert is_valid_email_address(email) == valid
+ """
+
+ # (1.) Make sure that there is only one @ symbol in the email address
+ if email.count("@") != 1:
+ return False
+
+ local_part, domain = email.split("@")
+ # (2.) Check octet length of the local part and domain
+ if len(local_part) > MAX_LOCAL_PART_OCTETS or len(domain) > MAX_DOMAIN_OCTETS:
+ return False
+
+ # (3.) Validate the characters in the local-part
+ if any(
+ char not in string.ascii_letters + string.digits + ".(!#$%&'*+-/=?^_`{|}~)"
+ for char in local_part
+ ):
+ return False
+
+ # (4.) Validate the placement of "." characters in the local-part
+ if local_part.startswith(".") or local_part.endswith(".") or ".." in local_part:
+ return False
+
+ # (5.) Validate the characters in the domain
+ if any(char not in string.ascii_letters + string.digits + ".-" for char in domain):
+ return False
+
+ # (6.) Validate the placement of "-" characters
+    if domain.startswith("-") or domain.endswith("-"):
+ return False
+
+ # (7.) Validate the placement of "." characters
+ if domain.startswith(".") or domain.endswith(".") or ".." in domain:
+ return False
+ return True
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
+
+ for email, valid in email_tests:
+ is_valid = is_valid_email_address(email)
+ assert is_valid == valid, f"{email} is {is_valid}"
+ print(f"Email address {email} is {'not ' if not is_valid else ''}valid")
From ac68dc1128535b6798af256fcdab67340f6c0fd9 Mon Sep 17 00:00:00 2001
From: Adithya Awati <1ds21ai001@dsce.edu.in>
Date: Mon, 14 Aug 2023 14:04:16 +0530
Subject: [PATCH 155/808] Fixed Pytest warnings for
machine_learning/forecasting (#8958)
* updating DIRECTORY.md
* Fixed pyTest Warnings
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 1 +
machine_learning/forecasting/run.py | 6 +++++-
2 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 14152e4abd04..384ce1b2209d 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -340,6 +340,7 @@
* [Rod Cutting](dynamic_programming/rod_cutting.py)
* [Subset Generation](dynamic_programming/subset_generation.py)
* [Sum Of Subset](dynamic_programming/sum_of_subset.py)
+ * [Tribonacci](dynamic_programming/tribonacci.py)
* [Viterbi](dynamic_programming/viterbi.py)
* [Word Break](dynamic_programming/word_break.py)
diff --git a/machine_learning/forecasting/run.py b/machine_learning/forecasting/run.py
index 88c4a537b302..64e719daacc2 100644
--- a/machine_learning/forecasting/run.py
+++ b/machine_learning/forecasting/run.py
@@ -11,6 +11,8 @@
u can just adjust it for ur own purpose
"""
+from warnings import simplefilter
+
import numpy as np
import pandas as pd
from sklearn.preprocessing import Normalizer
@@ -45,8 +47,10 @@ def sarimax_predictor(train_user: list, train_match: list, test_match: list) ->
>>> sarimax_predictor([4,2,6,8], [3,1,2,4], [2])
6.6666671111109626
"""
+ # Suppress the User Warning raised by SARIMAX due to insufficient observations
+ simplefilter("ignore", UserWarning)
order = (1, 2, 1)
- seasonal_order = (1, 1, 0, 7)
+ seasonal_order = (1, 1, 1, 7)
model = SARIMAX(
train_user, exog=train_match, order=order, seasonal_order=seasonal_order
)
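
simplefilter here silences UserWarning for the whole process. An alternative worth noting is to scope the filter to the model fit with warnings.catch_warnings; a sketch, not what the patch does:

    import warnings

    from statsmodels.tsa.statespace.sarimax import SARIMAX

    with warnings.catch_warnings():
        # Same filter as the patch, but restored automatically on block exit
        warnings.simplefilter("ignore", UserWarning)
        model = SARIMAX(
            [4, 2, 6, 8], exog=[3, 1, 2, 4], order=(1, 2, 1), seasonal_order=(1, 1, 1, 7)
        )
        result = model.fit(disp=False)
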
From 2ab3bf2689d21e7375539c79ecee358e9d7c3359 Mon Sep 17 00:00:00 2001
From: robertjcalistri <85811008+robertjcalistri@users.noreply.github.com>
Date: Mon, 14 Aug 2023 05:31:53 -0400
Subject: [PATCH 156/808] =?UTF-8?q?Added=20functions=20to=20calculate=20te?=
=?UTF-8?q?mperature=20of=20an=20ideal=20gas=20and=20number=20o=E2=80=A6?=
=?UTF-8?q?=20(#8919)?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* Added functions to calculate temperature of an ideal gas and number of moles of an ideal gas
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update physics/ideal_gas_law.py
Renamed function name
Co-authored-by: Tianyi Zheng
* Update physics/ideal_gas_law.py
Updated formatting
Co-authored-by: Tianyi Zheng
* Update physics/ideal_gas_law.py
Removed unnecessary parentheses
Co-authored-by: Tianyi Zheng
* Update physics/ideal_gas_law.py
Removed unnecessary parentheses
Co-authored-by: Tianyi Zheng
* Update ideal_gas_law.py
Updated incorrect function calls moles of gas system doctests
* Update physics/ideal_gas_law.py
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Tianyi Zheng
---
physics/ideal_gas_law.py | 34 ++++++++++++++++++++++++++++++++++
1 file changed, 34 insertions(+)
diff --git a/physics/ideal_gas_law.py b/physics/ideal_gas_law.py
index 805da47b0079..09b4fb3a9c14 100644
--- a/physics/ideal_gas_law.py
+++ b/physics/ideal_gas_law.py
@@ -53,6 +53,40 @@ def volume_of_gas_system(moles: float, kelvin: float, pressure: float) -> float:
return moles * kelvin * UNIVERSAL_GAS_CONSTANT / pressure
+def temperature_of_gas_system(moles: float, volume: float, pressure: float) -> float:
+ """
+ >>> temperature_of_gas_system(2, 100, 5)
+ 30.068090996146232
+ >>> temperature_of_gas_system(11, 5009, 1000)
+ 54767.66101807144
+ >>> temperature_of_gas_system(3, -0.46, 23.5)
+ Traceback (most recent call last):
+ ...
+ ValueError: Invalid inputs. Enter positive value.
+ """
+ if moles < 0 or volume < 0 or pressure < 0:
+ raise ValueError("Invalid inputs. Enter positive value.")
+
+ return pressure * volume / (moles * UNIVERSAL_GAS_CONSTANT)
+
+
+def moles_of_gas_system(kelvin: float, volume: float, pressure: float) -> float:
+ """
+ >>> moles_of_gas_system(100, 5, 10)
+ 0.06013618199229246
+ >>> moles_of_gas_system(110, 5009, 1000)
+ 5476.766101807144
+ >>> moles_of_gas_system(3, -0.46, 23.5)
+ Traceback (most recent call last):
+ ...
+ ValueError: Invalid inputs. Enter positive value.
+ """
+ if kelvin < 0 or volume < 0 or pressure < 0:
+ raise ValueError("Invalid inputs. Enter positive value.")
+
+ return pressure * volume / (kelvin * UNIVERSAL_GAS_CONSTANT)
+
+
if __name__ == "__main__":
from doctest import testmod
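
Both new helpers are algebraic rearrangements of PV = nRT, so solving for T and substituting back must recover n; a small round-trip check, assuming the module's UNIVERSAL_GAS_CONSTANT:

    from physics.ideal_gas_law import moles_of_gas_system, temperature_of_gas_system

    moles, volume, pressure = 2.0, 100.0, 5.0
    kelvin = temperature_of_gas_system(moles, volume, pressure)
    assert abs(moles_of_gas_system(kelvin, volume, pressure) - moles) < 1e-9
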
From fb1b939a89fb08370297cbb455846f61f66847bc Mon Sep 17 00:00:00 2001
From: Caeden Perelli-Harris
Date: Mon, 14 Aug 2023 12:17:27 +0100
Subject: [PATCH 157/808] Consolidate find_min and find_min recursive and
find_max and find_max_recursive (#8960)
* updating DIRECTORY.md
* refactor(min-max): Consolidate implementations
* updating DIRECTORY.md
* refactor(min-max): Append _iterative to func name
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 2 --
maths/find_max.py | 65 +++++++++++++++++++++++++++++++++----
maths/find_max_recursion.py | 58 ---------------------------------
maths/find_min.py | 65 +++++++++++++++++++++++++++++++++----
maths/find_min_recursion.py | 58 ---------------------------------
5 files changed, 118 insertions(+), 130 deletions(-)
delete mode 100644 maths/find_max_recursion.py
delete mode 100644 maths/find_min_recursion.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 384ce1b2209d..be5fa3584a58 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -573,9 +573,7 @@
* [Fermat Little Theorem](maths/fermat_little_theorem.py)
* [Fibonacci](maths/fibonacci.py)
* [Find Max](maths/find_max.py)
- * [Find Max Recursion](maths/find_max_recursion.py)
* [Find Min](maths/find_min.py)
- * [Find Min Recursion](maths/find_min_recursion.py)
* [Floor](maths/floor.py)
* [Gamma](maths/gamma.py)
* [Gamma Recursive](maths/gamma_recursive.py)
diff --git a/maths/find_max.py b/maths/find_max.py
index 684fbe8161e8..729a80ab421c 100644
--- a/maths/find_max.py
+++ b/maths/find_max.py
@@ -1,23 +1,23 @@
from __future__ import annotations
-def find_max(nums: list[int | float]) -> int | float:
+def find_max_iterative(nums: list[int | float]) -> int | float:
"""
>>> for nums in ([3, 2, 1], [-3, -2, -1], [3, -3, 0], [3.0, 3.1, 2.9]):
- ... find_max(nums) == max(nums)
+ ... find_max_iterative(nums) == max(nums)
True
True
True
True
- >>> find_max([2, 4, 9, 7, 19, 94, 5])
+ >>> find_max_iterative([2, 4, 9, 7, 19, 94, 5])
94
- >>> find_max([])
+ >>> find_max_iterative([])
Traceback (most recent call last):
...
- ValueError: find_max() arg is an empty sequence
+ ValueError: find_max_iterative() arg is an empty sequence
"""
if len(nums) == 0:
- raise ValueError("find_max() arg is an empty sequence")
+ raise ValueError("find_max_iterative() arg is an empty sequence")
max_num = nums[0]
for x in nums:
if x > max_num:
@@ -25,6 +25,59 @@ def find_max(nums: list[int | float]) -> int | float:
return max_num
+# Divide and Conquer algorithm
+def find_max_recursive(nums: list[int | float], left: int, right: int) -> int | float:
+ """
+ find max value in list
+ :param nums: contains elements
+ :param left: index of first element
+ :param right: index of last element
+ :return: max in nums
+
+ >>> for nums in ([3, 2, 1], [-3, -2, -1], [3, -3, 0], [3.0, 3.1, 2.9]):
+ ... find_max_recursive(nums, 0, len(nums) - 1) == max(nums)
+ True
+ True
+ True
+ True
+ >>> nums = [1, 3, 5, 7, 9, 2, 4, 6, 8, 10]
+ >>> find_max_recursive(nums, 0, len(nums) - 1) == max(nums)
+ True
+ >>> find_max_recursive([], 0, 0)
+ Traceback (most recent call last):
+ ...
+ ValueError: find_max_recursive() arg is an empty sequence
+ >>> find_max_recursive(nums, 0, len(nums)) == max(nums)
+ Traceback (most recent call last):
+ ...
+ IndexError: list index out of range
+ >>> find_max_recursive(nums, -len(nums), -1) == max(nums)
+ True
+ >>> find_max_recursive(nums, -len(nums) - 1, -1) == max(nums)
+ Traceback (most recent call last):
+ ...
+ IndexError: list index out of range
+ """
+ if len(nums) == 0:
+ raise ValueError("find_max_recursive() arg is an empty sequence")
+ if (
+ left >= len(nums)
+ or left < -len(nums)
+ or right >= len(nums)
+ or right < -len(nums)
+ ):
+ raise IndexError("list index out of range")
+ if left == right:
+ return nums[left]
+ mid = (left + right) >> 1 # the middle
+ left_max = find_max_recursive(nums, left, mid) # find max in range[left, mid]
+ right_max = find_max_recursive(
+ nums, mid + 1, right
+ ) # find max in range[mid + 1, right]
+
+ return left_max if left_max >= right_max else right_max
+
+
if __name__ == "__main__":
import doctest
diff --git a/maths/find_max_recursion.py b/maths/find_max_recursion.py
deleted file mode 100644
index 629932e0818f..000000000000
--- a/maths/find_max_recursion.py
+++ /dev/null
@@ -1,58 +0,0 @@
-from __future__ import annotations
-
-
-# Divide and Conquer algorithm
-def find_max(nums: list[int | float], left: int, right: int) -> int | float:
- """
- find max value in list
- :param nums: contains elements
- :param left: index of first element
- :param right: index of last element
- :return: max in nums
-
- >>> for nums in ([3, 2, 1], [-3, -2, -1], [3, -3, 0], [3.0, 3.1, 2.9]):
- ... find_max(nums, 0, len(nums) - 1) == max(nums)
- True
- True
- True
- True
- >>> nums = [1, 3, 5, 7, 9, 2, 4, 6, 8, 10]
- >>> find_max(nums, 0, len(nums) - 1) == max(nums)
- True
- >>> find_max([], 0, 0)
- Traceback (most recent call last):
- ...
- ValueError: find_max() arg is an empty sequence
- >>> find_max(nums, 0, len(nums)) == max(nums)
- Traceback (most recent call last):
- ...
- IndexError: list index out of range
- >>> find_max(nums, -len(nums), -1) == max(nums)
- True
- >>> find_max(nums, -len(nums) - 1, -1) == max(nums)
- Traceback (most recent call last):
- ...
- IndexError: list index out of range
- """
- if len(nums) == 0:
- raise ValueError("find_max() arg is an empty sequence")
- if (
- left >= len(nums)
- or left < -len(nums)
- or right >= len(nums)
- or right < -len(nums)
- ):
- raise IndexError("list index out of range")
- if left == right:
- return nums[left]
- mid = (left + right) >> 1 # the middle
- left_max = find_max(nums, left, mid) # find max in range[left, mid]
- right_max = find_max(nums, mid + 1, right) # find max in range[mid + 1, right]
-
- return left_max if left_max >= right_max else right_max
-
-
-if __name__ == "__main__":
- import doctest
-
- doctest.testmod(verbose=True)
diff --git a/maths/find_min.py b/maths/find_min.py
index 2eac087c6388..762562e36ef9 100644
--- a/maths/find_min.py
+++ b/maths/find_min.py
@@ -1,33 +1,86 @@
from __future__ import annotations
-def find_min(nums: list[int | float]) -> int | float:
+def find_min_iterative(nums: list[int | float]) -> int | float:
"""
Find Minimum Number in a List
:param nums: contains elements
:return: min number in list
>>> for nums in ([3, 2, 1], [-3, -2, -1], [3, -3, 0], [3.0, 3.1, 2.9]):
- ... find_min(nums) == min(nums)
+ ... find_min_iterative(nums) == min(nums)
True
True
True
True
- >>> find_min([0, 1, 2, 3, 4, 5, -3, 24, -56])
+ >>> find_min_iterative([0, 1, 2, 3, 4, 5, -3, 24, -56])
-56
- >>> find_min([])
+ >>> find_min_iterative([])
Traceback (most recent call last):
...
- ValueError: find_min() arg is an empty sequence
+ ValueError: find_min_iterative() arg is an empty sequence
"""
if len(nums) == 0:
- raise ValueError("find_min() arg is an empty sequence")
+ raise ValueError("find_min_iterative() arg is an empty sequence")
min_num = nums[0]
for num in nums:
min_num = min(min_num, num)
return min_num
+# Divide and Conquer algorithm
+def find_min_recursive(nums: list[int | float], left: int, right: int) -> int | float:
+ """
+ find min value in list
+ :param nums: contains elements
+ :param left: index of first element
+ :param right: index of last element
+ :return: min in nums
+
+ >>> for nums in ([3, 2, 1], [-3, -2, -1], [3, -3, 0], [3.0, 3.1, 2.9]):
+ ... find_min_recursive(nums, 0, len(nums) - 1) == min(nums)
+ True
+ True
+ True
+ True
+ >>> nums = [1, 3, 5, 7, 9, 2, 4, 6, 8, 10]
+ >>> find_min_recursive(nums, 0, len(nums) - 1) == min(nums)
+ True
+ >>> find_min_recursive([], 0, 0)
+ Traceback (most recent call last):
+ ...
+ ValueError: find_min_recursive() arg is an empty sequence
+ >>> find_min_recursive(nums, 0, len(nums)) == min(nums)
+ Traceback (most recent call last):
+ ...
+ IndexError: list index out of range
+ >>> find_min_recursive(nums, -len(nums), -1) == min(nums)
+ True
+ >>> find_min_recursive(nums, -len(nums) - 1, -1) == min(nums)
+ Traceback (most recent call last):
+ ...
+ IndexError: list index out of range
+ """
+ if len(nums) == 0:
+ raise ValueError("find_min_recursive() arg is an empty sequence")
+ if (
+ left >= len(nums)
+ or left < -len(nums)
+ or right >= len(nums)
+ or right < -len(nums)
+ ):
+ raise IndexError("list index out of range")
+ if left == right:
+ return nums[left]
+ mid = (left + right) >> 1 # the middle
+ left_min = find_min_recursive(nums, left, mid) # find min in range[left, mid]
+ right_min = find_min_recursive(
+ nums, mid + 1, right
+ ) # find min in range[mid + 1, right]
+
+ return left_min if left_min <= right_min else right_min
+
+
if __name__ == "__main__":
import doctest
diff --git a/maths/find_min_recursion.py b/maths/find_min_recursion.py
deleted file mode 100644
index 4d11015efcd5..000000000000
--- a/maths/find_min_recursion.py
+++ /dev/null
@@ -1,58 +0,0 @@
-from __future__ import annotations
-
-
-# Divide and Conquer algorithm
-def find_min(nums: list[int | float], left: int, right: int) -> int | float:
- """
- find min value in list
- :param nums: contains elements
- :param left: index of first element
- :param right: index of last element
- :return: min in nums
-
- >>> for nums in ([3, 2, 1], [-3, -2, -1], [3, -3, 0], [3.0, 3.1, 2.9]):
- ... find_min(nums, 0, len(nums) - 1) == min(nums)
- True
- True
- True
- True
- >>> nums = [1, 3, 5, 7, 9, 2, 4, 6, 8, 10]
- >>> find_min(nums, 0, len(nums) - 1) == min(nums)
- True
- >>> find_min([], 0, 0)
- Traceback (most recent call last):
- ...
- ValueError: find_min() arg is an empty sequence
- >>> find_min(nums, 0, len(nums)) == min(nums)
- Traceback (most recent call last):
- ...
- IndexError: list index out of range
- >>> find_min(nums, -len(nums), -1) == min(nums)
- True
- >>> find_min(nums, -len(nums) - 1, -1) == min(nums)
- Traceback (most recent call last):
- ...
- IndexError: list index out of range
- """
- if len(nums) == 0:
- raise ValueError("find_min() arg is an empty sequence")
- if (
- left >= len(nums)
- or left < -len(nums)
- or right >= len(nums)
- or right < -len(nums)
- ):
- raise IndexError("list index out of range")
- if left == right:
- return nums[left]
- mid = (left + right) >> 1 # the middle
- left_min = find_min(nums, left, mid) # find min in range[left, mid]
- right_min = find_min(nums, mid + 1, right) # find min in range[mid + 1, right]
-
- return left_min if left_min <= right_min else right_min
-
-
-if __name__ == "__main__":
- import doctest
-
- doctest.testmod(verbose=True)
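
After the consolidation, each module exposes an iterative and a recursive variant side by side; a quick parity check between them, assuming both modules are importable as in the tree above:

    from maths.find_max import find_max_iterative, find_max_recursive
    from maths.find_min import find_min_iterative, find_min_recursive

    nums = [3, -7, 2, 9, 9, -1]
    assert find_max_iterative(nums) == find_max_recursive(nums, 0, len(nums) - 1) == 9
    assert find_min_iterative(nums) == find_min_recursive(nums, 0, len(nums) - 1) == -7
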
From 7021afda047b034958bfdb67e8479af2e8c7aeb9 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Mon, 14 Aug 2023 23:12:11 -0400
Subject: [PATCH 158/808] [pre-commit.ci] pre-commit autoupdate (#8963)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.282 → v0.0.284](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.282...v0.0.284)
- [github.com/tox-dev/pyproject-fmt: 0.13.0 → 0.13.1](https://github.com/tox-dev/pyproject-fmt/compare/0.13.0...0.13.1)
- [github.com/pre-commit/mirrors-mypy: v1.4.1 → v1.5.0](https://github.com/pre-commit/mirrors-mypy/compare/v1.4.1...v1.5.0)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
.pre-commit-config.yaml | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index da6762123b04..b08139561639 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,7 @@ repos:
- id: auto-walrus
- repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.0.282
+ rev: v0.0.284
hooks:
- id: ruff
@@ -33,7 +33,7 @@ repos:
- tomli
- repo: https://github.com/tox-dev/pyproject-fmt
- rev: "0.13.0"
+ rev: "0.13.1"
hooks:
- id: pyproject-fmt
@@ -51,7 +51,7 @@ repos:
- id: validate-pyproject
- repo: https://github.com/pre-commit/mirrors-mypy
- rev: v1.4.1
+ rev: v1.5.0
hooks:
- id: mypy
args:
From 7618a92fee002475b3bed9227944972d346db440 Mon Sep 17 00:00:00 2001
From: Erfan Alimohammadi
Date: Wed, 16 Aug 2023 00:07:49 +0330
Subject: [PATCH 159/808] Remove a slash in path to save the file correctly on
Linux (#8053)
---
computer_vision/flip_augmentation.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/computer_vision/flip_augmentation.py b/computer_vision/flip_augmentation.py
index 93b4e3f6da79..77a8cbd7b14f 100644
--- a/computer_vision/flip_augmentation.py
+++ b/computer_vision/flip_augmentation.py
@@ -32,13 +32,13 @@ def main() -> None:
letter_code = random_chars(32)
file_name = paths[index].split(os.sep)[-1].rsplit(".", 1)[0]
file_root = f"{OUTPUT_DIR}/{file_name}_FLIP_{letter_code}"
- cv2.imwrite(f"/{file_root}.jpg", image, [cv2.IMWRITE_JPEG_QUALITY, 85])
+ cv2.imwrite(f"{file_root}.jpg", image, [cv2.IMWRITE_JPEG_QUALITY, 85])
print(f"Success {index+1}/{len(new_images)} with {file_name}")
annos_list = []
for anno in new_annos[index]:
obj = f"{anno[0]} {anno[1]} {anno[2]} {anno[3]} {anno[4]}"
annos_list.append(obj)
- with open(f"/{file_root}.txt", "w") as outfile:
+ with open(f"{file_root}.txt", "w") as outfile:
outfile.write("\n".join(line for line in annos_list))
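
The bug being fixed is easy to reproduce: prefixing the formatted path with "/" turns it into an absolute path at the filesystem root instead of a path under OUTPUT_DIR. A tiny illustration (the file_root value is made up):

    file_root = "output/cat_FLIP_ab12cd34"
    print(f"/{file_root}.jpg")  # /output/cat_FLIP_ab12cd34.jpg -- absolute, usually unwritable
    print(f"{file_root}.jpg")   # output/cat_FLIP_ab12cd34.jpg  -- relative, as intended
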
From 490e645ed3b7ae50f0d7e23e047d088ba069ed56 Mon Sep 17 00:00:00 2001
From: Caeden Perelli-Harris
Date: Tue, 15 Aug 2023 22:27:41 +0100
Subject: [PATCH 160/808] Fix minor typing errors in maths/ (#8959)
* updating DIRECTORY.md
* types(maths): Fix pylance issues in maths
* reset(vsc): Reset settings changes
* Update maths/jaccard_similarity.py
Co-authored-by: Tianyi Zheng
* revert(erosion_operation): Revert erosion_operation
* test(jaccard_similarity): Add doctest to test alternative_union
* types(newton_raphson): Add typehints to func bodies
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: Tianyi Zheng
---
.../erosion_operation.py | 1 +
digital_image_processing/rotation/rotation.py | 4 +-
maths/average_median.py | 4 +-
maths/euler_modified.py | 2 +-
maths/gaussian_error_linear_unit.py | 4 +-
maths/jaccard_similarity.py | 45 ++++++++++++-------
maths/newton_raphson.py | 33 +++++++++-----
maths/qr_decomposition.py | 2 +-
maths/sigmoid.py | 2 +-
maths/tanh.py | 4 +-
10 files changed, 65 insertions(+), 36 deletions(-)
diff --git a/digital_image_processing/morphological_operations/erosion_operation.py b/digital_image_processing/morphological_operations/erosion_operation.py
index c2cde2ea6990..c0e1ef847237 100644
--- a/digital_image_processing/morphological_operations/erosion_operation.py
+++ b/digital_image_processing/morphological_operations/erosion_operation.py
@@ -21,6 +21,7 @@ def rgb2gray(rgb: np.array) -> np.array:
def gray2binary(gray: np.array) -> np.array:
"""
Return binary image from gray image
+
>>> gray2binary(np.array([[127, 255, 0]]))
array([[False, True, False]])
>>> gray2binary(np.array([[0]]))
diff --git a/digital_image_processing/rotation/rotation.py b/digital_image_processing/rotation/rotation.py
index 958d16fafb91..0f5e36ddd5be 100644
--- a/digital_image_processing/rotation/rotation.py
+++ b/digital_image_processing/rotation/rotation.py
@@ -10,12 +10,12 @@ def get_rotation(
) -> np.ndarray:
"""
Get image rotation
- :param img: np.array
+ :param img: np.ndarray
:param pt1: 3x2 list
:param pt2: 3x2 list
:param rows: columns image shape
:param cols: rows image shape
- :return: np.array
+ :return: np.ndarray
"""
matrix = cv2.getAffineTransform(pt1, pt2)
return cv2.warpAffine(img, matrix, (rows, cols))
diff --git a/maths/average_median.py b/maths/average_median.py
index cd1ec1574893..f24e525736b3 100644
--- a/maths/average_median.py
+++ b/maths/average_median.py
@@ -19,7 +19,9 @@ def median(nums: list) -> int | float:
Returns:
Median.
"""
- sorted_list = sorted(nums)
+ # The sorted function returns list[SupportsRichComparisonT@sorted]
+ # which does not support `+`
+ sorted_list: list[int] = sorted(nums)
length = len(sorted_list)
mid_index = length >> 1
return (
diff --git a/maths/euler_modified.py b/maths/euler_modified.py
index 14bddadf4c53..d02123e1e2fb 100644
--- a/maths/euler_modified.py
+++ b/maths/euler_modified.py
@@ -5,7 +5,7 @@
def euler_modified(
ode_func: Callable, y0: float, x0: float, step_size: float, x_end: float
-) -> np.array:
+) -> np.ndarray:
"""
Calculate solution at each step to an ODE using Euler's Modified Method
The Euler Method is straightforward to implement, but can't give accurate solutions.
diff --git a/maths/gaussian_error_linear_unit.py b/maths/gaussian_error_linear_unit.py
index 7b5f875143b9..18384bb6c864 100644
--- a/maths/gaussian_error_linear_unit.py
+++ b/maths/gaussian_error_linear_unit.py
@@ -13,7 +13,7 @@
import numpy as np
-def sigmoid(vector: np.array) -> np.array:
+def sigmoid(vector: np.ndarray) -> np.ndarray:
"""
Mathematical function sigmoid takes a vector x of K real numbers as input and
returns 1/ (1 + e^-x).
@@ -25,7 +25,7 @@ def sigmoid(vector: np.array) -> np.array:
return 1 / (1 + np.exp(-vector))
-def gaussian_error_linear_unit(vector: np.array) -> np.array:
+def gaussian_error_linear_unit(vector: np.ndarray) -> np.ndarray:
"""
Implements the Gaussian Error Linear Unit (GELU) function
diff --git a/maths/jaccard_similarity.py b/maths/jaccard_similarity.py
index 32054414c0c2..6b6243458fa8 100644
--- a/maths/jaccard_similarity.py
+++ b/maths/jaccard_similarity.py
@@ -14,7 +14,11 @@
"""
-def jaccard_similarity(set_a, set_b, alternative_union=False):
+def jaccard_similarity(
+ set_a: set[str] | list[str] | tuple[str],
+ set_b: set[str] | list[str] | tuple[str],
+ alternative_union=False,
+):
"""
Finds the jaccard similarity between two sets.
Essentially, its intersection over union.
@@ -37,41 +41,52 @@ def jaccard_similarity(set_a, set_b, alternative_union=False):
>>> set_b = {'c', 'd', 'e', 'f', 'h', 'i'}
>>> jaccard_similarity(set_a, set_b)
0.375
-
>>> jaccard_similarity(set_a, set_a)
1.0
-
>>> jaccard_similarity(set_a, set_a, True)
0.5
-
>>> set_a = ['a', 'b', 'c', 'd', 'e']
>>> set_b = ('c', 'd', 'e', 'f', 'h', 'i')
>>> jaccard_similarity(set_a, set_b)
0.375
+ >>> set_a = ('c', 'd', 'e', 'f', 'h', 'i')
+ >>> set_b = ['a', 'b', 'c', 'd', 'e']
+ >>> jaccard_similarity(set_a, set_b)
+ 0.375
+ >>> set_a = ('c', 'd', 'e', 'f', 'h', 'i')
+ >>> set_b = ['a', 'b', 'c', 'd']
+ >>> jaccard_similarity(set_a, set_b, True)
+ 0.2
+ >>> set_a = {'a', 'b'}
+ >>> set_b = ['c', 'd']
+ >>> jaccard_similarity(set_a, set_b)
+ Traceback (most recent call last):
+ ...
+ ValueError: Set a and b must either both be sets or be either a list or a tuple.
"""
if isinstance(set_a, set) and isinstance(set_b, set):
- intersection = len(set_a.intersection(set_b))
+ intersection_length = len(set_a.intersection(set_b))
if alternative_union:
- union = len(set_a) + len(set_b)
+ union_length = len(set_a) + len(set_b)
else:
- union = len(set_a.union(set_b))
+ union_length = len(set_a.union(set_b))
- return intersection / union
+ return intersection_length / union_length
- if isinstance(set_a, (list, tuple)) and isinstance(set_b, (list, tuple)):
+ elif isinstance(set_a, (list, tuple)) and isinstance(set_b, (list, tuple)):
intersection = [element for element in set_a if element in set_b]
if alternative_union:
- union = len(set_a) + len(set_b)
- return len(intersection) / union
+ return len(intersection) / (len(set_a) + len(set_b))
else:
- union = set_a + [element for element in set_b if element not in set_a]
+ # Cast set_a to list because tuples cannot be mutated
+ union = list(set_a) + [element for element in set_b if element not in set_a]
return len(intersection) / len(union)
-
- return len(intersection) / len(union)
- return None
+ raise ValueError(
+ "Set a and b must either both be sets or be either a list or a tuple."
+ )
if __name__ == "__main__":
diff --git a/maths/newton_raphson.py b/maths/newton_raphson.py
index 2c9cd1de95b0..f6b227b5c9c1 100644
--- a/maths/newton_raphson.py
+++ b/maths/newton_raphson.py
@@ -1,16 +1,20 @@
"""
- Author: P Shreyas Shetty
- Implementation of Newton-Raphson method for solving equations of kind
- f(x) = 0. It is an iterative method where solution is found by the expression
- x[n+1] = x[n] + f(x[n])/f'(x[n])
- If no solution exists, then either the solution will not be found when iteration
- limit is reached or the gradient f'(x[n]) approaches zero. In both cases, exception
- is raised. If iteration limit is reached, try increasing maxiter.
- """
+Author: P Shreyas Shetty
+Implementation of Newton-Raphson method for solving equations of kind
+f(x) = 0. It is an iterative method where the solution is found by the expression
+    x[n+1] = x[n] - f(x[n])/f'(x[n])
+If no solution exists, then either the solution will not be found when iteration
+limit is reached or the gradient f'(x[n]) approaches zero. In both cases, exception
+is raised. If iteration limit is reached, try increasing maxiter.
+"""
+
import math as m
+from collections.abc import Callable
+
+DerivativeFunc = Callable[[float], float]
-def calc_derivative(f, a, h=0.001):
+def calc_derivative(f: DerivativeFunc, a: float, h: float = 0.001) -> float:
"""
Calculates derivative at point a for function f using finite difference
method
@@ -18,7 +22,14 @@ def calc_derivative(f, a, h=0.001):
return (f(a + h) - f(a - h)) / (2 * h)
-def newton_raphson(f, x0=0, maxiter=100, step=0.0001, maxerror=1e-6, logsteps=False):
+def newton_raphson(
+ f: DerivativeFunc,
+ x0: float = 0,
+ maxiter: int = 100,
+ step: float = 0.0001,
+ maxerror: float = 1e-6,
+ logsteps: bool = False,
+) -> tuple[float, float, list[float]]:
a = x0 # set the initial guess
steps = [a]
error = abs(f(a))
@@ -36,7 +47,7 @@ def newton_raphson(f, x0=0, maxiter=100, step=0.0001, maxerror=1e-6, logsteps=Fa
if logsteps:
# If logstep is true, then log intermediate steps
return a, error, steps
- return a, error
+ return a, error, []
if __name__ == "__main__":
diff --git a/maths/qr_decomposition.py b/maths/qr_decomposition.py
index a8414fbece87..670b49206aa7 100644
--- a/maths/qr_decomposition.py
+++ b/maths/qr_decomposition.py
@@ -1,7 +1,7 @@
import numpy as np
-def qr_householder(a):
+def qr_householder(a: np.ndarray):
"""Return a QR-decomposition of the matrix A using Householder reflection.
The QR-decomposition decomposes the matrix A of shape (m, n) into an
diff --git a/maths/sigmoid.py b/maths/sigmoid.py
index 147588e8871f..cb45bde2702c 100644
--- a/maths/sigmoid.py
+++ b/maths/sigmoid.py
@@ -11,7 +11,7 @@
import numpy as np
-def sigmoid(vector: np.array) -> np.array:
+def sigmoid(vector: np.ndarray) -> np.ndarray:
"""
Implements the sigmoid function
diff --git a/maths/tanh.py b/maths/tanh.py
index ddab3e1ab717..38a369d9118d 100644
--- a/maths/tanh.py
+++ b/maths/tanh.py
@@ -12,12 +12,12 @@
import numpy as np
-def tangent_hyperbolic(vector: np.array) -> np.array:
+def tangent_hyperbolic(vector: np.ndarray) -> np.ndarray:
"""
Implements the tanh function
Parameters:
- vector: np.array
+ vector: np.ndarray
Returns:
tanh (np.array): The input numpy array after applying tanh.
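
The root cause behind most of these hunks: np.array is a factory function while np.ndarray is the actual array type, so only the latter is meaningful in an annotation. A self-contained illustration:

    import numpy as np

    def sigmoid(vector: np.ndarray) -> np.ndarray:
        # Annotate with the type (np.ndarray), construct with the function (np.array)
        return 1 / (1 + np.exp(-vector))

    print(sigmoid(np.array([0.0, 1.0])))
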
From cecf1fdd529782d754e1aa4d6df099e391003c76 Mon Sep 17 00:00:00 2001
From: Juyoung Kim <61103343+JadeKim042386@users.noreply.github.com>
Date: Wed, 16 Aug 2023 07:52:51 +0900
Subject: [PATCH 161/808] Fix greedy_best_first (#8775)
* fix: typo
#8770
* refactor: delete unnecessary continue
* add test grids
* fix: add \_\_eq\_\_ in Node class
#8770
* fix: delete unnecessary code
- node in self.open_nodes is always better node
#8770
* fix: docstring
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix: docstring max length
* refactor: get the successors using a list comprehension
* Apply suggestions from code review
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Tianyi Zheng
---
graphs/greedy_best_first.py | 120 ++++++++++++++++++++----------------
1 file changed, 67 insertions(+), 53 deletions(-)
diff --git a/graphs/greedy_best_first.py b/graphs/greedy_best_first.py
index 35f7ca9feeef..bb3160047e34 100644
--- a/graphs/greedy_best_first.py
+++ b/graphs/greedy_best_first.py
@@ -6,14 +6,32 @@
Path = list[tuple[int, int]]
-grid = [
- [0, 0, 0, 0, 0, 0, 0],
- [0, 1, 0, 0, 0, 0, 0], # 0 are free path whereas 1's are obstacles
- [0, 0, 0, 0, 0, 0, 0],
- [0, 0, 1, 0, 0, 0, 0],
- [1, 0, 1, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 1, 0, 0],
+# 0's are free paths whereas 1's are obstacles
+TEST_GRIDS = [
+ [
+ [0, 0, 0, 0, 0, 0, 0],
+ [0, 1, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 1, 0, 0, 0, 0],
+ [1, 0, 1, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 1, 0, 0],
+ ],
+ [
+ [0, 0, 0, 1, 1, 0, 0],
+ [0, 0, 0, 0, 1, 0, 1],
+ [0, 0, 0, 1, 1, 0, 0],
+ [0, 1, 0, 0, 1, 0, 0],
+ [1, 0, 0, 1, 1, 0, 1],
+ [0, 0, 0, 0, 0, 0, 0],
+ ],
+ [
+ [0, 0, 1, 0, 0],
+ [0, 1, 0, 0, 0],
+ [0, 0, 1, 0, 1],
+ [1, 0, 0, 1, 1],
+ [0, 0, 0, 0, 0],
+ ],
]
delta = ([-1, 0], [0, -1], [1, 0], [0, 1]) # up, left, down, right
@@ -65,10 +83,14 @@ def calculate_heuristic(self) -> float:
def __lt__(self, other) -> bool:
return self.f_cost < other.f_cost
+ def __eq__(self, other) -> bool:
+ return self.pos == other.pos
+
class GreedyBestFirst:
"""
- >>> gbf = GreedyBestFirst((0, 0), (len(grid) - 1, len(grid[0]) - 1))
+ >>> grid = TEST_GRIDS[2]
+ >>> gbf = GreedyBestFirst(grid, (0, 0), (len(grid) - 1, len(grid[0]) - 1))
>>> [x.pos for x in gbf.get_successors(gbf.start)]
[(1, 0), (0, 1)]
>>> (gbf.start.pos_y + delta[3][0], gbf.start.pos_x + delta[3][1])
@@ -78,11 +100,14 @@ class GreedyBestFirst:
>>> gbf.retrace_path(gbf.start)
[(0, 0)]
>>> gbf.search() # doctest: +NORMALIZE_WHITESPACE
- [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (4, 1), (5, 1), (6, 1),
- (6, 2), (6, 3), (5, 3), (5, 4), (5, 5), (6, 5), (6, 6)]
+ [(0, 0), (1, 0), (2, 0), (2, 1), (3, 1), (4, 1), (4, 2), (4, 3),
+ (4, 4)]
"""
- def __init__(self, start: tuple[int, int], goal: tuple[int, int]):
+ def __init__(
+ self, grid: list[list[int]], start: tuple[int, int], goal: tuple[int, int]
+ ):
+ self.grid = grid
self.start = Node(start[1], start[0], goal[1], goal[0], 0, None)
self.target = Node(goal[1], goal[0], goal[1], goal[0], 99999, None)
@@ -114,14 +139,6 @@ def search(self) -> Path | None:
if child_node not in self.open_nodes:
self.open_nodes.append(child_node)
- else:
- # retrieve the best current path
- better_node = self.open_nodes.pop(self.open_nodes.index(child_node))
-
- if child_node.g_cost < better_node.g_cost:
- self.open_nodes.append(child_node)
- else:
- self.open_nodes.append(better_node)
if not self.reached:
return [self.start.pos]
@@ -131,28 +148,22 @@ def get_successors(self, parent: Node) -> list[Node]:
"""
Returns a list of successors (both in the grid and free spaces)
"""
- successors = []
- for action in delta:
- pos_x = parent.pos_x + action[1]
- pos_y = parent.pos_y + action[0]
-
- if not (0 <= pos_x <= len(grid[0]) - 1 and 0 <= pos_y <= len(grid) - 1):
- continue
-
- if grid[pos_y][pos_x] != 0:
- continue
-
- successors.append(
- Node(
- pos_x,
- pos_y,
- self.target.pos_y,
- self.target.pos_x,
- parent.g_cost + 1,
- parent,
- )
+ return [
+ Node(
+ pos_x,
+ pos_y,
+ self.target.pos_x,
+ self.target.pos_y,
+ parent.g_cost + 1,
+ parent,
+ )
+ for action in delta
+ if (
+ 0 <= (pos_x := parent.pos_x + action[1]) < len(self.grid[0])
+ and 0 <= (pos_y := parent.pos_y + action[0]) < len(self.grid)
+ and self.grid[pos_y][pos_x] == 0
)
- return successors
+ ]
def retrace_path(self, node: Node | None) -> Path:
"""
@@ -168,18 +179,21 @@ def retrace_path(self, node: Node | None) -> Path:
if __name__ == "__main__":
- init = (0, 0)
- goal = (len(grid) - 1, len(grid[0]) - 1)
- for elem in grid:
- print(elem)
-
- print("------")
-
- greedy_bf = GreedyBestFirst(init, goal)
- path = greedy_bf.search()
- if path:
- for pos_x, pos_y in path:
- grid[pos_x][pos_y] = 2
+ for idx, grid in enumerate(TEST_GRIDS):
+ print(f"==grid-{idx + 1}==")
+ init = (0, 0)
+ goal = (len(grid) - 1, len(grid[0]) - 1)
for elem in grid:
print(elem)
+
+ print("------")
+
+ greedy_bf = GreedyBestFirst(grid, init, goal)
+ path = greedy_bf.search()
+ if path:
+ for pos_x, pos_y in path:
+ grid[pos_x][pos_y] = 2
+
+ for elem in grid:
+ print(elem)
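The rewritten get_successors compresses the bounds checks and the obstacle test into a single comprehension using assignment expressions. A self-contained sketch of that pattern on a toy grid (the names here are illustrative):

grid = [[0, 1], [0, 0]]
delta = ((-1, 0), (0, -1), (1, 0), (0, 1))  # up, left, down, right

def neighbors(y: int, x: int) -> list[tuple[int, int]]:
    # Keep moves that stay on the grid and land on a free cell (0)
    return [
        (ny, nx)
        for dy, dx in delta
        if 0 <= (ny := y + dy) < len(grid)
        and 0 <= (nx := x + dx) < len(grid[0])
        and grid[ny][nx] == 0
    ]

print(neighbors(0, 0))  # [(1, 0)] -- up and left fall off, right is blocked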
From efaf526737a83815a609a00fd59370f25f6d2e09 Mon Sep 17 00:00:00 2001
From: isidroas
Date: Wed, 16 Aug 2023 01:04:53 +0200
Subject: [PATCH 162/808] BST and RSA doctest (#8693)
* rsa key doctest
* move doctest to module docstring
* all tests to doctest
* moved is_right to property
* is right test
* fixed rsa doctest import
* Test error when deleting non-existing element
* fixing ruff EM102
* convert property 'is_right' to one-liner
Also use 'is' instead of '=='
Co-authored-by: Tianyi Zheng
* child instead of children
Co-authored-by: Tianyi Zheng
* remove type hint
* Update data_structures/binary_tree/binary_search_tree.py
---------
Co-authored-by: Tianyi Zheng
---
ciphers/rsa_key_generator.py | 25 +--
.../binary_tree/binary_search_tree.py | 155 ++++++++++--------
2 files changed, 98 insertions(+), 82 deletions(-)
diff --git a/ciphers/rsa_key_generator.py b/ciphers/rsa_key_generator.py
index 2573ed01387b..eedc7336804a 100644
--- a/ciphers/rsa_key_generator.py
+++ b/ciphers/rsa_key_generator.py
@@ -2,8 +2,7 @@
import random
import sys
-from . import cryptomath_module as cryptoMath # noqa: N812
-from . import rabin_miller as rabinMiller # noqa: N812
+from . import cryptomath_module, rabin_miller
def main() -> None:
@@ -13,20 +12,26 @@ def main() -> None:
def generate_key(key_size: int) -> tuple[tuple[int, int], tuple[int, int]]:
- print("Generating prime p...")
- p = rabinMiller.generate_large_prime(key_size)
- print("Generating prime q...")
- q = rabinMiller.generate_large_prime(key_size)
+ """
+ >>> random.seed(0) # for repeatability
+ >>> public_key, private_key = generate_key(8)
+ >>> public_key
+ (26569, 239)
+ >>> private_key
+ (26569, 2855)
+ """
+ p = rabin_miller.generate_large_prime(key_size)
+ q = rabin_miller.generate_large_prime(key_size)
n = p * q
- print("Generating e that is relatively prime to (p - 1) * (q - 1)...")
+ # Generate e that is relatively prime to (p - 1) * (q - 1)
while True:
e = random.randrange(2 ** (key_size - 1), 2 ** (key_size))
- if cryptoMath.gcd(e, (p - 1) * (q - 1)) == 1:
+ if cryptomath_module.gcd(e, (p - 1) * (q - 1)) == 1:
break
- print("Calculating d that is mod inverse of e...")
- d = cryptoMath.find_mod_inverse(e, (p - 1) * (q - 1))
+ # Calculate d that is mod inverse of e
+ d = cryptomath_module.find_mod_inverse(e, (p - 1) * (q - 1))
public_key = (n, e)
private_key = (n, d)
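The cleaned-up generate_key follows the textbook RSA recipe: pick primes p and q, choose e coprime to (p - 1) * (q - 1), then derive d as the modular inverse of e. A toy sketch of the same recipe with fixed small primes, using only the standard library (pow(e, -1, phi) needs Python 3.8+; real keys use large random primes):

from math import gcd

p, q = 61, 53                      # toy primes for illustration only
n, phi = p * q, (p - 1) * (q - 1)
e = 17
assert gcd(e, phi) == 1            # e is relatively prime to phi
d = pow(e, -1, phi)                # d is the mod inverse of e
public_key, private_key = (n, e), (n, d)
message = 42
assert pow(pow(message, e, n), d, n) == message  # encrypt/decrypt round-trips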
diff --git a/data_structures/binary_tree/binary_search_tree.py b/data_structures/binary_tree/binary_search_tree.py
index c72195424c7c..a706d21e3bb2 100644
--- a/data_structures/binary_tree/binary_search_tree.py
+++ b/data_structures/binary_tree/binary_search_tree.py
@@ -1,5 +1,62 @@
-"""
+r"""
A binary search Tree
+
+Example
+ 8
+ / \
+ 3 10
+ / \ \
+ 1 6 14
+ / \ /
+ 4 7 13
+
+>>> t = BinarySearchTree()
+>>> t.insert(8, 3, 6, 1, 10, 14, 13, 4, 7)
+>>> print(" ".join(repr(i.value) for i in t.traversal_tree()))
+8 3 1 6 4 7 10 14 13
+>>> print(" ".join(repr(i.value) for i in t.traversal_tree(postorder)))
+1 4 7 6 3 13 14 10 8
+>>> t.remove(20)
+Traceback (most recent call last):
+ ...
+ValueError: Value 20 not found
+>>> BinarySearchTree().search(6)
+Traceback (most recent call last):
+ ...
+IndexError: Warning: Tree is empty! please use another.
+
+Other example:
+
+>>> testlist = (8, 3, 6, 1, 10, 14, 13, 4, 7)
+>>> t = BinarySearchTree()
+>>> for i in testlist:
+... t.insert(i)
+
+Prints all the elements of the list in in-order traversal
+>>> print(t)
+{'8': ({'3': (1, {'6': (4, 7)})}, {'10': (None, {'14': (13, None)})})}
+
+Test existence
+>>> t.search(6) is not None
+True
+>>> t.search(-1) is not None
+False
+
+>>> t.search(6).is_right
+True
+>>> t.search(1).is_right
+False
+
+>>> t.get_max().value
+14
+>>> t.get_min().value
+1
+>>> t.empty()
+False
+>>> for i in testlist:
+... t.remove(i)
+>>> t.empty()
+True
"""
from collections.abc import Iterable
@@ -20,6 +77,10 @@ def __repr__(self) -> str:
return str(self.value)
return pformat({f"{self.value}": (self.left, self.right)}, indent=1)
+ @property
+ def is_right(self) -> bool:
+ return self.parent is not None and self is self.parent.right
+
class BinarySearchTree:
def __init__(self, root: Node | None = None):
@@ -35,18 +96,13 @@ def __reassign_nodes(self, node: Node, new_children: Node | None) -> None:
if new_children is not None: # reset its kids
new_children.parent = node.parent
if node.parent is not None: # reset its parent
- if self.is_right(node): # If it is the right children
+ if node.is_right: # If it is the right child
node.parent.right = new_children
else:
node.parent.left = new_children
else:
self.root = new_children
- def is_right(self, node: Node) -> bool:
- if node.parent and node.parent.right:
- return node == node.parent.right
- return False
-
def empty(self) -> bool:
return self.root is None
@@ -119,22 +175,26 @@ def get_min(self, node: Node | None = None) -> Node | None:
return node
def remove(self, value: int) -> None:
- node = self.search(value) # Look for the node with that label
- if node is not None:
- if node.left is None and node.right is None: # If it has no children
- self.__reassign_nodes(node, None)
- elif node.left is None: # Has only right children
- self.__reassign_nodes(node, node.right)
- elif node.right is None: # Has only left children
- self.__reassign_nodes(node, node.left)
- else:
- tmp_node = self.get_max(
- node.left
- ) # Gets the max value of the left branch
- self.remove(tmp_node.value) # type: ignore
- node.value = (
- tmp_node.value # type: ignore
- ) # Assigns the value to the node to delete and keep tree structure
+ # Look for the node with that label
+ node = self.search(value)
+ if node is None:
+ msg = f"Value {value} not found"
+ raise ValueError(msg)
+
+ if node.left is None and node.right is None: # If it has no children
+ self.__reassign_nodes(node, None)
+ elif node.left is None: # Has only right children
+ self.__reassign_nodes(node, node.right)
+ elif node.right is None: # Has only left children
+ self.__reassign_nodes(node, node.left)
+ else:
+ predecessor = self.get_max(
+ node.left
+ ) # Gets the max value of the left branch
+ self.remove(predecessor.value) # type: ignore
+ node.value = (
+ predecessor.value # type: ignore
+ ) # Assigns the value to the node to delete and keep tree structure
def preorder_traverse(self, node: Node | None) -> Iterable:
if node is not None:
@@ -177,55 +237,6 @@ def postorder(curr_node: Node | None) -> list[Node]:
return node_list
-def binary_search_tree() -> None:
- r"""
- Example
- 8
- / \
- 3 10
- / \ \
- 1 6 14
- / \ /
- 4 7 13
-
- >>> t = BinarySearchTree()
- >>> t.insert(8, 3, 6, 1, 10, 14, 13, 4, 7)
- >>> print(" ".join(repr(i.value) for i in t.traversal_tree()))
- 8 3 1 6 4 7 10 14 13
- >>> print(" ".join(repr(i.value) for i in t.traversal_tree(postorder)))
- 1 4 7 6 3 13 14 10 8
- >>> BinarySearchTree().search(6)
- Traceback (most recent call last):
- ...
- IndexError: Warning: Tree is empty! please use another.
- """
- testlist = (8, 3, 6, 1, 10, 14, 13, 4, 7)
- t = BinarySearchTree()
- for i in testlist:
- t.insert(i)
-
- # Prints all the elements of the list in order traversal
- print(t)
-
- if t.search(6) is not None:
- print("The value 6 exists")
- else:
- print("The value 6 doesn't exist")
-
- if t.search(-1) is not None:
- print("The value -1 exists")
- else:
- print("The value -1 doesn't exist")
-
- if not t.empty():
- print("Max Value: ", t.get_max().value) # type: ignore
- print("Min Value: ", t.get_min().value) # type: ignore
-
- for i in testlist:
- t.remove(i)
- print(t)
-
-
if __name__ == "__main__":
import doctest
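The reworked remove raises ValueError for missing values and, for a node with two children, swaps in the in-order predecessor (the maximum of the left subtree) before deleting it. A minimal sketch of just that two-child case, on a hypothetical bare-bones node:

class Node:
    def __init__(self, value: int) -> None:
        self.value = value
        self.left: "Node | None" = None
        self.right: "Node | None" = None

def remove_two_child_node(node: Node) -> None:
    # The in-order predecessor is the rightmost node of the left subtree;
    # it has no right child, so splicing it out is a one-pointer update.
    assert node.left is not None and node.right is not None
    parent, pred = node, node.left
    while pred.right:
        parent, pred = pred, pred.right
    node.value = pred.value
    if parent is node:
        parent.left = pred.left
    else:
        parent.right = pred.left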
From f66568e981edf5e384fe28a357daee3e13f16de9 Mon Sep 17 00:00:00 2001
From: Maxim Smolskiy
Date: Wed, 16 Aug 2023 02:10:22 +0300
Subject: [PATCH 163/808] Reduce the complexity of
boolean_algebra/quine_mc_cluskey.py (#8604)
* Reduce the complexity of boolean_algebra/quine_mc_cluskey.py
* updating DIRECTORY.md
* Fix
* Fix review issues
* Fix
* Fix review issues
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
boolean_algebra/quine_mc_cluskey.py | 49 ++++++++++++-----------------
1 file changed, 20 insertions(+), 29 deletions(-)
diff --git a/boolean_algebra/quine_mc_cluskey.py b/boolean_algebra/quine_mc_cluskey.py
index 6788dfb28ba1..8e22e66726d4 100644
--- a/boolean_algebra/quine_mc_cluskey.py
+++ b/boolean_algebra/quine_mc_cluskey.py
@@ -74,10 +74,7 @@ def is_for_table(string1: str, string2: str, count: int) -> bool:
"""
list1 = list(string1)
list2 = list(string2)
- count_n = 0
- for i in range(len(list1)):
- if list1[i] != list2[i]:
- count_n += 1
+ count_n = sum(item1 != item2 for item1, item2 in zip(list1, list2))
return count_n == count
@@ -92,40 +89,34 @@ def selection(chart: list[list[int]], prime_implicants: list[str]) -> list[str]:
temp = []
select = [0] * len(chart)
for i in range(len(chart[0])):
- count = 0
- rem = -1
- for j in range(len(chart)):
- if chart[j][i] == 1:
- count += 1
- rem = j
+ count = sum(row[i] == 1 for row in chart)
if count == 1:
+ rem = max(j for j, row in enumerate(chart) if row[i] == 1)
select[rem] = 1
- for i in range(len(select)):
- if select[i] == 1:
- for j in range(len(chart[0])):
- if chart[i][j] == 1:
- for k in range(len(chart)):
- chart[k][j] = 0
- temp.append(prime_implicants[i])
+ for i, item in enumerate(select):
+ if item != 1:
+ continue
+ for j in range(len(chart[0])):
+ if chart[i][j] != 1:
+ continue
+ for row in chart:
+ row[j] = 0
+ temp.append(prime_implicants[i])
while True:
- max_n = 0
- rem = -1
- count_n = 0
- for i in range(len(chart)):
- count_n = chart[i].count(1)
- if count_n > max_n:
- max_n = count_n
- rem = i
+ counts = [chart[i].count(1) for i in range(len(chart))]
+ max_n = max(counts)
+ rem = counts.index(max_n)
if max_n == 0:
return temp
temp.append(prime_implicants[rem])
- for i in range(len(chart[0])):
- if chart[rem][i] == 1:
- for j in range(len(chart)):
- chart[j][i] = 0
+ for j in range(len(chart[0])):
+ if chart[rem][j] != 1:
+ continue
+ for i in range(len(chart)):
+ chart[i][j] = 0
def prime_implicant_chart(
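The first refactor here is a common simplification: an index-based counting loop becomes sum over zip. For example, counting the positions where two implicant strings differ:

string1, string2 = "0-01", "0-11"
count_n = sum(item1 != item2 for item1, item2 in zip(string1, string2))
print(count_n)  # 1 -- the strings differ only in the third position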
From bfed2fb7883fb7c472cd09afea1aad4e3f87d71b Mon Sep 17 00:00:00 2001
From: Saksham1970 <45041294+Saksham1970@users.noreply.github.com>
Date: Wed, 16 Aug 2023 12:54:12 +0530
Subject: [PATCH 164/808] Added Continued fractions (#6846)
* updating DIRECTORY.md
* added continued fractions
* updating DIRECTORY.md
* Update maths/continued_fraction.py
Co-authored-by: Caeden Perelli-Harris
* Update maths/continued_fraction.py
Co-authored-by: Caeden Perelli-Harris
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: Caeden Perelli-Harris
Co-authored-by: Tianyi Zheng
---
DIRECTORY.md | 1 +
maths/continued_fraction.py | 51 +++++++++++++++++++++++++++++++++++++
2 files changed, 52 insertions(+)
create mode 100644 maths/continued_fraction.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index be5fa3584a58..8d1567465fbc 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -555,6 +555,7 @@
* [Chudnovsky Algorithm](maths/chudnovsky_algorithm.py)
* [Collatz Sequence](maths/collatz_sequence.py)
* [Combinations](maths/combinations.py)
+ * [Continued Fraction](maths/continued_fraction.py)
* [Decimal Isolate](maths/decimal_isolate.py)
* [Decimal To Fraction](maths/decimal_to_fraction.py)
* [Dodecahedron](maths/dodecahedron.py)
diff --git a/maths/continued_fraction.py b/maths/continued_fraction.py
new file mode 100644
index 000000000000..25ff649db77a
--- /dev/null
+++ b/maths/continued_fraction.py
@@ -0,0 +1,51 @@
+"""
+Finding the continued fraction for a rational number using Python
+
+https://en.wikipedia.org/wiki/Continued_fraction
+"""
+
+
+from fractions import Fraction
+
+
+def continued_fraction(num: Fraction) -> list[int]:
+ """
+ :param num:
+ Fraction of the number whose continued fractions to be found.
+ Use Fraction(str(number)) for more accurate results due to
+ float inaccuracies.
+
+ :return:
+        The continued fraction of the rational number.
+        It is the sequence of coefficients in the (n + 1)-tuple notation.
+
+ >>> continued_fraction(Fraction(2))
+ [2]
+ >>> continued_fraction(Fraction("3.245"))
+ [3, 4, 12, 4]
+ >>> continued_fraction(Fraction("2.25"))
+ [2, 4]
+ >>> continued_fraction(1/Fraction("2.25"))
+ [0, 2, 4]
+ >>> continued_fraction(Fraction("415/93"))
+ [4, 2, 6, 7]
+ """
+ numerator, denominator = num.as_integer_ratio()
+ continued_fraction_list: list[int] = []
+ while True:
+ integer_part = int(numerator / denominator)
+ continued_fraction_list.append(integer_part)
+ numerator -= integer_part * denominator
+ if numerator == 0:
+ break
+ numerator, denominator = denominator, numerator
+
+ return continued_fraction_list
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
+
+ print("Continued Fraction of 0.84375 is: ", continued_fraction(Fraction("0.84375")))
From 5c276a8377b9f4139dac9cfff83fd47b88511a40 Mon Sep 17 00:00:00 2001
From: homsim <103424895+homsim@users.noreply.github.com>
Date: Wed, 16 Aug 2023 10:07:50 +0200
Subject: [PATCH 165/808] Quick fix: fig.canvas.set_window_title deprecated
(#8961)
Co-authored-by: homsim
---
physics/n_body_simulation.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/physics/n_body_simulation.py b/physics/n_body_simulation.py
index 2b701283f166..46330844df61 100644
--- a/physics/n_body_simulation.py
+++ b/physics/n_body_simulation.py
@@ -226,7 +226,7 @@ def plot(
No doctest provided since this function does not have a return value.
"""
fig = plt.figure()
- fig.canvas.set_window_title(title)
+ fig.canvas.manager.set_window_title(title)
ax = plt.axes(
xlim=(x_start, x_end), ylim=(y_start, y_end)
) # Set section to be plotted
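The one-line fix reflects that the window belongs to the canvas manager in current Matplotlib; the old fig.canvas.set_window_title alias was deprecated and later removed. A minimal usage sketch:

import matplotlib.pyplot as plt

fig = plt.figure()
fig.canvas.manager.set_window_title("N-body simulation")  # current spelling
plt.show()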
From beb43517c3552b72b9c8fc1710f681b0180418ec Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Wed, 16 Aug 2023 04:36:10 -0700
Subject: [PATCH 166/808] Fix `mypy` errors in
`maths/gaussian_error_linear_unit.py` (#8610)
* updating DIRECTORY.md
* Fix mypy errors in gaussian_error_linear_unit.py
* updating DIRECTORY.md
* updating DIRECTORY.md
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
maths/gaussian_error_linear_unit.py | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/maths/gaussian_error_linear_unit.py b/maths/gaussian_error_linear_unit.py
index 18384bb6c864..b3cbd7810716 100644
--- a/maths/gaussian_error_linear_unit.py
+++ b/maths/gaussian_error_linear_unit.py
@@ -30,12 +30,10 @@ def gaussian_error_linear_unit(vector: np.ndarray) -> np.ndarray:
Implements the Gaussian Error Linear Unit (GELU) function
Parameters:
- vector (np.array): A numpy array of shape (1,n)
- consisting of real values
+ vector (np.ndarray): A numpy array of shape (1, n) consisting of real values
Returns:
- gelu_vec (np.array): The input numpy array, after applying
- gelu.
+ gelu_vec (np.ndarray): The input numpy array, after applying gelu
Examples:
>>> gaussian_error_linear_unit(np.array([-1.0, 1.0, 2.0]))
From fd7cc4cf8e731c16a5dd2cf30c4ddb0dd017d59e Mon Sep 17 00:00:00 2001
From: Caeden Perelli-Harris
Date: Thu, 17 Aug 2023 02:21:00 +0100
Subject: [PATCH 167/808] Rename norgate to nor_gate to keep consistency
(#8968)
* refactor(boolean-algebra): Rename norgate to nor_gate
* updating DIRECTORY.md
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 2 +-
boolean_algebra/{norgate.py => nor_gate.py} | 0
2 files changed, 1 insertion(+), 1 deletion(-)
rename boolean_algebra/{norgate.py => nor_gate.py} (100%)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 8d1567465fbc..d4a2bb48511a 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -62,7 +62,7 @@
## Boolean Algebra
* [And Gate](boolean_algebra/and_gate.py)
* [Nand Gate](boolean_algebra/nand_gate.py)
- * [Norgate](boolean_algebra/norgate.py)
+ * [Nor Gate](boolean_algebra/nor_gate.py)
* [Not Gate](boolean_algebra/not_gate.py)
* [Or Gate](boolean_algebra/or_gate.py)
* [Quine Mc Cluskey](boolean_algebra/quine_mc_cluskey.py)
diff --git a/boolean_algebra/norgate.py b/boolean_algebra/nor_gate.py
similarity index 100%
rename from boolean_algebra/norgate.py
rename to boolean_algebra/nor_gate.py
From f6b12420ce2a16ddf55c5226ea6f188936af33ad Mon Sep 17 00:00:00 2001
From: Kausthub Kannan <99611070+kausthub-kannan@users.noreply.github.com>
Date: Thu, 17 Aug 2023 06:52:15 +0530
Subject: [PATCH 168/808] Added Leaky ReLU Activation Function (#8962)
* Added Leaky ReLU activation function
* Added Leaky ReLU activation function
* Added Leaky ReLU activation function
* Formatting and spelling fixes done
---
.../leaky_rectified_linear_unit.py | 39 +++++++++++++++++++
1 file changed, 39 insertions(+)
create mode 100644 neural_network/activation_functions/leaky_rectified_linear_unit.py
diff --git a/neural_network/activation_functions/leaky_rectified_linear_unit.py b/neural_network/activation_functions/leaky_rectified_linear_unit.py
new file mode 100644
index 000000000000..019086fd9821
--- /dev/null
+++ b/neural_network/activation_functions/leaky_rectified_linear_unit.py
@@ -0,0 +1,39 @@
+"""
+Leaky Rectified Linear Unit (Leaky ReLU)
+
+Use Case: Leaky ReLU mitigates the dying ReLU problem by keeping a small,
+non-zero gradient for negative inputs.
+For more detailed information, you can refer to the following link:
+https://en.wikipedia.org/wiki/Rectifier_(neural_networks)#Leaky_ReLU
+"""
+
+import numpy as np
+
+
+def leaky_rectified_linear_unit(vector: np.ndarray, alpha: float) -> np.ndarray:
+ """
+ Implements the LeakyReLU activation function.
+
+ Parameters:
+ vector (np.ndarray): The input array for LeakyReLU activation.
+ alpha (float): The slope for negative values.
+
+ Returns:
+ np.ndarray: The input array after applying the LeakyReLU activation.
+
+ Formula: f(x) = x if x > 0 else f(x) = alpha * x
+
+ Examples:
+ >>> leaky_rectified_linear_unit(vector=np.array([2.3,0.6,-2,-3.8]), alpha=0.3)
+ array([ 2.3 , 0.6 , -0.6 , -1.14])
+
+ >>> leaky_rectified_linear_unit(np.array([-9.2, -0.3, 0.45, -4.56]), alpha=0.067)
+ array([-0.6164 , -0.0201 , 0.45 , -0.30552])
+
+ """
+ return np.where(vector > 0, vector, alpha * vector)
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
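The whole function is a single np.where, which selects element-wise between the identity branch and the scaled branch. Worked through the first doctest row:

import numpy as np

vector, alpha = np.array([2.3, 0.6, -2.0, -3.8]), 0.3
# Positive entries pass through; negative entries are scaled by alpha
print(np.where(vector > 0, vector, alpha * vector))
# [ 2.3   0.6  -0.6  -1.14]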
From a207187ddb368edb121153d4f6e190fcfb857427 Mon Sep 17 00:00:00 2001
From: Ilkin Mengusoglu <113149540+imengus@users.noreply.github.com>
Date: Thu, 17 Aug 2023 22:34:53 +0100
Subject: [PATCH 169/808] Fix simplex.py (#8843)
* changes to accommodate special case
* changed n_slack calculation method
* fix precommit typehints
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* n_art_vars inputs
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix: docstrings and typehints
* fix: doctest issues when running code
* additional check and doctests
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix ruff
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix whitespace
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
linear_programming/simplex.py | 229 +++++++++++++++++++---------------
1 file changed, 128 insertions(+), 101 deletions(-)
diff --git a/linear_programming/simplex.py b/linear_programming/simplex.py
index ba64add40b5f..bbc97d8e22bf 100644
--- a/linear_programming/simplex.py
+++ b/linear_programming/simplex.py
@@ -20,40 +20,60 @@
class Tableau:
"""Operate on simplex tableaus
- >>> t = Tableau(np.array([[-1,-1,0,0,-1],[1,3,1,0,4],[3,1,0,1,4.]]), 2)
+ >>> Tableau(np.array([[-1,-1,0,0,1],[1,3,1,0,4],[3,1,0,1,4]]), 2, 2)
+ Traceback (most recent call last):
+ ...
+ TypeError: Tableau must have type float64
+
+ >>> Tableau(np.array([[-1,-1,0,0,-1],[1,3,1,0,4],[3,1,0,1,4.]]), 2, 2)
Traceback (most recent call last):
...
ValueError: RHS must be > 0
+
+ >>> Tableau(np.array([[-1,-1,0,0,1],[1,3,1,0,4],[3,1,0,1,4.]]), -2, 2)
+ Traceback (most recent call last):
+ ...
+ ValueError: number of (artificial) variables must be a natural number
"""
- def __init__(self, tableau: np.ndarray, n_vars: int) -> None:
+ # Max iteration number to prevent cycling
+ maxiter = 100
+
+ def __init__(
+ self, tableau: np.ndarray, n_vars: int, n_artificial_vars: int
+ ) -> None:
+ if tableau.dtype != "float64":
+ raise TypeError("Tableau must have type float64")
+
# Check if RHS is negative
- if np.any(tableau[:, -1], where=tableau[:, -1] < 0):
+ if not (tableau[:, -1] >= 0).all():
raise ValueError("RHS must be > 0")
+ if n_vars < 2 or n_artificial_vars < 0:
+ raise ValueError(
+ "number of (artificial) variables must be a natural number"
+ )
+
self.tableau = tableau
- self.n_rows, _ = tableau.shape
+ self.n_rows, n_cols = tableau.shape
# Number of decision variables x1, x2, x3...
- self.n_vars = n_vars
-
- # Number of artificial variables to be minimised
- self.n_art_vars = len(np.where(tableau[self.n_vars : -1] == -1)[0])
+ self.n_vars, self.n_artificial_vars = n_vars, n_artificial_vars
# 2 if there are >= or == constraints (nonstandard), 1 otherwise (std)
- self.n_stages = (self.n_art_vars > 0) + 1
+ self.n_stages = (self.n_artificial_vars > 0) + 1
# Number of slack variables added to make inequalities into equalities
- self.n_slack = self.n_rows - self.n_stages
+ self.n_slack = n_cols - self.n_vars - self.n_artificial_vars - 1
# Objectives for each stage
self.objectives = ["max"]
# In two stage simplex, first minimise then maximise
- if self.n_art_vars:
+ if self.n_artificial_vars:
self.objectives.append("min")
- self.col_titles = [""]
+ self.col_titles = self.generate_col_titles()
# Index of current pivot row and column
self.row_idx = None
@@ -62,48 +82,39 @@ def __init__(self, tableau: np.ndarray, n_vars: int) -> None:
# Does objective row only contain (non)-negative values?
self.stop_iter = False
- @staticmethod
- def generate_col_titles(*args: int) -> list[str]:
+ def generate_col_titles(self) -> list[str]:
"""Generate column titles for tableau of specific dimensions
- >>> Tableau.generate_col_titles(2, 3, 1)
- ['x1', 'x2', 's1', 's2', 's3', 'a1', 'RHS']
-
- >>> Tableau.generate_col_titles()
- Traceback (most recent call last):
- ...
- ValueError: Must provide n_vars, n_slack, and n_art_vars
- >>> Tableau.generate_col_titles(-2, 3, 1)
- Traceback (most recent call last):
- ...
- ValueError: All arguments must be non-negative integers
- """
- if len(args) != 3:
- raise ValueError("Must provide n_vars, n_slack, and n_art_vars")
+ >>> Tableau(np.array([[-1,-1,0,0,1],[1,3,1,0,4],[3,1,0,1,4.]]),
+ ... 2, 0).generate_col_titles()
+ ['x1', 'x2', 's1', 's2', 'RHS']
- if not all(x >= 0 and isinstance(x, int) for x in args):
- raise ValueError("All arguments must be non-negative integers")
+ >>> Tableau(np.array([[-1,-1,0,0,1],[1,3,1,0,4],[3,1,0,1,4.]]),
+ ... 2, 2).generate_col_titles()
+ ['x1', 'x2', 'RHS']
+ """
+ args = (self.n_vars, self.n_slack)
- # decision | slack | artificial
- string_starts = ["x", "s", "a"]
+ # decision | slack
+ string_starts = ["x", "s"]
titles = []
- for i in range(3):
+ for i in range(2):
for j in range(args[i]):
titles.append(string_starts[i] + str(j + 1))
titles.append("RHS")
return titles
- def find_pivot(self, tableau: np.ndarray) -> tuple[Any, Any]:
+ def find_pivot(self) -> tuple[Any, Any]:
"""Finds the pivot row and column.
- >>> t = Tableau(np.array([[-2,1,0,0,0], [3,1,1,0,6], [1,2,0,1,7.]]), 2)
- >>> t.find_pivot(t.tableau)
+ >>> Tableau(np.array([[-2,1,0,0,0], [3,1,1,0,6], [1,2,0,1,7.]]),
+ ... 2, 0).find_pivot()
(1, 0)
"""
objective = self.objectives[-1]
# Find entries of highest magnitude in objective rows
sign = (objective == "min") - (objective == "max")
- col_idx = np.argmax(sign * tableau[0, : self.n_vars])
+ col_idx = np.argmax(sign * self.tableau[0, :-1])
# Choice is only valid if below 0 for maximise, and above for minimise
if sign * self.tableau[0, col_idx] <= 0:
@@ -117,15 +128,15 @@ def find_pivot(self, tableau: np.ndarray) -> tuple[Any, Any]:
s = slice(self.n_stages, self.n_rows)
# RHS
- dividend = tableau[s, -1]
+ dividend = self.tableau[s, -1]
# Elements of pivot column within slice
- divisor = tableau[s, col_idx]
+ divisor = self.tableau[s, col_idx]
# Array filled with nans
nans = np.full(self.n_rows - self.n_stages, np.nan)
- # If element in pivot column is greater than zeron_stages, return
+ # If element in pivot column is greater than zero, return
# quotient or nan otherwise
quotients = np.divide(dividend, divisor, out=nans, where=divisor > 0)
@@ -134,18 +145,18 @@ def find_pivot(self, tableau: np.ndarray) -> tuple[Any, Any]:
row_idx = np.nanargmin(quotients) + self.n_stages
return row_idx, col_idx
- def pivot(self, tableau: np.ndarray, row_idx: int, col_idx: int) -> np.ndarray:
+ def pivot(self, row_idx: int, col_idx: int) -> np.ndarray:
"""Pivots on value on the intersection of pivot row and column.
- >>> t = Tableau(np.array([[-2,-3,0,0,0],[1,3,1,0,4],[3,1,0,1,4.]]), 2)
- >>> t.pivot(t.tableau, 1, 0).tolist()
+ >>> Tableau(np.array([[-2,-3,0,0,0],[1,3,1,0,4],[3,1,0,1,4.]]),
+ ... 2, 2).pivot(1, 0).tolist()
... # doctest: +NORMALIZE_WHITESPACE
[[0.0, 3.0, 2.0, 0.0, 8.0],
[1.0, 3.0, 1.0, 0.0, 4.0],
[0.0, -8.0, -3.0, 1.0, -8.0]]
"""
# Avoid changes to original tableau
- piv_row = tableau[row_idx].copy()
+ piv_row = self.tableau[row_idx].copy()
piv_val = piv_row[col_idx]
@@ -153,48 +164,47 @@ def pivot(self, tableau: np.ndarray, row_idx: int, col_idx: int) -> np.ndarray:
piv_row *= 1 / piv_val
# Variable in pivot column becomes basic, ie the only non-zero entry
- for idx, coeff in enumerate(tableau[:, col_idx]):
- tableau[idx] += -coeff * piv_row
- tableau[row_idx] = piv_row
- return tableau
+ for idx, coeff in enumerate(self.tableau[:, col_idx]):
+ self.tableau[idx] += -coeff * piv_row
+ self.tableau[row_idx] = piv_row
+ return self.tableau
- def change_stage(self, tableau: np.ndarray) -> np.ndarray:
+ def change_stage(self) -> np.ndarray:
"""Exits first phase of the two-stage method by deleting artificial
rows and columns, or completes the algorithm if exiting the standard
case.
- >>> t = Tableau(np.array([
+ >>> Tableau(np.array([
... [3, 3, -1, -1, 0, 0, 4],
... [2, 1, 0, 0, 0, 0, 0.],
... [1, 2, -1, 0, 1, 0, 2],
... [2, 1, 0, -1, 0, 1, 2]
- ... ]), 2)
- >>> t.change_stage(t.tableau).tolist()
+ ... ]), 2, 2).change_stage().tolist()
... # doctest: +NORMALIZE_WHITESPACE
- [[2.0, 1.0, 0.0, 0.0, 0.0, 0.0],
- [1.0, 2.0, -1.0, 0.0, 1.0, 2.0],
- [2.0, 1.0, 0.0, -1.0, 0.0, 2.0]]
+ [[2.0, 1.0, 0.0, 0.0, 0.0],
+ [1.0, 2.0, -1.0, 0.0, 2.0],
+ [2.0, 1.0, 0.0, -1.0, 2.0]]
"""
# Objective of original objective row remains
self.objectives.pop()
if not self.objectives:
- return tableau
+ return self.tableau
# Slice containing ids for artificial columns
- s = slice(-self.n_art_vars - 1, -1)
+ s = slice(-self.n_artificial_vars - 1, -1)
# Delete the artificial variable columns
- tableau = np.delete(tableau, s, axis=1)
+ self.tableau = np.delete(self.tableau, s, axis=1)
# Delete the objective row of the first stage
- tableau = np.delete(tableau, 0, axis=0)
+ self.tableau = np.delete(self.tableau, 0, axis=0)
self.n_stages = 1
self.n_rows -= 1
- self.n_art_vars = 0
+ self.n_artificial_vars = 0
self.stop_iter = False
- return tableau
+ return self.tableau
def run_simplex(self) -> dict[Any, Any]:
"""Operate on tableau until objective function cannot be
@@ -205,15 +215,29 @@ def run_simplex(self) -> dict[Any, Any]:
ST: x1 + 3x2 <= 4
3x1 + x2 <= 4
>>> Tableau(np.array([[-1,-1,0,0,0],[1,3,1,0,4],[3,1,0,1,4.]]),
- ... 2).run_simplex()
+ ... 2, 0).run_simplex()
{'P': 2.0, 'x1': 1.0, 'x2': 1.0}
+ # Standard linear program with 3 variables:
+ Max: 3x1 + x2 + 3x3
+ ST: 2x1 + x2 + x3 ≤ 2
+ x1 + 2x2 + 3x3 ≤ 5
+ 2x1 + 2x2 + x3 ≤ 6
+ >>> Tableau(np.array([
+ ... [-3,-1,-3,0,0,0,0],
+ ... [2,1,1,1,0,0,2],
+ ... [1,2,3,0,1,0,5],
+ ... [2,2,1,0,0,1,6.]
+ ... ]),3,0).run_simplex() # doctest: +ELLIPSIS
+ {'P': 5.4, 'x1': 0.199..., 'x3': 1.6}
+
+
# Optimal tableau input:
>>> Tableau(np.array([
... [0, 0, 0.25, 0.25, 2],
... [0, 1, 0.375, -0.125, 1],
... [1, 0, -0.125, 0.375, 1]
- ... ]), 2).run_simplex()
+ ... ]), 2, 0).run_simplex()
{'P': 2.0, 'x1': 1.0, 'x2': 1.0}
# Non-standard: >= constraints
@@ -227,7 +251,7 @@ def run_simplex(self) -> dict[Any, Any]:
... [1, 1, 1, 1, 0, 0, 0, 0, 40],
... [2, 1, -1, 0, -1, 0, 1, 0, 10],
... [0, -1, 1, 0, 0, -1, 0, 1, 10.]
- ... ]), 3).run_simplex()
+ ... ]), 3, 2).run_simplex()
{'P': 70.0, 'x1': 10.0, 'x2': 10.0, 'x3': 20.0}
# Non standard: minimisation and equalities
@@ -235,73 +259,76 @@ def run_simplex(self) -> dict[Any, Any]:
ST: 2x1 + x2 = 12
6x1 + 5x2 = 40
>>> Tableau(np.array([
- ... [8, 6, 0, -1, 0, -1, 0, 0, 52],
- ... [1, 1, 0, 0, 0, 0, 0, 0, 0],
- ... [2, 1, 1, 0, 0, 0, 0, 0, 12],
- ... [2, 1, 0, -1, 0, 0, 1, 0, 12],
- ... [6, 5, 0, 0, 1, 0, 0, 0, 40],
- ... [6, 5, 0, 0, 0, -1, 0, 1, 40.]
- ... ]), 2).run_simplex()
+ ... [8, 6, 0, 0, 52],
+ ... [1, 1, 0, 0, 0],
+ ... [2, 1, 1, 0, 12],
+ ... [6, 5, 0, 1, 40.],
+ ... ]), 2, 2).run_simplex()
{'P': 7.0, 'x1': 5.0, 'x2': 2.0}
+
+
+ # Pivot on slack variables
+ Max: 8x1 + 6x2
+ ST: x1 + 3x2 <= 33
+ 4x1 + 2x2 <= 48
+ 2x1 + 4x2 <= 48
+ x1 + x2 >= 10
+ x1 >= 2
+ >>> Tableau(np.array([
+ ... [2, 1, 0, 0, 0, -1, -1, 0, 0, 12.0],
+ ... [-8, -6, 0, 0, 0, 0, 0, 0, 0, 0.0],
+ ... [1, 3, 1, 0, 0, 0, 0, 0, 0, 33.0],
+ ... [4, 2, 0, 1, 0, 0, 0, 0, 0, 60.0],
+ ... [2, 4, 0, 0, 1, 0, 0, 0, 0, 48.0],
+ ... [1, 1, 0, 0, 0, -1, 0, 1, 0, 10.0],
+ ... [1, 0, 0, 0, 0, 0, -1, 0, 1, 2.0]
+ ... ]), 2, 2).run_simplex() # doctest: +ELLIPSIS
+ {'P': 132.0, 'x1': 12.000... 'x2': 5.999...}
"""
# Stop simplex algorithm from cycling.
- for _ in range(100):
+ for _ in range(Tableau.maxiter):
# Completion of each stage removes an objective. If both stages
# are complete, then no objectives are left
if not self.objectives:
- self.col_titles = self.generate_col_titles(
- self.n_vars, self.n_slack, self.n_art_vars
- )
-
# Find the values of each variable at optimal solution
- return self.interpret_tableau(self.tableau, self.col_titles)
+ return self.interpret_tableau()
- row_idx, col_idx = self.find_pivot(self.tableau)
+ row_idx, col_idx = self.find_pivot()
# If there are no more negative values in objective row
if self.stop_iter:
# Delete artificial variable columns and rows. Update attributes
- self.tableau = self.change_stage(self.tableau)
+ self.tableau = self.change_stage()
else:
- self.tableau = self.pivot(self.tableau, row_idx, col_idx)
+ self.tableau = self.pivot(row_idx, col_idx)
return {}
- def interpret_tableau(
- self, tableau: np.ndarray, col_titles: list[str]
- ) -> dict[str, float]:
+ def interpret_tableau(self) -> dict[str, float]:
"""Given the final tableau, add the corresponding values of the basic
decision variables to the `output_dict`
- >>> tableau = np.array([
+ >>> Tableau(np.array([
... [0,0,0.875,0.375,5],
... [0,1,0.375,-0.125,1],
... [1,0,-0.125,0.375,1]
- ... ])
- >>> t = Tableau(tableau, 2)
- >>> t.interpret_tableau(tableau, ["x1", "x2", "s1", "s2", "RHS"])
+ ... ]),2, 0).interpret_tableau()
{'P': 5.0, 'x1': 1.0, 'x2': 1.0}
"""
# P = RHS of final tableau
- output_dict = {"P": abs(tableau[0, -1])}
+ output_dict = {"P": abs(self.tableau[0, -1])}
for i in range(self.n_vars):
- # Gives ids of nonzero entries in the ith column
- nonzero = np.nonzero(tableau[:, i])
+ # Gives indices of nonzero entries in the ith column
+ nonzero = np.nonzero(self.tableau[:, i])
n_nonzero = len(nonzero[0])
- # First entry in the nonzero ids
+ # First entry in the nonzero indices
nonzero_rowidx = nonzero[0][0]
- nonzero_val = tableau[nonzero_rowidx, i]
+ nonzero_val = self.tableau[nonzero_rowidx, i]
# If there is only one nonzero value in column, which is one
- if n_nonzero == nonzero_val == 1:
- rhs_val = tableau[nonzero_rowidx, -1]
- output_dict[col_titles[i]] = rhs_val
-
- # Check for basic variables
- for title in col_titles:
- # Don't add RHS or slack variables to output dict
- if title[0] not in "R-s-a":
- output_dict.setdefault(title, 0)
+ if n_nonzero == 1 and nonzero_val == 1:
+ rhs_val = self.tableau[nonzero_rowidx, -1]
+ output_dict[self.col_titles[i]] = rhs_val
return output_dict
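With the refactor, the tableau, the pivot state, and the column titles all live on the instance, so a caller only supplies the array and the two counts. A usage sketch taken from the new doctests (assuming Tableau is imported from linear_programming.simplex):

import numpy as np
from linear_programming.simplex import Tableau

# Maximise P = x1 + x2 subject to x1 + 3*x2 <= 4 and 3*x1 + x2 <= 4
tableau = np.array([[-1, -1, 0, 0, 0], [1, 3, 1, 0, 4], [3, 1, 0, 1, 4.0]])
print(Tableau(tableau, n_vars=2, n_artificial_vars=0).run_simplex())
# {'P': 2.0, 'x1': 1.0, 'x2': 1.0}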
From 72c7b05caa7e5b109b7b42c796a8af39f99a5100 Mon Sep 17 00:00:00 2001
From: Boris Galochkin
Date: Fri, 18 Aug 2023 04:38:19 +0300
Subject: [PATCH 170/808] Fix `sorts/bucket_sort.py` implementation (#5786)
* Fix sorts/bucket_sort.py
* updating DIRECTORY.md
* Remove unused var in bucket_sort.py
* Fix list index in bucket_sort.py
---------
Co-authored-by: Tianyi Zheng
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 1 +
sorts/bucket_sort.py | 18 ++++++++++++------
2 files changed, 13 insertions(+), 6 deletions(-)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index d4a2bb48511a..e39a0674743a 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -710,6 +710,7 @@
* [2 Hidden Layers Neural Network](neural_network/2_hidden_layers_neural_network.py)
* Activation Functions
* [Exponential Linear Unit](neural_network/activation_functions/exponential_linear_unit.py)
+ * [Leaky Rectified Linear Unit](neural_network/activation_functions/leaky_rectified_linear_unit.py)
* [Back Propagation Neural Network](neural_network/back_propagation_neural_network.py)
* [Convolution Neural Network](neural_network/convolution_neural_network.py)
* [Perceptron](neural_network/perceptron.py)
diff --git a/sorts/bucket_sort.py b/sorts/bucket_sort.py
index 7bcbe61a4526..c016e9e26e73 100644
--- a/sorts/bucket_sort.py
+++ b/sorts/bucket_sort.py
@@ -30,7 +30,7 @@
from __future__ import annotations
-def bucket_sort(my_list: list) -> list:
+def bucket_sort(my_list: list, bucket_count: int = 10) -> list:
"""
>>> data = [-1, 2, -5, 0]
>>> bucket_sort(data) == sorted(data)
@@ -43,21 +43,27 @@ def bucket_sort(my_list: list) -> list:
True
>>> bucket_sort([]) == sorted([])
True
+ >>> data = [-1e10, 1e10]
+ >>> bucket_sort(data) == sorted(data)
+ True
>>> import random
>>> collection = random.sample(range(-50, 50), 50)
>>> bucket_sort(collection) == sorted(collection)
True
"""
- if len(my_list) == 0:
+
+ if len(my_list) == 0 or bucket_count <= 0:
return []
+
min_value, max_value = min(my_list), max(my_list)
- bucket_count = int(max_value - min_value) + 1
+ bucket_size = (max_value - min_value) / bucket_count
buckets: list[list] = [[] for _ in range(bucket_count)]
- for i in my_list:
- buckets[int(i - min_value)].append(i)
+ for val in my_list:
+ index = min(int((val - min_value) / bucket_size), bucket_count - 1)
+ buckets[index].append(val)
- return [v for bucket in buckets for v in sorted(bucket)]
+ return [val for bucket in buckets for val in sorted(bucket)]
if __name__ == "__main__":
From 5f7819e1cd192ecc89a7b7b929db63e045a47b45 Mon Sep 17 00:00:00 2001
From: Caeden Perelli-Harris
Date: Fri, 18 Aug 2023 13:13:38 +0100
Subject: [PATCH 171/808] Fix get top billionaires BROKEN file (#8970)
* updating DIRECTORY.md
* fix(get-top-billionaires): Handle timestamp before epoch
* updating DIRECTORY.md
* revert(pyproject): Re-implement ignore lru_cache
* fix(age): Update age to current year
* fix(doctest): Make years since dynamic
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 1 +
...es.py.disabled => get_top_billionaires.py} | 27 ++++++++++++++-----
2 files changed, 21 insertions(+), 7 deletions(-)
rename web_programming/{get_top_billionaires.py.disabled => get_top_billionaires.py} (72%)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index e39a0674743a..1ff093d88766 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -1221,6 +1221,7 @@
* [Get Amazon Product Data](web_programming/get_amazon_product_data.py)
* [Get Imdb Top 250 Movies Csv](web_programming/get_imdb_top_250_movies_csv.py)
* [Get Imdbtop](web_programming/get_imdbtop.py)
+ * [Get Top Billionaires](web_programming/get_top_billionaires.py)
* [Get Top Hn Posts](web_programming/get_top_hn_posts.py)
* [Get User Tweets](web_programming/get_user_tweets.py)
* [Giphy](web_programming/giphy.py)
diff --git a/web_programming/get_top_billionaires.py.disabled b/web_programming/get_top_billionaires.py
similarity index 72%
rename from web_programming/get_top_billionaires.py.disabled
rename to web_programming/get_top_billionaires.py
index 6a8054e26270..6f986acb9181 100644
--- a/web_programming/get_top_billionaires.py.disabled
+++ b/web_programming/get_top_billionaires.py
@@ -3,7 +3,7 @@
This works for some of us but fails for others.
"""
-from datetime import datetime
+from datetime import UTC, datetime, timedelta
import requests
from rich import box
@@ -20,18 +20,31 @@
)
-def calculate_age(unix_date: int) -> str:
+def calculate_age(unix_date: float) -> str:
"""Calculates age from given unix time format.
Returns:
Age as string
- >>> calculate_age(-657244800000)
- '73'
- >>> calculate_age(46915200000)
- '51'
+ >>> from datetime import datetime, UTC
+ >>> years_since_create = datetime.now(tz=UTC).year - 2022
+ >>> int(calculate_age(-657244800000)) - years_since_create
+ 73
+ >>> int(calculate_age(46915200000)) - years_since_create
+ 51
"""
- birthdate = datetime.fromtimestamp(unix_date / 1000).date()
+ # Convert date from milliseconds to seconds
+ unix_date /= 1000
+
+ if unix_date < 0:
+ # Handle timestamp before epoch
+ epoch = datetime.fromtimestamp(0, tz=UTC)
+ seconds_since_epoch = (datetime.now(tz=UTC) - epoch).seconds
+ birthdate = (
+ epoch - timedelta(seconds=abs(unix_date) - seconds_since_epoch)
+ ).date()
+ else:
+ birthdate = datetime.fromtimestamp(unix_date, tz=UTC).date()
return str(
TODAY.year
- birthdate.year
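The detour through the epoch exists because datetime.fromtimestamp rejects negative timestamps on some platforms. A simpler portable sketch of resolving a pre-epoch date with timedelta arithmetic, using the first doctest's value:

from datetime import UTC, datetime, timedelta

epoch = datetime.fromtimestamp(0, tz=UTC)
birthdate = (epoch + timedelta(seconds=-657244800000 / 1000)).date()
print(birthdate)  # 1949-03-04 -- age 73 as of 2022, matching the doctest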
From 945803f65d79d0277c663a0e043228ed10996a92 Mon Sep 17 00:00:00 2001
From: Caeden Perelli-Harris
Date: Fri, 18 Aug 2023 13:19:25 +0100
Subject: [PATCH 172/808] Unmark fetch anime and play as BROKEN and fix type
errors (#8988)
* updating DIRECTORY.md
* type(fetch-anime-and-play): Fix type errors and re-enable
* updating DIRECTORY.md
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 1 +
...play.py.BROKEN => fetch_anime_and_play.py} | 71 ++++++++++---------
2 files changed, 38 insertions(+), 34 deletions(-)
rename web_programming/{fetch_anime_and_play.py.BROKEN => fetch_anime_and_play.py} (70%)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 1ff093d88766..6af4ead56ebd 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -1213,6 +1213,7 @@
* [Daily Horoscope](web_programming/daily_horoscope.py)
* [Download Images From Google Query](web_programming/download_images_from_google_query.py)
* [Emails From Url](web_programming/emails_from_url.py)
+ * [Fetch Anime And Play](web_programming/fetch_anime_and_play.py)
* [Fetch Bbc News](web_programming/fetch_bbc_news.py)
* [Fetch Github Info](web_programming/fetch_github_info.py)
* [Fetch Jobs](web_programming/fetch_jobs.py)
diff --git a/web_programming/fetch_anime_and_play.py.BROKEN b/web_programming/fetch_anime_and_play.py
similarity index 70%
rename from web_programming/fetch_anime_and_play.py.BROKEN
rename to web_programming/fetch_anime_and_play.py
index 3bd4f704dd8d..366807785e85 100644
--- a/web_programming/fetch_anime_and_play.py.BROKEN
+++ b/web_programming/fetch_anime_and_play.py
@@ -1,7 +1,5 @@
-from xml.dom import NotFoundErr
-
import requests
-from bs4 import BeautifulSoup, NavigableString
+from bs4 import BeautifulSoup, NavigableString, Tag
from fake_useragent import UserAgent
BASE_URL = "https://ww1.gogoanime2.org"
@@ -41,25 +39,23 @@ def search_scraper(anime_name: str) -> list:
# get list of anime
anime_ul = soup.find("ul", {"class": "items"})
+ if anime_ul is None or isinstance(anime_ul, NavigableString):
+ msg = f"Could not find and anime with name {anime_name}"
+ raise ValueError(msg)
anime_li = anime_ul.children
# for each anime, insert to list. the name and url.
anime_list = []
for anime in anime_li:
- if not isinstance(anime, NavigableString):
- try:
- anime_url, anime_title = (
- anime.find("a")["href"],
- anime.find("a")["title"],
- )
- anime_list.append(
- {
- "title": anime_title,
- "url": anime_url,
- }
- )
- except (NotFoundErr, KeyError):
- pass
+ if isinstance(anime, Tag):
+ anime_url = anime.find("a")
+ if anime_url is None or isinstance(anime_url, NavigableString):
+ continue
+ anime_title = anime.find("a")
+ if anime_title is None or isinstance(anime_title, NavigableString):
+ continue
+
+ anime_list.append({"title": anime_title["title"], "url": anime_url["href"]})
return anime_list
@@ -93,22 +89,24 @@ def search_anime_episode_list(episode_endpoint: str) -> list:
# With this id. get the episode list.
episode_page_ul = soup.find("ul", {"id": "episode_related"})
+ if episode_page_ul is None or isinstance(episode_page_ul, NavigableString):
+ msg = f"Could not find any anime eposiodes with name {anime_name}"
+ raise ValueError(msg)
episode_page_li = episode_page_ul.children
episode_list = []
for episode in episode_page_li:
- try:
- if not isinstance(episode, NavigableString):
- episode_list.append(
- {
- "title": episode.find("div", {"class": "name"}).text.replace(
- " ", ""
- ),
- "url": episode.find("a")["href"],
- }
- )
- except (KeyError, NotFoundErr):
- pass
+ if isinstance(episode, Tag):
+ url = episode.find("a")
+ if url is None or isinstance(url, NavigableString):
+ continue
+ title = episode.find("div", {"class": "name"})
+ if title is None or isinstance(title, NavigableString):
+ continue
+
+ episode_list.append(
+ {"title": title.text.replace(" ", ""), "url": url["href"]}
+ )
return episode_list
@@ -140,11 +138,16 @@ def get_anime_episode(episode_endpoint: str) -> list:
soup = BeautifulSoup(response.text, "html.parser")
- try:
- episode_url = soup.find("iframe", {"id": "playerframe"})["src"]
- download_url = episode_url.replace("/embed/", "/playlist/") + ".m3u8"
- except (KeyError, NotFoundErr) as e:
- raise e
+ url = soup.find("iframe", {"id": "playerframe"})
+ if url is None or isinstance(url, NavigableString):
+ msg = f"Could not find url and download url from {episode_endpoint}"
+ raise RuntimeError(msg)
+
+ episode_url = url["src"]
+ if not isinstance(episode_url, str):
+ msg = f"Could not find url and download url from {episode_endpoint}"
+ raise RuntimeError(msg)
+ download_url = episode_url.replace("/embed/", "/playlist/") + ".m3u8"
return [f"{BASE_URL}{episode_url}", f"{BASE_URL}{download_url}"]
From e887c14f1252cd7de3d99ef0553c448c8c9711df Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Fri, 18 Aug 2023 13:53:17 -0700
Subject: [PATCH 173/808] Fix continued_fraction.py to work for negative
numbers (#8985)
* Add doctests to continued_fraction.py for 0 and neg nums
* Fix continued_fraction.py to work for negative nums
Fix continued_fraction.py to work for negative nums by replacing int() call with floor()
* Move comment in doctest
---
maths/continued_fraction.py | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/maths/continued_fraction.py b/maths/continued_fraction.py
index 25ff649db77a..04ff0b6ff0d2 100644
--- a/maths/continued_fraction.py
+++ b/maths/continued_fraction.py
@@ -6,6 +6,7 @@
from fractions import Fraction
+from math import floor
def continued_fraction(num: Fraction) -> list[int]:
@@ -29,11 +30,17 @@ def continued_fraction(num: Fraction) -> list[int]:
[0, 2, 4]
>>> continued_fraction(Fraction("415/93"))
[4, 2, 6, 7]
+ >>> continued_fraction(Fraction(0))
+ [0]
+ >>> continued_fraction(Fraction(0.75))
+ [0, 1, 3]
+ >>> continued_fraction(Fraction("-2.25")) # -2.25 = -3 + 0.75
+ [-3, 1, 3]
"""
numerator, denominator = num.as_integer_ratio()
continued_fraction_list: list[int] = []
while True:
- integer_part = int(numerator / denominator)
+ integer_part = floor(numerator / denominator)
continued_fraction_list.append(integer_part)
numerator -= integer_part * denominator
if numerator == 0:
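The whole fix is int() versus floor(): int truncates toward zero, floor rounds toward negative infinity, and only the latter leaves a non-negative remainder for negative fractions:

from math import floor

print(int(-9 / 4))    # -2 (truncates toward zero)
print(floor(-9 / 4))  # -3 (rounds down), so -2.25 = -3 + 0.75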
From 5ecb6baef8bf52f9bb99a1bb7cec4899b6df7ab4 Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Sun, 20 Aug 2023 05:36:00 -0700
Subject: [PATCH 174/808] Move and reimplement `convert_number_to_words.py`
(#8998)
* Move and reimplement convert_number_to_words.py
- Move convert_number_to_words.py from web_programming/ to conversions/
- Reimplement the algorithm from scratch because the logic was very
opaque and too heavily nested
- Add support for the Western numbering system (both short and long)
because the original implementation only supported the Indian
numbering system
- Add extensive doctests and error handling
* updating DIRECTORY.md
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 2 +-
conversions/convert_number_to_words.py | 205 +++++++++++++++++++++
web_programming/convert_number_to_words.py | 109 -----------
3 files changed, 206 insertions(+), 110 deletions(-)
create mode 100644 conversions/convert_number_to_words.py
delete mode 100644 web_programming/convert_number_to_words.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 6af4ead56ebd..653c1831d820 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -143,6 +143,7 @@
* [Binary To Decimal](conversions/binary_to_decimal.py)
* [Binary To Hexadecimal](conversions/binary_to_hexadecimal.py)
* [Binary To Octal](conversions/binary_to_octal.py)
+ * [Convert Number To Words](conversions/convert_number_to_words.py)
* [Decimal To Any](conversions/decimal_to_any.py)
* [Decimal To Binary](conversions/decimal_to_binary.py)
* [Decimal To Binary Recursion](conversions/decimal_to_binary_recursion.py)
@@ -1203,7 +1204,6 @@
## Web Programming
* [Co2 Emission](web_programming/co2_emission.py)
- * [Convert Number To Words](web_programming/convert_number_to_words.py)
* [Covid Stats Via Xpath](web_programming/covid_stats_via_xpath.py)
* [Crawl Google Results](web_programming/crawl_google_results.py)
* [Crawl Google Scholar Citation](web_programming/crawl_google_scholar_citation.py)
diff --git a/conversions/convert_number_to_words.py b/conversions/convert_number_to_words.py
new file mode 100644
index 000000000000..0e4405319f1f
--- /dev/null
+++ b/conversions/convert_number_to_words.py
@@ -0,0 +1,205 @@
+from enum import Enum
+from typing import ClassVar, Literal
+
+
+class NumberingSystem(Enum):
+ SHORT = (
+ (15, "quadrillion"),
+ (12, "trillion"),
+ (9, "billion"),
+ (6, "million"),
+ (3, "thousand"),
+ (2, "hundred"),
+ )
+
+ LONG = (
+ (15, "billiard"),
+ (9, "milliard"),
+ (6, "million"),
+ (3, "thousand"),
+ (2, "hundred"),
+ )
+
+ INDIAN = (
+ (14, "crore crore"),
+ (12, "lakh crore"),
+ (7, "crore"),
+ (5, "lakh"),
+ (3, "thousand"),
+ (2, "hundred"),
+ )
+
+ @classmethod
+ def max_value(cls, system: str) -> int:
+ """
+ Gets the max value supported by the given number system.
+
+ >>> NumberingSystem.max_value("short") == 10**18 - 1
+ True
+ >>> NumberingSystem.max_value("long") == 10**21 - 1
+ True
+ >>> NumberingSystem.max_value("indian") == 10**19 - 1
+ True
+ """
+ match (system_enum := cls[system.upper()]):
+ case cls.SHORT:
+ max_exp = system_enum.value[0][0] + 3
+ case cls.LONG:
+ max_exp = system_enum.value[0][0] + 6
+ case cls.INDIAN:
+ max_exp = 19
+ case _:
+ raise ValueError("Invalid numbering system")
+ return 10**max_exp - 1
+
+
+class NumberWords(Enum):
+ ONES: ClassVar = {
+ 0: "",
+ 1: "one",
+ 2: "two",
+ 3: "three",
+ 4: "four",
+ 5: "five",
+ 6: "six",
+ 7: "seven",
+ 8: "eight",
+ 9: "nine",
+ }
+
+ TEENS: ClassVar = {
+ 0: "ten",
+ 1: "eleven",
+ 2: "twelve",
+ 3: "thirteen",
+ 4: "fourteen",
+ 5: "fifteen",
+ 6: "sixteen",
+ 7: "seventeen",
+ 8: "eighteen",
+ 9: "nineteen",
+ }
+
+ TENS: ClassVar = {
+ 2: "twenty",
+ 3: "thirty",
+ 4: "forty",
+ 5: "fifty",
+ 6: "sixty",
+ 7: "seventy",
+ 8: "eighty",
+ 9: "ninety",
+ }
+
+
+def convert_small_number(num: int) -> str:
+ """
+ Converts small, non-negative integers with irregular constructions in English (i.e.,
+ numbers under 100) into words.
+
+ >>> convert_small_number(0)
+ 'zero'
+ >>> convert_small_number(5)
+ 'five'
+ >>> convert_small_number(10)
+ 'ten'
+ >>> convert_small_number(15)
+ 'fifteen'
+ >>> convert_small_number(20)
+ 'twenty'
+ >>> convert_small_number(25)
+ 'twenty-five'
+ >>> convert_small_number(-1)
+ Traceback (most recent call last):
+ ...
+ ValueError: This function only accepts non-negative integers
+ >>> convert_small_number(123)
+ Traceback (most recent call last):
+ ...
+ ValueError: This function only converts numbers less than 100
+ """
+ if num < 0:
+ raise ValueError("This function only accepts non-negative integers")
+ if num >= 100:
+ raise ValueError("This function only converts numbers less than 100")
+ tens, ones = divmod(num, 10)
+ if tens == 0:
+ return NumberWords.ONES.value[ones] or "zero"
+ if tens == 1:
+ return NumberWords.TEENS.value[ones]
+ return (
+ NumberWords.TENS.value[tens]
+ + ("-" if NumberWords.ONES.value[ones] else "")
+ + NumberWords.ONES.value[ones]
+ )
+
+
+def convert_number(
+ num: int, system: Literal["short", "long", "indian"] = "short"
+) -> str:
+ """
+ Converts an integer to English words.
+
+ :param num: The integer to be converted
+ :param system: The numbering system (short, long, or Indian)
+
+ >>> convert_number(0)
+ 'zero'
+ >>> convert_number(1)
+ 'one'
+ >>> convert_number(100)
+ 'one hundred'
+ >>> convert_number(-100)
+ 'negative one hundred'
+ >>> convert_number(123_456_789_012_345) # doctest: +NORMALIZE_WHITESPACE
+ 'one hundred twenty-three trillion four hundred fifty-six billion
+ seven hundred eighty-nine million twelve thousand three hundred forty-five'
+ >>> convert_number(123_456_789_012_345, "long") # doctest: +NORMALIZE_WHITESPACE
+ 'one hundred twenty-three thousand four hundred fifty-six milliard
+ seven hundred eighty-nine million twelve thousand three hundred forty-five'
+ >>> convert_number(12_34_56_78_90_12_345, "indian") # doctest: +NORMALIZE_WHITESPACE
+ 'one crore crore twenty-three lakh crore
+ forty-five thousand six hundred seventy-eight crore
+ ninety lakh twelve thousand three hundred forty-five'
+ >>> convert_number(10**18)
+ Traceback (most recent call last):
+ ...
+ ValueError: Input number is too large
+ >>> convert_number(10**21, "long")
+ Traceback (most recent call last):
+ ...
+ ValueError: Input number is too large
+ >>> convert_number(10**19, "indian")
+ Traceback (most recent call last):
+ ...
+ ValueError: Input number is too large
+ """
+ word_groups = []
+
+ if num < 0:
+ word_groups.append("negative")
+ num *= -1
+
+ if num > NumberingSystem.max_value(system):
+ raise ValueError("Input number is too large")
+
+ for power, unit in NumberingSystem[system.upper()].value:
+ digit_group, num = divmod(num, 10**power)
+ if digit_group > 0:
+ word_group = (
+ convert_number(digit_group, system)
+ if digit_group >= 100
+ else convert_small_number(digit_group)
+ )
+ word_groups.append(f"{word_group} {unit}")
+ if num > 0 or not word_groups: # word_groups is only empty if input num was 0
+ word_groups.append(convert_small_number(num))
+ return " ".join(word_groups)
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
+
+ print(f"{convert_number(123456789) = }")
diff --git a/web_programming/convert_number_to_words.py b/web_programming/convert_number_to_words.py
deleted file mode 100644
index dac9e3e38e7c..000000000000
--- a/web_programming/convert_number_to_words.py
+++ /dev/null
@@ -1,109 +0,0 @@
-import math
-
-
-def convert(number: int) -> str:
- """
- Given a number return the number in words.
-
- >>> convert(123)
- 'OneHundred,TwentyThree'
- """
- if number == 0:
- words = "Zero"
- return words
- else:
- digits = math.log10(number)
- digits = digits + 1
- singles = {}
- singles[0] = ""
- singles[1] = "One"
- singles[2] = "Two"
- singles[3] = "Three"
- singles[4] = "Four"
- singles[5] = "Five"
- singles[6] = "Six"
- singles[7] = "Seven"
- singles[8] = "Eight"
- singles[9] = "Nine"
-
- doubles = {}
- doubles[0] = ""
- doubles[2] = "Twenty"
- doubles[3] = "Thirty"
- doubles[4] = "Forty"
- doubles[5] = "Fifty"
- doubles[6] = "Sixty"
- doubles[7] = "Seventy"
- doubles[8] = "Eighty"
- doubles[9] = "Ninety"
-
- teens = {}
- teens[0] = "Ten"
- teens[1] = "Eleven"
- teens[2] = "Twelve"
- teens[3] = "Thirteen"
- teens[4] = "Fourteen"
- teens[5] = "Fifteen"
- teens[6] = "Sixteen"
- teens[7] = "Seventeen"
- teens[8] = "Eighteen"
- teens[9] = "Nineteen"
-
- placevalue = {}
- placevalue[2] = "Hundred,"
- placevalue[3] = "Thousand,"
- placevalue[5] = "Lakh,"
- placevalue[7] = "Crore,"
-
- temp_num = number
- words = ""
- counter = 0
- digits = int(digits)
- while counter < digits:
- current = temp_num % 10
- if counter % 2 == 0:
- addition = ""
- if counter in placevalue and current != 0:
- addition = placevalue[counter]
- if counter == 2:
- words = singles[current] + addition + words
- elif counter == 0:
- if ((temp_num % 100) // 10) == 1:
- words = teens[current] + addition + words
- temp_num = temp_num // 10
- counter += 1
- else:
- words = singles[current] + addition + words
-
- else:
- words = doubles[current] + addition + words
-
- else:
- if counter == 1:
- if current == 1:
- words = teens[number % 10] + words
- else:
- addition = ""
- if counter in placevalue:
- addition = placevalue[counter]
- words = doubles[current] + addition + words
- else:
- addition = ""
- if counter in placevalue:
- if current != 0 and ((temp_num % 100) // 10) != 0:
- addition = placevalue[counter]
- if ((temp_num % 100) // 10) == 1:
- words = teens[current] + addition + words
- temp_num = temp_num // 10
- counter += 1
- else:
- words = singles[current] + addition + words
- counter += 1
- temp_num = temp_num // 10
- return words
-
-
-if __name__ == "__main__":
- import doctest
-
- doctest.testmod()
From 062957ef27fcaaf59753e3739052928ec37f220e Mon Sep 17 00:00:00 2001
From: Bama Charan Chhandogi
Date: Sun, 20 Aug 2023 18:10:23 +0530
Subject: [PATCH 175/808] Octal to Binary Convert (#8949)
* Octal to Binary Convert
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* mention return type
* code scratch
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* mentioned return type
* remove comment
* added documentation and some test cases
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* add another test case
* fixes documentation
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Documentation and test cases added
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* documentation problem solved
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* error in exit 1
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Apply suggestions from code review
---------
Co-authored-by: BamaCharanChhandogi
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Tianyi Zheng
---
conversions/octal_to_binary.py | 54 ++++++++++++++++++++++++++++++++++
1 file changed, 54 insertions(+)
create mode 100644 conversions/octal_to_binary.py
diff --git a/conversions/octal_to_binary.py b/conversions/octal_to_binary.py
new file mode 100644
index 000000000000..84e1e85f33ca
--- /dev/null
+++ b/conversions/octal_to_binary.py
@@ -0,0 +1,54 @@
+"""
+* Author: Bama Charan Chhandogi (https://github.com/BamaCharanChhandogi)
+* Description: Convert an Octal number to Binary.
+
+References for better understanding:
+https://en.wikipedia.org/wiki/Binary_number
+https://en.wikipedia.org/wiki/Octal
+"""
+
+
+def octal_to_binary(octal_number: str) -> str:
+ """
+ Convert an Octal number to Binary.
+
+ >>> octal_to_binary("17")
+ '001111'
+ >>> octal_to_binary("7")
+ '111'
+ >>> octal_to_binary("Av")
+ Traceback (most recent call last):
+ ...
+ ValueError: Non-octal value was passed to the function
+ >>> octal_to_binary("@#")
+ Traceback (most recent call last):
+ ...
+ ValueError: Non-octal value was passed to the function
+ >>> octal_to_binary("")
+ Traceback (most recent call last):
+ ...
+ ValueError: Empty string was passed to the function
+ """
+ if not octal_number:
+ raise ValueError("Empty string was passed to the function")
+
+ binary_number = ""
+ octal_digits = "01234567"
+ for digit in octal_number:
+ if digit not in octal_digits:
+ raise ValueError("Non-octal value was passed to the function")
+
+ binary_digit = ""
+ value = int(digit)
+ for _ in range(3):
+ binary_digit = str(value % 2) + binary_digit
+ value //= 2
+ binary_number += binary_digit
+
+ return binary_number
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
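Each octal digit expands to exactly three bits, which is why the doctest for "17" shows the zero-padded '001111'. An illustrative cross-check against Python's built-ins (not part of the patch):

for octal in ("17", "7", "755"):
    # Per-digit expansion, as octal_to_binary does it: three bits per digit
    per_digit = "".join(format(int(digit), "03b") for digit in octal)
    # Built-in conversion drops the left zero padding
    built_in = format(int(octal, 8), "b")
    assert (per_digit.lstrip("0") or "0") == built_in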
From 672e7bde2e5fad38a3bc4038d11a9c343e3667f7 Mon Sep 17 00:00:00 2001
From: Guduly <133545858+Guduly@users.noreply.github.com>
Date: Sun, 20 Aug 2023 18:39:29 -0500
Subject: [PATCH 176/808] Update arc_length.py (#8964)
* Update arc_length.py
Wrote the output of the test case
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update arc_length.py
Added the requested changes
* Update arc_length.py
followed the change request
* Update arc_length.py
followed suggestions
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
maths/arc_length.py | 2 ++
1 file changed, 2 insertions(+)
diff --git a/maths/arc_length.py b/maths/arc_length.py
index 9e87ca38cc7d..4c518f321dc7 100644
--- a/maths/arc_length.py
+++ b/maths/arc_length.py
@@ -7,6 +7,8 @@ def arc_length(angle: int, radius: int) -> float:
3.9269908169872414
>>> arc_length(120, 15)
31.415926535897928
+ >>> arc_length(90, 10)
+ 15.707963267948966
"""
return 2 * pi * radius * (angle / 360)
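The added doctest value follows directly from the formula in the file: a 90° arc of a radius-10 circle is a quarter of the circumference. A one-line check:

from math import pi

assert 2 * pi * 10 * (90 / 360) == 15.707963267948966  # quarter circumference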
From 1984d9717158c89f9acca2b635a373bad7048633 Mon Sep 17 00:00:00 2001
From: Dom <97384583+tosemml@users.noreply.github.com>
Date: Sun, 20 Aug 2023 16:43:09 -0700
Subject: [PATCH 177/808] Refactorings (#8987)
* use np.dot
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* further improvements using array slicing
Co-authored-by: Tianyi Zheng
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Tianyi Zheng
---
arithmetic_analysis/gaussian_elimination.py | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/arithmetic_analysis/gaussian_elimination.py b/arithmetic_analysis/gaussian_elimination.py
index f0f20af8e417..13f509a4f117 100644
--- a/arithmetic_analysis/gaussian_elimination.py
+++ b/arithmetic_analysis/gaussian_elimination.py
@@ -33,10 +33,7 @@ def retroactive_resolution(
x: NDArray[float64] = np.zeros((rows, 1), dtype=float)
for row in reversed(range(rows)):
- total = 0
- for col in range(row + 1, columns):
- total += coefficients[row, col] * x[col]
-
+ total = np.dot(coefficients[row, row + 1 :], x[row + 1 :])
x[row, 0] = (vector[row] - total) / coefficients[row, row]
return x
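The slicing refactor is behavior-preserving: np.dot over the trailing slice of the row computes the same weighted sum the removed loop accumulated. An illustrative check on a small upper-triangular system:

import numpy as np

coefficients = np.array([[2.0, 1.0, -1.0], [0.0, 3.0, 2.0], [0.0, 0.0, 4.0]])
x = np.array([[0.0], [0.0], [2.0]])
row, columns = 0, 3
loop_total = sum(coefficients[row, col] * x[col] for col in range(row + 1, columns))
slice_total = np.dot(coefficients[row, row + 1 :], x[row + 1 :])
assert np.allclose(loop_total, slice_total)  # both equal -2.0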
From 1210559deb60b44cb9f57ce16c9bf6d79c0f443c Mon Sep 17 00:00:00 2001
From: Caeden Perelli-Harris
Date: Mon, 21 Aug 2023 14:25:20 +0100
Subject: [PATCH 178/808] Consolidate decimal to binary iterative and recursive
(#8999)
* updating DIRECTORY.md
* refactor(decimal-to-binary): Consolidate implementations
* updating DIRECTORY.md
* refactor(decimal-to-binary): Rename main and helper recursive
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 1 -
conversions/decimal_to_binary.py | 67 +++++++++++++++++++---
conversions/decimal_to_binary_recursion.py | 53 -----------------
3 files changed, 59 insertions(+), 62 deletions(-)
delete mode 100644 conversions/decimal_to_binary_recursion.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 653c1831d820..dd4404edd364 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -146,7 +146,6 @@
* [Convert Number To Words](conversions/convert_number_to_words.py)
* [Decimal To Any](conversions/decimal_to_any.py)
* [Decimal To Binary](conversions/decimal_to_binary.py)
- * [Decimal To Binary Recursion](conversions/decimal_to_binary_recursion.py)
* [Decimal To Hexadecimal](conversions/decimal_to_hexadecimal.py)
* [Decimal To Octal](conversions/decimal_to_octal.py)
* [Energy Conversions](conversions/energy_conversions.py)
diff --git a/conversions/decimal_to_binary.py b/conversions/decimal_to_binary.py
index 973c47c8af67..cf2b6040ec2a 100644
--- a/conversions/decimal_to_binary.py
+++ b/conversions/decimal_to_binary.py
@@ -1,27 +1,27 @@
"""Convert a Decimal Number to a Binary Number."""
-def decimal_to_binary(num: int) -> str:
+def decimal_to_binary_iterative(num: int) -> str:
"""
Convert an Integer Decimal Number to a Binary Number as str.
- >>> decimal_to_binary(0)
+ >>> decimal_to_binary_iterative(0)
'0b0'
- >>> decimal_to_binary(2)
+ >>> decimal_to_binary_iterative(2)
'0b10'
- >>> decimal_to_binary(7)
+ >>> decimal_to_binary_iterative(7)
'0b111'
- >>> decimal_to_binary(35)
+ >>> decimal_to_binary_iterative(35)
'0b100011'
>>> # negatives work too
- >>> decimal_to_binary(-2)
+ >>> decimal_to_binary_iterative(-2)
'-0b10'
>>> # other floats will error
- >>> decimal_to_binary(16.16) # doctest: +ELLIPSIS
+ >>> decimal_to_binary_iterative(16.16) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TypeError: 'float' object cannot be interpreted as an integer
>>> # strings will error as well
- >>> decimal_to_binary('0xfffff') # doctest: +ELLIPSIS
+ >>> decimal_to_binary_iterative('0xfffff') # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TypeError: 'str' object cannot be interpreted as an integer
@@ -52,7 +52,58 @@ def decimal_to_binary(num: int) -> str:
return "0b" + "".join(str(e) for e in binary)
+def decimal_to_binary_recursive_helper(decimal: int) -> str:
+ """
+ Take a positive integer value and return its binary equivalent.
+ >>> decimal_to_binary_recursive_helper(1000)
+ '1111101000'
+ >>> decimal_to_binary_recursive_helper("72")
+ '1001000'
+ >>> decimal_to_binary_recursive_helper("number")
+ Traceback (most recent call last):
+ ...
+ ValueError: invalid literal for int() with base 10: 'number'
+ """
+ decimal = int(decimal)
+ if decimal in (0, 1): # Exit cases for the recursion
+ return str(decimal)
+ div, mod = divmod(decimal, 2)
+ return decimal_to_binary_recursive_helper(div) + str(mod)
+
+
+def decimal_to_binary_recursive(number: str) -> str:
+ """
+ Take an integer value and raise ValueError for wrong inputs,
+ call the function above and return the output with prefix "0b" & "-0b"
+ for positive and negative integers respectively.
+ >>> decimal_to_binary_recursive(0)
+ '0b0'
+ >>> decimal_to_binary_recursive(40)
+ '0b101000'
+ >>> decimal_to_binary_recursive(-40)
+ '-0b101000'
+ >>> decimal_to_binary_recursive(40.8)
+ Traceback (most recent call last):
+ ...
+ ValueError: Input value is not an integer
+ >>> decimal_to_binary_recursive("forty")
+ Traceback (most recent call last):
+ ...
+ ValueError: Input value is not an integer
+ """
+ number = str(number).strip()
+ if not number:
+ raise ValueError("No input value was provided")
+ negative = "-" if number.startswith("-") else ""
+ number = number.lstrip("-")
+ if not number.isnumeric():
+ raise ValueError("Input value is not an integer")
+ return f"{negative}0b{decimal_to_binary_recursive_helper(int(number))}"
+
+
if __name__ == "__main__":
import doctest
doctest.testmod()
+
+ print(decimal_to_binary_recursive(input("Input a decimal number: ")))
diff --git a/conversions/decimal_to_binary_recursion.py b/conversions/decimal_to_binary_recursion.py
deleted file mode 100644
index 05833ca670c3..000000000000
--- a/conversions/decimal_to_binary_recursion.py
+++ /dev/null
@@ -1,53 +0,0 @@
-def binary_recursive(decimal: int) -> str:
- """
- Take a positive integer value and return its binary equivalent.
- >>> binary_recursive(1000)
- '1111101000'
- >>> binary_recursive("72")
- '1001000'
- >>> binary_recursive("number")
- Traceback (most recent call last):
- ...
- ValueError: invalid literal for int() with base 10: 'number'
- """
- decimal = int(decimal)
- if decimal in (0, 1): # Exit cases for the recursion
- return str(decimal)
- div, mod = divmod(decimal, 2)
- return binary_recursive(div) + str(mod)
-
-
-def main(number: str) -> str:
- """
- Take an integer value and raise ValueError for wrong inputs,
- call the function above and return the output with prefix "0b" & "-0b"
- for positive and negative integers respectively.
- >>> main(0)
- '0b0'
- >>> main(40)
- '0b101000'
- >>> main(-40)
- '-0b101000'
- >>> main(40.8)
- Traceback (most recent call last):
- ...
- ValueError: Input value is not an integer
- >>> main("forty")
- Traceback (most recent call last):
- ...
- ValueError: Input value is not an integer
- """
- number = str(number).strip()
- if not number:
- raise ValueError("No input value was provided")
- negative = "-" if number.startswith("-") else ""
- number = number.lstrip("-")
- if not number.isnumeric():
- raise ValueError("Input value is not an integer")
- return f"{negative}0b{binary_recursive(int(number))}"
-
-
-if __name__ == "__main__":
- from doctest import testmod
-
- testmod()
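With both strategies now in one module, the recursive path can be exercised on its own; a standalone sketch restating the helper from the diff so it runs outside the repository:

def decimal_to_binary_recursive_helper(decimal: int) -> str:
    decimal = int(decimal)
    if decimal in (0, 1):  # exit cases for the recursion
        return str(decimal)
    div, mod = divmod(decimal, 2)
    return decimal_to_binary_recursive_helper(div) + str(mod)


assert decimal_to_binary_recursive_helper(1000) == "1111101000"
assert "0b" + decimal_to_binary_recursive_helper(40) == bin(40)  # '0b101000'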
From b3dc6ef035f097c9eb91911d8970668049e47d62 Mon Sep 17 00:00:00 2001
From: AmirSoroush
Date: Tue, 22 Aug 2023 02:17:02 +0300
Subject: [PATCH 179/808] fixes #9002; improve insertion_sort algorithm (#9005)
* fixes #9002; improve insertion_sort algorithm
* add type hints to sorts/insertion_sort.py
---
sorts/insertion_sort.py | 24 +++++++++++++++++-------
1 file changed, 17 insertions(+), 7 deletions(-)
diff --git a/sorts/insertion_sort.py b/sorts/insertion_sort.py
index 6d5bb2b46013..f11ddac349a0 100644
--- a/sorts/insertion_sort.py
+++ b/sorts/insertion_sort.py
@@ -13,8 +13,19 @@
python3 insertion_sort.py
"""
+from collections.abc import MutableSequence
+from typing import Any, Protocol, TypeVar
-def insertion_sort(collection: list) -> list:
+
+class Comparable(Protocol):
+ def __lt__(self, other: Any, /) -> bool:
+ ...
+
+
+T = TypeVar("T", bound=Comparable)
+
+
+def insertion_sort(collection: MutableSequence[T]) -> MutableSequence[T]:
"""A pure Python implementation of the insertion sort algorithm
:param collection: some mutable ordered collection with heterogeneous
@@ -40,13 +51,12 @@ def insertion_sort(collection: list) -> list:
True
"""
- for insert_index, insert_value in enumerate(collection[1:]):
- temp_index = insert_index
- while insert_index >= 0 and insert_value < collection[insert_index]:
- collection[insert_index + 1] = collection[insert_index]
+ for insert_index in range(1, len(collection)):
+ insert_value = collection[insert_index]
+ while insert_index > 0 and insert_value < collection[insert_index - 1]:
+ collection[insert_index] = collection[insert_index - 1]
insert_index -= 1
- if insert_index != temp_index:
- collection[insert_index + 1] = insert_value
+ collection[insert_index] = insert_value
return collection
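The rewritten loop shifts larger elements one slot to the right and writes the held value once, rather than enumerating a slice copy. A standalone run of the same loop body (restated from the diff), which also shows why the Comparable bound admits any type supporting "<":

def insertion_sort_sketch(collection):
    for insert_index in range(1, len(collection)):
        insert_value = collection[insert_index]
        while insert_index > 0 and insert_value < collection[insert_index - 1]:
            collection[insert_index] = collection[insert_index - 1]  # shift right
            insert_index -= 1
        collection[insert_index] = insert_value
    return collection


assert insertion_sort_sketch([5, 2, 4, 1]) == [1, 2, 4, 5]
assert insertion_sort_sketch(["pear", "apple", "kiwi"]) == ["apple", "kiwi", "pear"]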
From 04fd5c1b5e7880017d874f4305ca3396f868ee37 Mon Sep 17 00:00:00 2001
From: Caeden Perelli-Harris
Date: Tue, 22 Aug 2023 00:20:51 +0100
Subject: [PATCH 180/808] Create langtons ant algorithm (#8967)
* updating DIRECTORY.md
* feat(cellular_automata): Langtons ant algorithm
* updating DIRECTORY.md
* Update cellular_automata/langtons_ant.py
Co-authored-by: Tianyi Zheng
* Apply suggestions from code review
Co-authored-by: Tianyi Zheng
* fix(langtons-ant): Set funcanimation interval to 1
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: Tianyi Zheng
---
DIRECTORY.md | 1 +
cellular_automata/langtons_ant.py | 106 ++++++++++++++++++++++++++++++
2 files changed, 107 insertions(+)
create mode 100644 cellular_automata/langtons_ant.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index dd4404edd364..866a3084f67b 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -72,6 +72,7 @@
## Cellular Automata
* [Conways Game Of Life](cellular_automata/conways_game_of_life.py)
* [Game Of Life](cellular_automata/game_of_life.py)
+ * [Langtons Ant](cellular_automata/langtons_ant.py)
* [Nagel Schrekenberg](cellular_automata/nagel_schrekenberg.py)
* [One Dimensional](cellular_automata/one_dimensional.py)
* [Wa Tor](cellular_automata/wa_tor.py)
diff --git a/cellular_automata/langtons_ant.py b/cellular_automata/langtons_ant.py
new file mode 100644
index 000000000000..983c626546ad
--- /dev/null
+++ b/cellular_automata/langtons_ant.py
@@ -0,0 +1,106 @@
+"""
+Langton's ant
+
+@ https://en.wikipedia.org/wiki/Langton%27s_ant
+@ https://upload.wikimedia.org/wikipedia/commons/0/09/LangtonsAntAnimated.gif
+"""
+
+from functools import partial
+
+from matplotlib import pyplot as plt
+from matplotlib.animation import FuncAnimation
+
+WIDTH = 80
+HEIGHT = 80
+
+
+class LangtonsAnt:
+ """
+    Represents the main LangtonsAnt algorithm.
+
+ >>> la = LangtonsAnt(2, 2)
+ >>> la.board
+ [[True, True], [True, True]]
+ >>> la.ant_position
+ (1, 1)
+ """
+
+ def __init__(self, width: int, height: int) -> None:
+ # Each square is either True or False where True is white and False is black
+ self.board = [[True] * width for _ in range(height)]
+ self.ant_position: tuple[int, int] = (width // 2, height // 2)
+
+        # Initially pointing left (similar to the Wikipedia image)
+        # (0 = 0° | 1 = 90° | 2 = 180° | 3 = 270°)
+ self.ant_direction: int = 3
+
+ def move_ant(self, axes: plt.Axes | None, display: bool, _frame: int) -> None:
+ """
+ Performs three tasks:
+ 1. The ant turns either clockwise or anti-clockwise according to the colour
+ of the square that it is currently on. If the square is white, the ant
+ turns clockwise, and if the square is black the ant turns anti-clockwise
+ 2. The ant moves one square in the direction that it is currently facing
+ 3. The square the ant was previously on is inverted (White -> Black and
+ Black -> White)
+
+ If display is True, the board will also be displayed on the axes
+
+ >>> la = LangtonsAnt(2, 2)
+ >>> la.move_ant(None, True, 0)
+ >>> la.board
+ [[True, True], [True, False]]
+ >>> la.move_ant(None, True, 0)
+ >>> la.board
+ [[True, False], [True, False]]
+ """
+ directions = {
+ 0: (-1, 0), # 0°
+ 1: (0, 1), # 90°
+ 2: (1, 0), # 180°
+ 3: (0, -1), # 270°
+ }
+ x, y = self.ant_position
+
+ # Turn clockwise or anti-clockwise according to colour of square
+ if self.board[x][y] is True:
+ # The square is white so turn 90° clockwise
+ self.ant_direction = (self.ant_direction + 1) % 4
+ else:
+ # The square is black so turn 90° anti-clockwise
+ self.ant_direction = (self.ant_direction - 1) % 4
+
+ # Move ant
+ move_x, move_y = directions[self.ant_direction]
+ self.ant_position = (x + move_x, y + move_y)
+
+ # Flip colour of square
+ self.board[x][y] = not self.board[x][y]
+
+ if display and axes:
+ # Display the board on the axes
+ axes.get_xaxis().set_ticks([])
+ axes.get_yaxis().set_ticks([])
+ axes.imshow(self.board, cmap="gray", interpolation="nearest")
+
+ def display(self, frames: int = 100_000) -> None:
+ """
+ Displays the board without delay in a matplotlib plot
+ to visually understand and track the ant.
+
+ >>> _ = LangtonsAnt(WIDTH, HEIGHT)
+ """
+ fig, ax = plt.subplots()
+ # Assign animation to a variable to prevent it from getting garbage collected
+ self.animation = FuncAnimation(
+ fig, partial(self.move_ant, ax, True), frames=frames, interval=1
+ )
+ plt.show()
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
+
+ LangtonsAnt(WIDTH, HEIGHT).display()
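For a quick look at the rule without opening a matplotlib window, the same turn/flip/move logic can be run headlessly. A sketch under the assumption that an 80x80 board centred on the ant contains the first 10,000 steps:

width = height = 80
board = [[True] * width for _ in range(height)]  # True is white, False is black
x, y = width // 2, height // 2
direction = 3  # 0 = 0° | 1 = 90° | 2 = 180° | 3 = 270°
moves = {0: (-1, 0), 1: (0, 1), 2: (1, 0), 3: (0, -1)}

for _ in range(10_000):
    # White square: turn clockwise; black square: turn anti-clockwise
    direction = (direction + 1) % 4 if board[x][y] else (direction - 1) % 4
    board[x][y] = not board[x][y]  # flip the square just visited
    move_x, move_y = moves[direction]
    x, y = x + move_x, y + move_y

print("black squares after 10000 steps:", sum(row.count(False) for row in board))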
From c7aeaa3fd8a114ecf9b1e800dfb8cc3cc7a3cbaa Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Tue, 22 Aug 2023 07:42:14 +0200
Subject: [PATCH 181/808] [pre-commit.ci] pre-commit autoupdate (#9006)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.284 → v0.0.285](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.284...v0.0.285)
- [github.com/abravalheri/validate-pyproject: v0.13 → v0.14](https://github.com/abravalheri/validate-pyproject/compare/v0.13...v0.14)
- [github.com/pre-commit/mirrors-mypy: v1.5.0 → v1.5.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.5.0...v1.5.1)
* updating DIRECTORY.md
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
.pre-commit-config.yaml | 6 +++---
DIRECTORY.md | 1 +
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index b08139561639..ad3e0cd87f2e 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,7 @@ repos:
- id: auto-walrus
- repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.0.284
+ rev: v0.0.285
hooks:
- id: ruff
@@ -46,12 +46,12 @@ repos:
pass_filenames: false
- repo: https://github.com/abravalheri/validate-pyproject
- rev: v0.13
+ rev: v0.14
hooks:
- id: validate-pyproject
- repo: https://github.com/pre-commit/mirrors-mypy
- rev: v1.5.0
+ rev: v1.5.1
hooks:
- id: mypy
args:
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 866a3084f67b..ebb164d0496c 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -155,6 +155,7 @@
* [Hexadecimal To Decimal](conversions/hexadecimal_to_decimal.py)
* [Length Conversion](conversions/length_conversion.py)
* [Molecular Chemistry](conversions/molecular_chemistry.py)
+ * [Octal To Binary](conversions/octal_to_binary.py)
* [Octal To Decimal](conversions/octal_to_decimal.py)
* [Prefix Conversions](conversions/prefix_conversions.py)
* [Prefix Conversions String](conversions/prefix_conversions_string.py)
From fceacf977f0e4567d00f297686527ac9b4e5561f Mon Sep 17 00:00:00 2001
From: Caeden Perelli-Harris
Date: Tue, 22 Aug 2023 10:33:47 +0100
Subject: [PATCH 182/808] Fix type errors in permutations (#9007)
* updating DIRECTORY.md
* types(permutations): Rename permute2
* Apply suggestions from code review
Co-authored-by: Tianyi Zheng
* fix(permutations): Call permute_recursive
* fix(permutations): Correct permutations order
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: Tianyi Zheng
---
data_structures/arrays/permutations.py | 28 ++++++++++++--------------
1 file changed, 13 insertions(+), 15 deletions(-)
diff --git a/data_structures/arrays/permutations.py b/data_structures/arrays/permutations.py
index 4558bd8d468a..0f029187b92b 100644
--- a/data_structures/arrays/permutations.py
+++ b/data_structures/arrays/permutations.py
@@ -1,17 +1,16 @@
-def permute(nums: list[int]) -> list[list[int]]:
+def permute_recursive(nums: list[int]) -> list[list[int]]:
"""
Return all permutations.
- >>> from itertools import permutations
- >>> numbers= [1,2,3]
- >>> all(list(nums) in permute(numbers) for nums in permutations(numbers))
- True
+
+ >>> permute_recursive([1, 2, 3])
+ [[3, 2, 1], [2, 3, 1], [1, 3, 2], [3, 1, 2], [2, 1, 3], [1, 2, 3]]
"""
- result = []
- if len(nums) == 1:
- return [nums.copy()]
+ result: list[list[int]] = []
+ if len(nums) == 0:
+ return [[]]
for _ in range(len(nums)):
n = nums.pop(0)
- permutations = permute(nums)
+ permutations = permute_recursive(nums)
for perm in permutations:
perm.append(n)
result.extend(permutations)
@@ -19,15 +18,15 @@ def permute(nums: list[int]) -> list[list[int]]:
return result
-def permute2(nums):
+def permute_backtrack(nums: list[int]) -> list[list[int]]:
"""
Return all permutations of the given list.
- >>> permute2([1, 2, 3])
+ >>> permute_backtrack([1, 2, 3])
[[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 2, 1], [3, 1, 2]]
"""
- def backtrack(start):
+ def backtrack(start: int) -> None:
if start == len(nums) - 1:
output.append(nums[:])
else:
@@ -36,7 +35,7 @@ def backtrack(start):
backtrack(start + 1)
nums[start], nums[i] = nums[i], nums[start] # backtrack
- output = []
+ output: list[list[int]] = []
backtrack(0)
return output
@@ -44,7 +43,6 @@ def backtrack(start):
if __name__ == "__main__":
import doctest
- # use res to print the data in permute2 function
- res = permute2([1, 2, 3])
+ res = permute_backtrack([1, 2, 3])
print(res)
doctest.testmod()
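An illustrative cross-check that the two renamed functions agree up to ordering: both doctest outputs above are exactly the 3! permutations of [1, 2, 3]:

from itertools import permutations

expected = set(permutations([1, 2, 3]))
recursive_out = [[3, 2, 1], [2, 3, 1], [1, 3, 2], [3, 1, 2], [2, 1, 3], [1, 2, 3]]
backtrack_out = [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 2, 1], [3, 1, 2]]
assert {tuple(p) for p in recursive_out} == expected
assert {tuple(p) for p in backtrack_out} == expected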
From 0a9438071ee08121f069c77a5cb662206a4d348f Mon Sep 17 00:00:00 2001
From: Arijit De
Date: Wed, 23 Aug 2023 18:06:59 +0530
Subject: [PATCH 183/808] Updated postfix_evaluation.py to support Unary
operators (#8787)
* Updated postfix_evaluation.py to support Unary operators and floating point numbers. Fixes #8754 and #8724
Also merged evaluate_postfix_notations.py and postfix_evaluation.py into postfix_evaluation.py
Signed-off-by: Arijit De
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Updated postfix_evaluation.py to support Unary operators and floating point numbers. Fixes #8754 and formatted code to pass ruff and black test.
Also merged evaluate_postfix_notations.py and postfix_evaluation.py into postfix_evaluation.py which fixes #8724 and made sure it passes doctest
Signed-off-by: Arijit De
* Fixed return type hinting required by pre commit for evaluate function
Signed-off-by: Arijit De
* Changed line 186 to return only top of stack instead of calling the get_number function as it was converting float values to int, resulting in data loss. Fixes #8754 and #8724
Signed-off-by: Arijit De
* Made the requested changes
Also changed the code to make the evaluate function first convert all the numbers and then process the valid expression.
* Fixes #8754, #8724 Updated postfix_evaluation.py
postfix_evaluation.py now supports Unary operators and floating point numbers.
Also merged evaluate_postfix_notations.py and postfix_evaluation.py into postfix_evaluation.py which fixes #8724. Added a doctest example with unary operator.
* Fixes #8754, #8724 Updated postfix_evaluation.py
postfix_evaluation.py now supports Unary operators and floating point numbers.
Also merged evaluate_postfix_notations.py and postfix_evaluation.py into postfix_evaluation.py which fixes #8724. Added a doctest example with unary operator.
* Fixes #8754, #8724 Updated the parse_token function of postfix_evaluation.py
postfix_evaluation.py now supports Unary operators and floating point numbers.
Also merged evaluate_postfix_notations.py and postfix_evaluation.py into postfix_evaluation.py which fixes #8724. Added a doctest example with unary operator and invalid expression.
* Fixes #8754, #8724 Updated postfix_evaluation.py
postfix_evaluation.py now supports Unary operators and floating point numbers.
Also merged evaluate_postfix_notations.py and postfix_evaluation.py into postfix_evaluation.py which fixes #8724. Added a doctest example with unary operator and invalid expression.
* Update postfix_evaluation.py
* Update postfix_evaluation.py
* Update postfix_evaluation.py
* Update postfix_evaluation.py
* Update postfix_evaluation.py
---------
Signed-off-by: Arijit De
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
.../stacks/evaluate_postfix_notations.py | 52 -----
data_structures/stacks/postfix_evaluation.py | 200 +++++++++++++++---
2 files changed, 166 insertions(+), 86 deletions(-)
delete mode 100644 data_structures/stacks/evaluate_postfix_notations.py
diff --git a/data_structures/stacks/evaluate_postfix_notations.py b/data_structures/stacks/evaluate_postfix_notations.py
deleted file mode 100644
index 51ea353b17de..000000000000
--- a/data_structures/stacks/evaluate_postfix_notations.py
+++ /dev/null
@@ -1,52 +0,0 @@
-"""
-The Reverse Polish Nation also known as Polish postfix notation
-or simply postfix notation.
-https://en.wikipedia.org/wiki/Reverse_Polish_notation
-Classic examples of simple stack implementations
-Valid operators are +, -, *, /.
-Each operand may be an integer or another expression.
-"""
-from __future__ import annotations
-
-from typing import Any
-
-
-def evaluate_postfix(postfix_notation: list) -> int:
- """
- >>> evaluate_postfix(["2", "1", "+", "3", "*"])
- 9
- >>> evaluate_postfix(["4", "13", "5", "/", "+"])
- 6
- >>> evaluate_postfix([])
- 0
- """
- if not postfix_notation:
- return 0
-
- operations = {"+", "-", "*", "/"}
- stack: list[Any] = []
-
- for token in postfix_notation:
- if token in operations:
- b, a = stack.pop(), stack.pop()
- if token == "+":
- stack.append(a + b)
- elif token == "-":
- stack.append(a - b)
- elif token == "*":
- stack.append(a * b)
- else:
- if a * b < 0 and a % b != 0:
- stack.append(a // b + 1)
- else:
- stack.append(a // b)
- else:
- stack.append(int(token))
-
- return stack.pop()
-
-
-if __name__ == "__main__":
- import doctest
-
- doctest.testmod()
diff --git a/data_structures/stacks/postfix_evaluation.py b/data_structures/stacks/postfix_evaluation.py
index 28128f82ec19..03a87b9e0fa3 100644
--- a/data_structures/stacks/postfix_evaluation.py
+++ b/data_structures/stacks/postfix_evaluation.py
@@ -1,4 +1,11 @@
"""
+Reverse Polish notation is also known as Polish postfix notation or simply postfix
+notation.
+https://en.wikipedia.org/wiki/Reverse_Polish_notation
+Classic examples of simple stack implementations.
+Valid operators are +, -, *, /.
+Each operand may be an integer or another expression.
+
Output:
Enter a Postfix Equation (space separated) = 5 6 9 * +
@@ -17,52 +24,177 @@
Result = 59
"""
-import operator as op
+# Defining valid unary operator symbols
+UNARY_OP_SYMBOLS = ("-", "+")
+
+# operators & their respective operation
+OPERATORS = {
+ "^": lambda p, q: p**q,
+ "*": lambda p, q: p * q,
+ "/": lambda p, q: p / q,
+ "+": lambda p, q: p + q,
+ "-": lambda p, q: p - q,
+}
+
+
+def parse_token(token: str | float) -> float | str:
+ """
+    Converts the given token to a float if it represents a number, else returns
+    the token unchanged if it is a valid operator. This function also serves as
+    the validity check: it raises ValueError for anything else.
+
+ Parameters
+ ----------
+ token: The data that needs to be converted to the appropriate operator or number.
+
+ Returns
+ -------
+ float or str
+ Returns a float if `token` is a number or a str if `token` is an operator
+ """
+ if token in OPERATORS:
+ return token
+ try:
+ return float(token)
+ except ValueError:
+ msg = f"{token} is neither a number nor a valid operator"
+ raise ValueError(msg)
+
+
+def evaluate(post_fix: list[str], verbose: bool = False) -> float:
+ """
+ Evaluate postfix expression using a stack.
+ >>> evaluate(["0"])
+ 0.0
+ >>> evaluate(["-0"])
+ -0.0
+ >>> evaluate(["1"])
+ 1.0
+ >>> evaluate(["-1"])
+ -1.0
+ >>> evaluate(["-1.1"])
+ -1.1
+ >>> evaluate(["2", "1", "+", "3", "*"])
+ 9.0
+ >>> evaluate(["2", "1.9", "+", "3", "*"])
+ 11.7
+ >>> evaluate(["2", "-1.9", "+", "3", "*"])
+ 0.30000000000000027
+ >>> evaluate(["4", "13", "5", "/", "+"])
+ 6.6
+ >>> evaluate(["2", "-", "3", "+"])
+ 1.0
+ >>> evaluate(["-4", "5", "*", "6", "-"])
+ -26.0
+ >>> evaluate([])
+ 0
+ >>> evaluate(["4", "-", "6", "7", "/", "9", "8"])
+ Traceback (most recent call last):
+ ...
+ ArithmeticError: Input is not a valid postfix expression
+
+ Parameters
+ ----------
+ post_fix:
+ The postfix expression is tokenized into operators and operands and stored
+ as a Python list
+ verbose:
+ Display stack contents while evaluating the expression if verbose is True
-def solve(post_fix):
+ Returns
+ -------
+ float
+ The evaluated value
+ """
+ if not post_fix:
+ return 0
+ # Checking the list to find out whether the postfix expression is valid
+ valid_expression = [parse_token(token) for token in post_fix]
+ if verbose:
+ # print table header
+ print("Symbol".center(8), "Action".center(12), "Stack", sep=" | ")
+ print("-" * (30 + len(post_fix)))
stack = []
- div = lambda x, y: int(x / y) # noqa: E731 integer division operation
- opr = {
- "^": op.pow,
- "*": op.mul,
- "/": div,
- "+": op.add,
- "-": op.sub,
- } # operators & their respective operation
-
- # print table header
- print("Symbol".center(8), "Action".center(12), "Stack", sep=" | ")
- print("-" * (30 + len(post_fix)))
-
- for x in post_fix:
- if x.isdigit(): # if x in digit
+ for x in valid_expression:
+ if x not in OPERATORS:
stack.append(x) # append x to stack
- # output in tabular format
- print(x.rjust(8), ("push(" + x + ")").ljust(12), ",".join(stack), sep=" | ")
- else:
+ if verbose:
+ # output in tabular format
+ print(
+ f"{x}".rjust(8),
+ f"push({x})".ljust(12),
+ stack,
+ sep=" | ",
+ )
+ continue
+ # If x is operator
+ # If only 1 value is inside the stack and + or - is encountered
+ # then this is unary + or - case
+ if x in UNARY_OP_SYMBOLS and len(stack) < 2:
b = stack.pop() # pop stack
+ if x == "-":
+ b *= -1 # negate b
+ stack.append(b)
+ if verbose:
+ # output in tabular format
+ print(
+ "".rjust(8),
+ f"pop({b})".ljust(12),
+ stack,
+ sep=" | ",
+ )
+ print(
+ str(x).rjust(8),
+ f"push({x}{b})".ljust(12),
+ stack,
+ sep=" | ",
+ )
+ continue
+ b = stack.pop() # pop stack
+ if verbose:
# output in tabular format
- print("".rjust(8), ("pop(" + b + ")").ljust(12), ",".join(stack), sep=" | ")
+ print(
+ "".rjust(8),
+ f"pop({b})".ljust(12),
+ stack,
+ sep=" | ",
+ )
- a = stack.pop() # pop stack
+ a = stack.pop() # pop stack
+ if verbose:
# output in tabular format
- print("".rjust(8), ("pop(" + a + ")").ljust(12), ",".join(stack), sep=" | ")
-
- stack.append(
- str(opr[x](int(a), int(b)))
- ) # evaluate the 2 values popped from stack & push result to stack
+ print(
+ "".rjust(8),
+ f"pop({a})".ljust(12),
+ stack,
+ sep=" | ",
+ )
+ # evaluate the 2 values popped from stack & push result to stack
+ stack.append(OPERATORS[x](a, b)) # type: ignore[index]
+ if verbose:
# output in tabular format
print(
- x.rjust(8),
- ("push(" + a + x + b + ")").ljust(12),
- ",".join(stack),
+ f"{x}".rjust(8),
+ f"push({a}{x}{b})".ljust(12),
+ stack,
sep=" | ",
)
-
- return int(stack[0])
+ # If everything is executed correctly, the stack will contain
+ # only one element which is the result
+ if len(stack) != 1:
+ raise ArithmeticError("Input is not a valid postfix expression")
+ return float(stack[0])
if __name__ == "__main__":
- Postfix = input("\n\nEnter a Postfix Equation (space separated) = ").split(" ")
- print("\n\tResult = ", solve(Postfix))
+ # Create a loop so that the user can evaluate postfix expressions multiple times
+ while True:
+ expression = input("Enter a Postfix Expression (space separated): ").split(" ")
+ prompt = "Do you want to see stack contents while evaluating? [y/N]: "
+ verbose = input(prompt).strip().lower() == "y"
+ output = evaluate(expression, verbose)
+ print("Result = ", output)
+ prompt = "Do you want to enter another expression? [y/N]: "
+ if input(prompt).strip().lower() != "y":
+ break
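A condensed standalone sketch of the merged evaluator's core rules (float arithmetic, true division, and the unary +/- case when fewer than two values are on the stack); the tabular verbose output is omitted, so this is illustrative rather than the module itself:

OPERATORS = {"^": lambda p, q: p**q, "*": lambda p, q: p * q,
             "/": lambda p, q: p / q, "+": lambda p, q: p + q,
             "-": lambda p, q: p - q}


def evaluate_sketch(tokens: list[str]) -> float:
    if not tokens:
        return 0
    stack: list[float] = []
    for token in tokens:
        if token not in OPERATORS:
            stack.append(float(token))  # raises ValueError for bad tokens
        elif token in ("-", "+") and len(stack) < 2:
            stack.append(-stack.pop() if token == "-" else stack.pop())  # unary case
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(OPERATORS[token](a, b))
    if len(stack) != 1:
        raise ArithmeticError("Input is not a valid postfix expression")
    return stack[0]


assert evaluate_sketch(["2", "1", "+", "3", "*"]) == 9.0
assert evaluate_sketch(["2", "-", "3", "+"]) == 1.0  # unary minus on 2
assert evaluate_sketch(["-4", "5", "*", "6", "-"]) == -26.0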
From 421ace81edb0d9af3a173f4ca7e66cc900078c1d Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Tue, 29 Aug 2023 15:18:10 +0200
Subject: [PATCH 184/808] [pre-commit.ci] pre-commit autoupdate (#9013)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.285 → v0.0.286](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.285...v0.0.286)
- [github.com/tox-dev/pyproject-fmt: 0.13.1 → 1.1.0](https://github.com/tox-dev/pyproject-fmt/compare/0.13.1...1.1.0)
* updating DIRECTORY.md
* Fix ruff rules PIE808,PLR1714
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
.pre-commit-config.yaml | 4 ++--
DIRECTORY.md | 1 -
arithmetic_analysis/jacobi_iteration_method.py | 4 ++--
arithmetic_analysis/secant_method.py | 2 +-
backtracking/hamiltonian_cycle.py | 2 +-
backtracking/sudoku.py | 2 +-
bit_manipulation/reverse_bits.py | 2 +-
ciphers/trafid_cipher.py | 2 +-
data_structures/binary_tree/lazy_segment_tree.py | 6 +++---
data_structures/linked_list/circular_linked_list.py | 2 +-
data_structures/linked_list/doubly_linked_list.py | 6 +++---
data_structures/linked_list/is_palindrome.py | 2 +-
data_structures/linked_list/singly_linked_list.py | 8 ++++----
data_structures/stacks/stock_span_problem.py | 2 +-
digital_image_processing/filters/bilateral_filter.py | 4 ++--
digital_image_processing/filters/convolve.py | 4 ++--
.../filters/local_binary_pattern.py | 4 ++--
.../test_digital_image_processing.py | 4 ++--
divide_and_conquer/strassen_matrix_multiplication.py | 4 ++--
dynamic_programming/floyd_warshall.py | 10 +++++-----
hashes/chaos_machine.py | 2 +-
hashes/hamming_code.py | 4 ++--
hashes/sha1.py | 2 +-
hashes/sha256.py | 2 +-
machine_learning/gradient_descent.py | 2 +-
machine_learning/linear_regression.py | 4 ++--
machine_learning/lstm/lstm_prediction.py | 4 ++--
maths/entropy.py | 2 +-
maths/eulers_totient.py | 2 +-
maths/greedy_coin_change.py | 2 +-
maths/persistence.py | 4 ++--
maths/series/harmonic.py | 2 +-
matrix/spiral_print.py | 2 +-
other/magicdiamondpattern.py | 6 +++---
project_euler/problem_070/sol1.py | 2 +-
project_euler/problem_112/sol1.py | 2 +-
quantum/q_full_adder.py | 2 +-
scheduling/highest_response_ratio_next.py | 6 +++---
sorts/counting_sort.py | 2 +-
sorts/cycle_sort.py | 2 +-
sorts/double_sort.py | 4 ++--
sorts/odd_even_transposition_parallel.py | 4 ++--
strings/rabin_karp.py | 2 +-
43 files changed, 70 insertions(+), 71 deletions(-)
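The PIE808 cleanups below are behavior-preserving: range's start parameter defaults to 0, so range(0, n) and range(n) describe the same sequence.

assert list(range(0, 7)) == list(range(7)) == [0, 1, 2, 3, 4, 5, 6]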
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index ad3e0cd87f2e..5c4e8579e116 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,7 @@ repos:
- id: auto-walrus
- repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.0.285
+ rev: v0.0.286
hooks:
- id: ruff
@@ -33,7 +33,7 @@ repos:
- tomli
- repo: https://github.com/tox-dev/pyproject-fmt
- rev: "0.13.1"
+ rev: "1.1.0"
hooks:
- id: pyproject-fmt
diff --git a/DIRECTORY.md b/DIRECTORY.md
index ebb164d0496c..43da91cb818e 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -245,7 +245,6 @@
* Stacks
* [Balanced Parentheses](data_structures/stacks/balanced_parentheses.py)
* [Dijkstras Two Stack Algorithm](data_structures/stacks/dijkstras_two_stack_algorithm.py)
- * [Evaluate Postfix Notations](data_structures/stacks/evaluate_postfix_notations.py)
* [Infix To Postfix Conversion](data_structures/stacks/infix_to_postfix_conversion.py)
* [Infix To Prefix Conversion](data_structures/stacks/infix_to_prefix_conversion.py)
* [Next Greater Element](data_structures/stacks/next_greater_element.py)
diff --git a/arithmetic_analysis/jacobi_iteration_method.py b/arithmetic_analysis/jacobi_iteration_method.py
index 17edf4bf4b8b..dba8a9ff44d3 100644
--- a/arithmetic_analysis/jacobi_iteration_method.py
+++ b/arithmetic_analysis/jacobi_iteration_method.py
@@ -152,9 +152,9 @@ def strictly_diagonally_dominant(table: NDArray[float64]) -> bool:
is_diagonally_dominant = True
- for i in range(0, rows):
+ for i in range(rows):
total = 0
- for j in range(0, cols - 1):
+ for j in range(cols - 1):
if i == j:
continue
else:
diff --git a/arithmetic_analysis/secant_method.py b/arithmetic_analysis/secant_method.py
index d28a46206d40..d39cb0ff30ef 100644
--- a/arithmetic_analysis/secant_method.py
+++ b/arithmetic_analysis/secant_method.py
@@ -20,7 +20,7 @@ def secant_method(lower_bound: float, upper_bound: float, repeats: int) -> float
"""
x0 = lower_bound
x1 = upper_bound
- for _ in range(0, repeats):
+ for _ in range(repeats):
x0, x1 = x1, x1 - (f(x1) * (x1 - x0)) / (f(x1) - f(x0))
return x1
diff --git a/backtracking/hamiltonian_cycle.py b/backtracking/hamiltonian_cycle.py
index 4a4156d70b32..e9916f83f861 100644
--- a/backtracking/hamiltonian_cycle.py
+++ b/backtracking/hamiltonian_cycle.py
@@ -95,7 +95,7 @@ def util_hamilton_cycle(graph: list[list[int]], path: list[int], curr_ind: int)
return graph[path[curr_ind - 1]][path[0]] == 1
# Recursive Step
- for next_ver in range(0, len(graph)):
+ for next_ver in range(len(graph)):
if valid_connection(graph, next_ver, curr_ind, path):
# Insert current vertex into path as next transition
path[curr_ind] = next_ver
diff --git a/backtracking/sudoku.py b/backtracking/sudoku.py
index 698dedcc2125..6e4e3e8780f2 100644
--- a/backtracking/sudoku.py
+++ b/backtracking/sudoku.py
@@ -48,7 +48,7 @@ def is_safe(grid: Matrix, row: int, column: int, n: int) -> bool:
is found) else returns True if it is 'safe'
"""
for i in range(9):
- if grid[row][i] == n or grid[i][column] == n:
+ if n in {grid[row][i], grid[i][column]}:
return False
for i in range(3):
diff --git a/bit_manipulation/reverse_bits.py b/bit_manipulation/reverse_bits.py
index a8c77c11bfdd..74b4f2563234 100644
--- a/bit_manipulation/reverse_bits.py
+++ b/bit_manipulation/reverse_bits.py
@@ -20,7 +20,7 @@ def get_reverse_bit_string(number: int) -> str:
)
raise TypeError(msg)
bit_string = ""
- for _ in range(0, 32):
+ for _ in range(32):
bit_string += str(number % 2)
number = number >> 1
return bit_string
diff --git a/ciphers/trafid_cipher.py b/ciphers/trafid_cipher.py
index 108ac652f0e4..8aa2263ca5ac 100644
--- a/ciphers/trafid_cipher.py
+++ b/ciphers/trafid_cipher.py
@@ -119,7 +119,7 @@ def decrypt_message(
for i in range(0, len(message) + 1, period):
a, b, c = __decrypt_part(message[i : i + period], character_to_number)
- for j in range(0, len(a)):
+ for j in range(len(a)):
decrypted_numeric.append(a[j] + b[j] + c[j])
for each in decrypted_numeric:
diff --git a/data_structures/binary_tree/lazy_segment_tree.py b/data_structures/binary_tree/lazy_segment_tree.py
index 050dfe0a6f2f..c26b0619380c 100644
--- a/data_structures/binary_tree/lazy_segment_tree.py
+++ b/data_structures/binary_tree/lazy_segment_tree.py
@@ -7,10 +7,10 @@ class SegmentTree:
def __init__(self, size: int) -> None:
self.size = size
# approximate the overall size of segment tree with given value
- self.segment_tree = [0 for i in range(0, 4 * size)]
+ self.segment_tree = [0 for i in range(4 * size)]
# create array to store lazy update
- self.lazy = [0 for i in range(0, 4 * size)]
- self.flag = [0 for i in range(0, 4 * size)] # flag for lazy update
+ self.lazy = [0 for i in range(4 * size)]
+ self.flag = [0 for i in range(4 * size)] # flag for lazy update
def left(self, idx: int) -> int:
"""
diff --git a/data_structures/linked_list/circular_linked_list.py b/data_structures/linked_list/circular_linked_list.py
index 325d91026137..d9544f4263a6 100644
--- a/data_structures/linked_list/circular_linked_list.py
+++ b/data_structures/linked_list/circular_linked_list.py
@@ -125,7 +125,7 @@ def test_circular_linked_list() -> None:
circular_linked_list.insert_tail(6)
assert str(circular_linked_list) == "->".join(str(i) for i in range(1, 7))
circular_linked_list.insert_head(0)
- assert str(circular_linked_list) == "->".join(str(i) for i in range(0, 7))
+ assert str(circular_linked_list) == "->".join(str(i) for i in range(7))
assert circular_linked_list.delete_front() == 0
assert circular_linked_list.delete_tail() == 6
diff --git a/data_structures/linked_list/doubly_linked_list.py b/data_structures/linked_list/doubly_linked_list.py
index 1a6c48191c4e..bd3445f9f6c5 100644
--- a/data_structures/linked_list/doubly_linked_list.py
+++ b/data_structures/linked_list/doubly_linked_list.py
@@ -98,7 +98,7 @@ def insert_at_nth(self, index: int, data):
self.tail = new_node
else:
temp = self.head
- for _ in range(0, index):
+ for _ in range(index):
temp = temp.next
temp.previous.next = new_node
new_node.previous = temp.previous
@@ -149,7 +149,7 @@ def delete_at_nth(self, index: int):
self.tail.next = None
else:
temp = self.head
- for _ in range(0, index):
+ for _ in range(index):
temp = temp.next
delete_node = temp
temp.next.previous = temp.previous
@@ -215,7 +215,7 @@ def test_doubly_linked_list() -> None:
linked_list.insert_at_head(0)
linked_list.insert_at_tail(11)
- assert str(linked_list) == "->".join(str(i) for i in range(0, 12))
+ assert str(linked_list) == "->".join(str(i) for i in range(12))
assert linked_list.delete_head() == 0
assert linked_list.delete_at_nth(9) == 10
diff --git a/data_structures/linked_list/is_palindrome.py b/data_structures/linked_list/is_palindrome.py
index ec19e99f78c0..d540fb69f36b 100644
--- a/data_structures/linked_list/is_palindrome.py
+++ b/data_structures/linked_list/is_palindrome.py
@@ -68,7 +68,7 @@ def is_palindrome_dict(head):
middle += 1
else:
step = 0
- for i in range(0, len(v)):
+ for i in range(len(v)):
if v[i] + v[len(v) - 1 - step] != checksum:
return False
step += 1
diff --git a/data_structures/linked_list/singly_linked_list.py b/data_structures/linked_list/singly_linked_list.py
index 890e21c9b404..f4b2ddce12d7 100644
--- a/data_structures/linked_list/singly_linked_list.py
+++ b/data_structures/linked_list/singly_linked_list.py
@@ -370,7 +370,7 @@ def test_singly_linked_list() -> None:
linked_list.insert_head(0)
linked_list.insert_tail(11)
- assert str(linked_list) == "->".join(str(i) for i in range(0, 12))
+ assert str(linked_list) == "->".join(str(i) for i in range(12))
assert linked_list.delete_head() == 0
assert linked_list.delete_nth(9) == 10
@@ -378,11 +378,11 @@ def test_singly_linked_list() -> None:
assert len(linked_list) == 9
assert str(linked_list) == "->".join(str(i) for i in range(1, 10))
- assert all(linked_list[i] == i + 1 for i in range(0, 9)) is True
+ assert all(linked_list[i] == i + 1 for i in range(9)) is True
- for i in range(0, 9):
+ for i in range(9):
linked_list[i] = -i
- assert all(linked_list[i] == -i for i in range(0, 9)) is True
+ assert all(linked_list[i] == -i for i in range(9)) is True
linked_list.reverse()
assert str(linked_list) == "->".join(str(i) for i in range(-8, 1))
diff --git a/data_structures/stacks/stock_span_problem.py b/data_structures/stacks/stock_span_problem.py
index de423c1ebf66..5efe58d25798 100644
--- a/data_structures/stacks/stock_span_problem.py
+++ b/data_structures/stacks/stock_span_problem.py
@@ -36,7 +36,7 @@ def calculation_span(price, s):
# A utility function to print elements of array
def print_array(arr, n):
- for i in range(0, n):
+ for i in range(n):
print(arr[i], end=" ")
diff --git a/digital_image_processing/filters/bilateral_filter.py b/digital_image_processing/filters/bilateral_filter.py
index 565da73f6b0e..199ac4d9939a 100644
--- a/digital_image_processing/filters/bilateral_filter.py
+++ b/digital_image_processing/filters/bilateral_filter.py
@@ -31,8 +31,8 @@ def get_slice(img: np.ndarray, x: int, y: int, kernel_size: int) -> np.ndarray:
def get_gauss_kernel(kernel_size: int, spatial_variance: float) -> np.ndarray:
# Creates a gaussian kernel of given dimension.
arr = np.zeros((kernel_size, kernel_size))
- for i in range(0, kernel_size):
- for j in range(0, kernel_size):
+ for i in range(kernel_size):
+ for j in range(kernel_size):
arr[i, j] = math.sqrt(
abs(i - kernel_size // 2) ** 2 + abs(j - kernel_size // 2) ** 2
)
diff --git a/digital_image_processing/filters/convolve.py b/digital_image_processing/filters/convolve.py
index 299682010da6..004402f29ba9 100644
--- a/digital_image_processing/filters/convolve.py
+++ b/digital_image_processing/filters/convolve.py
@@ -11,8 +11,8 @@ def im2col(image, block_size):
dst_width = rows - block_size[0] + 1
image_array = zeros((dst_height * dst_width, block_size[1] * block_size[0]))
row = 0
- for i in range(0, dst_height):
- for j in range(0, dst_width):
+ for i in range(dst_height):
+ for j in range(dst_width):
window = ravel(image[i : i + block_size[0], j : j + block_size[1]])
image_array[row, :] = window
row += 1
diff --git a/digital_image_processing/filters/local_binary_pattern.py b/digital_image_processing/filters/local_binary_pattern.py
index 907fe2cb0555..861369ba6a32 100644
--- a/digital_image_processing/filters/local_binary_pattern.py
+++ b/digital_image_processing/filters/local_binary_pattern.py
@@ -71,8 +71,8 @@ def local_binary_value(image: np.ndarray, x_coordinate: int, y_coordinate: int)
# Iterating through the image and calculating the
# local binary pattern value for each pixel.
- for i in range(0, image.shape[0]):
- for j in range(0, image.shape[1]):
+ for i in range(image.shape[0]):
+ for j in range(image.shape[1]):
lbp_image[i][j] = local_binary_value(image, i, j)
cv2.imshow("local binary pattern", lbp_image)
diff --git a/digital_image_processing/test_digital_image_processing.py b/digital_image_processing/test_digital_image_processing.py
index fee7ab247b55..528b4bc3b74c 100644
--- a/digital_image_processing/test_digital_image_processing.py
+++ b/digital_image_processing/test_digital_image_processing.py
@@ -118,8 +118,8 @@ def test_local_binary_pattern():
# Iterating through the image and calculating the local binary pattern value
# for each pixel.
- for i in range(0, image.shape[0]):
- for j in range(0, image.shape[1]):
+ for i in range(image.shape[0]):
+ for j in range(image.shape[1]):
lbp_image[i][j] = lbp.local_binary_value(image, i, j)
assert lbp_image.any()
diff --git a/divide_and_conquer/strassen_matrix_multiplication.py b/divide_and_conquer/strassen_matrix_multiplication.py
index cbfc7e5655db..1d03950ef9fe 100644
--- a/divide_and_conquer/strassen_matrix_multiplication.py
+++ b/divide_and_conquer/strassen_matrix_multiplication.py
@@ -131,7 +131,7 @@ def strassen(matrix1: list, matrix2: list) -> list:
# Adding zeros to the matrices so that the arrays dimensions are the same and also
# power of 2
- for i in range(0, maxim):
+ for i in range(maxim):
if i < dimension1[0]:
for _ in range(dimension1[1], maxim):
new_matrix1[i].append(0)
@@ -146,7 +146,7 @@ def strassen(matrix1: list, matrix2: list) -> list:
final_matrix = actual_strassen(new_matrix1, new_matrix2)
# Removing the additional zeros
- for i in range(0, maxim):
+ for i in range(maxim):
if i < dimension1[0]:
for _ in range(dimension2[1], maxim):
final_matrix[i].pop()
diff --git a/dynamic_programming/floyd_warshall.py b/dynamic_programming/floyd_warshall.py
index 614a3c72a992..2331f3e65483 100644
--- a/dynamic_programming/floyd_warshall.py
+++ b/dynamic_programming/floyd_warshall.py
@@ -5,19 +5,19 @@ class Graph:
def __init__(self, n=0): # a graph with Node 0,1,...,N-1
self.n = n
self.w = [
- [math.inf for j in range(0, n)] for i in range(0, n)
+ [math.inf for j in range(n)] for i in range(n)
] # adjacency matrix for weight
self.dp = [
- [math.inf for j in range(0, n)] for i in range(0, n)
+ [math.inf for j in range(n)] for i in range(n)
] # dp[i][j] stores minimum distance from i to j
def add_edge(self, u, v, w):
self.dp[u][v] = w
def floyd_warshall(self):
- for k in range(0, self.n):
- for i in range(0, self.n):
- for j in range(0, self.n):
+ for k in range(self.n):
+ for i in range(self.n):
+ for j in range(self.n):
self.dp[i][j] = min(self.dp[i][j], self.dp[i][k] + self.dp[k][j])
def show_min(self, u, v):
diff --git a/hashes/chaos_machine.py b/hashes/chaos_machine.py
index 238fdb1c0634..d2fde2f5e371 100644
--- a/hashes/chaos_machine.py
+++ b/hashes/chaos_machine.py
@@ -53,7 +53,7 @@ def xorshift(x, y):
key = machine_time % m
# Evolution (Time Length)
- for _ in range(0, t):
+ for _ in range(t):
# Variables (Position + Parameters)
r = params_space[key]
value = buffer_space[key]
diff --git a/hashes/hamming_code.py b/hashes/hamming_code.py
index dc93032183e0..8498ca920b36 100644
--- a/hashes/hamming_code.py
+++ b/hashes/hamming_code.py
@@ -135,7 +135,7 @@ def emitter_converter(size_par, data):
# Mount the message
cont_bp = 0 # parity bit counter
- for x in range(0, size_par + len(data)):
+ for x in range(size_par + len(data)):
if data_ord[x] is None:
data_out.append(str(parity[cont_bp]))
cont_bp += 1
@@ -228,7 +228,7 @@ def receptor_converter(size_par, data):
# Mount the message
cont_bp = 0 # Parity bit counter
- for x in range(0, size_par + len(data_output)):
+ for x in range(size_par + len(data_output)):
if data_ord[x] is None:
data_out.append(str(parity[cont_bp]))
cont_bp += 1
diff --git a/hashes/sha1.py b/hashes/sha1.py
index b325ce3e43bb..8a03673f3c9f 100644
--- a/hashes/sha1.py
+++ b/hashes/sha1.py
@@ -97,7 +97,7 @@ def final_hash(self):
for block in self.blocks:
expanded_block = self.expand_block(block)
a, b, c, d, e = self.h
- for i in range(0, 80):
+ for i in range(80):
if 0 <= i < 20:
f = (b & c) | ((~b) & d)
k = 0x5A827999
diff --git a/hashes/sha256.py b/hashes/sha256.py
index 98f7c096e3b6..ba9aff8dbf41 100644
--- a/hashes/sha256.py
+++ b/hashes/sha256.py
@@ -138,7 +138,7 @@ def final_hash(self) -> None:
a, b, c, d, e, f, g, h = self.hashes
- for index in range(0, 64):
+ for index in range(64):
if index > 15:
# modify the zero-ed indexes at the end of the array
s0 = (
diff --git a/machine_learning/gradient_descent.py b/machine_learning/gradient_descent.py
index 5b74dad082e7..9ffc02bbc284 100644
--- a/machine_learning/gradient_descent.py
+++ b/machine_learning/gradient_descent.py
@@ -110,7 +110,7 @@ def run_gradient_descent():
while True:
j += 1
temp_parameter_vector = [0, 0, 0, 0]
- for i in range(0, len(parameter_vector)):
+ for i in range(len(parameter_vector)):
cost_derivative = get_cost_derivative(i - 1)
temp_parameter_vector[i] = (
parameter_vector[i] - LEARNING_RATE * cost_derivative
diff --git a/machine_learning/linear_regression.py b/machine_learning/linear_regression.py
index 75943ac9f2ad..0847112ad538 100644
--- a/machine_learning/linear_regression.py
+++ b/machine_learning/linear_regression.py
@@ -78,7 +78,7 @@ def run_linear_regression(data_x, data_y):
theta = np.zeros((1, no_features))
- for i in range(0, iterations):
+ for i in range(iterations):
theta = run_steep_gradient_descent(data_x, data_y, len_data, alpha, theta)
error = sum_of_square_error(data_x, data_y, len_data, theta)
print(f"At Iteration {i + 1} - Error is {error:.5f}")
@@ -107,7 +107,7 @@ def main():
theta = run_linear_regression(data_x, data_y)
len_result = theta.shape[1]
print("Resultant Feature vector : ")
- for i in range(0, len_result):
+ for i in range(len_result):
print(f"{theta[0, i]:.5f}")
diff --git a/machine_learning/lstm/lstm_prediction.py b/machine_learning/lstm/lstm_prediction.py
index 74197c46a0ad..16530e935ea7 100644
--- a/machine_learning/lstm/lstm_prediction.py
+++ b/machine_learning/lstm/lstm_prediction.py
@@ -32,10 +32,10 @@
train_x, train_y = [], []
test_x, test_y = [], []
- for i in range(0, len(train_data) - forward_days - look_back + 1):
+ for i in range(len(train_data) - forward_days - look_back + 1):
train_x.append(train_data[i : i + look_back])
train_y.append(train_data[i + look_back : i + look_back + forward_days])
- for i in range(0, len(test_data) - forward_days - look_back + 1):
+ for i in range(len(test_data) - forward_days - look_back + 1):
test_x.append(test_data[i : i + look_back])
test_y.append(test_data[i + look_back : i + look_back + forward_days])
x_train = np.array(train_x)
diff --git a/maths/entropy.py b/maths/entropy.py
index 498c28f31bc4..23753d884484 100644
--- a/maths/entropy.py
+++ b/maths/entropy.py
@@ -101,7 +101,7 @@ def analyze_text(text: str) -> tuple[dict, dict]:
# first case when we have space at start.
two_char_strings[" " + text[0]] += 1
- for i in range(0, len(text) - 1):
+ for i in range(len(text) - 1):
single_char_strings[text[i]] += 1
two_char_strings[text[i : i + 2]] += 1
return single_char_strings, two_char_strings
diff --git a/maths/eulers_totient.py b/maths/eulers_totient.py
index a156647037b4..00f0254c215a 100644
--- a/maths/eulers_totient.py
+++ b/maths/eulers_totient.py
@@ -21,7 +21,7 @@ def totient(n: int) -> list:
for i in range(2, n + 1):
if is_prime[i]:
primes.append(i)
- for j in range(0, len(primes)):
+ for j in range(len(primes)):
if i * primes[j] >= n:
break
is_prime[i * primes[j]] = False
diff --git a/maths/greedy_coin_change.py b/maths/greedy_coin_change.py
index 7cf669bcb8cb..db2c381bc84a 100644
--- a/maths/greedy_coin_change.py
+++ b/maths/greedy_coin_change.py
@@ -81,7 +81,7 @@ def find_minimum_change(denominations: list[int], value: str) -> list[int]:
):
n = int(input("Enter the number of denominations you want to add: ").strip())
- for i in range(0, n):
+ for i in range(n):
denominations.append(int(input(f"Denomination {i}: ").strip()))
value = input("Enter the change you want to make in Indian Currency: ").strip()
else:
diff --git a/maths/persistence.py b/maths/persistence.py
index 607641e67200..c61a69a7c27d 100644
--- a/maths/persistence.py
+++ b/maths/persistence.py
@@ -28,7 +28,7 @@ def multiplicative_persistence(num: int) -> int:
numbers = [int(i) for i in num_string]
total = 1
- for i in range(0, len(numbers)):
+ for i in range(len(numbers)):
total *= numbers[i]
num_string = str(total)
@@ -67,7 +67,7 @@ def additive_persistence(num: int) -> int:
numbers = [int(i) for i in num_string]
total = 0
- for i in range(0, len(numbers)):
+ for i in range(len(numbers)):
total += numbers[i]
num_string = str(total)
diff --git a/maths/series/harmonic.py b/maths/series/harmonic.py
index 50f29c93dd5f..35792d38af9b 100644
--- a/maths/series/harmonic.py
+++ b/maths/series/harmonic.py
@@ -45,7 +45,7 @@ def is_harmonic_series(series: list) -> bool:
return True
rec_series = []
series_len = len(series)
- for i in range(0, series_len):
+ for i in range(series_len):
if series[i] == 0:
raise ValueError("Input series cannot have 0 as an element")
rec_series.append(1 / series[i])
diff --git a/matrix/spiral_print.py b/matrix/spiral_print.py
index 0d0be1527aec..5eef263f7aef 100644
--- a/matrix/spiral_print.py
+++ b/matrix/spiral_print.py
@@ -54,7 +54,7 @@ def spiral_print_clockwise(a: list[list[int]]) -> None:
return
    # horizontal printing increasing
- for i in range(0, mat_col):
+ for i in range(mat_col):
print(a[0][i])
# vertical printing down
for i in range(1, mat_row):
diff --git a/other/magicdiamondpattern.py b/other/magicdiamondpattern.py
index 0fc41d7a25d8..89b973bb41e8 100644
--- a/other/magicdiamondpattern.py
+++ b/other/magicdiamondpattern.py
@@ -7,10 +7,10 @@ def floyd(n):
Parameters:
n : size of pattern
"""
- for i in range(0, n):
- for _ in range(0, n - i - 1): # printing spaces
+ for i in range(n):
+ for _ in range(n - i - 1): # printing spaces
print(" ", end="")
- for _ in range(0, i + 1): # printing stars
+ for _ in range(i + 1): # printing stars
print("* ", end="")
print()
diff --git a/project_euler/problem_070/sol1.py b/project_euler/problem_070/sol1.py
index 273f37efc5fc..57a6c1916374 100644
--- a/project_euler/problem_070/sol1.py
+++ b/project_euler/problem_070/sol1.py
@@ -44,7 +44,7 @@ def get_totients(max_one: int) -> list[int]:
"""
totients = [0] * max_one
- for i in range(0, max_one):
+ for i in range(max_one):
totients[i] = i
for i in range(2, max_one):
diff --git a/project_euler/problem_112/sol1.py b/project_euler/problem_112/sol1.py
index b3ea6b35654a..31996d070771 100644
--- a/project_euler/problem_112/sol1.py
+++ b/project_euler/problem_112/sol1.py
@@ -49,7 +49,7 @@ def check_bouncy(n: int) -> bool:
raise ValueError("check_bouncy() accepts only integer arguments")
str_n = str(n)
sorted_str_n = "".join(sorted(str_n))
- return sorted_str_n != str_n and sorted_str_n[::-1] != str_n
+ return str_n not in {sorted_str_n, sorted_str_n[::-1]}
def solution(percent: float = 99) -> int:
diff --git a/quantum/q_full_adder.py b/quantum/q_full_adder.py
index 66d93198519e..ec4efa4346a5 100644
--- a/quantum/q_full_adder.py
+++ b/quantum/q_full_adder.py
@@ -88,7 +88,7 @@ def quantum_full_adder(
quantum_circuit = qiskit.QuantumCircuit(qr, cr)
- for i in range(0, 3):
+ for i in range(3):
if entry[i] == 2:
quantum_circuit.h(i) # for hadamard entries
elif entry[i] == 1:
diff --git a/scheduling/highest_response_ratio_next.py b/scheduling/highest_response_ratio_next.py
index 9c999ec65053..057bd64cc729 100644
--- a/scheduling/highest_response_ratio_next.py
+++ b/scheduling/highest_response_ratio_next.py
@@ -53,7 +53,7 @@ def calculate_turn_around_time(
loc = 0
# Saves the current response ratio.
temp = 0
- for i in range(0, no_of_process):
+ for i in range(no_of_process):
if finished_process[i] == 0 and arrival_time[i] <= current_time:
temp = (burst_time[i] + (current_time - arrival_time[i])) / burst_time[
i
@@ -87,7 +87,7 @@ def calculate_waiting_time(
"""
waiting_time = [0] * no_of_process
- for i in range(0, no_of_process):
+ for i in range(no_of_process):
waiting_time[i] = turn_around_time[i] - burst_time[i]
return waiting_time
@@ -106,7 +106,7 @@ def calculate_waiting_time(
)
print("Process name \tArrival time \tBurst time \tTurn around time \tWaiting time")
- for i in range(0, no_of_process):
+ for i in range(no_of_process):
print(
f"{process_name[i]}\t\t{arrival_time[i]}\t\t{burst_time[i]}\t\t"
f"{turn_around_time[i]}\t\t\t{waiting_time[i]}"
diff --git a/sorts/counting_sort.py b/sorts/counting_sort.py
index 18c4b0323dcb..256952df52d2 100644
--- a/sorts/counting_sort.py
+++ b/sorts/counting_sort.py
@@ -49,7 +49,7 @@ def counting_sort(collection):
# place the elements in the output, respecting the original order (stable
# sort) from end to begin, updating counting_arr
- for i in reversed(range(0, coll_len)):
+ for i in reversed(range(coll_len)):
ordered[counting_arr[collection[i] - coll_min] - 1] = collection[i]
counting_arr[collection[i] - coll_min] -= 1
diff --git a/sorts/cycle_sort.py b/sorts/cycle_sort.py
index 806f40441d79..7177c8ea110d 100644
--- a/sorts/cycle_sort.py
+++ b/sorts/cycle_sort.py
@@ -19,7 +19,7 @@ def cycle_sort(array: list) -> list:
[]
"""
array_len = len(array)
- for cycle_start in range(0, array_len - 1):
+ for cycle_start in range(array_len - 1):
item = array[cycle_start]
pos = cycle_start
diff --git a/sorts/double_sort.py b/sorts/double_sort.py
index 5ca88a6745d5..a19641d94752 100644
--- a/sorts/double_sort.py
+++ b/sorts/double_sort.py
@@ -16,9 +16,9 @@ def double_sort(lst):
"""
no_of_elements = len(lst)
for _ in range(
- 0, int(((no_of_elements - 1) / 2) + 1)
+ int(((no_of_elements - 1) / 2) + 1)
): # we don't need to traverse to end of list as
- for j in range(0, no_of_elements - 1):
+ for j in range(no_of_elements - 1):
if (
lst[j + 1] < lst[j]
): # applying bubble sort algorithm from left to right (or forwards)
diff --git a/sorts/odd_even_transposition_parallel.py b/sorts/odd_even_transposition_parallel.py
index 87b0e4d1e20f..9e0d228bdc5b 100644
--- a/sorts/odd_even_transposition_parallel.py
+++ b/sorts/odd_even_transposition_parallel.py
@@ -33,7 +33,7 @@ def oe_process(position, value, l_send, r_send, lr_cv, rr_cv, result_pipe):
# we perform n swaps since after n swaps we know we are sorted
# we *could* stop early if we are sorted already, but it takes as long to
# find out we are sorted as it does to sort the list with this algorithm
- for i in range(0, 10):
+ for i in range(10):
if (i + position) % 2 == 0 and r_send is not None:
# send your value to your right neighbor
process_lock.acquire()
@@ -123,7 +123,7 @@ def odd_even_transposition(arr):
p.start()
# wait for the processes to end and write their values to the list
- for p in range(0, len(result_pipe)):
+ for p in range(len(result_pipe)):
arr[p] = result_pipe[p][0].recv()
process_array_[p].join()
return arr
diff --git a/strings/rabin_karp.py b/strings/rabin_karp.py
index 81ca611a76b3..532c689f8a97 100644
--- a/strings/rabin_karp.py
+++ b/strings/rabin_karp.py
@@ -38,7 +38,7 @@ def rabin_karp(pattern: str, text: str) -> bool:
continue
modulus_power = (modulus_power * alphabet_size) % modulus
- for i in range(0, t_len - p_len + 1):
+ for i in range(t_len - p_len + 1):
if text_hash == p_hash and text[i : i + p_len] == pattern:
return True
if i == t_len - p_len:
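The whole cleanup above relies on a single fact: range() defaults its start
argument to 0, so range(0, n) and range(n) describe the same sequence. A
minimal sketch (not part of any patch) confirming the equivalence:

    assert list(range(5)) == list(range(0, 5)) == [0, 1, 2, 3, 4]
    # range objects also compare equal as sequences in Python 3
    assert range(5) == range(0, 5)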
From 5a4ea233cd30723628fb184bc05f969ad463b0af Mon Sep 17 00:00:00 2001
From: Kotmin <70173732+Kotmin@users.noreply.github.com>
Date: Mon, 4 Sep 2023 19:38:26 +0200
Subject: [PATCH 185/808] Style sigmoid function in harmony with PEP guidance
 (#6677)
* Style sigmoid function in harmony with PEP guidance
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Apply suggestions from code review
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Tianyi Zheng
---
neural_network/back_propagation_neural_network.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/neural_network/back_propagation_neural_network.py b/neural_network/back_propagation_neural_network.py
index 9dd112115f5e..bdd096b3f653 100644
--- a/neural_network/back_propagation_neural_network.py
+++ b/neural_network/back_propagation_neural_network.py
@@ -21,8 +21,8 @@
from matplotlib import pyplot as plt
-def sigmoid(x):
- return 1 / (1 + np.exp(-1 * x))
+def sigmoid(x: np.ndarray) -> np.ndarray:
+ return 1 / (1 + np.exp(-x))
class DenseLayer:
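The patched sigmoid also simplifies -1 * x to -x, which produces the same
NumPy ufunc call. A small sketch (assuming only NumPy) checking the
function's defining symmetry, sigmoid(-x) = 1 - sigmoid(x):

    import numpy as np

    def sigmoid(x: np.ndarray) -> np.ndarray:
        return 1 / (1 + np.exp(-x))

    x = np.array([-3.0, 0.0, 3.0])
    assert np.allclose(sigmoid(x) + sigmoid(-x), 1.0)  # symmetry check
    assert float(sigmoid(np.array([0.0]))[0]) == 0.5   # midpoint value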
From ac73be217863cc78af97bb86a9156ac38c4ae1e5 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Tue, 5 Sep 2023 08:27:05 +0530
Subject: [PATCH 186/808] [pre-commit.ci] pre-commit autoupdate (#9042)
---
.pre-commit-config.yaml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 5c4e8579e116..c046789463cc 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,7 @@ repos:
- id: auto-walrus
- repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.0.286
+ rev: v0.0.287
hooks:
- id: ruff
From 79b043d35ca266cf5053f5b62b2fe0f7bc6344d9 Mon Sep 17 00:00:00 2001
From: Rafael Zimmer
Date: Tue, 5 Sep 2023 01:04:36 -0300
Subject: [PATCH 187/808] Texture analysis using Haralick Descriptors for
Computer Vision tasks (#8004)
* Create haralick_descriptors
* Working on creating Unit Testing for Haralick Descriptors module
* Type hinting for Haralick descriptors
* Fixed docstrings, unit testing and formatting choices
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fixed line size formatting
* Added final doctests
* Changed main callable
* Updated requirements.txt
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update computer_vision/haralick_descriptors.py
No! What if the Kernel is empty?
Example:
>>> kernel = np.zeros((1))
>>> kernel or np.ones((3, 3))
array([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]])
Co-authored-by: Christian Clauss
* Undone wrong commit
* Update haralick_descriptors.py
* Apply suggestions from code review
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix ruff errors in haralick_descriptors.py
* Add type hint to haralick_descriptors.py to fix ruff error
* Update haralick_descriptors.py
* Try to fix mypy errors in haralick_descriptors.py
* Update haralick_descriptors.py
* Fix type hint in haralick_descriptors.py
---------
Co-authored-by: Rafael Zimmer
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
Co-authored-by: Tianyi Zheng
---
computer_vision/haralick_descriptors.py | 431 ++++++++++++++++++++++++
requirements.txt | 1 +
2 files changed, 432 insertions(+)
create mode 100644 computer_vision/haralick_descriptors.py
diff --git a/computer_vision/haralick_descriptors.py b/computer_vision/haralick_descriptors.py
new file mode 100644
index 000000000000..1a86d84ea14b
--- /dev/null
+++ b/computer_vision/haralick_descriptors.py
@@ -0,0 +1,431 @@
+"""
+https://en.wikipedia.org/wiki/Image_texture
+https://en.wikipedia.org/wiki/Co-occurrence_matrix#Application_to_image_analysis
+"""
+import imageio.v2 as imageio
+import numpy as np
+
+
+def root_mean_square_error(original: np.ndarray, reference: np.ndarray) -> float:
+ """Simple implementation of Root Mean Squared Error
+ for two N dimensional numpy arrays.
+
+ Examples:
+ >>> root_mean_square_error(np.array([1, 2, 3]), np.array([1, 2, 3]))
+ 0.0
+ >>> root_mean_square_error(np.array([1, 2, 3]), np.array([2, 2, 2]))
+ 0.816496580927726
+ >>> root_mean_square_error(np.array([1, 2, 3]), np.array([6, 4, 2]))
+ 3.1622776601683795
+ """
+ return np.sqrt(((original - reference) ** 2).mean())
+
+
+def normalize_image(
+ image: np.ndarray, cap: float = 255.0, data_type: np.dtype = np.uint8
+) -> np.ndarray:
+ """
+    Normalizes an image in NumPy 2D array format to the range 0-cap,
+    so as to fit the uint8 type.
+
+ Args:
+ image: 2D numpy array representing image as matrix, with values in any range
+ cap: Maximum cap amount for normalization
+ data_type: numpy data type to set output variable to
+ Returns:
+        2D numpy array of type uint8, corresponding to the limited-range matrix
+
+ Examples:
+ >>> normalize_image(np.array([[1, 2, 3], [4, 5, 10]]),
+ ... cap=1.0, data_type=np.float64)
+ array([[0. , 0.11111111, 0.22222222],
+ [0.33333333, 0.44444444, 1. ]])
+ >>> normalize_image(np.array([[4, 4, 3], [1, 7, 2]]))
+ array([[127, 127, 85],
+ [ 0, 255, 42]], dtype=uint8)
+ """
+ normalized = (image - np.min(image)) / (np.max(image) - np.min(image)) * cap
+ return normalized.astype(data_type)
+
+
+def normalize_array(array: np.ndarray, cap: float = 1) -> np.ndarray:
+ """Normalizes a 1D array, between ranges 0-cap.
+
+ Args:
+ array: List containing values to be normalized between cap range.
+ cap: Maximum cap amount for normalization.
+ Returns:
+        1D numpy array, corresponding to the limited-range array
+
+ Examples:
+ >>> normalize_array(np.array([2, 3, 5, 7]))
+ array([0. , 0.2, 0.6, 1. ])
+ >>> normalize_array(np.array([[5], [7], [11], [13]]))
+ array([[0. ],
+ [0.25],
+ [0.75],
+ [1. ]])
+ """
+ diff = np.max(array) - np.min(array)
+ return (array - np.min(array)) / (1 if diff == 0 else diff) * cap
+
+
+def grayscale(image: np.ndarray) -> np.ndarray:
+ """
+    Uses luminance weights to transform the RGB channels to grayscale, by
+    taking the dot product between the channels and the weights.
+
+ Example:
+ >>> grayscale(np.array([[[108, 201, 72], [255, 11, 127]],
+ ... [[56, 56, 56], [128, 255, 107]]]))
+ array([[158, 97],
+ [ 56, 200]], dtype=uint8)
+ """
+ return np.dot(image[:, :, 0:3], [0.299, 0.587, 0.114]).astype(np.uint8)
+
+
+def binarize(image: np.ndarray, threshold: float = 127.0) -> np.ndarray:
+ """
+ Binarizes a grayscale image based on a given threshold value,
+ setting values to 1 or 0 accordingly.
+
+ Examples:
+ >>> binarize(np.array([[128, 255], [101, 156]]))
+ array([[1, 1],
+ [0, 1]])
+ >>> binarize(np.array([[0.07, 1], [0.51, 0.3]]), threshold=0.5)
+ array([[0, 1],
+ [1, 0]])
+ """
+ return np.where(image > threshold, 1, 0)
+
+
+def transform(image: np.ndarray, kind: str, kernel: np.ndarray = None) -> np.ndarray:
+ """
+ Simple image transformation using one of two available filter functions:
+ Erosion and Dilation.
+
+ Args:
+ image: binarized input image, onto which to apply transformation
+ kind: Can be either 'erosion', in which case the :func:np.max
+ function is called, or 'dilation', when :func:np.min is used instead.
+ kernel: n x n kernel with shape < :attr:image.shape,
+ to be used when applying convolution to original image
+
+ Returns:
+ returns a numpy array with same shape as input image,
+ corresponding to applied binary transformation.
+
+ Examples:
+ >>> img = np.array([[1, 0.5], [0.2, 0.7]])
+ >>> img = binarize(img, threshold=0.5)
+ >>> transform(img, 'erosion')
+ array([[1, 1],
+ [1, 1]], dtype=uint8)
+ >>> transform(img, 'dilation')
+ array([[0, 0],
+ [0, 0]], dtype=uint8)
+ """
+ if kernel is None:
+ kernel = np.ones((3, 3))
+
+ if kind == "erosion":
+ constant = 1
+ apply = np.max
+ else:
+ constant = 0
+ apply = np.min
+
+ center_x, center_y = (x // 2 for x in kernel.shape)
+
+    # Use the padded image when applying the convolution
+    # so as not to go out of bounds of the original image
+ transformed = np.zeros(image.shape, dtype=np.uint8)
+ padded = np.pad(image, 1, "constant", constant_values=constant)
+
+ for x in range(center_x, padded.shape[0] - center_x):
+ for y in range(center_y, padded.shape[1] - center_y):
+ center = padded[
+ x - center_x : x + center_x + 1, y - center_y : y + center_y + 1
+ ]
+ # Apply transformation method to the centered section of the image
+ transformed[x - center_x, y - center_y] = apply(center[kernel == 1])
+
+ return transformed
+
+
+def opening_filter(image: np.ndarray, kernel: np.ndarray = None) -> np.ndarray:
+ """
+ Opening filter, defined as the sequence of
+ erosion and then a dilation filter on the same image.
+
+ Examples:
+ >>> img = np.array([[1, 0.5], [0.2, 0.7]])
+ >>> img = binarize(img, threshold=0.5)
+ >>> opening_filter(img)
+ array([[1, 1],
+ [1, 1]], dtype=uint8)
+ """
+ if kernel is None:
+        kernel = np.ones((3, 3))
+
+ return transform(transform(image, "dilation", kernel), "erosion", kernel)
+
+
+def closing_filter(image: np.ndarray, kernel: np.ndarray = None) -> np.ndarray:
+ """
+    Closing filter, defined as the sequence of
+    dilation and then an erosion filter on the same image.
+
+ Examples:
+ >>> img = np.array([[1, 0.5], [0.2, 0.7]])
+ >>> img = binarize(img, threshold=0.5)
+ >>> closing_filter(img)
+ array([[0, 0],
+ [0, 0]], dtype=uint8)
+ """
+ if kernel is None:
+ kernel = np.ones((3, 3))
+ return transform(transform(image, "erosion", kernel), "dilation", kernel)
+
+
+def binary_mask(
+ image_gray: np.ndarray, image_map: np.ndarray
+) -> tuple[np.ndarray, np.ndarray]:
+ """
+ Apply binary mask, or thresholding based
+ on bit mask value (mapping mask is binary).
+
+ Returns the mapped true value mask and its complementary false value mask.
+
+ Example:
+ >>> img = np.array([[[108, 201, 72], [255, 11, 127]],
+ ... [[56, 56, 56], [128, 255, 107]]])
+ >>> gray = grayscale(img)
+ >>> binary = binarize(gray)
+ >>> morphological = opening_filter(binary)
+ >>> binary_mask(gray, morphological)
+ (array([[1, 1],
+ [1, 1]], dtype=uint8), array([[158, 97],
+ [ 56, 200]], dtype=uint8))
+ """
+ true_mask, false_mask = image_gray.copy(), image_gray.copy()
+ true_mask[image_map == 1] = 1
+ false_mask[image_map == 0] = 0
+
+ return true_mask, false_mask
+
+
+def matrix_concurrency(image: np.ndarray, coordinate: tuple[int, int]) -> np.ndarray:
+ """
+ Calculate sample co-occurrence matrix based on input image
+ as well as selected coordinates on image.
+
+    Implementation uses basic iteration,
+    as the function to be performed (np.max) is non-linear and therefore
+    cannot be computed in the frequency domain.
+
+ Example:
+ >>> img = np.array([[[108, 201, 72], [255, 11, 127]],
+ ... [[56, 56, 56], [128, 255, 107]]])
+ >>> gray = grayscale(img)
+ >>> binary = binarize(gray)
+ >>> morphological = opening_filter(binary)
+ >>> mask_1 = binary_mask(gray, morphological)[0]
+ >>> matrix_concurrency(mask_1, (0, 1))
+ array([[0., 0.],
+ [0., 0.]])
+ """
+ matrix = np.zeros([np.max(image) + 1, np.max(image) + 1])
+
+ offset_x, offset_y = coordinate
+
+ for x in range(1, image.shape[0] - 1):
+ for y in range(1, image.shape[1] - 1):
+ base_pixel = image[x, y]
+ offset_pixel = image[x + offset_x, y + offset_y]
+
+ matrix[base_pixel, offset_pixel] += 1
+ matrix_sum = np.sum(matrix)
+ return matrix / (1 if matrix_sum == 0 else matrix_sum)
+
+
+def haralick_descriptors(matrix: np.ndarray) -> list[float]:
+ """Calculates all 8 Haralick descriptors based on co-occurence input matrix.
+ All descriptors are as follows:
+ Maximum probability, Inverse Difference, Homogeneity, Entropy,
+ Energy, Dissimilarity, Contrast and Correlation
+
+ Args:
+        matrix: Co-occurrence matrix to use as base for calculating descriptors.
+
+ Returns:
+ Reverse ordered list of resulting descriptors
+
+ Example:
+ >>> img = np.array([[[108, 201, 72], [255, 11, 127]],
+ ... [[56, 56, 56], [128, 255, 107]]])
+ >>> gray = grayscale(img)
+ >>> binary = binarize(gray)
+ >>> morphological = opening_filter(binary)
+ >>> mask_1 = binary_mask(gray, morphological)[0]
+ >>> concurrency = matrix_concurrency(mask_1, (0, 1))
+ >>> haralick_descriptors(concurrency)
+ [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
+ """
+ # Function np.indices could be used for bigger input types,
+ # but np.ogrid works just fine
+ i, j = np.ogrid[0 : matrix.shape[0], 0 : matrix.shape[1]] # np.indices()
+
+ # Pre-calculate frequent multiplication and subtraction
+ prod = np.multiply(i, j)
+ sub = np.subtract(i, j)
+
+ # Calculate numerical value of Maximum Probability
+ maximum_prob = np.max(matrix)
+ # Using the definition for each descriptor individually to calculate its matrix
+ correlation = prod * matrix
+ energy = np.power(matrix, 2)
+ contrast = matrix * np.power(sub, 2)
+
+ dissimilarity = matrix * np.abs(sub)
+ inverse_difference = matrix / (1 + np.abs(sub))
+ homogeneity = matrix / (1 + np.power(sub, 2))
+ entropy = -(matrix[matrix > 0] * np.log(matrix[matrix > 0]))
+
+    # Sum the values for each descriptor, as at this point each is still
+    # its full origin matrix rather than the final scalar value.
+ return [
+ maximum_prob,
+ correlation.sum(),
+ energy.sum(),
+ contrast.sum(),
+ dissimilarity.sum(),
+ inverse_difference.sum(),
+ homogeneity.sum(),
+ entropy.sum(),
+ ]
+
+
+def get_descriptors(
+ masks: tuple[np.ndarray, np.ndarray], coordinate: tuple[int, int]
+) -> np.ndarray:
+ """
+ Calculate all Haralick descriptors for a sequence of
+ different co-occurrence matrices, given input masks and coordinates.
+
+ Example:
+ >>> img = np.array([[[108, 201, 72], [255, 11, 127]],
+ ... [[56, 56, 56], [128, 255, 107]]])
+ >>> gray = grayscale(img)
+ >>> binary = binarize(gray)
+ >>> morphological = opening_filter(binary)
+ >>> get_descriptors(binary_mask(gray, morphological), (0, 1))
+ array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
+ """
+ descriptors = np.array(
+ [haralick_descriptors(matrix_concurrency(mask, coordinate)) for mask in masks]
+ )
+
+ # Concatenate each individual descriptor into
+ # one single list containing sequence of descriptors
+ return np.concatenate(descriptors, axis=None)
+
+
+def euclidean(point_1: np.ndarray, point_2: np.ndarray) -> np.float32:
+ """
+ Simple method for calculating the euclidean distance between two points,
+ with type np.ndarray.
+
+ Example:
+ >>> a = np.array([1, 0, -2])
+ >>> b = np.array([2, -1, 1])
+ >>> euclidean(a, b)
+ 3.3166247903554
+ """
+ return np.sqrt(np.sum(np.square(point_1 - point_2)))
+
+
+def get_distances(descriptors: np.ndarray, base: int) -> list[tuple[int, float]]:
+ """
+ Calculate all Euclidean distances between a selected base descriptor
+    and all other Haralick descriptors.
+    The resulting comparison is returned in decreasing order,
+ showing which descriptor is the most similar to the selected base.
+
+ Args:
+ descriptors: Haralick descriptors to compare with base index
+ base: Haralick descriptor index to use as base when calculating respective
+ euclidean distance to other descriptors.
+
+ Returns:
+ Ordered distances between descriptors
+
+ Example:
+ >>> index = 1
+ >>> img = np.array([[[108, 201, 72], [255, 11, 127]],
+ ... [[56, 56, 56], [128, 255, 107]]])
+ >>> gray = grayscale(img)
+ >>> binary = binarize(gray)
+ >>> morphological = opening_filter(binary)
+ >>> get_distances(get_descriptors(
+ ... binary_mask(gray, morphological), (0, 1)),
+ ... index)
+ [(0, 0.0), (1, 0.0), (2, 0.0), (3, 0.0), (4, 0.0), (5, 0.0), \
+(6, 0.0), (7, 0.0), (8, 0.0), (9, 0.0), (10, 0.0), (11, 0.0), (12, 0.0), \
+(13, 0.0), (14, 0.0), (15, 0.0)]
+ """
+ distances = np.array(
+ [euclidean(descriptor, descriptors[base]) for descriptor in descriptors]
+ )
+ # Normalize distances between range [0, 1]
+ normalized_distances: list[float] = normalize_array(distances, 1).tolist()
+ enum_distances = list(enumerate(normalized_distances))
+ enum_distances.sort(key=lambda tup: tup[1], reverse=True)
+ return enum_distances
+
+
+if __name__ == "__main__":
+ # Index to compare haralick descriptors to
+ index = int(input())
+ q_value_list = [int(value) for value in input().split()]
+ q_value = (q_value_list[0], q_value_list[1])
+
+ # Format is the respective filter to apply,
+ # can be either 1 for the opening filter or else for the closing
+ parameters = {"format": int(input()), "threshold": int(input())}
+
+ # Number of images to perform methods on
+ b_number = int(input())
+
+ files, descriptors = [], []
+
+ for _ in range(b_number):
+ file = input().rstrip()
+ files.append(file)
+
+ # Open given image and calculate morphological filter,
+        # respective masks and corresponding Haralick Descriptors.
+ image = imageio.imread(file).astype(np.float32)
+ gray = grayscale(image)
+ threshold = binarize(gray, parameters["threshold"])
+
+ morphological = (
+ opening_filter(threshold)
+ if parameters["format"] == 1
+ else closing_filter(threshold)
+ )
+ masks = binary_mask(gray, morphological)
+ descriptors.append(get_descriptors(masks, q_value))
+
+ # Transform ordered distances array into a sequence of indexes
+ # corresponding to original file position
+ distances = get_distances(np.array(descriptors), index)
+ indexed_distances = np.array(distances).astype(np.uint8)[:, 0]
+
+ # Finally, print distances considering the Haralick descriptions from the base
+ # file to all other images using the morphology method of choice.
+ print(f"Query: {files[index]}")
+ print("Ranking:")
+ for idx, file_idx in enumerate(indexed_distances):
+ print(f"({idx}) {files[file_idx]}", end="\n")
diff --git a/requirements.txt b/requirements.txt
index 2702523d542e..1128e9d66820 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,5 +1,6 @@
beautifulsoup4
fake_useragent
+imageio
keras
lxml
matplotlib
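A quick way to see what matrix_concurrency() in the new module counts: it
visits only interior pixels and pairs each with the pixel at the given
offset. A sketch, assuming the new file is importable as a module (the 3x3
toy patch below is hypothetical):

    import numpy as np
    from computer_vision.haralick_descriptors import matrix_concurrency

    patch = np.array([[0, 0, 0],
                      [0, 1, 2],
                      [0, 0, 0]])
    # Only the interior pixel (1, 1) is visited; offset (0, 1) pairs the
    # value 1 with its right neighbour 2, so one count lands in cell [1, 2].
    glcm = matrix_concurrency(patch, (0, 1))
    assert glcm[1, 2] == 1.0 and glcm.sum() == 1.0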
From 72f600036511c4999fa56bf007bf92ec465e94d7 Mon Sep 17 00:00:00 2001
From: Caeden Perelli-Harris
Date: Tue, 5 Sep 2023 05:49:00 +0100
Subject: [PATCH 188/808] Fix get amazon product data erroring due to
whitespace in headers (#9009)
* updating DIRECTORY.md
* fix(get-amazon-product-data): Remove whitespace in headers
* refactor(get-amazon-product-data): Don't print to_csv
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
web_programming/get_amazon_product_data.py | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/web_programming/get_amazon_product_data.py b/web_programming/get_amazon_product_data.py
index c796793f2205..a16175688667 100644
--- a/web_programming/get_amazon_product_data.py
+++ b/web_programming/get_amazon_product_data.py
@@ -19,11 +19,13 @@ def get_amazon_product_data(product: str = "laptop") -> DataFrame:
"""
url = f"https://www.amazon.in/laptop/s?k={product}"
header = {
- "User-Agent": """Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36
- (KHTML, like Gecko)Chrome/44.0.2403.157 Safari/537.36""",
+ "User-Agent": (
+ "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36"
+ "(KHTML, like Gecko)Chrome/44.0.2403.157 Safari/537.36"
+ ),
"Accept-Language": "en-US, en;q=0.5",
}
- soup = BeautifulSoup(requests.get(url, headers=header).text)
+ soup = BeautifulSoup(requests.get(url, headers=header).text, features="lxml")
# Initialize a Pandas dataframe with the column titles
data_frame = DataFrame(
columns=[
@@ -74,8 +76,8 @@ def get_amazon_product_data(product: str = "laptop") -> DataFrame:
except ValueError:
discount = float("nan")
except AttributeError:
- pass
- data_frame.loc[len(data_frame.index)] = [
+ continue
+ data_frame.loc[str(len(data_frame.index))] = [
product_title,
product_link,
product_price,
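The header bug fixed above is easy to miss: a triple-quoted string keeps its
embedded newline and indentation, which some servers reject inside a header
value, while adjacent string literals concatenate with no separator at all.
A short sketch of both behaviours (illustrative strings only):

    old = """Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36
        (KHTML, like Gecko)Chrome/44.0.2403.157 Safari/537.36"""
    new = (
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36"
        "(KHTML, like Gecko)Chrome/44.0.2403.157 Safari/537.36"
    )
    assert "\n" in old and "\n" not in new  # the newline/indent was the bug
    # Note: implicit concatenation inserts nothing, so "...537.36" and
    # "(KHTML..." still abut; a trailing space inside the first literal
    # would be needed to separate them.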
From 9e4f9962a02ae584b392670a13d54ef8731e8f7f Mon Sep 17 00:00:00 2001
From: David Ekong <66387173+davidekong@users.noreply.github.com>
Date: Wed, 6 Sep 2023 15:00:09 +0100
Subject: [PATCH 189/808] Created harshad_numbers.py (#9023)
* Created harshad_numbers.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update harshad_numbers.py
Fixed a few errors
* Update harshad_numbers.py
Added function type hints
* Update harshad_numbers.py
Fixed depreciated Tuple and List usage
* Update harshad_numbers.py
Fixed incompatible types in assignments
* Update harshad_numbers.py
Fixed incompatible type assignments
* Update maths/harshad_numbers.py
Co-authored-by: Tianyi Zheng
* Update maths/harshad_numbers.py
Co-authored-by: Tianyi Zheng
* Raised Value Error for negative inputs
* Update maths/harshad_numbers.py
Co-authored-by: Tianyi Zheng
* Update maths/harshad_numbers.py
Co-authored-by: Tianyi Zheng
* Update maths/harshad_numbers.py
Co-authored-by: Tianyi Zheng
* Update harshad_numbers.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update harshad_numbers.py
Added doc test to int_to_base, fixed nested loop, other minor changes
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Tianyi Zheng
---
maths/harshad_numbers.py | 158 +++++++++++++++++++++++++++++++++++++++
1 file changed, 158 insertions(+)
create mode 100644 maths/harshad_numbers.py
diff --git a/maths/harshad_numbers.py b/maths/harshad_numbers.py
new file mode 100644
index 000000000000..050c69e0bd15
--- /dev/null
+++ b/maths/harshad_numbers.py
@@ -0,0 +1,158 @@
+"""
+A harshad number (or more specifically an n-harshad number) is a number that's
+divisible by the sum of its digits in some given base n.
+Reference: https://en.wikipedia.org/wiki/Harshad_number
+"""
+
+
+def int_to_base(number: int, base: int) -> str:
+ """
+    Convert a given positive decimal integer to base 'base',
+    where 'base' ranges from 2 to 36.
+
+ Examples:
+ >>> int_to_base(23, 2)
+ '10111'
+ >>> int_to_base(58, 5)
+ '213'
+ >>> int_to_base(167, 16)
+ 'A7'
+ >>> # bases below 2 and beyond 36 will error
+ >>> int_to_base(98, 1)
+ Traceback (most recent call last):
+ ...
+ ValueError: 'base' must be between 2 and 36 inclusive
+ >>> int_to_base(98, 37)
+ Traceback (most recent call last):
+ ...
+ ValueError: 'base' must be between 2 and 36 inclusive
+ """
+
+ if base < 2 or base > 36:
+ raise ValueError("'base' must be between 2 and 36 inclusive")
+
+ digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+ result = ""
+
+ if number < 0:
+ raise ValueError("number must be a positive integer")
+
+ while number > 0:
+ number, remainder = divmod(number, base)
+ result = digits[remainder] + result
+
+ if result == "":
+ result = "0"
+
+ return result
+
+
+def sum_of_digits(num: int, base: int) -> str:
+ """
+ Calculate the sum of digit values in a positive integer
+    converted to the given 'base',
+    where 'base' ranges from 2 to 36.
+
+ Examples:
+ >>> sum_of_digits(103, 12)
+ '13'
+ >>> sum_of_digits(1275, 4)
+ '30'
+ >>> sum_of_digits(6645, 2)
+ '1001'
+ >>> # bases below 2 and beyond 36 will error
+ >>> sum_of_digits(543, 1)
+ Traceback (most recent call last):
+ ...
+ ValueError: 'base' must be between 2 and 36 inclusive
+ >>> sum_of_digits(543, 37)
+ Traceback (most recent call last):
+ ...
+ ValueError: 'base' must be between 2 and 36 inclusive
+ """
+
+ if base < 2 or base > 36:
+ raise ValueError("'base' must be between 2 and 36 inclusive")
+
+ num_str = int_to_base(num, base)
+ res = sum(int(char, base) for char in num_str)
+ res_str = int_to_base(res, base)
+ return res_str
+
+
+def harshad_numbers_in_base(limit: int, base: int) -> list[str]:
+ """
+    Finds all Harshad numbers smaller than 'limit' in base 'base',
+    where 'base' ranges from 2 to 36.
+
+ Examples:
+ >>> harshad_numbers_in_base(15, 2)
+ ['1', '10', '100', '110', '1000', '1010', '1100']
+ >>> harshad_numbers_in_base(12, 34)
+ ['1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B']
+ >>> harshad_numbers_in_base(12, 4)
+ ['1', '2', '3', '10', '12', '20', '21']
+ >>> # bases below 2 and beyond 36 will error
+ >>> harshad_numbers_in_base(234, 37)
+ Traceback (most recent call last):
+ ...
+ ValueError: 'base' must be between 2 and 36 inclusive
+ >>> harshad_numbers_in_base(234, 1)
+ Traceback (most recent call last):
+ ...
+ ValueError: 'base' must be between 2 and 36 inclusive
+ """
+
+ if base < 2 or base > 36:
+ raise ValueError("'base' must be between 2 and 36 inclusive")
+
+ if limit < 0:
+ return []
+
+ numbers = [
+ int_to_base(i, base)
+ for i in range(1, limit)
+ if i % int(sum_of_digits(i, base), base) == 0
+ ]
+
+ return numbers
+
+
+def is_harshad_number_in_base(num: int, base: int) -> bool:
+ """
+    Determines whether 'num' in base 'base' is a Harshad number,
+    where 'base' ranges from 2 to 36.
+
+ Examples:
+ >>> is_harshad_number_in_base(18, 10)
+ True
+ >>> is_harshad_number_in_base(21, 10)
+ True
+ >>> is_harshad_number_in_base(-21, 5)
+ False
+ >>> # bases below 2 and beyond 36 will error
+ >>> is_harshad_number_in_base(45, 37)
+ Traceback (most recent call last):
+ ...
+ ValueError: 'base' must be between 2 and 36 inclusive
+ >>> is_harshad_number_in_base(45, 1)
+ Traceback (most recent call last):
+ ...
+ ValueError: 'base' must be between 2 and 36 inclusive
+ """
+
+ if base < 2 or base > 36:
+ raise ValueError("'base' must be between 2 and 36 inclusive")
+
+ if num < 0:
+ return False
+
+ n = int_to_base(num, base)
+ d = sum_of_digits(num, base)
+ return int(n, base) % int(d, base) == 0
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
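A worked check of the Harshad definition using the new module (a sketch,
assuming the file is importable as below): 18 in base 10 has digit sum
1 + 8 = 9 and 18 % 9 == 0, while 19 has digit sum 10 and 19 % 10 != 0.

    from maths.harshad_numbers import is_harshad_number_in_base, sum_of_digits

    assert sum_of_digits(18, 10) == "9"
    assert is_harshad_number_in_base(18, 10)
    assert not is_harshad_number_in_base(19, 10)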
From 153c35eac02b5f043824dfa72e071d2b3f756607 Mon Sep 17 00:00:00 2001
From: Adarsh Acharya <132294330+AdarshAcharya5@users.noreply.github.com>
Date: Thu, 7 Sep 2023 00:46:51 +0530
Subject: [PATCH 190/808] Added Scaled Exponential Linear Unit Activation
Function (#9027)
* Added Scaled Exponential Linear Unit Activation Function
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update scaled_exponential_linear_unit.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update scaled_exponential_linear_unit.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update scaled_exponential_linear_unit.py
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
.../scaled_exponential_linear_unit.py | 44 +++++++++++++++++++
1 file changed, 44 insertions(+)
create mode 100644 neural_network/activation_functions/scaled_exponential_linear_unit.py
diff --git a/neural_network/activation_functions/scaled_exponential_linear_unit.py b/neural_network/activation_functions/scaled_exponential_linear_unit.py
new file mode 100644
index 000000000000..f91dc6852136
--- /dev/null
+++ b/neural_network/activation_functions/scaled_exponential_linear_unit.py
@@ -0,0 +1,44 @@
+"""
+Implements the Scaled Exponential Linear Unit or SELU function.
+The function takes a vector of K real numbers and two real numbers
+alpha(default = 1.6732) & lambda (default = 1.0507) as input and
+then applies the SELU function to each element of the vector.
+SELU is a self-normalizing activation function. It is a variant
+of the ELU. The main advantage of SELU is that we can be sure
+that the output will always be standardized due to its
+self-normalizing behavior. That means there is no need to
+include Batch-Normalization layers.
+References :
+https://iq.opengenus.org/scaled-exponential-linear-unit/
+"""
+
+import numpy as np
+
+
+def scaled_exponential_linear_unit(
+ vector: np.ndarray, alpha: float = 1.6732, lambda_: float = 1.0507
+) -> np.ndarray:
+ """
+ Applies the Scaled Exponential Linear Unit function to each element of the vector.
+ Parameters :
+ vector : np.ndarray
+ alpha : float (default = 1.6732)
+ lambda_ : float (default = 1.0507)
+
+ Returns : np.ndarray
+ Formula : f(x) = lambda_ * x if x > 0
+ lambda_ * alpha * (e**x - 1) if x <= 0
+ Examples :
+ >>> scaled_exponential_linear_unit(vector=np.array([1.3, 3.7, 2.4]))
+ array([1.36591, 3.88759, 2.52168])
+
+ >>> scaled_exponential_linear_unit(vector=np.array([1.3, 4.7, 8.2]))
+ array([1.36591, 4.93829, 8.61574])
+ """
+ return lambda_ * np.where(vector > 0, vector, alpha * (np.exp(vector) - 1))
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
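The first doctest value above can be checked by hand: for x > 0 the SELU
formula reduces to lambda_ * x, and 1.0507 * 1.3 = 1.365910, which matches
the 1.36591 shown. A sketch, assuming the module path created above:

    import numpy as np
    from neural_network.activation_functions.scaled_exponential_linear_unit import (
        scaled_exponential_linear_unit,
    )

    out = scaled_exponential_linear_unit(np.array([1.3]))
    assert np.isclose(out[0], 1.0507 * 1.3)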
From 0cae02451a214cd70b36f2bf0b7a043c25aea99d Mon Sep 17 00:00:00 2001
From: Rohan Saraogi <62804340+r0sa2@users.noreply.github.com>
Date: Thu, 7 Sep 2023 03:52:36 -0400
Subject: [PATCH 191/808] Added nth_sgonal_num.py (#8753)
* Added nth_sgonal_num.py
* Update and rename nth_sgonal_num.py to polygonal_numbers.py
---------
Co-authored-by: Tianyi Zheng
---
maths/polygonal_numbers.py | 32 ++++++++++++++++++++++++++++++++
1 file changed, 32 insertions(+)
create mode 100644 maths/polygonal_numbers.py
diff --git a/maths/polygonal_numbers.py b/maths/polygonal_numbers.py
new file mode 100644
index 000000000000..7a7dc91acb26
--- /dev/null
+++ b/maths/polygonal_numbers.py
@@ -0,0 +1,32 @@
+def polygonal_num(num: int, sides: int) -> int:
+ """
+ Returns the `num`th `sides`-gonal number. It is assumed that `num` >= 0 and
+ `sides` >= 3 (see for reference https://en.wikipedia.org/wiki/Polygonal_number).
+
+ >>> polygonal_num(0, 3)
+ 0
+ >>> polygonal_num(3, 3)
+ 6
+ >>> polygonal_num(5, 4)
+ 25
+ >>> polygonal_num(2, 5)
+ 5
+ >>> polygonal_num(-1, 0)
+ Traceback (most recent call last):
+ ...
+ ValueError: Invalid input: num must be >= 0 and sides must be >= 3.
+ >>> polygonal_num(0, 2)
+ Traceback (most recent call last):
+ ...
+ ValueError: Invalid input: num must be >= 0 and sides must be >= 3.
+ """
+ if num < 0 or sides < 3:
+ raise ValueError("Invalid input: num must be >= 0 and sides must be >= 3.")
+
+ return ((sides - 2) * num**2 - (sides - 4) * num) // 2
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
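The closed form can be sanity-checked against the familiar triangular
numbers: with sides = 3 the expression reduces to (n**2 + n) // 2, i.e.
n(n + 1) / 2. A quick sketch:

    # sides = 3: ((sides - 2) * n**2 - (sides - 4) * n) // 2 == n * (n + 1) // 2
    for n in range(10):
        assert ((3 - 2) * n**2 - (3 - 4) * n) // 2 == n * (n + 1) // 2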
From c9b4b8002f24a33ea49c16dff5ef9cbebbd64b1d Mon Sep 17 00:00:00 2001
From: Saksham Saha
Date: Fri, 8 Sep 2023 17:50:28 +0530
Subject: [PATCH 192/808] Added an add-at-position subroutine to linked list
(#9020)
* added addAtPosition to simple linked list
* modified the add function to take an optional position argument
* fixed type safety errors
* fixed size error
* added doctest and updated the else branch after checking whether the position argument is less than 0
* fixed the contributing.md mistake
* added doctest for out of bounds position value, both negative and positive
---
data_structures/linked_list/__init__.py | 52 ++++++++++++++++++++++++-
1 file changed, 50 insertions(+), 2 deletions(-)
diff --git a/data_structures/linked_list/__init__.py b/data_structures/linked_list/__init__.py
index 56b0e51baa93..225113f72cee 100644
--- a/data_structures/linked_list/__init__.py
+++ b/data_structures/linked_list/__init__.py
@@ -21,8 +21,56 @@ def __init__(self) -> None:
self.head: Node | None = None
self.size = 0
- def add(self, item: Any) -> None:
- self.head = Node(item, self.head)
+ def add(self, item: Any, position: int = 0) -> None:
+ """
+ Add an item to the LinkedList at the specified position.
+ Default position is 0 (the head).
+
+ Args:
+ item (Any): The item to add to the LinkedList.
+ position (int, optional): The position at which to add the item.
+ Defaults to 0.
+
+ Raises:
+ ValueError: If the position is negative or out of bounds.
+
+ >>> linked_list = LinkedList()
+ >>> linked_list.add(1)
+ >>> linked_list.add(2)
+ >>> linked_list.add(3)
+ >>> linked_list.add(4, 2)
+ >>> print(linked_list)
+ 3 --> 2 --> 4 --> 1
+
+ # Test adding to a negative position
+ >>> linked_list.add(5, -3)
+ Traceback (most recent call last):
+ ...
+ ValueError: Position must be non-negative
+
+ # Test adding to an out-of-bounds position
+ >>> linked_list.add(5,7)
+ Traceback (most recent call last):
+ ...
+ ValueError: Out of bounds
+ >>> linked_list.add(5, 4)
+ >>> print(linked_list)
+ 3 --> 2 --> 4 --> 1 --> 5
+ """
+ if position < 0:
+ raise ValueError("Position must be non-negative")
+
+ if position == 0 or self.head is None:
+ new_node = Node(item, self.head)
+ self.head = new_node
+ else:
+ current = self.head
+ for _ in range(position - 1):
+ current = current.next
+ if current is None:
+ raise ValueError("Out of bounds")
+ new_node = Node(item, current.next)
+ current.next = new_node
self.size += 1
def remove(self) -> Any:
From 5a5ca06944148ad7232dd61dcf7c609c0c74c252 Mon Sep 17 00:00:00 2001
From: Saransh Chopra
Date: Sat, 9 Sep 2023 23:28:43 +0530
Subject: [PATCH 193/808] Update `actions/checkout` with `fetch-depth: 0`
(#9046)
* Update `actions/checkout` with `fetch-depth: 0`
* Update directory_writer.yml
* Create junk.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update directory_writer.yml
* Update directory_writer.yml
---------
Co-authored-by: Christian Clauss
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
.github/workflows/directory_writer.yml | 4 +++-
arithmetic_analysis/junk.py | 0
2 files changed, 3 insertions(+), 1 deletion(-)
create mode 100644 arithmetic_analysis/junk.py
diff --git a/.github/workflows/directory_writer.yml b/.github/workflows/directory_writer.yml
index 331962cef11e..702c15f1e29b 100644
--- a/.github/workflows/directory_writer.yml
+++ b/.github/workflows/directory_writer.yml
@@ -6,7 +6,9 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- - uses: actions/checkout@v1 # v1, NOT v2 or v3
+ - uses: actions/checkout@v4
+ with:
+ fetch-depth: 0
- uses: actions/setup-python@v4
with:
python-version: 3.x
diff --git a/arithmetic_analysis/junk.py b/arithmetic_analysis/junk.py
new file mode 100644
index 000000000000..e69de29bb2d1
From 97e2de0763d75b1875428d87818ef111481d5953 Mon Sep 17 00:00:00 2001
From: Kamil <32775019+quant12345@users.noreply.github.com>
Date: Mon, 11 Sep 2023 15:11:22 +0500
Subject: [PATCH 194/808] Euler 070: partial replacement of loops with numpy (#9055)
* Euler 070: partial replacement of loops with numpy.
* Update project_euler/problem_070/sol1.py
* project_euler.yml: Upgrade actions/checkout@v4 and add numpy
* Update project_euler.yml
---------
Co-authored-by: Christian Clauss
---
.github/workflows/project_euler.yml | 8 ++++----
project_euler/problem_070/sol1.py | 13 ++++++-------
2 files changed, 10 insertions(+), 11 deletions(-)
diff --git a/.github/workflows/project_euler.yml b/.github/workflows/project_euler.yml
index 460938219c14..7bbccf76e192 100644
--- a/.github/workflows/project_euler.yml
+++ b/.github/workflows/project_euler.yml
@@ -14,26 +14,26 @@ jobs:
project-euler:
runs-on: ubuntu-latest
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
- uses: actions/setup-python@v4
with:
python-version: 3.x
- name: Install pytest and pytest-cov
run: |
python -m pip install --upgrade pip
- python -m pip install --upgrade pytest pytest-cov
+ python -m pip install --upgrade numpy pytest pytest-cov
- run: pytest --doctest-modules --cov-report=term-missing:skip-covered --cov=project_euler/ project_euler/
validate-solutions:
runs-on: ubuntu-latest
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
- uses: actions/setup-python@v4
with:
python-version: 3.x
- name: Install pytest and requests
run: |
python -m pip install --upgrade pip
- python -m pip install --upgrade pytest requests
+ python -m pip install --upgrade numpy pytest requests
- run: pytest scripts/validate_solutions.py
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
diff --git a/project_euler/problem_070/sol1.py b/project_euler/problem_070/sol1.py
index 57a6c1916374..f1114a280a31 100644
--- a/project_euler/problem_070/sol1.py
+++ b/project_euler/problem_070/sol1.py
@@ -30,6 +30,8 @@
"""
from __future__ import annotations
+import numpy as np
+
def get_totients(max_one: int) -> list[int]:
"""
@@ -42,17 +44,14 @@ def get_totients(max_one: int) -> list[int]:
>>> get_totients(10)
[0, 1, 1, 2, 2, 4, 2, 6, 4, 6]
"""
- totients = [0] * max_one
-
- for i in range(max_one):
- totients[i] = i
+ totients = np.arange(max_one)
for i in range(2, max_one):
if totients[i] == i:
- for j in range(i, max_one, i):
- totients[j] -= totients[j] // i
+ x = np.arange(i, max_one, i) # array of indexes to select
+ totients[x] -= totients[x] // i
- return totients
+ return totients.tolist()
def has_same_digits(num1: int, num2: int) -> bool:
From 4246da387f8b48da5147320344d336886787aea1 Mon Sep 17 00:00:00 2001
From: Kamil <32775019+quant12345@users.noreply.github.com>
Date: Mon, 11 Sep 2023 16:05:32 +0500
Subject: [PATCH 195/808] jacobi_iteration_method.py: use vector operations,
 which reduce the calculation time by dozens of times (#8938)
* Replaced loops in the jacobi_iteration_method function with vector operations, which reduces the time needed to compute the algorithm.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Delete main.py
* Update jacobi_iteration_method.py
Changed a line that was too long.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update jacobi_iteration_method.py
Changed the type of the returned list as required.
* Update jacobi_iteration_method.py
Replaced init_val with new_val.
* Update jacobi_iteration_method.py
Fixed bug: init_val: list[int] to list[float].
Since the numbers are fractional: init_val = [0.5, -0.5, -0.5].
* Update jacobi_iteration_method.py
Changed comments, made variable names more understandable.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update jacobi_iteration_method.py
left the old algorithm commented out, as it clearly shows what is being done.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update jacobi_iteration_method.py
Edits upon request.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
.../jacobi_iteration_method.py | 34 +++++++++++++++++--
1 file changed, 32 insertions(+), 2 deletions(-)
diff --git a/arithmetic_analysis/jacobi_iteration_method.py b/arithmetic_analysis/jacobi_iteration_method.py
index dba8a9ff44d3..44c52dd44640 100644
--- a/arithmetic_analysis/jacobi_iteration_method.py
+++ b/arithmetic_analysis/jacobi_iteration_method.py
@@ -12,7 +12,7 @@
def jacobi_iteration_method(
coefficient_matrix: NDArray[float64],
constant_matrix: NDArray[float64],
- init_val: list[int],
+ init_val: list[float],
iterations: int,
) -> list[float]:
"""
@@ -115,6 +115,7 @@ def jacobi_iteration_method(
strictly_diagonally_dominant(table)
+ """
# Iterates the whole matrix for given number of times
for _ in range(iterations):
new_val = []
@@ -130,8 +131,37 @@ def jacobi_iteration_method(
temp = (temp + val) / denom
new_val.append(temp)
init_val = new_val
+ """
+
+ # denominator - a list of values along the diagonal
+ denominator = np.diag(coefficient_matrix)
+
+ # val_last - values of the last column of the table array
+ val_last = table[:, -1]
+
+    # masks - boolean mask of all rows of the coefficient_matrix
+    # array, excluding the diagonal elements
+ masks = ~np.eye(coefficient_matrix.shape[0], dtype=bool)
+
+ # no_diagonals - coefficient_matrix array values without diagonal elements
+ no_diagonals = coefficient_matrix[masks].reshape(-1, rows - 1)
+
+    # Here we get 'i_col' - the column indices of the non-diagonal
+    # elements in each row of the coefficient_matrix array.
+ i_row, i_col = np.where(masks)
+ ind = i_col.reshape(-1, rows - 1)
+
+    # 'i_col' is converted to a two-dimensional array 'ind', which is
+    # used to select values from 'init_val' (see the 'arr' array below).
+
+ # Iterates the whole matrix for given number of times
+ for _ in range(iterations):
+ arr = np.take(init_val, ind)
+ sum_product_rows = np.sum((-1) * no_diagonals * arr, axis=1)
+ new_val = (sum_product_rows + val_last) / denominator
+ init_val = new_val
- return [float(i) for i in new_val]
+ return new_val.tolist()
# Checks if the given matrix is strictly diagonally dominant
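The masking machinery above implements, row by row, the standard matrix form
of a Jacobi step: with d the diagonal of A and R = A - diag(d), each update
is x_new = (b - R @ x) / d. A compact sketch of that same idea (the 3x3
system below is hypothetical, chosen to be strictly diagonally dominant so
the iteration converges):

    import numpy as np

    coeffs = np.array([[4.0, 1.0, 1.0],
                       [1.0, 5.0, 2.0],
                       [1.0, 2.0, 4.0]])
    constants = np.array([2.0, -6.0, -4.0])
    d = np.diag(coeffs)              # diagonal entries
    r = coeffs - np.diag(d)          # off-diagonal remainder
    x = np.zeros_like(constants)
    for _ in range(100):
        x = (constants - r @ x) / d  # one vectorized Jacobi step
    assert np.allclose(coeffs @ x, constants, atol=1e-8)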
From 1488cdea708485eb1d81c73126eab13cb9b04a47 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Tue, 12 Sep 2023 01:56:50 +0200
Subject: [PATCH 196/808] [pre-commit.ci] pre-commit autoupdate (#9056)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.287 → v0.0.288](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.287...v0.0.288)
- [github.com/psf/black: 23.7.0 → 23.9.1](https://github.com/psf/black/compare/23.7.0...23.9.1)
* updating DIRECTORY.md
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
.pre-commit-config.yaml | 4 ++--
DIRECTORY.md | 5 +++++
2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index c046789463cc..722b408ee9e9 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,12 +16,12 @@ repos:
- id: auto-walrus
- repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.0.287
+ rev: v0.0.288
hooks:
- id: ruff
- repo: https://github.com/psf/black
- rev: 23.7.0
+ rev: 23.9.1
hooks:
- id: black
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 43da91cb818e..1b802564f939 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -5,6 +5,7 @@
* [In Static Equilibrium](arithmetic_analysis/in_static_equilibrium.py)
* [Intersection](arithmetic_analysis/intersection.py)
* [Jacobi Iteration Method](arithmetic_analysis/jacobi_iteration_method.py)
+ * [Junk](arithmetic_analysis/junk.py)
* [Lu Decomposition](arithmetic_analysis/lu_decomposition.py)
* [Newton Forward Interpolation](arithmetic_analysis/newton_forward_interpolation.py)
* [Newton Method](arithmetic_analysis/newton_method.py)
@@ -133,6 +134,7 @@
## Computer Vision
* [Cnn Classification](computer_vision/cnn_classification.py)
* [Flip Augmentation](computer_vision/flip_augmentation.py)
+ * [Haralick Descriptors](computer_vision/haralick_descriptors.py)
* [Harris Corner](computer_vision/harris_corner.py)
* [Horn Schunck](computer_vision/horn_schunck.py)
* [Mean Threshold](computer_vision/mean_threshold.py)
@@ -586,6 +588,7 @@
* [Greedy Coin Change](maths/greedy_coin_change.py)
* [Hamming Numbers](maths/hamming_numbers.py)
* [Hardy Ramanujanalgo](maths/hardy_ramanujanalgo.py)
+ * [Harshad Numbers](maths/harshad_numbers.py)
* [Hexagonal Number](maths/hexagonal_number.py)
* [Integration By Simpson Approx](maths/integration_by_simpson_approx.py)
* [Interquartile Range](maths/interquartile_range.py)
@@ -626,6 +629,7 @@
* [Pi Monte Carlo Estimation](maths/pi_monte_carlo_estimation.py)
* [Points Are Collinear 3D](maths/points_are_collinear_3d.py)
* [Pollard Rho](maths/pollard_rho.py)
+ * [Polygonal Numbers](maths/polygonal_numbers.py)
* [Polynomial Evaluation](maths/polynomial_evaluation.py)
* Polynomials
* [Single Indeterminate Operations](maths/polynomials/single_indeterminate_operations.py)
@@ -712,6 +716,7 @@
* Activation Functions
* [Exponential Linear Unit](neural_network/activation_functions/exponential_linear_unit.py)
* [Leaky Rectified Linear Unit](neural_network/activation_functions/leaky_rectified_linear_unit.py)
+ * [Scaled Exponential Linear Unit](neural_network/activation_functions/scaled_exponential_linear_unit.py)
* [Back Propagation Neural Network](neural_network/back_propagation_neural_network.py)
* [Convolution Neural Network](neural_network/convolution_neural_network.py)
* [Perceptron](neural_network/perceptron.py)
From fbad85d3ecbbb826a5891807c823149d38bbaed3 Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Sat, 16 Sep 2023 18:12:31 -0400
Subject: [PATCH 197/808] Delete empty junk file (#9062)
* updating DIRECTORY.md
* updating DIRECTORY.md
* Delete empty junk file
* updating DIRECTORY.md
* Fix ruff errors
* Fix more ruff errors
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 1 -
arithmetic_analysis/junk.py | 0
computer_vision/haralick_descriptors.py | 8 +++++---
conversions/convert_number_to_words.py | 6 +++---
graphs/tarjans_scc.py | 2 +-
5 files changed, 9 insertions(+), 8 deletions(-)
delete mode 100644 arithmetic_analysis/junk.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 1b802564f939..d81e4ec1ee83 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -5,7 +5,6 @@
* [In Static Equilibrium](arithmetic_analysis/in_static_equilibrium.py)
* [Intersection](arithmetic_analysis/intersection.py)
* [Jacobi Iteration Method](arithmetic_analysis/jacobi_iteration_method.py)
- * [Junk](arithmetic_analysis/junk.py)
* [Lu Decomposition](arithmetic_analysis/lu_decomposition.py)
* [Newton Forward Interpolation](arithmetic_analysis/newton_forward_interpolation.py)
* [Newton Method](arithmetic_analysis/newton_method.py)
diff --git a/arithmetic_analysis/junk.py b/arithmetic_analysis/junk.py
deleted file mode 100644
index e69de29bb2d1..000000000000
diff --git a/computer_vision/haralick_descriptors.py b/computer_vision/haralick_descriptors.py
index 1a86d84ea14b..413cea304f6c 100644
--- a/computer_vision/haralick_descriptors.py
+++ b/computer_vision/haralick_descriptors.py
@@ -100,7 +100,9 @@ def binarize(image: np.ndarray, threshold: float = 127.0) -> np.ndarray:
return np.where(image > threshold, 1, 0)
-def transform(image: np.ndarray, kind: str, kernel: np.ndarray = None) -> np.ndarray:
+def transform(
+ image: np.ndarray, kind: str, kernel: np.ndarray | None = None
+) -> np.ndarray:
"""
Simple image transformation using one of two available filter functions:
Erosion and Dilation.
@@ -154,7 +156,7 @@ def transform(image: np.ndarray, kind: str, kernel: np.ndarray = None) -> np.nda
return transformed
-def opening_filter(image: np.ndarray, kernel: np.ndarray = None) -> np.ndarray:
+def opening_filter(image: np.ndarray, kernel: np.ndarray | None = None) -> np.ndarray:
"""
Opening filter, defined as the sequence of
erosion and then a dilation filter on the same image.
@@ -172,7 +174,7 @@ def opening_filter(image: np.ndarray, kernel: np.ndarray = None) -> np.ndarray:
return transform(transform(image, "dilation", kernel), "erosion", kernel)
-def closing_filter(image: np.ndarray, kernel: np.ndarray = None) -> np.ndarray:
+def closing_filter(image: np.ndarray, kernel: np.ndarray | None = None) -> np.ndarray:
"""
     Closing filter, defined as the sequence of
     dilation and then an erosion filter on the same image.
diff --git a/conversions/convert_number_to_words.py b/conversions/convert_number_to_words.py
index 0e4405319f1f..0c428928b31d 100644
--- a/conversions/convert_number_to_words.py
+++ b/conversions/convert_number_to_words.py
@@ -54,7 +54,7 @@ def max_value(cls, system: str) -> int:
class NumberWords(Enum):
- ONES: ClassVar = {
+ ONES: ClassVar[dict[int, str]] = {
0: "",
1: "one",
2: "two",
@@ -67,7 +67,7 @@ class NumberWords(Enum):
9: "nine",
}
- TEENS: ClassVar = {
+ TEENS: ClassVar[dict[int, str]] = {
0: "ten",
1: "eleven",
2: "twelve",
@@ -80,7 +80,7 @@ class NumberWords(Enum):
9: "nineteen",
}
- TENS: ClassVar = {
+ TENS: ClassVar[dict[int, str]] = {
2: "twenty",
3: "thirty",
4: "forty",
diff --git a/graphs/tarjans_scc.py b/graphs/tarjans_scc.py
index 30f8ca8a204f..dfd2e52704d5 100644
--- a/graphs/tarjans_scc.py
+++ b/graphs/tarjans_scc.py
@@ -77,7 +77,7 @@ def create_graph(n, edges):
n_vertices = 7
source = [0, 0, 1, 2, 3, 3, 4, 4, 6]
target = [1, 3, 2, 0, 1, 4, 5, 6, 5]
- edges = [(u, v) for u, v in zip(source, target)]
+ edges = list(zip(source, target))
g = create_graph(n_vertices, edges)
assert [[5], [6], [4], [3, 2, 1, 0]] == tarjan(g)
From dc50add8a78ebf34bc7bb050c1a0e61d207b9544 Mon Sep 17 00:00:00 2001
From: Rohan Anand <96521078+rohan472000@users.noreply.github.com>
Date: Sat, 23 Sep 2023 14:21:36 +0530
Subject: [PATCH 198/808] Update xgboost_regressor.py (#9078)
* Update xgboost_regressor.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
machine_learning/xgboost_regressor.py | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/machine_learning/xgboost_regressor.py b/machine_learning/xgboost_regressor.py
index 023984fc1f59..a540e3ab03eb 100644
--- a/machine_learning/xgboost_regressor.py
+++ b/machine_learning/xgboost_regressor.py
@@ -27,7 +27,9 @@ def xgboost(
... 1.14300000e+03, 2.60958904e+00, 3.67800000e+01, -1.19780000e+02]]))
array([[1.1139996]], dtype=float32)
"""
- xgb = XGBRegressor(verbosity=0, random_state=42)
+ xgb = XGBRegressor(
+ verbosity=0, random_state=42, tree_method="exact", base_score=0.5
+ )
xgb.fit(features, target)
# Predict target for test data
predictions = xgb.predict(test_features)
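The two pinned parameters above trade a little speed for reproducibility:
tree_method="exact" selects the deterministic exact greedy split finder, and
base_score=0.5 fixes the global bias term that newer XGBoost releases
otherwise estimate from the training data, which would shift the doctest's
expected prediction. A minimal sketch of the resulting constructor call
(parameter semantics as documented by xgboost):

    from xgboost import XGBRegressor

    model = XGBRegressor(
        verbosity=0,          # silence training logs
        random_state=42,      # fix the RNG seed
        tree_method="exact",  # deterministic exact split enumeration
        base_score=0.5,       # fixed initial prediction (global bias)
    )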
From b203150ac481743a6d8c1ef01091712a54dfbf6c Mon Sep 17 00:00:00 2001
From: omahs <73983677+omahs@users.noreply.github.com>
Date: Sat, 23 Sep 2023 10:53:09 +0200
Subject: [PATCH 199/808] Fix typos (#9076)
* fix typo
* fix typo
* fix typos
* fix typo
---
cellular_automata/langtons_ant.py | 2 +-
compression/README.md | 4 ++--
hashes/README.md | 4 ++--
sorts/README.md | 2 +-
4 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/cellular_automata/langtons_ant.py b/cellular_automata/langtons_ant.py
index 983c626546ad..9847c50a5c3e 100644
--- a/cellular_automata/langtons_ant.py
+++ b/cellular_automata/langtons_ant.py
@@ -30,7 +30,7 @@ def __init__(self, width: int, height: int) -> None:
self.board = [[True] * width for _ in range(height)]
self.ant_position: tuple[int, int] = (width // 2, height // 2)
- # Initially pointing left (similar to the the wikipedia image)
+ # Initially pointing left (similar to the wikipedia image)
# (0 = 0° | 1 = 90° | 2 = 180 ° | 3 = 270°)
self.ant_direction: int = 3
diff --git a/compression/README.md b/compression/README.md
index cf54ea986175..bad7ae1a2f76 100644
--- a/compression/README.md
+++ b/compression/README.md
@@ -1,9 +1,9 @@
# Compression
Data compression is everywhere, you need it to store data without taking too much space.
-Either the compression lose some data (then we talk about lossy compression, such as .jpg) or it does not (and then it is lossless compression, such as .png)
+Either the compression loses some data (then we talk about lossy compression, such as .jpg) or it does not (and then it is lossless compression, such as .png)
-Lossless compression is mainly used for archive purpose as it allow storing data without losing information about the file archived. On the other hand, lossy compression is used for transfer of file where quality isn't necessarily what is required (i.e: images on Twitter).
+Lossless compression is mainly used for archive purpose as it allows storing data without losing information about the file archived. On the other hand, lossy compression is used for transfer of file where quality isn't necessarily what is required (i.e: images on Twitter).
*
*
diff --git a/hashes/README.md b/hashes/README.md
index 6df9a2fb6360..0237260eaa67 100644
--- a/hashes/README.md
+++ b/hashes/README.md
@@ -7,11 +7,11 @@ Unlike encryption, which is intended to protect data in transit, hashing is inte
This is one of the first algorithms that has gained widespread acceptance. MD5 is hashing algorithm made by Ray Rivest that is known to suffer vulnerabilities. It was created in 1992 as the successor to MD4. Currently MD6 is in the works, but as of 2009 Rivest had removed it from NIST consideration for SHA-3.
### SHA
-SHA stands for Security Hashing Algorithm and it’s probably best known as the hashing algorithm used in most SSL/TLS cipher suites. A cipher suite is a collection of ciphers and algorithms that are used for SSL/TLS connections. SHA handles the hashing aspects. SHA-1, as we mentioned earlier, is now deprecated. SHA-2 is now mandatory. SHA-2 is sometimes known has SHA-256, though variants with longer bit lengths are also available.
+SHA stands for Security Hashing Algorithm and it’s probably best known as the hashing algorithm used in most SSL/TLS cipher suites. A cipher suite is a collection of ciphers and algorithms that are used for SSL/TLS connections. SHA handles the hashing aspects. SHA-1, as we mentioned earlier, is now deprecated. SHA-2 is now mandatory. SHA-2 is sometimes known as SHA-256, though variants with longer bit lengths are also available.
### SHA256
SHA 256 is a member of the SHA 2 algorithm family, under which SHA stands for Secure Hash Algorithm. It was a collaborative effort between both the NSA and NIST to implement a successor to the SHA 1 family, which was beginning to lose potency against brute force attacks. It was published in 2001.
The importance of the 256 in the name refers to the final hash digest value, i.e. the hash value will remain 256 bits regardless of the size of the plaintext/cleartext. Other algorithms in the SHA family are similar to SHA 256 in some ways.
### Luhn
-The Luhn algorithm, also renowned as the modulus 10 or mod 10 algorithm, is a straightforward checksum formula used to validate a wide range of identification numbers, including credit card numbers, IMEI numbers, and Canadian Social Insurance Numbers. A community of mathematicians developed the LUHN formula in the late 1960s. Companies offering credit cards quickly followed suit. Since the algorithm is in the public interest, anyone can use it. The algorithm is used by most credit cards and many government identification numbers as a simple method of differentiating valid figures from mistyped or otherwise incorrect numbers. It was created to guard against unintentional errors, not malicious attacks.
\ No newline at end of file
+The Luhn algorithm, also known as the modulus 10 or mod 10 algorithm, is a straightforward checksum formula used to validate a wide range of identification numbers, including credit card numbers, IMEI numbers, and Canadian Social Insurance Numbers. IBM scientist Hans Peter Luhn developed the formula, and companies offering credit cards quickly adopted it. Since the algorithm is in the public domain, anyone can use it. The algorithm is used by most credit cards and many government identification numbers as a simple method of differentiating valid figures from mistyped or otherwise incorrect numbers. It was created to guard against unintentional errors, not malicious attacks.
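As a concrete illustration of the mod 10 check described above, here is a minimal sketch in Python (the function name and the well-known test number are illustrative, not part of this repository):

def luhn_is_valid(number: str) -> bool:
    # Double every second digit from the right; if doubling yields a
    # two-digit number, subtract 9, then test the total against mod 10.
    total = 0
    for i, digit in enumerate(int(d) for d in reversed(number)):
        if i % 2 == 1:
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

print(luhn_is_valid("79927398713"))  # True for this standard test number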
diff --git a/sorts/README.md b/sorts/README.md
index ceb0207c2be4..f24427d582e7 100644
--- a/sorts/README.md
+++ b/sorts/README.md
@@ -4,7 +4,7 @@ is specified by the sorting algorithm. The most typical orders are lexical or nu
of sorting lies in the fact that, if data is stored in a sorted manner, data searching can be highly optimised.
Another use for sorting is to represent data in a more readable manner.
-This section contains a lot of important algorithms that helps us to use sorting algorithms in various scenarios.
+This section contains many important algorithms that help us apply sorting in various scenarios.
## References
*
*
From 53a51b3529ad5f985e6f65b5b3a4e155af1d2d63 Mon Sep 17 00:00:00 2001
From: Chris O <46587501+ChrisO345@users.noreply.github.com>
Date: Sun, 24 Sep 2023 19:09:32 +1300
Subject: [PATCH 200/808] Rewrite of base32.py algorithm (#9068)
* rewrite of base32.py
* changed maps to list comprehension
* Apply suggestions from code review
Co-authored-by: Tianyi Zheng
---------
Co-authored-by: Tianyi Zheng
---
ciphers/base32.py | 51 +++++++++++++++++++++++++----------------------
1 file changed, 27 insertions(+), 24 deletions(-)
diff --git a/ciphers/base32.py b/ciphers/base32.py
index fee53ccaf0c4..1924d1e185d7 100644
--- a/ciphers/base32.py
+++ b/ciphers/base32.py
@@ -1,42 +1,45 @@
-import base64
+"""
+Base32 encoding and decoding
+https://en.wikipedia.org/wiki/Base32
+"""
+B32_CHARSET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"
-def base32_encode(string: str) -> bytes:
+
+def base32_encode(data: bytes) -> bytes:
"""
- Encodes a given string to base32, returning a bytes-like object
- >>> base32_encode("Hello World!")
+ >>> base32_encode(b"Hello World!")
b'JBSWY3DPEBLW64TMMQQQ===='
- >>> base32_encode("123456")
+ >>> base32_encode(b"123456")
b'GEZDGNBVGY======'
- >>> base32_encode("some long complex string")
+ >>> base32_encode(b"some long complex string")
b'ONXW2ZJANRXW4ZZAMNXW24DMMV4CA43UOJUW4ZY='
"""
-
- # encoded the input (we need a bytes like object)
- # then, b32encoded the bytes-like object
- return base64.b32encode(string.encode("utf-8"))
+ binary_data = "".join(bin(ord(d))[2:].zfill(8) for d in data.decode("utf-8"))
+ binary_data = binary_data.ljust(5 * ((len(binary_data) // 5) + 1), "0")
+ b32_chunks = map("".join, zip(*[iter(binary_data)] * 5))
+ b32_result = "".join(B32_CHARSET[int(chunk, 2)] for chunk in b32_chunks)
+ return bytes(b32_result.ljust(8 * ((len(b32_result) // 8) + 1), "="), "utf-8")
-def base32_decode(encoded_bytes: bytes) -> str:
+def base32_decode(data: bytes) -> bytes:
"""
- Decodes a given bytes-like object to a string, returning a string
>>> base32_decode(b'JBSWY3DPEBLW64TMMQQQ====')
- 'Hello World!'
+ b'Hello World!'
>>> base32_decode(b'GEZDGNBVGY======')
- '123456'
+ b'123456'
>>> base32_decode(b'ONXW2ZJANRXW4ZZAMNXW24DMMV4CA43UOJUW4ZY=')
- 'some long complex string'
+ b'some long complex string'
"""
-
- # decode the bytes from base32
- # then, decode the bytes-like object to return as a string
- return base64.b32decode(encoded_bytes).decode("utf-8")
+ binary_chunks = "".join(
+ bin(B32_CHARSET.index(_d))[2:].zfill(5)
+ for _d in data.decode("utf-8").strip("=")
+ )
+ binary_data = list(map("".join, zip(*[iter(binary_chunks)] * 8)))
+ return bytes("".join([chr(int(_d, 2)) for _d in binary_data]), "utf-8")
if __name__ == "__main__":
- test = "Hello World!"
- encoded = base32_encode(test)
- print(encoded)
+ import doctest
- decoded = base32_decode(encoded)
- print(decoded)
+ doctest.testmod()
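For the inputs exercised in the doctests, the rewrite can be sanity-checked against the standard library's base64 module, which the previous implementation simply wrapped. A minimal sketch, assuming base32_encode and base32_decode from above are in scope:

import base64

for sample in (b"Hello World!", b"123456", b"some long complex string"):
    assert base32_encode(sample) == base64.b32encode(sample)
    assert base32_decode(base32_encode(sample)) == sample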
From 708d9061413a5c049d63b97b08540aa4867d5523 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Sun, 24 Sep 2023 12:04:47 +0530
Subject: [PATCH 201/808] [pre-commit.ci] pre-commit autoupdate (#9067)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.288 → v0.0.290](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.288...v0.0.290)
* Update .pre-commit-config.yaml
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
.pre-commit-config.yaml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 722b408ee9e9..809b841d0ea3 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,7 @@ repos:
- id: auto-walrus
- repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.0.288
+ rev: v0.0.291
hooks:
- id: ruff
From 882fb2f3c972e67303dd65873f05b8f3d58724e1 Mon Sep 17 00:00:00 2001
From: Chris O <46587501+ChrisO345@users.noreply.github.com>
Date: Sun, 24 Sep 2023 20:36:06 +1300
Subject: [PATCH 202/808] Rewrite of base85.py algorithm (#9069)
* rewrite of base85.py
* changed maps to list comprehension
* Apply suggestions from code review
Co-authored-by: Tianyi Zheng
---------
Co-authored-by: Tianyi Zheng
---
ciphers/base85.py | 57 ++++++++++++++++++++++++++++++++++-------------
1 file changed, 41 insertions(+), 16 deletions(-)
diff --git a/ciphers/base85.py b/ciphers/base85.py
index afd1aff79d11..f0228e5052dd 100644
--- a/ciphers/base85.py
+++ b/ciphers/base85.py
@@ -1,30 +1,55 @@
-import base64
+"""
+Base85 (Ascii85) encoding and decoding
+https://en.wikipedia.org/wiki/Ascii85
+"""
-def base85_encode(string: str) -> bytes:
+
+def _base10_to_85(d: int) -> str:
+ return "".join(chr(d % 85 + 33)) + _base10_to_85(d // 85) if d > 0 else ""
+
+
+def _base85_to_10(digits: list) -> int:
+ return sum(char * 85**i for i, char in enumerate(reversed(digits)))
+
+
+def ascii85_encode(data: bytes) -> bytes:
"""
- >>> base85_encode("")
+ >>> ascii85_encode(b"")
b''
- >>> base85_encode("12345")
+ >>> ascii85_encode(b"12345")
b'0etOA2#'
- >>> base85_encode("base 85")
+ >>> ascii85_encode(b"base 85")
b'@UX=h+?24'
"""
- # encoded the input to a bytes-like object and then a85encode that
- return base64.a85encode(string.encode("utf-8"))
+ binary_data = "".join(bin(ord(d))[2:].zfill(8) for d in data.decode("utf-8"))
+ null_values = (32 * ((len(binary_data) // 32) + 1) - len(binary_data)) // 8
+ binary_data = binary_data.ljust(32 * ((len(binary_data) // 32) + 1), "0")
+ b85_chunks = [int(_s, 2) for _s in map("".join, zip(*[iter(binary_data)] * 32))]
+ result = "".join(_base10_to_85(chunk)[::-1] for chunk in b85_chunks)
+ return bytes(result[:-null_values] if null_values % 4 != 0 else result, "utf-8")
-def base85_decode(a85encoded: bytes) -> str:
+def ascii85_decode(data: bytes) -> bytes:
"""
- >>> base85_decode(b"")
- ''
- >>> base85_decode(b"0etOA2#")
- '12345'
- >>> base85_decode(b"@UX=h+?24")
- 'base 85'
+ >>> ascii85_decode(b"")
+ b''
+ >>> ascii85_decode(b"0etOA2#")
+ b'12345'
+ >>> ascii85_decode(b"@UX=h+?24")
+ b'base 85'
"""
- # a85decode the input into bytes and decode that into a human readable string
- return base64.a85decode(a85encoded).decode("utf-8")
+ null_values = 5 * ((len(data) // 5) + 1) - len(data)
+ binary_data = data.decode("utf-8") + "u" * null_values
+ b85_chunks = map("".join, zip(*[iter(binary_data)] * 5))
+ b85_segments = [[ord(_s) - 33 for _s in chunk] for chunk in b85_chunks]
+ results = [bin(_base85_to_10(chunk))[2::].zfill(32) for chunk in b85_segments]
+ char_chunks = [
+ [chr(int(_s, 2)) for _s in map("".join, zip(*[iter(r)] * 8))] for r in results
+ ]
+ result = "".join("".join(char) for char in char_chunks)
+ offset = int(null_values % 5 == 0)
+ return bytes(result[: offset - null_values], "utf-8")
if __name__ == "__main__":
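The same kind of cross-check applies here, since base64.a85encode is the stdlib routine the old code delegated to. A minimal sketch, assuming ascii85_encode and ascii85_decode from above are in scope:

import base64

for sample in (b"", b"12345", b"base 85"):
    assert ascii85_encode(sample) == base64.a85encode(sample)
    assert ascii85_decode(ascii85_encode(sample)) == sample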
From 211247ef82fd54540e4cb832fbbb612ca5845700 Mon Sep 17 00:00:00 2001
From: Amir Lavasani
Date: Mon, 25 Sep 2023 00:38:51 +0330
Subject: [PATCH 203/808] Add MFCC Feature Extraction Algorithm (#9057)
* Add MFCC feature extraction to machine learning
* Add standalone usage in comments
* Apply suggestions from code review
Co-authored-by: Christian Clauss
* Delete empty junk file (#9062)
* updating DIRECTORY.md
* updating DIRECTORY.md
* Delete empty junk file
* updating DIRECTORY.md
* Fix ruff errors
* Fix more ruff errors
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
* [main] Fix typo due to auto review change
* Add doctests for all functions
* Add MFCC feature extraction to machine learning
* Add standalone usage in comments
* Apply suggestions from code review
Co-authored-by: Christian Clauss
* [main] Fix typo due to auto review change
* Add doctests for all functions
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix some pre-commit issues
* Update review issues
* Remove types from docstring
* Rename dct
* Add mfcc docstring
* Add typing to several functions
* Apply suggestions from code review
* Update mfcc.py
* get_filter_points() -> tuple[np.ndarray, np.ndarray]:
* algorithm
---------
Co-authored-by: Christian Clauss
Co-authored-by: Tianyi Zheng
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
machine_learning/mfcc.py | 479 +++++++++++++++++++++++++++++++++++++++
1 file changed, 479 insertions(+)
create mode 100644 machine_learning/mfcc.py
diff --git a/machine_learning/mfcc.py b/machine_learning/mfcc.py
new file mode 100644
index 000000000000..7ce8ceb50ff2
--- /dev/null
+++ b/machine_learning/mfcc.py
@@ -0,0 +1,479 @@
+"""
+Mel Frequency Cepstral Coefficients (MFCC) Calculation
+
+MFCC is an algorithm widely used in audio and speech processing to represent the
+short-term power spectrum of a sound signal in a more compact and
+discriminative way. It is particularly popular in speech and audio processing
+tasks such as speech recognition and speaker identification.
+
+How Mel Frequency Cepstral Coefficients are Calculated:
+1. Preprocessing:
+ - Load an audio signal and normalize it to ensure that the values fall
+ within a specific range (e.g., between -1 and 1).
+ - Frame the audio signal into overlapping, fixed-length segments, typically
+ using a technique like windowing to reduce spectral leakage.
+
+2. Fourier Transform:
+ - Apply a Fast Fourier Transform (FFT) to each audio frame to convert it
+ from the time domain to the frequency domain. This results in a
+ representation of the audio frame as a sequence of frequency components.
+
+3. Power Spectrum:
+ - Calculate the power spectrum by taking the squared magnitude of each
+ frequency component obtained from the FFT. This step measures the energy
+ distribution across different frequency bands.
+
+4. Mel Filterbank:
+ - Apply a set of triangular filterbanks spaced in the Mel frequency scale
+ to the power spectrum. These filters mimic the human auditory system's
+ frequency response. Each filterbank sums the power spectrum values within
+ its band.
+
+5. Logarithmic Compression:
+ - Take the logarithm (typically base 10) of the filterbank values to
+ compress the dynamic range. This step mimics the logarithmic response of
+ the human ear to sound intensity.
+
+6. Discrete Cosine Transform (DCT):
+ - Apply the Discrete Cosine Transform to the log filterbank energies to
+ obtain the MFCC coefficients. This transformation helps decorrelate the
+ filterbank energies and captures the most important features of the audio
+ signal.
+
+7. Feature Extraction:
+ - Select a subset of the DCT coefficients to form the feature vector.
+ Often, the first few coefficients (e.g., 12-13) are used for most
+ applications.
+
+References:
+- Mel-Frequency Cepstral Coefficients (MFCCs):
+ https://en.wikipedia.org/wiki/Mel-frequency_cepstrum
+- Speech and Language Processing by Daniel Jurafsky & James H. Martin:
+ https://web.stanford.edu/~jurafsky/slp3/
+- Mel Frequency Cepstral Coefficient (MFCC) tutorial
+ http://practicalcryptography.com/miscellaneous/machine-learning
+ /guide-mel-frequency-cepstral-coefficients-mfccs/
+
+Author: Amir Lavasani
+"""
+
+
+import logging
+
+import numpy as np
+import scipy.fftpack as fft
+from scipy.signal import get_window
+
+logging.basicConfig(filename=f"{__file__}.log", level=logging.INFO)
+
+
+def mfcc(
+ audio: np.ndarray,
+ sample_rate: int,
+ ftt_size: int = 1024,
+ hop_length: int = 20,
+ mel_filter_num: int = 10,
+ dct_filter_num: int = 40,
+) -> np.ndarray:
+ """
+ Calculate Mel Frequency Cepstral Coefficients (MFCCs) from an audio signal.
+
+ Args:
+ audio: The input audio signal.
+ sample_rate: The sample rate of the audio signal (in Hz).
+ ftt_size: The size of the FFT window (default is 1024).
+ hop_length: The hop length for frame creation (default is 20ms).
+ mel_filter_num: The number of Mel filters (default is 10).
+ dct_filter_num: The number of DCT filters (default is 40).
+
+ Returns:
+ A matrix of MFCCs for the input audio.
+
+ Raises:
+ ValueError: If the input audio is empty.
+
+ Example:
+ >>> sample_rate = 44100 # Sample rate of 44.1 kHz
+ >>> duration = 2.0 # Duration of 2 seconds
+ >>> t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)
+ >>> audio = 0.5 * np.sin(2 * np.pi * 440.0 * t) # Generate a 440 Hz sine wave
+ >>> mfccs = mfcc(audio, sample_rate)
+ >>> mfccs.shape
+ (40, 101)
+ """
+ logging.info(f"Sample rate: {sample_rate}Hz")
+ logging.info(f"Audio duration: {len(audio) / sample_rate}s")
+ logging.info(f"Audio min: {np.min(audio)}")
+ logging.info(f"Audio max: {np.max(audio)}")
+
+ # normalize audio
+ audio_normalized = normalize(audio)
+
+ logging.info(f"Normalized audio min: {np.min(audio_normalized)}")
+ logging.info(f"Normalized audio max: {np.max(audio_normalized)}")
+
+ # frame audio into overlapping frames
+ audio_framed = audio_frames(
+ audio_normalized, sample_rate, ftt_size=ftt_size, hop_length=hop_length
+ )
+
+ logging.info(f"Framed audio shape: {audio_framed.shape}")
+ logging.info(f"First frame: {audio_framed[0]}")
+
+ # convert to frequency domain
+ # For simplicity we will choose the Hanning window.
+ window = get_window("hann", ftt_size, fftbins=True)
+ audio_windowed = audio_framed * window
+
+ logging.info(f"Windowed audio shape: {audio_windowed.shape}")
+ logging.info(f"First frame: {audio_windowed[0]}")
+
+ audio_fft = calculate_fft(audio_windowed, ftt_size)
+ logging.info(f"fft audio shape: {audio_fft.shape}")
+ logging.info(f"First frame: {audio_fft[0]}")
+
+ audio_power = calculate_signal_power(audio_fft)
+ logging.info(f"power audio shape: {audio_power.shape}")
+ logging.info(f"First frame: {audio_power[0]}")
+
+ filters = mel_spaced_filterbank(sample_rate, mel_filter_num, ftt_size)
+ logging.info(f"filters shape: {filters.shape}")
+
+ audio_filtered = np.dot(filters, np.transpose(audio_power))
+ audio_log = 10.0 * np.log10(audio_filtered)
+ logging.info(f"audio_log shape: {audio_log.shape}")
+
+ dct_filters = discrete_cosine_transform(dct_filter_num, mel_filter_num)
+ cepstral_coefficients = np.dot(dct_filters, audio_log)
+
+ logging.info(f"cepstral_coefficients shape: {cepstral_coefficients.shape}")
+ return cepstral_coefficients
+
+
+def normalize(audio: np.ndarray) -> np.ndarray:
+ """
+ Normalize an audio signal by scaling it to have values between -1 and 1.
+
+ Args:
+ audio: The input audio signal.
+
+ Returns:
+ The normalized audio signal.
+
+ Examples:
+ >>> audio = np.array([1, 2, 3, 4, 5])
+ >>> normalized_audio = normalize(audio)
+ >>> np.max(normalized_audio)
+ 1.0
+ >>> np.min(normalized_audio)
+ 0.2
+ """
+ # Divide the entire audio signal by the maximum absolute value
+ return audio / np.max(np.abs(audio))
+
+
+def audio_frames(
+ audio: np.ndarray,
+ sample_rate: int,
+ hop_length: int = 20,
+ ftt_size: int = 1024,
+) -> np.ndarray:
+ """
+ Split an audio signal into overlapping frames.
+
+ Args:
+ audio: The input audio signal.
+ sample_rate: The sample rate of the audio signal.
+ hop_length: The length of the hopping (default is 20ms).
+ ftt_size: The size of the FFT window (default is 1024).
+
+ Returns:
+ An array of overlapping frames.
+
+ Examples:
+ >>> audio = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]*1000)
+ >>> sample_rate = 8000
+ >>> frames = audio_frames(audio, sample_rate, hop_length=10, ftt_size=512)
+ >>> frames.shape
+ (126, 512)
+ """
+
+ hop_size = np.round(sample_rate * hop_length / 1000).astype(int)
+
+ # Pad the audio signal to handle edge cases
+ audio = np.pad(audio, int(ftt_size / 2), mode="reflect")
+
+ # Calculate the number of frames
+ frame_count = int((len(audio) - ftt_size) / hop_size) + 1
+
+ # Initialize an array to store the frames
+ frames = np.zeros((frame_count, ftt_size))
+
+ # Split the audio signal into frames
+ for n in range(frame_count):
+ frames[n] = audio[n * hop_size : n * hop_size + ftt_size]
+
+ return frames
+
+
+def calculate_fft(audio_windowed: np.ndarray, ftt_size: int = 1024) -> np.ndarray:
+ """
+ Calculate the Fast Fourier Transform (FFT) of windowed audio data.
+
+ Args:
+ audio_windowed: The windowed audio signal.
+ ftt_size: The size of the FFT (default is 1024).
+
+ Returns:
+ The FFT of the audio data.
+
+ Examples:
+ >>> audio_windowed = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
+ >>> audio_fft = calculate_fft(audio_windowed, ftt_size=4)
+ >>> np.allclose(audio_fft[0], np.array([6.0+0.j, -1.5+0.8660254j, -1.5-0.8660254j]))
+ True
+ """
+ # Transpose the audio data to have time in rows and channels in columns
+ audio_transposed = np.transpose(audio_windowed)
+
+ # Initialize an array to store the FFT results
+ audio_fft = np.empty(
+ (int(1 + ftt_size // 2), audio_transposed.shape[1]),
+ dtype=np.complex64,
+ order="F",
+ )
+
+ # Compute FFT for each channel
+ for n in range(audio_fft.shape[1]):
+ audio_fft[:, n] = fft.fft(audio_transposed[:, n], axis=0)[: audio_fft.shape[0]]
+
+ # Transpose the FFT results back to the original shape
+ return np.transpose(audio_fft)
+
+
+def calculate_signal_power(audio_fft: np.ndarray) -> np.ndarray:
+ """
+ Calculate the power of the audio signal from its FFT.
+
+ Args:
+ audio_fft: The FFT of the audio signal.
+
+ Returns:
+ The power of the audio signal.
+
+ Examples:
+ >>> audio_fft = np.array([1+2j, 2+3j, 3+4j, 4+5j])
+ >>> power = calculate_signal_power(audio_fft)
+ >>> np.allclose(power, np.array([5, 13, 25, 41]))
+ True
+ """
+ # Calculate the power by squaring the absolute values of the FFT coefficients
+ return np.square(np.abs(audio_fft))
+
+
+def freq_to_mel(freq: float) -> float:
+ """
+ Convert a frequency in Hertz to the mel scale.
+
+ Args:
+ freq: The frequency in Hertz.
+
+ Returns:
+ The frequency in mel scale.
+
+ Examples:
+ >>> round(freq_to_mel(1000), 2)
+ 999.99
+ """
+ # Use the formula to convert frequency to the mel scale
+ return 2595.0 * np.log10(1.0 + freq / 700.0)
+
+
+def mel_to_freq(mels: float) -> float:
+ """
+ Convert a frequency in the mel scale to Hertz.
+
+ Args:
+ mels: The frequency in mel scale.
+
+ Returns:
+ The frequency in Hertz.
+
+ Examples:
+ >>> round(mel_to_freq(999.99), 2)
+ 1000.01
+ """
+ # Use the formula to convert mel scale to frequency
+ return 700.0 * (10.0 ** (mels / 2595.0) - 1.0)
+
+
+def mel_spaced_filterbank(
+ sample_rate: int, mel_filter_num: int = 10, ftt_size: int = 1024
+) -> np.ndarray:
+ """
+ Create a Mel-spaced filter bank for audio processing.
+
+ Args:
+ sample_rate: The sample rate of the audio.
+ mel_filter_num: The number of mel filters (default is 10).
+ ftt_size: The size of the FFT (default is 1024).
+
+ Returns:
+ Mel-spaced filter bank.
+
+ Examples:
+ >>> round(mel_spaced_filterbank(8000, 10, 1024)[0][1], 10)
+ 0.0004603981
+ """
+ freq_min = 0
+ freq_high = sample_rate // 2
+
+ logging.info(f"Minimum frequency: {freq_min}")
+ logging.info(f"Maximum frequency: {freq_high}")
+
+ # Calculate filter points and mel frequencies
+ filter_points, mel_freqs = get_filter_points(
+ sample_rate,
+ freq_min,
+ freq_high,
+ mel_filter_num,
+ ftt_size,
+ )
+
+ filters = get_filters(filter_points, ftt_size)
+
+ # normalize filters
+ # taken from the librosa library
+ enorm = 2.0 / (mel_freqs[2 : mel_filter_num + 2] - mel_freqs[:mel_filter_num])
+ return filters * enorm[:, np.newaxis]
+
+
+def get_filters(filter_points: np.ndarray, ftt_size: int) -> np.ndarray:
+ """
+ Generate filters for audio processing.
+
+ Args:
+ filter_points: A list of filter points.
+ ftt_size: The size of the FFT.
+
+ Returns:
+ A matrix of filters.
+
+ Examples:
+ >>> get_filters(np.array([0, 20, 51, 95, 161, 256], dtype=int), 512).shape
+ (4, 257)
+ """
+ num_filters = len(filter_points) - 2
+ filters = np.zeros((num_filters, int(ftt_size / 2) + 1))
+
+ for n in range(num_filters):
+ start = filter_points[n]
+ mid = filter_points[n + 1]
+ end = filter_points[n + 2]
+
+ # Linearly increase values from 0 to 1
+ filters[n, start:mid] = np.linspace(0, 1, mid - start)
+
+ # Linearly decrease values from 1 to 0
+ filters[n, mid:end] = np.linspace(1, 0, end - mid)
+
+ return filters
+
+
+def get_filter_points(
+ sample_rate: int,
+ freq_min: int,
+ freq_high: int,
+ mel_filter_num: int = 10,
+ ftt_size: int = 1024,
+) -> tuple[np.ndarray, np.ndarray]:
+ """
+ Calculate the filter points and frequencies for mel frequency filters.
+
+ Args:
+ sample_rate: The sample rate of the audio.
+ freq_min: The minimum frequency in Hertz.
+ freq_high: The maximum frequency in Hertz.
+ mel_filter_num: The number of mel filters (default is 10).
+ ftt_size: The size of the FFT (default is 1024).
+
+ Returns:
+ Filter points and corresponding frequencies.
+
+ Examples:
+ >>> filter_points = get_filter_points(8000, 0, 4000, mel_filter_num=4, ftt_size=512)
+ >>> filter_points[0]
+ array([ 0, 20, 51, 95, 161, 256])
+ >>> filter_points[1]
+ array([ 0. , 324.46707094, 799.33254207, 1494.30973963,
+ 2511.42581671, 4000. ])
+ """
+ # Convert minimum and maximum frequencies to mel scale
+ fmin_mel = freq_to_mel(freq_min)
+ fmax_mel = freq_to_mel(freq_high)
+
+ logging.info(f"MEL min: {fmin_mel}")
+ logging.info(f"MEL max: {fmax_mel}")
+
+ # Generate equally spaced mel frequencies
+ mels = np.linspace(fmin_mel, fmax_mel, num=mel_filter_num + 2)
+
+ # Convert mel frequencies back to Hertz
+ freqs = mel_to_freq(mels)
+
+ # Calculate filter points as integer values
+ filter_points = np.floor((ftt_size + 1) / sample_rate * freqs).astype(int)
+
+ return filter_points, freqs
+
+
+def discrete_cosine_transform(dct_filter_num: int, filter_num: int) -> np.ndarray:
+ """
+ Compute the Discrete Cosine Transform (DCT) basis matrix.
+
+ Args:
+ dct_filter_num: The number of DCT filters to generate.
+ filter_num: The number of the fbank filters.
+
+ Returns:
+ The DCT basis matrix.
+
+ Examples:
+ >>> round(discrete_cosine_transform(3, 5)[0][0], 5)
+ 0.44721
+ """
+ basis = np.empty((dct_filter_num, filter_num))
+ basis[0, :] = 1.0 / np.sqrt(filter_num)
+
+ samples = np.arange(1, 2 * filter_num, 2) * np.pi / (2.0 * filter_num)
+
+ for i in range(1, dct_filter_num):
+ basis[i, :] = np.cos(i * samples) * np.sqrt(2.0 / filter_num)
+
+ return basis
+
+
+def example(wav_file_path: str = "./path-to-file/sample.wav") -> np.ndarray:
+ """
+ Example function to calculate Mel Frequency Cepstral Coefficients
+ (MFCCs) from an audio file.
+
+ Args:
+ wav_file_path: The path to the WAV audio file.
+
+ Returns:
+ np.ndarray: The computed MFCCs for the audio.
+ """
+ from scipy.io import wavfile
+
+ # Load the audio from the WAV file
+ sample_rate, audio = wavfile.read(wav_file_path)
+
+ # Calculate MFCCs
+ return mfcc(audio, sample_rate)
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
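The mel conversions in freq_to_mel and mel_to_freq are exact analytical inverses, which the doctests (1000 Hz maps to roughly 999.99 mel and back) hint at. A minimal standalone sketch of that round trip, reusing the same formulas:

import math

def freq_to_mel(freq: float) -> float:
    return 2595.0 * math.log10(1.0 + freq / 700.0)

def mel_to_freq(mels: float) -> float:
    return 700.0 * (10.0 ** (mels / 2595.0) - 1.0)

# The round trip should reproduce the input up to floating-point error
for freq in (0.0, 440.0, 1000.0, 8000.0):
    assert abs(mel_to_freq(freq_to_mel(freq)) - freq) < 1e-6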
From eace4cea32b831a1683b4c431379f0cd7b9061db Mon Sep 17 00:00:00 2001
From: gudlu1925 <120262240+gudlu1925@users.noreply.github.com>
Date: Wed, 27 Sep 2023 11:14:06 +0530
Subject: [PATCH 204/808] Added Coulomb_Law (#8714)
* Create coulomb_law.py
* Update coulomb_law.py
* Update coulomb_law.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update and rename coulomb_law.py to coulombs_law.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update coulombs_law.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update coulombs_law.py
* Update coulombs_law.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update coulombs_law.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update coulombs_law.py
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Tianyi Zheng
---
physics/coulombs_law.py | 42 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 42 insertions(+)
create mode 100644 physics/coulombs_law.py
diff --git a/physics/coulombs_law.py b/physics/coulombs_law.py
new file mode 100644
index 000000000000..252e8ec0f74e
--- /dev/null
+++ b/physics/coulombs_law.py
@@ -0,0 +1,42 @@
+"""
+Coulomb's law states that the magnitude of the electrostatic force of attraction
+or repulsion between two point charges is directly proportional to the product
+of the magnitudes of charges and inversely proportional to the square of the
+distance between them.
+
+F = k * q1 * q2 / r^2
+
+k is Coulomb's constant and equals 1/(4π*ε0)
+q1 is the charge of the first body (C)
+q2 is the charge of the second body (C)
+r is the distance between the two charged bodies (m)
+
+Reference: https://en.wikipedia.org/wiki/Coulomb%27s_law
+"""
+
+
+def coulombs_law(q1: float, q2: float, radius: float) -> float:
+ """
+ Calculate the electrostatic force of attraction or repulsion
+ between two point charges
+
+ >>> coulombs_law(15.5, 20, 15)
+ 12382849136.06
+ >>> coulombs_law(1, 15, 5)
+ 5392531075.38
+ >>> coulombs_law(20, -50, 15)
+ -39944674632.44
+ >>> coulombs_law(-5, -8, 10)
+ 3595020716.92
+ >>> coulombs_law(50, 100, 50)
+ 17975103584.6
+ """
+ if radius <= 0:
+ raise ValueError("The radius is always a positive non-zero value")
+ return round(((8.9875517923 * 10**9) * q1 * q2) / (radius**2), 2)
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
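The first doctest can be reproduced by hand from the formula above: F = k * q1 * q2 / r^2 = 8.9875517923e9 * 15.5 * 20 / 15^2, which rounds to 12382849136.06. The same arithmetic in plain Python:

k = 8.9875517923e9  # Coulomb's constant in N*m^2/C^2
force = k * 15.5 * 20 / 15**2
print(round(force, 2))  # 12382849136.06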
From b2e186f4b769ae98d04f7f2408d3ac86da44c06f Mon Sep 17 00:00:00 2001
From: Okza Pradhana
Date: Wed, 27 Sep 2023 13:06:19 +0700
Subject: [PATCH 205/808] feat(maths): add function to perform calculation
(#6602)
* feat(maths): add function to perform calculation
- Add single function to calculate sum of two positive numbers
using bitwise operator
* docs: add wikipedia url as explanation
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Apply suggestions from code review
Co-authored-by: Caeden Perelli-Harris
* Update sum_of_two_positive_numbers_bitwise.py
* Update sum_of_two_positive_numbers_bitwise.py
* Update sum_of_two_positive_numbers_bitwise.py
---------
Co-authored-by: Okza Pradhana
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Tianyi Zheng
Co-authored-by: Caeden Perelli-Harris
---
maths/sum_of_two_positive_numbers_bitwise.py | 55 ++++++++++++++++++++
1 file changed, 55 insertions(+)
create mode 100644 maths/sum_of_two_positive_numbers_bitwise.py
diff --git a/maths/sum_of_two_positive_numbers_bitwise.py b/maths/sum_of_two_positive_numbers_bitwise.py
new file mode 100644
index 000000000000..70eaf6887b64
--- /dev/null
+++ b/maths/sum_of_two_positive_numbers_bitwise.py
@@ -0,0 +1,55 @@
+"""
+Calculates the sum of two non-negative integers using bitwise operators
+Wikipedia explanation: https://en.wikipedia.org/wiki/Binary_number
+"""
+
+
+def bitwise_addition_recursive(number: int, other_number: int) -> int:
+ """
+ >>> bitwise_addition_recursive(4, 5)
+ 9
+ >>> bitwise_addition_recursive(8, 9)
+ 17
+ >>> bitwise_addition_recursive(0, 4)
+ 4
+ >>> bitwise_addition_recursive(4.5, 9)
+ Traceback (most recent call last):
+ ...
+ TypeError: Both arguments MUST be integers!
+ >>> bitwise_addition_recursive('4', 9)
+ Traceback (most recent call last):
+ ...
+ TypeError: Both arguments MUST be integers!
+ >>> bitwise_addition_recursive('4.5', 9)
+ Traceback (most recent call last):
+ ...
+ TypeError: Both arguments MUST be integers!
+ >>> bitwise_addition_recursive(-1, 9)
+ Traceback (most recent call last):
+ ...
+ ValueError: Both arguments MUST be non-negative!
+ >>> bitwise_addition_recursive(1, -9)
+ Traceback (most recent call last):
+ ...
+ ValueError: Both arguments MUST be non-negative!
+ """
+
+ if not isinstance(number, int) or not isinstance(other_number, int):
+ raise TypeError("Both arguments MUST be integers!")
+
+ if number < 0 or other_number < 0:
+ raise ValueError("Both arguments MUST be non-negative!")
+
+ bitwise_sum = number ^ other_number
+ carry = number & other_number
+
+ if carry == 0:
+ return bitwise_sum
+
+ return bitwise_addition_recursive(bitwise_sum, carry << 1)
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
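The XOR/carry recurrence is easiest to see in a trace. Below is a small iterative sketch (a hypothetical helper, equivalent to the recursion above) that prints each step of computing 4 + 5:

def trace_bitwise_addition(number: int, other_number: int) -> int:
    # XOR adds without carrying; AND shifted left by one computes the
    # carry; repeat until the carry is zero.
    while other_number:
        number, other_number = number ^ other_number, (number & other_number) << 1
        print(f"sum={number:04b} carry={other_number:04b}")
    return number

print(trace_bitwise_addition(4, 5))
# sum=0001 carry=1000
# sum=1001 carry=0000
# 9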
From 84ec9414e45380a5e946d4f73b921b274ecd4be7 Mon Sep 17 00:00:00 2001
From: thor-harsh <105957576+thor-harsh@users.noreply.github.com>
Date: Wed, 27 Sep 2023 12:01:42 +0530
Subject: [PATCH 206/808] Update k_means_clust.py (#8996)
* Update k_means_clust.py
* Apply suggestions from code review
---------
Co-authored-by: Tianyi Zheng
---
machine_learning/k_means_clust.py | 23 ++++++++++-------------
1 file changed, 10 insertions(+), 13 deletions(-)
diff --git a/machine_learning/k_means_clust.py b/machine_learning/k_means_clust.py
index 7c8142aab878..d93c5addf2ee 100644
--- a/machine_learning/k_means_clust.py
+++ b/machine_learning/k_means_clust.py
@@ -11,10 +11,10 @@
- initial_centroids , initial centroid values generated by utility function(mentioned
in usage).
- maxiter , maximum number of iterations to process.
- - heterogeneity , empty list that will be filled with hetrogeneity values if passed
+ - heterogeneity , empty list that will be filled with heterogeneity values if passed
to kmeans func.
Usage:
- 1. define 'k' value, 'X' features array and 'hetrogeneity' empty list
+ 1. define 'k' value, 'X' features array and 'heterogeneity' empty list
2. create initial_centroids,
initial_centroids = get_initial_centroids(
X,
@@ -31,8 +31,8 @@
record_heterogeneity=heterogeneity,
verbose=True # whether to print logs in console or not.(default=False)
)
- 4. Plot the loss function, hetrogeneity values for every iteration saved in
- hetrogeneity list.
+ 4. Plot the loss function and heterogeneity values for every iteration saved in
+ heterogeneity list.
plot_heterogeneity(
heterogeneity,
k
@@ -198,13 +198,10 @@ def report_generator(
df: pd.DataFrame, clustering_variables: np.ndarray, fill_missing_report=None
) -> pd.DataFrame:
"""
- Function generates easy-erading clustering report. It takes 2 arguments as an input:
- DataFrame - dataframe with predicted cluester column;
- FillMissingReport - dictionary of rules how we are going to fill missing
- values of for final report generate (not included in modeling);
- in order to run the function following libraries must be imported:
- import pandas as pd
- import numpy as np
+ Generates a clustering report. This function takes 2 arguments as input:
+ df - dataframe with predicted cluster column
+ fill_missing_report - dictionary of rules on how we are going to fill in missing
+ values for final generated report (not included in modelling);
>>> data = pd.DataFrame()
>>> data['numbers'] = [1, 2, 3]
>>> data['col1'] = [0.5, 2.5, 4.5]
@@ -306,10 +303,10 @@ def report_generator(
a.columns = report.columns # rename columns to match report
report = report.drop(
report[report.Type == "count"].index
- ) # drop count values except cluster size
+ ) # drop count values except for cluster size
report = pd.concat(
[report, a, clustersize, clusterproportion], axis=0
- ) # concat report with clustert size and nan values
+ ) # concat report with cluster size and nan values
report["Mark"] = report["Features"].isin(clustering_variables)
cols = report.columns.tolist()
cols = cols[0:2] + cols[-1:] + cols[2:-1]
From 5830b29e7ecf5437ce46bcdefda88eedea693043 Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Wed, 27 Sep 2023 08:00:34 -0400
Subject: [PATCH 207/808] Fix `mypy` errors in `erosion_operation.py` (#8603)
* updating DIRECTORY.md
* Fix mypy errors in erosion_operation.py
* Rename functions to use snake case
* updating DIRECTORY.md
* updating DIRECTORY.md
* Replace raw file string with pathlib Path
* Fix function name in erosion_operation.py doctest
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
.../erosion_operation.py | 39 +++++++++++--------
1 file changed, 23 insertions(+), 16 deletions(-)
diff --git a/digital_image_processing/morphological_operations/erosion_operation.py b/digital_image_processing/morphological_operations/erosion_operation.py
index c0e1ef847237..53001da83468 100644
--- a/digital_image_processing/morphological_operations/erosion_operation.py
+++ b/digital_image_processing/morphological_operations/erosion_operation.py
@@ -1,34 +1,37 @@
+from pathlib import Path
+
import numpy as np
from PIL import Image
-def rgb2gray(rgb: np.array) -> np.array:
+def rgb_to_gray(rgb: np.ndarray) -> np.ndarray:
"""
Return gray image from rgb image
- >>> rgb2gray(np.array([[[127, 255, 0]]]))
+
+ >>> rgb_to_gray(np.array([[[127, 255, 0]]]))
array([[187.6453]])
- >>> rgb2gray(np.array([[[0, 0, 0]]]))
+ >>> rgb_to_gray(np.array([[[0, 0, 0]]]))
array([[0.]])
- >>> rgb2gray(np.array([[[2, 4, 1]]]))
+ >>> rgb_to_gray(np.array([[[2, 4, 1]]]))
array([[3.0598]])
- >>> rgb2gray(np.array([[[26, 255, 14], [5, 147, 20], [1, 200, 0]]]))
+ >>> rgb_to_gray(np.array([[[26, 255, 14], [5, 147, 20], [1, 200, 0]]]))
array([[159.0524, 90.0635, 117.6989]])
"""
r, g, b = rgb[:, :, 0], rgb[:, :, 1], rgb[:, :, 2]
return 0.2989 * r + 0.5870 * g + 0.1140 * b
-def gray2binary(gray: np.array) -> np.array:
+def gray_to_binary(gray: np.ndarray) -> np.ndarray:
"""
Return binary image from gray image
- >>> gray2binary(np.array([[127, 255, 0]]))
+ >>> gray_to_binary(np.array([[127, 255, 0]]))
array([[False, True, False]])
- >>> gray2binary(np.array([[0]]))
+ >>> gray_to_binary(np.array([[0]]))
array([[False]])
- >>> gray2binary(np.array([[26.2409, 4.9315, 1.4729]]))
+ >>> gray_to_binary(np.array([[26.2409, 4.9315, 1.4729]]))
array([[False, False, False]])
- >>> gray2binary(np.array([[26, 255, 14], [5, 147, 20], [1, 200, 0]]))
+ >>> gray_to_binary(np.array([[26, 255, 14], [5, 147, 20], [1, 200, 0]]))
array([[False, True, False],
[False, True, False],
[False, True, False]])
@@ -36,9 +39,10 @@ def gray2binary(gray: np.array) -> np.array:
return (gray > 127) & (gray <= 255)
-def erosion(image: np.array, kernel: np.array) -> np.array:
+def erosion(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
"""
Return eroded image
+
>>> erosion(np.array([[True, True, False]]), np.array([[0, 1, 0]]))
array([[False, False, False]])
>>> erosion(np.array([[True, False, False]]), np.array([[1, 1, 0]]))
@@ -62,14 +66,17 @@ def erosion(image: np.array, kernel: np.array) -> np.array:
return output
-# kernel to be applied
-structuring_element = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
-
if __name__ == "__main__":
# read original image
- image = np.array(Image.open(r"..\image_data\lena.jpg"))
+ lena_path = Path(__file__).resolve().parent / "image_data" / "lena.jpg"
+ lena = np.array(Image.open(lena_path))
+
+ # kernel to be applied
+ structuring_element = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
+
# Apply erosion operation to a binary image
- output = erosion(gray2binary(rgb2gray(image)), structuring_element)
+ output = erosion(gray_to_binary(rgb_to_gray(lena)), structuring_element)
+
# Save the output image
pil_img = Image.fromarray(output).convert("RGB")
pil_img.save("result_erosion.png")
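The weights in rgb_to_gray (0.2989, 0.5870, 0.1140) are the standard BT.601 luma coefficients, so the first doctest value can be checked with one line of arithmetic:

r, g, b = 127, 255, 0
print(round(0.2989 * r + 0.5870 * g + 0.1140 * b, 4))  # 187.6453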
From 76767d2f09d15aeff0a54cfc44652207eda2314e Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Wed, 27 Sep 2023 08:01:18 -0400
Subject: [PATCH 208/808] Consolidate the two existing kNN implementations
(#8903)
* Add type hints to k_nearest_neighbours.py
* Refactor k_nearest_neighbours.py into class
* Add documentation to k_nearest_neighbours.py
* Use heap-based priority queue for k_nearest_neighbours.py
* Delete knn_sklearn.py
* updating DIRECTORY.md
* Use optional args in k_nearest_neighbours.py for demo purposes
* Fix wrong function arg in k_nearest_neighbours.py
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 1 -
machine_learning/k_nearest_neighbours.py | 128 ++++++++++++++---------
machine_learning/knn_sklearn.py | 31 ------
3 files changed, 79 insertions(+), 81 deletions(-)
delete mode 100644 machine_learning/knn_sklearn.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index d81e4ec1ee83..902999460fe5 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -507,7 +507,6 @@
* [Gradient Descent](machine_learning/gradient_descent.py)
* [K Means Clust](machine_learning/k_means_clust.py)
* [K Nearest Neighbours](machine_learning/k_nearest_neighbours.py)
- * [Knn Sklearn](machine_learning/knn_sklearn.py)
* [Linear Discriminant Analysis](machine_learning/linear_discriminant_analysis.py)
* [Linear Regression](machine_learning/linear_regression.py)
* Local Weighted Learning
diff --git a/machine_learning/k_nearest_neighbours.py b/machine_learning/k_nearest_neighbours.py
index 2a90cfe5987a..a43757c5c20e 100644
--- a/machine_learning/k_nearest_neighbours.py
+++ b/machine_learning/k_nearest_neighbours.py
@@ -1,58 +1,88 @@
+"""
+k-Nearest Neighbours (kNN) is a simple non-parametric supervised learning
+algorithm used for classification. Given some labelled training data, a given
+point is classified using its k nearest neighbours according to some distance
+metric. The most commonly occurring label among the neighbours becomes the label
+of the given point. In effect, the label of the given point is decided by a
+majority vote.
+
+This implementation uses the commonly used Euclidean distance metric, but other
+distance metrics can also be used.
+
+Reference: https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm
+"""
+
from collections import Counter
+from heapq import nsmallest
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
-data = datasets.load_iris()
-
-X = np.array(data["data"])
-y = np.array(data["target"])
-classes = data["target_names"]
-
-X_train, X_test, y_train, y_test = train_test_split(X, y)
-
-
-def euclidean_distance(a, b):
- """
- Gives the euclidean distance between two points
- >>> euclidean_distance([0, 0], [3, 4])
- 5.0
- >>> euclidean_distance([1, 2, 3], [1, 8, 11])
- 10.0
- """
- return np.linalg.norm(np.array(a) - np.array(b))
-
-
-def classifier(train_data, train_target, classes, point, k=5):
- """
- Classifies the point using the KNN algorithm
- k closest points are found (ranked in ascending order of euclidean distance)
- Params:
- :train_data: Set of points that are classified into two or more classes
- :train_target: List of classes in the order of train_data points
- :classes: Labels of the classes
- :point: The data point that needs to be classified
-
- >>> X_train = [[0, 0], [1, 0], [0, 1], [0.5, 0.5], [3, 3], [2, 3], [3, 2]]
- >>> y_train = [0, 0, 0, 0, 1, 1, 1]
- >>> classes = ['A','B']; point = [1.2,1.2]
- >>> classifier(X_train, y_train, classes,point)
- 'A'
- """
- data = zip(train_data, train_target)
- # List of distances of all points from the point to be classified
- distances = []
- for data_point in data:
- distance = euclidean_distance(data_point[0], point)
- distances.append((distance, data_point[1]))
- # Choosing 'k' points with the least distances.
- votes = [i[1] for i in sorted(distances)[:k]]
- # Most commonly occurring class among them
- # is the class into which the point is classified
- result = Counter(votes).most_common(1)[0][0]
- return classes[result]
+
+class KNN:
+ def __init__(
+ self,
+ train_data: np.ndarray[float],
+ train_target: np.ndarray[int],
+ class_labels: list[str],
+ ) -> None:
+ """
+ Create a kNN classifier using the given training data and class labels
+ """
+ # Materialize the pairs so classify() can be called more than once;
+ # a bare zip() is a one-shot iterator
+ self.data = list(zip(train_data, train_target))
+ self.labels = class_labels
+
+ @staticmethod
+ def _euclidean_distance(a: np.ndarray[float], b: np.ndarray[float]) -> float:
+ """
+ Calculate the Euclidean distance between two points
+ >>> KNN._euclidean_distance(np.array([0, 0]), np.array([3, 4]))
+ 5.0
+ >>> KNN._euclidean_distance(np.array([1, 2, 3]), np.array([1, 8, 11]))
+ 10.0
+ """
+ return np.linalg.norm(a - b)
+
+ def classify(self, pred_point: np.ndarray[float], k: int = 5) -> str:
+ """
+ Classify a given point using the kNN algorithm
+ >>> train_X = np.array(
+ ... [[0, 0], [1, 0], [0, 1], [0.5, 0.5], [3, 3], [2, 3], [3, 2]]
+ ... )
+ >>> train_y = np.array([0, 0, 0, 0, 1, 1, 1])
+ >>> classes = ['A', 'B']
+ >>> knn = KNN(train_X, train_y, classes)
+ >>> point = np.array([1.2, 1.2])
+ >>> knn.classify(point)
+ 'A'
+ """
+ # Distances of all points from the point to be classified
+ distances = (
+ (self._euclidean_distance(data_point[0], pred_point), data_point[1])
+ for data_point in self.data
+ )
+
+ # Choosing k points with the shortest distances
+ votes = (i[1] for i in nsmallest(k, distances))
+
+ # Most commonly occurring class is the one into which the point is classified
+ result = Counter(votes).most_common(1)[0][0]
+ return self.labels[result]
if __name__ == "__main__":
- print(classifier(X_train, y_train, classes, [4.4, 3.1, 1.3, 1.4]))
+ import doctest
+
+ doctest.testmod()
+
+ iris = datasets.load_iris()
+
+ X = np.array(iris["data"])
+ y = np.array(iris["target"])
+ iris_classes = iris["target_names"]
+
+ X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
+ iris_point = np.array([4.4, 3.1, 1.3, 1.4])
+ classifier = KNN(X_train, y_train, iris_classes)
+ print(classifier.classify(iris_point, k=3))
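Replacing the full sort with heapq.nsmallest is the main algorithmic change: it selects the k closest candidates in O(n log k) rather than sorting all n distances. A standalone sketch of the behaviour classify() relies on, namely that tuples compare by their first element (the distance):

from heapq import nsmallest

distances = [(2.5, 1), (0.5, 0), (1.0, 0), (3.0, 1), (0.7, 0)]
print(nsmallest(3, distances))  # [(0.5, 0), (0.7, 0), (1.0, 0)]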
diff --git a/machine_learning/knn_sklearn.py b/machine_learning/knn_sklearn.py
deleted file mode 100644
index 4a621a4244b6..000000000000
--- a/machine_learning/knn_sklearn.py
+++ /dev/null
@@ -1,31 +0,0 @@
-from sklearn.datasets import load_iris
-from sklearn.model_selection import train_test_split
-from sklearn.neighbors import KNeighborsClassifier
-
-# Load iris file
-iris = load_iris()
-iris.keys()
-
-
-print(f"Target names: \n {iris.target_names} ")
-print(f"\n Features: \n {iris.feature_names}")
-
-# Train set e Test set
-X_train, X_test, y_train, y_test = train_test_split(
- iris["data"], iris["target"], random_state=4
-)
-
-# KNN
-
-knn = KNeighborsClassifier(n_neighbors=1)
-knn.fit(X_train, y_train)
-
-# new array to test
-X_new = [[1, 2, 1, 4], [2, 3, 4, 5]]
-
-prediction = knn.predict(X_new)
-
-print(
- f"\nNew array: \n {X_new}\n\nTarget Names Prediction: \n"
- f" {iris['target_names'][prediction]}"
-)
From f9b8759ba82cd7ca4e4a99b9bc9b661ace5a93cc Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Wed, 27 Sep 2023 09:54:40 -0400
Subject: [PATCH 209/808] Move bitwise add (#9097)
* updating DIRECTORY.md
* updating DIRECTORY.md
* updating DIRECTORY.md
* Move and rename maths/sum_of_two_positive_numbers_bitwise.py
* updating DIRECTORY.md
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 3 +++
.../bitwise_addition_recursive.py | 0
2 files changed, 3 insertions(+)
rename maths/sum_of_two_positive_numbers_bitwise.py => bit_manipulation/bitwise_addition_recursive.py (100%)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 902999460fe5..e596d96e5e83 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -43,6 +43,7 @@
* [Binary Shifts](bit_manipulation/binary_shifts.py)
* [Binary Twos Complement](bit_manipulation/binary_twos_complement.py)
* [Binary Xor Operator](bit_manipulation/binary_xor_operator.py)
+ * [Bitwise Addition Recursive](bit_manipulation/bitwise_addition_recursive.py)
* [Count 1S Brian Kernighan Method](bit_manipulation/count_1s_brian_kernighan_method.py)
* [Count Number Of One Bits](bit_manipulation/count_number_of_one_bits.py)
* [Gray Code Sequence](bit_manipulation/gray_code_sequence.py)
@@ -514,6 +515,7 @@
* [Logistic Regression](machine_learning/logistic_regression.py)
* Lstm
* [Lstm Prediction](machine_learning/lstm/lstm_prediction.py)
+ * [Mfcc](machine_learning/mfcc.py)
* [Multilayer Perceptron Classifier](machine_learning/multilayer_perceptron_classifier.py)
* [Polynomial Regression](machine_learning/polynomial_regression.py)
* [Scoring Functions](machine_learning/scoring_functions.py)
@@ -752,6 +754,7 @@
* [Basic Orbital Capture](physics/basic_orbital_capture.py)
* [Casimir Effect](physics/casimir_effect.py)
* [Centripetal Force](physics/centripetal_force.py)
+ * [Coulombs Law](physics/coulombs_law.py)
* [Grahams Law](physics/grahams_law.py)
* [Horizontal Projectile Motion](physics/horizontal_projectile_motion.py)
* [Hubble Parameter](physics/hubble_parameter.py)
diff --git a/maths/sum_of_two_positive_numbers_bitwise.py b/bit_manipulation/bitwise_addition_recursive.py
similarity index 100%
rename from maths/sum_of_two_positive_numbers_bitwise.py
rename to bit_manipulation/bitwise_addition_recursive.py
From 38c2b839819549d1ab8566675fab09db449875cc Mon Sep 17 00:00:00 2001
From: aryan1165 <111041731+aryan1165@users.noreply.github.com>
Date: Wed, 27 Sep 2023 19:26:01 +0530
Subject: [PATCH 210/808] Deleted euclidean_gcd.py. Fixes #8063 (#9108)
---
maths/euclidean_gcd.py | 47 ------------------------------------------
1 file changed, 47 deletions(-)
delete mode 100644 maths/euclidean_gcd.py
diff --git a/maths/euclidean_gcd.py b/maths/euclidean_gcd.py
deleted file mode 100644
index de4b250243db..000000000000
--- a/maths/euclidean_gcd.py
+++ /dev/null
@@ -1,47 +0,0 @@
-""" https://en.wikipedia.org/wiki/Euclidean_algorithm """
-
-
-def euclidean_gcd(a: int, b: int) -> int:
- """
- Examples:
- >>> euclidean_gcd(3, 5)
- 1
-
- >>> euclidean_gcd(6, 3)
- 3
- """
- while b:
- a, b = b, a % b
- return a
-
-
-def euclidean_gcd_recursive(a: int, b: int) -> int:
- """
- Recursive method for euclicedan gcd algorithm
-
- Examples:
- >>> euclidean_gcd_recursive(3, 5)
- 1
-
- >>> euclidean_gcd_recursive(6, 3)
- 3
- """
- return a if b == 0 else euclidean_gcd_recursive(b, a % b)
-
-
-def main():
- print(f"euclidean_gcd(3, 5) = {euclidean_gcd(3, 5)}")
- print(f"euclidean_gcd(5, 3) = {euclidean_gcd(5, 3)}")
- print(f"euclidean_gcd(1, 3) = {euclidean_gcd(1, 3)}")
- print(f"euclidean_gcd(3, 6) = {euclidean_gcd(3, 6)}")
- print(f"euclidean_gcd(6, 3) = {euclidean_gcd(6, 3)}")
-
- print(f"euclidean_gcd_recursive(3, 5) = {euclidean_gcd_recursive(3, 5)}")
- print(f"euclidean_gcd_recursive(5, 3) = {euclidean_gcd_recursive(5, 3)}")
- print(f"euclidean_gcd_recursive(1, 3) = {euclidean_gcd_recursive(1, 3)}")
- print(f"euclidean_gcd_recursive(3, 6) = {euclidean_gcd_recursive(3, 6)}")
- print(f"euclidean_gcd_recursive(6, 3) = {euclidean_gcd_recursive(6, 3)}")
-
-
-if __name__ == "__main__":
- main()
From 35dd529c85fc433e0780cdaff586c684208aa1b7 Mon Sep 17 00:00:00 2001
From: Hetarth Jain
Date: Thu, 28 Sep 2023 23:54:46 +0530
Subject: [PATCH 211/808] Returning Index instead of boolean in
knuth_morris_pratt (kmp) function, making it compatible with str.find().
(#9083)
* Update knuth_morris_pratt.py - changed Boolean to Index
* Update knuth_morris_pratt.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update knuth_morris_pratt.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update knuth_morris_pratt.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update back_propagation_neural_network.py
* Update back_propagation_neural_network.py
* Update strings/knuth_morris_pratt.py
* Update knuth_morris_pratt.py
* Update knuth_morris_pratt.py
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
strings/knuth_morris_pratt.py | 33 +++++++++++++++++++++++++--------
1 file changed, 25 insertions(+), 8 deletions(-)
diff --git a/strings/knuth_morris_pratt.py b/strings/knuth_morris_pratt.py
index a488c171a93b..8a04eb2532c0 100644
--- a/strings/knuth_morris_pratt.py
+++ b/strings/knuth_morris_pratt.py
@@ -1,7 +1,7 @@
from __future__ import annotations
-def kmp(pattern: str, text: str) -> bool:
+def knuth_morris_pratt(text: str, pattern: str) -> int:
"""
The Knuth-Morris-Pratt Algorithm for finding a pattern within a piece of text
with complexity O(n + m)
@@ -14,6 +14,12 @@ def kmp(pattern: str, text: str) -> bool:
2) Step through the text one character at a time and compare it to a character in
the pattern updating our location within the pattern if necessary
+ >>> kmp = "knuth_morris_pratt"
+ >>> all(
+ ... knuth_morris_pratt(kmp, s) == kmp.find(s)
+ ... for s in ("kn", "h_m", "rr", "tt", "not there")
+ ... )
+ True
"""
# 1) Construct the failure array
@@ -24,7 +30,7 @@ def kmp(pattern: str, text: str) -> bool:
while i < len(text):
if pattern[j] == text[i]:
if j == (len(pattern) - 1):
- return True
+ return i - j
j += 1
# if this is a prefix in our pattern
@@ -33,7 +39,7 @@ def kmp(pattern: str, text: str) -> bool:
j = failure[j - 1]
continue
i += 1
- return False
+ return -1
def get_failure_array(pattern: str) -> list[int]:
@@ -57,27 +63,38 @@ def get_failure_array(pattern: str) -> list[int]:
if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
+
# Test 1)
pattern = "abc1abc12"
text1 = "alskfjaldsabc1abc1abc12k23adsfabcabc"
text2 = "alskfjaldsk23adsfabcabc"
- assert kmp(pattern, text1) and not kmp(pattern, text2)
+ assert knuth_morris_pratt(text1, pattern) != -1
+ assert knuth_morris_pratt(text2, pattern) == -1
# Test 2)
pattern = "ABABX"
text = "ABABZABABYABABX"
- assert kmp(pattern, text)
+ assert knuth_morris_pratt(text, pattern)
# Test 3)
pattern = "AAAB"
text = "ABAAAAAB"
- assert kmp(pattern, text)
+ assert knuth_morris_pratt(text, pattern)
# Test 4)
pattern = "abcdabcy"
text = "abcxabcdabxabcdabcdabcy"
- assert kmp(pattern, text)
+ assert knuth_morris_pratt(text, pattern)
+
+ # Test 5) -> Doctests
+ kmp = "knuth_morris_pratt"
+ assert all(
+ knuth_morris_pratt(kmp, s) == kmp.find(s)
+ for s in ("kn", "h_m", "rr", "tt", "not there")
+ )
- # Test 5)
+ # Test 6)
pattern = "aabaabaaa"
assert get_failure_array(pattern) == [0, 1, 0, 1, 2, 3, 4, 5, 2]
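The failure array asserted above is the classic KMP prefix function: entry i holds the length of the longest proper prefix of pattern[:i + 1] that is also a suffix of it. Since get_failure_array is not shown in full in this diff, here is a minimal standalone sketch that reproduces the documented value:

def prefix_function(pattern: str) -> list[int]:
    failure = [0] * len(pattern)
    j = 0
    for i in range(1, len(pattern)):
        while j > 0 and pattern[i] != pattern[j]:
            j = failure[j - 1]  # fall back to the next-longest border
        if pattern[i] == pattern[j]:
            j += 1
        failure[i] = j
    return failure

assert prefix_function("aabaabaaa") == [0, 1, 0, 1, 2, 3, 4, 5, 2]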
From 467903aa33ad746262bd46d803231d0930131197 Mon Sep 17 00:00:00 2001
From: Belhadj Ahmed Walid <80895522+BAW2501@users.noreply.github.com>
Date: Sat, 30 Sep 2023 05:33:13 +0100
Subject: [PATCH 212/808] added smith waterman algorithm (#9001)
* added smith waterman algorithm
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* descriptive names for the parameters a and b
* doctesting lowercase upcase empty string cases
* updated block quot,fixed traceback and doctests
* shorter block quote
Co-authored-by: Tianyi Zheng
* global vars to func params,more doctests
* updated doctests
* user access to SW params
* formating
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Tianyi Zheng
---
dynamic_programming/smith_waterman.py | 193 ++++++++++++++++++++++++++
1 file changed, 193 insertions(+)
create mode 100644 dynamic_programming/smith_waterman.py
diff --git a/dynamic_programming/smith_waterman.py b/dynamic_programming/smith_waterman.py
new file mode 100644
index 000000000000..4c5d58379f07
--- /dev/null
+++ b/dynamic_programming/smith_waterman.py
@@ -0,0 +1,193 @@
+"""
+https://en.wikipedia.org/wiki/Smith%E2%80%93Waterman_algorithm
+The Smith-Waterman algorithm is a dynamic programming algorithm used for sequence
+alignment. It is particularly useful for finding similarities between two sequences,
+such as DNA or protein sequences. In this implementation, gaps are penalized
+linearly, meaning that the score is reduced by a fixed amount for each gap introduced
+in the alignment. However, it's important to note that the Smith-Waterman algorithm
+supports other gap penalty methods as well.
+"""
+
+
+def score_function(
+ source_char: str,
+ target_char: str,
+ match: int = 1,
+ mismatch: int = -1,
+ gap: int = -2,
+) -> int:
+ """
+ Calculate the score for a character pair based on whether they match or mismatch.
+ Returns 1 if the characters match, -1 if they mismatch, and -2 if either of the
+ characters is a gap.
+ >>> score_function('A', 'A')
+ 1
+ >>> score_function('A', 'C')
+ -1
+ >>> score_function('-', 'A')
+ -2
+ >>> score_function('A', '-')
+ -2
+ >>> score_function('-', '-')
+ -2
+ """
+ if "-" in (source_char, target_char):
+ return gap
+ return match if source_char == target_char else mismatch
+
+
+def smith_waterman(
+ query: str,
+ subject: str,
+ match: int = 1,
+ mismatch: int = -1,
+ gap: int = -2,
+) -> list[list[int]]:
+ """
+ Perform the Smith-Waterman local sequence alignment algorithm.
+ Returns a 2D list representing the score matrix. Each value in the matrix
+ corresponds to the score of the best local alignment ending at that point.
+ >>> smith_waterman('ACAC', 'CA')
+ [[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 0, 2], [0, 1, 0]]
+ >>> smith_waterman('acac', 'ca')
+ [[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 0, 2], [0, 1, 0]]
+ >>> smith_waterman('ACAC', 'ca')
+ [[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 0, 2], [0, 1, 0]]
+ >>> smith_waterman('acac', 'CA')
+ [[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 0, 2], [0, 1, 0]]
+ >>> smith_waterman('ACAC', '')
+ [[0], [0], [0], [0], [0]]
+ >>> smith_waterman('', 'CA')
+ [[0, 0, 0]]
+ >>> smith_waterman('ACAC', 'CA')
+ [[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 0, 2], [0, 1, 0]]
+
+ >>> smith_waterman('acac', 'ca')
+ [[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 0, 2], [0, 1, 0]]
+
+ >>> smith_waterman('ACAC', 'ca')
+ [[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 0, 2], [0, 1, 0]]
+
+ >>> smith_waterman('acac', 'CA')
+ [[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 0, 2], [0, 1, 0]]
+
+ >>> smith_waterman('ACAC', '')
+ [[0], [0], [0], [0], [0]]
+
+ >>> smith_waterman('', 'CA')
+ [[0, 0, 0]]
+
+ >>> smith_waterman('AGT', 'AGT')
+ [[0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 2, 0], [0, 0, 0, 3]]
+
+ >>> smith_waterman('AGT', 'GTA')
+ [[0, 0, 0, 0], [0, 0, 0, 1], [0, 1, 0, 0], [0, 0, 2, 0]]
+
+ >>> smith_waterman('AGT', 'GTC')
+ [[0, 0, 0, 0], [0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 2, 0]]
+
+ >>> smith_waterman('AGT', 'G')
+ [[0, 0], [0, 0], [0, 1], [0, 0]]
+
+ >>> smith_waterman('G', 'AGT')
+ [[0, 0, 0, 0], [0, 0, 1, 0]]
+
+ >>> smith_waterman('AGT', 'AGTCT')
+ [[0, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [0, 0, 2, 0, 0, 0], [0, 0, 0, 3, 1, 1]]
+
+ >>> smith_waterman('AGTCT', 'AGT')
+ [[0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 2, 0], [0, 0, 0, 3], [0, 0, 0, 1], [0, 0, 0, 1]]
+
+ >>> smith_waterman('AGTCT', 'GTC')
+ [[0, 0, 0, 0], [0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 2, 0], [0, 0, 0, 3], [0, 0, 1, 1]]
+ """
+ # make both query and subject uppercase
+ query = query.upper()
+ subject = subject.upper()
+
+ # Initialize score matrix
+ m = len(query)
+ n = len(subject)
+ score = [[0] * (n + 1) for _ in range(m + 1)]
+ kwargs = {"match": match, "mismatch": mismatch, "gap": gap}
+
+ for i in range(1, m + 1):
+ for j in range(1, n + 1):
+ # Calculate scores for each cell
+ match = score[i - 1][j - 1] + score_function(
+ query[i - 1], subject[j - 1], **kwargs
+ )
+ delete = score[i - 1][j] + gap
+ insert = score[i][j - 1] + gap
+
+ # Take maximum score
+ score[i][j] = max(0, match, delete, insert)
+
+ return score
+
+
+def traceback(score: list[list[int]], query: str, subject: str) -> str:
+ r"""
+ Perform traceback to find the optimal local alignment.
+    Starts from the highest scoring cell in the matrix and traces back until
+    the edge of the matrix is reached. Returns the two alignment strings.
+ >>> traceback([[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 0, 2], [0, 1, 0]], 'ACAC', 'CA')
+ 'CA\nCA'
+ >>> traceback([[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 0, 2], [0, 1, 0]], 'acac', 'ca')
+ 'CA\nCA'
+ >>> traceback([[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 0, 2], [0, 1, 0]], 'ACAC', 'ca')
+ 'CA\nCA'
+ >>> traceback([[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 0, 2], [0, 1, 0]], 'acac', 'CA')
+ 'CA\nCA'
+ >>> traceback([[0, 0, 0]], 'ACAC', '')
+ ''
+ """
+ # make both query and subject uppercase
+ query = query.upper()
+ subject = subject.upper()
+ # find the indices of the maximum value in the score matrix
+ max_value = float("-inf")
+ i_max = j_max = 0
+ for i, row in enumerate(score):
+ for j, value in enumerate(row):
+ if value > max_value:
+ max_value = value
+ i_max, j_max = i, j
+ # Traceback logic to find optimal alignment
+ i = i_max
+ j = j_max
+ align1 = ""
+ align2 = ""
+ gap = score_function("-", "-")
+ # guard against empty query or subject
+ if i == 0 or j == 0:
+ return ""
+ while i > 0 and j > 0:
+ if score[i][j] == score[i - 1][j - 1] + score_function(
+ query[i - 1], subject[j - 1]
+ ):
+            # optimal path is diagonal; take both letters
+ align1 = query[i - 1] + align1
+ align2 = subject[j - 1] + align2
+ i -= 1
+ j -= 1
+ elif score[i][j] == score[i - 1][j] + gap:
+            # optimal path is vertical; insert a gap in the subject
+ align1 = query[i - 1] + align1
+ align2 = f"-{align2}"
+ i -= 1
+ else:
+            # optimal path is horizontal; insert a gap in the query
+ align1 = f"-{align1}"
+ align2 = subject[j - 1] + align2
+ j -= 1
+
+ return f"{align1}\n{align2}"
+
+
+if __name__ == "__main__":
+ query = "HEAGAWGHEE"
+ subject = "PAWHEAE"
+
+ score = smith_waterman(query, subject, match=1, mismatch=-1, gap=-2)
+ print(traceback(score, query, subject))
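For reference, the score matrices in the doctests above follow the linear-gap
recurrence H[i][j] = max(0, H[i-1][j-1] + s(query[i-1], subject[j-1]),
H[i-1][j] + gap, H[i][j-1] + gap). A minimal stand-alone sketch of that
recurrence, as an editorial cross-check rather than part of the patch:

def sw_reference(query: str, subject: str, match: int = 1,
                 mismatch: int = -1, gap: int = -2) -> list[list[int]]:
    # Fill the local-alignment score matrix using the linear gap penalty.
    h = [[0] * (len(subject) + 1) for _ in range(len(query) + 1)]
    for i in range(1, len(query) + 1):
        for j in range(1, len(subject) + 1):
            s = match if query[i - 1] == subject[j - 1] else mismatch
            h[i][j] = max(0, h[i - 1][j - 1] + s,
                          h[i - 1][j] + gap, h[i][j - 1] + gap)
    return h

expected = [[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 0, 2], [0, 1, 0]]
assert sw_reference("ACAC", "CA") == expected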
From dec96438be1a165eaa300a4d6df33e338b4e44c6 Mon Sep 17 00:00:00 2001
From: Caeden Perelli-Harris
Date: Sat, 30 Sep 2023 05:57:56 +0100
Subject: [PATCH 213/808] Create word search algorithm (#8906)
* feat(other): Create word_search algorithm
* updating DIRECTORY.md
* doc(word_search): Link to wikipedia article
* Apply suggestions from code review
Co-authored-by: Tianyi Zheng
* Update word_search.py
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: Tianyi Zheng
---
DIRECTORY.md | 1 +
other/word_search.py | 396 +++++++++++++++++++++++++++++++++++++++++++
2 files changed, 397 insertions(+)
create mode 100644 other/word_search.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index e596d96e5e83..aabbf27512ce 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -747,6 +747,7 @@
* [Scoring Algorithm](other/scoring_algorithm.py)
* [Sdes](other/sdes.py)
* [Tower Of Hanoi](other/tower_of_hanoi.py)
+ * [Word Search](other/word_search.py)
## Physics
* [Altitude Pressure](physics/altitude_pressure.py)
diff --git a/other/word_search.py b/other/word_search.py
new file mode 100644
index 000000000000..a4796e220c7c
--- /dev/null
+++ b/other/word_search.py
@@ -0,0 +1,396 @@
+"""
+Creates a random word search in which words can run in any of eight
+directions, best described as compass bearings.
+
+@ https://en.wikipedia.org/wiki/Word_search
+"""
+
+
+from random import choice, randint, shuffle
+
+# The words to display on the word search -
+# can be made dynamic by randomly selecting a certain number of
+# words from a predefined word file, while ensuring the character
+# count fits within the matrix size (n x m)
+WORDS = ["cat", "dog", "snake", "fish"]
+
+WIDTH = 10
+HEIGHT = 10
+
+
+class WordSearch:
+ """
+ >>> ws = WordSearch(WORDS, WIDTH, HEIGHT)
+ >>> ws.board # doctest: +ELLIPSIS
+ [[None, ..., None], ..., [None, ..., None]]
+ >>> ws.generate_board()
+ """
+
+ def __init__(self, words: list[str], width: int, height: int) -> None:
+ self.words = words
+ self.width = width
+ self.height = height
+
+ # Board matrix holding each letter
+ self.board: list[list[str | None]] = [[None] * width for _ in range(height)]
+
+ def insert_north(self, word: str, rows: list[int], cols: list[int]) -> None:
+ """
+ >>> ws = WordSearch(WORDS, 3, 3)
+ >>> ws.insert_north("cat", [2], [2])
+ >>> ws.board # doctest: +NORMALIZE_WHITESPACE
+ [[None, None, 't'],
+ [None, None, 'a'],
+ [None, None, 'c']]
+ >>> ws.insert_north("at", [0, 1, 2], [2, 1])
+ >>> ws.board # doctest: +NORMALIZE_WHITESPACE
+ [[None, 't', 't'],
+ [None, 'a', 'a'],
+ [None, None, 'c']]
+ """
+ word_length = len(word)
+ # Attempt to insert the word into each row and when successful, exit
+ for row in rows:
+ # Check if there is space above the row to fit in the word
+ if word_length > row + 1:
+ continue
+
+ # Attempt to insert the word into each column
+ for col in cols:
+                # The only check needed here is whether existing letters
+                # above this cell would be overwritten
+ letters_above = [self.board[row - i][col] for i in range(word_length)]
+ if all(letter is None for letter in letters_above):
+ # Successful, insert the word north
+ for i in range(word_length):
+ self.board[row - i][col] = word[i]
+ return
+
+ def insert_northeast(self, word: str, rows: list[int], cols: list[int]) -> None:
+ """
+ >>> ws = WordSearch(WORDS, 3, 3)
+ >>> ws.insert_northeast("cat", [2], [0])
+ >>> ws.board # doctest: +NORMALIZE_WHITESPACE
+ [[None, None, 't'],
+ [None, 'a', None],
+ ['c', None, None]]
+ >>> ws.insert_northeast("at", [0, 1], [2, 1, 0])
+ >>> ws.board # doctest: +NORMALIZE_WHITESPACE
+ [[None, 't', 't'],
+ ['a', 'a', None],
+ ['c', None, None]]
+ """
+ word_length = len(word)
+ # Attempt to insert the word into each row and when successful, exit
+ for row in rows:
+ # Check if there is space for the word above the row
+ if word_length > row + 1:
+ continue
+
+ # Attempt to insert the word into each column
+ for col in cols:
+ # Check if there is space to the right of the word as well as above
+ if word_length + col > self.width:
+ continue
+
+ # Check if there are existing letters
+                # along the diagonal that would be overwritten
+ letters_diagonal_left = [
+ self.board[row - i][col + i] for i in range(word_length)
+ ]
+ if all(letter is None for letter in letters_diagonal_left):
+ # Successful, insert the word northeast
+ for i in range(word_length):
+ self.board[row - i][col + i] = word[i]
+ return
+
+ def insert_east(self, word: str, rows: list[int], cols: list[int]) -> None:
+ """
+ >>> ws = WordSearch(WORDS, 3, 3)
+ >>> ws.insert_east("cat", [1], [0])
+ >>> ws.board # doctest: +NORMALIZE_WHITESPACE
+ [[None, None, None],
+ ['c', 'a', 't'],
+ [None, None, None]]
+ >>> ws.insert_east("at", [1, 0], [2, 1, 0])
+ >>> ws.board # doctest: +NORMALIZE_WHITESPACE
+ [[None, 'a', 't'],
+ ['c', 'a', 't'],
+ [None, None, None]]
+ """
+ word_length = len(word)
+ # Attempt to insert the word into each row and when successful, exit
+ for row in rows:
+ # Attempt to insert the word into each column
+ for col in cols:
+ # Check if there is space to the right of the word
+ if word_length + col > self.width:
+ continue
+
+ # Check if there are existing letters
+ # to the right of the column that will be overwritten
+ letters_left = [self.board[row][col + i] for i in range(word_length)]
+ if all(letter is None for letter in letters_left):
+ # Successful, insert the word east
+ for i in range(word_length):
+ self.board[row][col + i] = word[i]
+ return
+
+ def insert_southeast(self, word: str, rows: list[int], cols: list[int]) -> None:
+ """
+ >>> ws = WordSearch(WORDS, 3, 3)
+ >>> ws.insert_southeast("cat", [0], [0])
+ >>> ws.board # doctest: +NORMALIZE_WHITESPACE
+ [['c', None, None],
+ [None, 'a', None],
+ [None, None, 't']]
+ >>> ws.insert_southeast("at", [1, 0], [2, 1, 0])
+ >>> ws.board # doctest: +NORMALIZE_WHITESPACE
+ [['c', None, None],
+ ['a', 'a', None],
+ [None, 't', 't']]
+ """
+ word_length = len(word)
+ # Attempt to insert the word into each row and when successful, exit
+ for row in rows:
+ # Check if there is space for the word below the row
+ if word_length + row > self.height:
+ continue
+
+ # Attempt to insert the word into each column
+ for col in cols:
+ # Check if there is space to the right of the word as well as below
+ if word_length + col > self.width:
+ continue
+
+ # Check if there are existing letters
+                # along the diagonal that would be overwritten
+ letters_diagonal_left = [
+ self.board[row + i][col + i] for i in range(word_length)
+ ]
+ if all(letter is None for letter in letters_diagonal_left):
+ # Successful, insert the word southeast
+ for i in range(word_length):
+ self.board[row + i][col + i] = word[i]
+ return
+
+ def insert_south(self, word: str, rows: list[int], cols: list[int]) -> None:
+ """
+ >>> ws = WordSearch(WORDS, 3, 3)
+ >>> ws.insert_south("cat", [0], [0])
+ >>> ws.board # doctest: +NORMALIZE_WHITESPACE
+ [['c', None, None],
+ ['a', None, None],
+ ['t', None, None]]
+ >>> ws.insert_south("at", [2, 1, 0], [0, 1, 2])
+ >>> ws.board # doctest: +NORMALIZE_WHITESPACE
+ [['c', None, None],
+ ['a', 'a', None],
+ ['t', 't', None]]
+ """
+ word_length = len(word)
+ # Attempt to insert the word into each row and when successful, exit
+ for row in rows:
+ # Check if there is space below the row to fit in the word
+ if word_length + row > self.height:
+ continue
+
+ # Attempt to insert the word into each column
+ for col in cols:
+                # The only check needed here is whether existing letters
+                # below this cell would be overwritten
+ letters_below = [self.board[row + i][col] for i in range(word_length)]
+ if all(letter is None for letter in letters_below):
+ # Successful, insert the word south
+ for i in range(word_length):
+ self.board[row + i][col] = word[i]
+ return
+
+ def insert_southwest(self, word: str, rows: list[int], cols: list[int]) -> None:
+ """
+ >>> ws = WordSearch(WORDS, 3, 3)
+ >>> ws.insert_southwest("cat", [0], [2])
+ >>> ws.board # doctest: +NORMALIZE_WHITESPACE
+ [[None, None, 'c'],
+ [None, 'a', None],
+ ['t', None, None]]
+ >>> ws.insert_southwest("at", [1, 2], [2, 1, 0])
+ >>> ws.board # doctest: +NORMALIZE_WHITESPACE
+ [[None, None, 'c'],
+ [None, 'a', 'a'],
+ ['t', 't', None]]
+ """
+ word_length = len(word)
+ # Attempt to insert the word into each row and when successful, exit
+ for row in rows:
+ # Check if there is space for the word below the row
+ if word_length + row > self.height:
+ continue
+
+ # Attempt to insert the word into each column
+ for col in cols:
+ # Check if there is space to the left of the word as well as below
+ if word_length > col + 1:
+ continue
+
+ # Check if there are existing letters
+                # along the diagonal that would be overwritten
+ letters_diagonal_left = [
+ self.board[row + i][col - i] for i in range(word_length)
+ ]
+ if all(letter is None for letter in letters_diagonal_left):
+ # Successful, insert the word southwest
+ for i in range(word_length):
+ self.board[row + i][col - i] = word[i]
+ return
+
+ def insert_west(self, word: str, rows: list[int], cols: list[int]) -> None:
+ """
+ >>> ws = WordSearch(WORDS, 3, 3)
+ >>> ws.insert_west("cat", [1], [2])
+ >>> ws.board # doctest: +NORMALIZE_WHITESPACE
+ [[None, None, None],
+ ['t', 'a', 'c'],
+ [None, None, None]]
+ >>> ws.insert_west("at", [1, 0], [1, 2, 0])
+ >>> ws.board # doctest: +NORMALIZE_WHITESPACE
+ [['t', 'a', None],
+ ['t', 'a', 'c'],
+ [None, None, None]]
+ """
+ word_length = len(word)
+ # Attempt to insert the word into each row and when successful, exit
+ for row in rows:
+ # Attempt to insert the word into each column
+ for col in cols:
+ # Check if there is space to the left of the word
+ if word_length > col + 1:
+ continue
+
+ # Check if there are existing letters
+ # to the left of the column that will be overwritten
+ letters_left = [self.board[row][col - i] for i in range(word_length)]
+ if all(letter is None for letter in letters_left):
+ # Successful, insert the word west
+ for i in range(word_length):
+ self.board[row][col - i] = word[i]
+ return
+
+ def insert_northwest(self, word: str, rows: list[int], cols: list[int]) -> None:
+ """
+ >>> ws = WordSearch(WORDS, 3, 3)
+ >>> ws.insert_northwest("cat", [2], [2])
+ >>> ws.board # doctest: +NORMALIZE_WHITESPACE
+ [['t', None, None],
+ [None, 'a', None],
+ [None, None, 'c']]
+ >>> ws.insert_northwest("at", [1, 2], [0, 1])
+ >>> ws.board # doctest: +NORMALIZE_WHITESPACE
+ [['t', None, None],
+ ['t', 'a', None],
+ [None, 'a', 'c']]
+ """
+ word_length = len(word)
+ # Attempt to insert the word into each row and when successful, exit
+ for row in rows:
+ # Check if there is space for the word above the row
+ if word_length > row + 1:
+ continue
+
+ # Attempt to insert the word into each column
+ for col in cols:
+ # Check if there is space to the left of the word as well as above
+ if word_length > col + 1:
+ continue
+
+ # Check if there are existing letters
+                # along the diagonal that would be overwritten
+ letters_diagonal_left = [
+ self.board[row - i][col - i] for i in range(word_length)
+ ]
+ if all(letter is None for letter in letters_diagonal_left):
+ # Successful, insert the word northwest
+ for i in range(word_length):
+ self.board[row - i][col - i] = word[i]
+ return
+
+ def generate_board(self) -> None:
+ """
+ Generates a board with a random direction for each word.
+
+ >>> wt = WordSearch(WORDS, WIDTH, HEIGHT)
+ >>> wt.generate_board()
+ >>> len(list(filter(lambda word: word is not None, sum(wt.board, start=[])))
+ ... ) == sum(map(lambda word: len(word), WORDS))
+ True
+ """
+ directions = (
+ self.insert_north,
+ self.insert_northeast,
+ self.insert_east,
+ self.insert_southeast,
+ self.insert_south,
+ self.insert_southwest,
+ self.insert_west,
+ self.insert_northwest,
+ )
+ for word in self.words:
+ # Shuffle the row order and column order that is used when brute forcing
+ # the insertion of the word
+ rows, cols = list(range(self.height)), list(range(self.width))
+ shuffle(rows)
+ shuffle(cols)
+
+ # Insert the word via the direction
+ choice(directions)(word, rows, cols)
+
+
+def visualise_word_search(
+ board: list[list[str | None]] | None = None, *, add_fake_chars: bool = True
+) -> None:
+ """
+ Graphically displays the word search in the terminal.
+
+ >>> ws = WordSearch(WORDS, 5, 5)
+ >>> ws.insert_north("cat", [4], [4])
+ >>> visualise_word_search(
+ ... ws.board, add_fake_chars=False) # doctest: +NORMALIZE_WHITESPACE
+ # # # # #
+ # # # # #
+ # # # # t
+ # # # # a
+ # # # # c
+ >>> ws.insert_northeast("snake", [4], [4, 3, 2, 1, 0])
+ >>> visualise_word_search(
+ ... ws.board, add_fake_chars=False) # doctest: +NORMALIZE_WHITESPACE
+ # # # # e
+ # # # k #
+ # # a # t
+ # n # # a
+ s # # # c
+ """
+ if board is None:
+ word_search = WordSearch(WORDS, WIDTH, HEIGHT)
+ word_search.generate_board()
+ board = word_search.board
+
+ result = ""
+ for row in range(len(board)):
+ for col in range(len(board[0])):
+ character = "#"
+ if (letter := board[row][col]) is not None:
+ character = letter
+ # Empty char, so add a fake char
+ elif add_fake_chars:
+ character = chr(randint(97, 122))
+ result += f"{character} "
+ result += "\n"
+ print(result, end="")
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
+
+ visualise_word_search()
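The eight insert_* methods above differ essentially only in the (row, col)
step they take per letter. A hedged sketch of how they could collapse into one
helper driven by a direction table; the names and signature are illustrative,
not part of the patch:

DELTAS = {
    "north": (-1, 0), "northeast": (-1, 1), "east": (0, 1),
    "southeast": (1, 1), "south": (1, 0), "southwest": (1, -1),
    "west": (0, -1), "northwest": (-1, -1),
}

def insert_word(board: list[list[str | None]], word: str,
                row: int, col: int, d_row: int, d_col: int) -> bool:
    # Place word starting at (row, col), stepping by (d_row, d_col).
    # Refuse placements that leave the board or overwrite existing letters.
    height, width = len(board), len(board[0])
    cells = [(row + d_row * i, col + d_col * i) for i in range(len(word))]
    end_row, end_col = cells[-1]
    if not (0 <= end_row < height and 0 <= end_col < width):
        return False
    if any(board[r][c] is not None for r, c in cells):
        return False
    for (r, c), letter in zip(cells, word):
        board[r][c] = letter
    return True

Like the methods above, this refuses to reuse a matching letter already on the
board, although classic word searches often allow such overlaps.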
From aaf7195465ddfe743cda707cac0feacf70287ecd Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Sat, 30 Sep 2023 23:10:33 -0400
Subject: [PATCH 214/808] Fix mypy error in web_programming/reddit.py (#9162)
* updating DIRECTORY.md
* updating DIRECTORY.md
* Fix mypy error in web_programming/reddit.py
web_programming/reddit.py:36: error: Missing named argument "response" for "HTTPError" [call-arg]
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 2 +-
web_programming/reddit.py | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index aabbf27512ce..001da2c15b99 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -341,6 +341,7 @@
* [Palindrome Partitioning](dynamic_programming/palindrome_partitioning.py)
* [Regex Match](dynamic_programming/regex_match.py)
* [Rod Cutting](dynamic_programming/rod_cutting.py)
+ * [Smith Waterman](dynamic_programming/smith_waterman.py)
* [Subset Generation](dynamic_programming/subset_generation.py)
* [Sum Of Subset](dynamic_programming/sum_of_subset.py)
* [Tribonacci](dynamic_programming/tribonacci.py)
@@ -567,7 +568,6 @@
* [Dual Number Automatic Differentiation](maths/dual_number_automatic_differentiation.py)
* [Entropy](maths/entropy.py)
* [Euclidean Distance](maths/euclidean_distance.py)
- * [Euclidean Gcd](maths/euclidean_gcd.py)
* [Euler Method](maths/euler_method.py)
* [Euler Modified](maths/euler_modified.py)
* [Eulers Totient](maths/eulers_totient.py)
diff --git a/web_programming/reddit.py b/web_programming/reddit.py
index 5ca5f828c0fb..1c165ecc49ec 100644
--- a/web_programming/reddit.py
+++ b/web_programming/reddit.py
@@ -33,7 +33,7 @@ def get_subreddit_data(
headers={"User-agent": "A random string"},
)
if response.status_code == 429:
- raise requests.HTTPError
+ raise requests.HTTPError(response=response)
data = response.json()
if not wanted_data:
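Attaching the response to the exception also lets callers recover rate-limit
details from the raised error. A hedged usage sketch; the wrapper function and
URL are illustrative, not taken from the patch:

import requests

def fetch_json(url: str) -> dict:
    response = requests.get(url, headers={"User-agent": "A random string"}, timeout=10)
    if response.status_code == 429:
        raise requests.HTTPError(response=response)
    return response.json()

try:
    data = fetch_json("https://www.reddit.com/r/all/top.json?limit=1")
except requests.HTTPError as err:
    # The attached response makes the Retry-After header reachable here.
    print("Rate limited; Retry-After:", err.response.headers.get("Retry-After"))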
From 5f8d1cb5c99cccf6e5ce62fbca9c3dcd60a75292 Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Sat, 30 Sep 2023 23:31:35 -0400
Subject: [PATCH 215/808] Fix DeprecationWarning in local_weighted_learning.py
(#9165)
Fix DeprecationWarning that occurs during build due to converting an
np.ndarray to a scalar implicitly
---
.../local_weighted_learning/local_weighted_learning.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/machine_learning/local_weighted_learning/local_weighted_learning.py b/machine_learning/local_weighted_learning/local_weighted_learning.py
index 8dd0e55d41df..ada6f7cd2520 100644
--- a/machine_learning/local_weighted_learning/local_weighted_learning.py
+++ b/machine_learning/local_weighted_learning/local_weighted_learning.py
@@ -122,7 +122,7 @@ def local_weight_regression(
"""
y_pred = np.zeros(len(x_train)) # Initialize array of predictions
for i, item in enumerate(x_train):
- y_pred[i] = item @ local_weight(item, x_train, y_train, tau)
+ y_pred[i] = np.dot(item, local_weight(item, x_train, y_train, tau))
return y_pred
From 320d895b86133b4b5c489df39ab245fa6be4bce8 Mon Sep 17 00:00:00 2001
From: aryan1165 <111041731+aryan1165@users.noreply.github.com>
Date: Sun, 1 Oct 2023 09:36:15 +0530
Subject: [PATCH 216/808] Fixed permute_recursive() by passing nums.copy().
Fixes #9014 (#9161)
* Fixes #9014
* Fixed permute_recursive() by passing nums.copy()
---
data_structures/arrays/permutations.py | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/data_structures/arrays/permutations.py b/data_structures/arrays/permutations.py
index 0f029187b92b..4906dd5c2ae1 100644
--- a/data_structures/arrays/permutations.py
+++ b/data_structures/arrays/permutations.py
@@ -10,7 +10,7 @@ def permute_recursive(nums: list[int]) -> list[list[int]]:
return [[]]
for _ in range(len(nums)):
n = nums.pop(0)
- permutations = permute_recursive(nums)
+ permutations = permute_recursive(nums.copy())
for perm in permutations:
perm.append(n)
result.extend(permutations)
@@ -43,6 +43,6 @@ def backtrack(start: int) -> None:
if __name__ == "__main__":
import doctest
- res = permute_backtrack([1, 2, 3])
- print(res)
+ result = permute_backtrack([1, 2, 3])
+ print(result)
doctest.testmod()
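The copy matters because pop(0) and the recursive calls otherwise mutate one
shared list object. A minimal sketch of the aliasing at the heart of #9014:

nums = [1, 2, 3]
alias = nums               # same object, as when nums is passed straight down
alias.pop(0)
print(nums)                # [2, 3] -- the caller's list changed underneath it

independent = nums.copy()  # separate object, as in permute_recursive(nums.copy())
independent.pop(0)
print(nums)                # [2, 3] -- unchanged this time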
From 280dfc1a22adb08aa71984ee4b22e4df220a8e68 Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Sun, 1 Oct 2023 00:07:25 -0400
Subject: [PATCH 217/808] Fix DeprecationWarning in local_weighted_learning.py
(Attempt 2) (#9170)
* Fix DeprecationWarning in local_weighted_learning.py
Fix DeprecationWarning that occurs during build due to converting an
np.ndarray to a scalar implicitly
* DeprecationWarning fix attempt 2
---
.../local_weighted_learning/local_weighted_learning.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/machine_learning/local_weighted_learning/local_weighted_learning.py b/machine_learning/local_weighted_learning/local_weighted_learning.py
index ada6f7cd2520..f3056da40e24 100644
--- a/machine_learning/local_weighted_learning/local_weighted_learning.py
+++ b/machine_learning/local_weighted_learning/local_weighted_learning.py
@@ -122,7 +122,7 @@ def local_weight_regression(
"""
y_pred = np.zeros(len(x_train)) # Initialize array of predictions
for i, item in enumerate(x_train):
- y_pred[i] = np.dot(item, local_weight(item, x_train, y_train, tau))
+ y_pred[i] = np.dot(item, local_weight(item, x_train, y_train, tau)).item()
return y_pred
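The warning comes from NumPy deprecating the implicit conversion of a size-1
array (ndim > 0) to a scalar, e.g. when storing a dot product into a float
array. A minimal sketch of the behaviour being fixed:

import numpy as np

vec = np.array([1.0, 2.0])
col = np.array([[3.0], [4.0]])  # a column vector, like a weight matrix slice
result = np.dot(vec, col)       # shape (1,): still an array, not a scalar

out = np.zeros(1)
# out[0] = result               # DeprecationWarning on newer NumPy releases
out[0] = result.item()          # explicit scalar extraction, no warning
print(out)                      # [11.]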
From 832610ab1d05c8cea2814adcc8db5597e7e5ede7 Mon Sep 17 00:00:00 2001
From: aryan1165 <111041731+aryan1165@users.noreply.github.com>
Date: Sun, 1 Oct 2023 10:10:53 +0530
Subject: [PATCH 218/808] Deleted sorts/random_pivot_quick_sort.py (#9178)
---
sorts/random_pivot_quick_sort.py | 44 --------------------------------
1 file changed, 44 deletions(-)
delete mode 100644 sorts/random_pivot_quick_sort.py
diff --git a/sorts/random_pivot_quick_sort.py b/sorts/random_pivot_quick_sort.py
deleted file mode 100644
index 748b6741047e..000000000000
--- a/sorts/random_pivot_quick_sort.py
+++ /dev/null
@@ -1,44 +0,0 @@
-"""
-Picks the random index as the pivot
-"""
-import random
-
-
-def partition(a, left_index, right_index):
- pivot = a[left_index]
- i = left_index + 1
- for j in range(left_index + 1, right_index):
- if a[j] < pivot:
- a[j], a[i] = a[i], a[j]
- i += 1
- a[left_index], a[i - 1] = a[i - 1], a[left_index]
- return i - 1
-
-
-def quick_sort_random(a, left, right):
- if left < right:
- pivot = random.randint(left, right - 1)
- a[pivot], a[left] = (
- a[left],
- a[pivot],
- ) # switches the pivot with the left most bound
- pivot_index = partition(a, left, right)
- quick_sort_random(
- a, left, pivot_index
- ) # recursive quicksort to the left of the pivot point
- quick_sort_random(
- a, pivot_index + 1, right
- ) # recursive quicksort to the right of the pivot point
-
-
-def main():
- user_input = input("Enter numbers separated by a comma:\n").strip()
- arr = [int(item) for item in user_input.split(",")]
-
- quick_sort_random(arr, 0, len(arr))
-
- print(arr)
-
-
-if __name__ == "__main__":
- main()
From 3dbafd3f0db55e040a7fd277134d86ec3accfb57 Mon Sep 17 00:00:00 2001
From: aryan1165 <111041731+aryan1165@users.noreply.github.com>
Date: Sun, 1 Oct 2023 10:51:46 +0530
Subject: [PATCH 219/808] Deleted random_normal_distribution_quicksort.py.
Fixes #9124 (#9182)
---
sorts/random_normal_distribution_quicksort.py | 62 -------------------
1 file changed, 62 deletions(-)
delete mode 100644 sorts/random_normal_distribution_quicksort.py
diff --git a/sorts/random_normal_distribution_quicksort.py b/sorts/random_normal_distribution_quicksort.py
deleted file mode 100644
index f7f60903c546..000000000000
--- a/sorts/random_normal_distribution_quicksort.py
+++ /dev/null
@@ -1,62 +0,0 @@
-from random import randint
-from tempfile import TemporaryFile
-
-import numpy as np
-
-
-def _in_place_quick_sort(a, start, end):
- count = 0
- if start < end:
- pivot = randint(start, end)
- temp = a[end]
- a[end] = a[pivot]
- a[pivot] = temp
-
- p, count = _in_place_partition(a, start, end)
- count += _in_place_quick_sort(a, start, p - 1)
- count += _in_place_quick_sort(a, p + 1, end)
- return count
-
-
-def _in_place_partition(a, start, end):
- count = 0
- pivot = randint(start, end)
- temp = a[end]
- a[end] = a[pivot]
- a[pivot] = temp
- new_pivot_index = start - 1
- for index in range(start, end):
- count += 1
- if a[index] < a[end]: # check if current val is less than pivot value
- new_pivot_index = new_pivot_index + 1
- temp = a[new_pivot_index]
- a[new_pivot_index] = a[index]
- a[index] = temp
-
- temp = a[new_pivot_index + 1]
- a[new_pivot_index + 1] = a[end]
- a[end] = temp
- return new_pivot_index + 1, count
-
-
-outfile = TemporaryFile()
-p = 100 # 1000 elements are to be sorted
-
-
-mu, sigma = 0, 1 # mean and standard deviation
-X = np.random.normal(mu, sigma, p)
-np.save(outfile, X)
-print("The array is")
-print(X)
-
-
-outfile.seek(0) # using the same array
-M = np.load(outfile)
-r = len(M) - 1
-z = _in_place_quick_sort(M, 0, r)
-
-print(
- "No of Comparisons for 100 elements selected from a standard normal distribution"
- "is :"
-)
-print(z)
From fbbbd5db05987e735ec35fc658136001d3e9e663 Mon Sep 17 00:00:00 2001
From: aryan1165 <111041731+aryan1165@users.noreply.github.com>
Date: Sun, 1 Oct 2023 11:04:03 +0530
Subject: [PATCH 220/808] Deleted add.py. As stated in #6216 (#9180)
---
maths/add.py | 19 -------------------
1 file changed, 19 deletions(-)
delete mode 100644 maths/add.py
diff --git a/maths/add.py b/maths/add.py
deleted file mode 100644
index c89252c645ea..000000000000
--- a/maths/add.py
+++ /dev/null
@@ -1,19 +0,0 @@
-"""
-Just to check
-"""
-
-
-def add(a: float, b: float) -> float:
- """
- >>> add(2, 2)
- 4
- >>> add(2, -2)
- 0
- """
- return a + b
-
-
-if __name__ == "__main__":
- a = 5
- b = 6
- print(f"The sum of {a} + {b} is {add(a, b)}")
From eaa87bd791cdc18d210d775f3258767751f9d3fe Mon Sep 17 00:00:00 2001
From: aryan1165 <111041731+aryan1165@users.noreply.github.com>
Date: Sun, 1 Oct 2023 14:13:48 +0530
Subject: [PATCH 221/808] Made binary tree memory-friendly using
 generator-based traversals. Fixes #8725 (#9208)
---
.../binary_tree/binary_tree_traversals.py | 56 +++++++++++--------
1 file changed, 34 insertions(+), 22 deletions(-)
diff --git a/data_structures/binary_tree/binary_tree_traversals.py b/data_structures/binary_tree/binary_tree_traversals.py
index 2afb7604f9c6..5dbbbe623906 100644
--- a/data_structures/binary_tree/binary_tree_traversals.py
+++ b/data_structures/binary_tree/binary_tree_traversals.py
@@ -1,12 +1,12 @@
-# https://en.wikipedia.org/wiki/Tree_traversal
from __future__ import annotations
from collections import deque
-from collections.abc import Sequence
+from collections.abc import Generator, Sequence
from dataclasses import dataclass
from typing import Any
+# https://en.wikipedia.org/wiki/Tree_traversal
@dataclass
class Node:
data: int
@@ -31,44 +31,56 @@ def make_tree() -> Node | None:
return tree
-def preorder(root: Node | None) -> list[int]:
+def preorder(root: Node | None) -> Generator[int, None, None]:
"""
Pre-order traversal visits root node, left subtree, right subtree.
- >>> preorder(make_tree())
+ >>> list(preorder(make_tree()))
[1, 2, 4, 5, 3]
"""
- return [root.data, *preorder(root.left), *preorder(root.right)] if root else []
+ if not root:
+ return
+ yield root.data
+ yield from preorder(root.left)
+ yield from preorder(root.right)
-def postorder(root: Node | None) -> list[int]:
+def postorder(root: Node | None) -> Generator[int, None, None]:
"""
Post-order traversal visits left subtree, right subtree, root node.
- >>> postorder(make_tree())
+ >>> list(postorder(make_tree()))
[4, 5, 2, 3, 1]
"""
- return postorder(root.left) + postorder(root.right) + [root.data] if root else []
+ if not root:
+ return
+ yield from postorder(root.left)
+ yield from postorder(root.right)
+ yield root.data
-def inorder(root: Node | None) -> list[int]:
+def inorder(root: Node | None) -> Generator[int, None, None]:
"""
In-order traversal visits left subtree, root node, right subtree.
- >>> inorder(make_tree())
+ >>> list(inorder(make_tree()))
[4, 2, 5, 1, 3]
"""
- return [*inorder(root.left), root.data, *inorder(root.right)] if root else []
+ if not root:
+ return
+ yield from inorder(root.left)
+ yield root.data
+ yield from inorder(root.right)
-def reverse_inorder(root: Node | None) -> list[int]:
+def reverse_inorder(root: Node | None) -> Generator[int, None, None]:
"""
Reverse in-order traversal visits right subtree, root node, left subtree.
- >>> reverse_inorder(make_tree())
+ >>> list(reverse_inorder(make_tree()))
[3, 1, 5, 2, 4]
"""
- return (
- [*reverse_inorder(root.right), root.data, *reverse_inorder(root.left)]
- if root
- else []
- )
+ if not root:
+ return
+ yield from reverse_inorder(root.right)
+ yield root.data
+ yield from reverse_inorder(root.left)
def height(root: Node | None) -> int:
@@ -178,10 +190,10 @@ def main() -> None: # Main function for testing.
root = make_tree()
# All Traversals of the binary are as follows:
- print(f"In-order Traversal: {inorder(root)}")
- print(f"Reverse In-order Traversal: {reverse_inorder(root)}")
- print(f"Pre-order Traversal: {preorder(root)}")
- print(f"Post-order Traversal: {postorder(root)}", "\n")
+ print(f"In-order Traversal: {list(inorder(root))}")
+ print(f"Reverse In-order Traversal: {list(reverse_inorder(root))}")
+ print(f"Pre-order Traversal: {list(preorder(root))}")
+ print(f"Post-order Traversal: {list(postorder(root))}", "\n")
print(f"Height of Tree: {height(root)}", "\n")
From cfabd91a8ba83bbe23d2790494e2450118044fcc Mon Sep 17 00:00:00 2001
From: Shreya Bhalgat <85868386+shreyabhalgat@users.noreply.github.com>
Date: Sun, 1 Oct 2023 16:58:20 +0530
Subject: [PATCH 222/808] Add missing number algorithm (#9203)
* Added missing_number algorithm using bit manipulation
* Update bit_manipulation/missing_number.py
---------
Co-authored-by: Christian Clauss
---
bit_manipulation/missing_number.py | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
create mode 100644 bit_manipulation/missing_number.py
diff --git a/bit_manipulation/missing_number.py b/bit_manipulation/missing_number.py
new file mode 100644
index 000000000000..92502a778ace
--- /dev/null
+++ b/bit_manipulation/missing_number.py
@@ -0,0 +1,21 @@
+def find_missing_number(nums: list[int]) -> int:
+ """
+    Finds the number missing from a list of the integers 0 to n (any order).
+
+ Args:
+ nums: A list of integers.
+
+ Returns:
+ The missing number.
+
+ Example:
+ >>> find_missing_number([0, 1, 3, 4])
+ 2
+ """
+ n = len(nums)
+ missing_number = n
+
+ for i in range(n):
+ missing_number ^= i ^ nums[i]
+
+ return missing_number
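The XOR trick works because x ^ x == 0 and XOR is commutative: the indices
0..n-1 plus the initial value n contribute every number in 0..n once, while
nums contributes every number except the missing one, so all pairs cancel.
Traced on the doctest input:

nums = [0, 1, 3, 4]
n = len(nums)           # 4
acc = n                 # contributes the value n itself
for i in range(n):
    acc ^= i ^ nums[i]  # pair each index with the value stored there
# {0, 1, 2, 3, 4} from indices + n, {0, 1, 3, 4} from nums: only 2 survives
print(acc)              # 2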
From 596d93423862da8c8e419e9b74c1321b7d26b7a1 Mon Sep 17 00:00:00 2001
From: Christian Clauss
Date: Sun, 1 Oct 2023 13:58:30 +0200
Subject: [PATCH 223/808] Fix ruff warning (#9272)
---
.github/workflows/ruff.yml | 2 +-
DIRECTORY.md | 3 ---
2 files changed, 1 insertion(+), 4 deletions(-)
diff --git a/.github/workflows/ruff.yml b/.github/workflows/ruff.yml
index ca2d5be47327..e71ac8a4e933 100644
--- a/.github/workflows/ruff.yml
+++ b/.github/workflows/ruff.yml
@@ -13,4 +13,4 @@ jobs:
steps:
- uses: actions/checkout@v3
- run: pip install --user ruff
- - run: ruff --format=github .
+ - run: ruff --output-format=github .
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 001da2c15b99..4ae1c69f7099 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -530,7 +530,6 @@
## Maths
* [Abs](maths/abs.py)
- * [Add](maths/add.py)
* [Addition Without Arithmetic](maths/addition_without_arithmetic.py)
* [Aliquot Sum](maths/aliquot_sum.py)
* [Allocation Number](maths/allocation_number.py)
@@ -1141,8 +1140,6 @@
* [Quick Sort](sorts/quick_sort.py)
* [Quick Sort 3 Partition](sorts/quick_sort_3_partition.py)
* [Radix Sort](sorts/radix_sort.py)
- * [Random Normal Distribution Quicksort](sorts/random_normal_distribution_quicksort.py)
- * [Random Pivot Quick Sort](sorts/random_pivot_quick_sort.py)
* [Recursive Bubble Sort](sorts/recursive_bubble_sort.py)
* [Recursive Insertion Sort](sorts/recursive_insertion_sort.py)
* [Recursive Mergesort Array](sorts/recursive_mergesort_array.py)
From 43c3f4ea4070bfbe1f41f4b861c7ff3f89953715 Mon Sep 17 00:00:00 2001
From: Bama Charan Chhandogi
Date: Sun, 1 Oct 2023 20:16:12 +0530
Subject: [PATCH 224/808] Add three sum (#9177)
* add Three sum
* add Three sum
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* update
* update
* update
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* update
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* add documentation
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
maths/three_sum.py | 47 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 47 insertions(+)
create mode 100644 maths/three_sum.py
diff --git a/maths/three_sum.py b/maths/three_sum.py
new file mode 100644
index 000000000000..09956f8415a0
--- /dev/null
+++ b/maths/three_sum.py
@@ -0,0 +1,47 @@
+"""
+https://en.wikipedia.org/wiki/3SUM
+"""
+
+
+def three_sum(nums: list[int]) -> list[list[int]]:
+ """
+    Find all unique triplets in an array of integers that sum to zero.
+
+ Args:
+        nums: A list of integers (it is sorted in place by this function).
+
+ Returns:
+ A list of lists containing unique triplets that sum up to zero.
+
+ >>> three_sum([-1, 0, 1, 2, -1, -4])
+ [[-1, -1, 2], [-1, 0, 1]]
+ >>> three_sum([1, 2, 3, 4])
+ []
+ """
+ nums.sort()
+ ans = []
+ for i in range(len(nums) - 2):
+ if i == 0 or (nums[i] != nums[i - 1]):
+ low, high, c = i + 1, len(nums) - 1, 0 - nums[i]
+ while low < high:
+ if nums[low] + nums[high] == c:
+ ans.append([nums[i], nums[low], nums[high]])
+
+ while low < high and nums[low] == nums[low + 1]:
+ low += 1
+ while low < high and nums[high] == nums[high - 1]:
+ high -= 1
+
+ low += 1
+ high -= 1
+ elif nums[low] + nums[high] < c:
+ low += 1
+ else:
+ high -= 1
+ return ans
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
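Sorting first is what makes the two-pointer scan valid: with nums[i] fixed, a
pair sum that is too small can only be fixed by moving low right, and one that
is too large only by moving high left, giving O(n^2) overall. A brute-force
O(n^3) cross-check for small inputs, as an editorial sketch rather than part
of the patch:

from itertools import combinations

def three_sum_naive(nums: list[int]) -> list[list[int]]:
    # Deduplicate by sorting each triplet before collecting into a set.
    triplets = {tuple(sorted(t)) for t in combinations(nums, 3) if sum(t) == 0}
    return sorted(list(t) for t in triplets)

print(three_sum_naive([-1, 0, 1, 2, -1, -4]))  # [[-1, -1, 2], [-1, 0, 1]]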
From bacad12a1f64d92a793ccc2ec88535c9a4092fb6 Mon Sep 17 00:00:00 2001
From: Muhammad Umer Farooq <115654418+Muhammadummerr@users.noreply.github.com>
Date: Sun, 1 Oct 2023 21:11:16 +0500
Subject: [PATCH 225/808] [NEW ALGORITHM] Rotate linked list by K. (#9278)
* Rotate linked list by k.
* Rotate linked list by k.
* updated variable name.
* Update data_structures/linked_list/rotate_linked_list_by_k.py
Co-authored-by: Christian Clauss
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update data_structures/linked_list/rotate_linked_list_by_k.py
Co-authored-by: Christian Clauss
* Update data_structures/linked_list/rotate_linked_list_by_k.py
* Make Node a dataclass
---------
Co-authored-by: Christian Clauss
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
.../linked_list/rotate_to_the_right.py | 156 ++++++++++++++++++
1 file changed, 156 insertions(+)
create mode 100644 data_structures/linked_list/rotate_to_the_right.py
diff --git a/data_structures/linked_list/rotate_to_the_right.py b/data_structures/linked_list/rotate_to_the_right.py
new file mode 100644
index 000000000000..51b10481c0ce
--- /dev/null
+++ b/data_structures/linked_list/rotate_to_the_right.py
@@ -0,0 +1,156 @@
+from __future__ import annotations
+
+from dataclasses import dataclass
+
+
+@dataclass
+class Node:
+ data: int
+ next_node: Node | None = None
+
+
+def print_linked_list(head: Node | None) -> None:
+ """
+ Print the entire linked list iteratively.
+
+ This function prints the elements of a linked list separated by '->'.
+
+ Parameters:
+ head (Node | None): The head of the linked list to be printed,
+ or None if the linked list is empty.
+
+ >>> head = insert_node(None, 0)
+ >>> head = insert_node(head, 2)
+ >>> head = insert_node(head, 1)
+ >>> print_linked_list(head)
+ 0->2->1
+ >>> head = insert_node(head, 4)
+ >>> head = insert_node(head, 5)
+ >>> print_linked_list(head)
+ 0->2->1->4->5
+ """
+ if head is None:
+ return
+ while head.next_node is not None:
+ print(head.data, end="->")
+ head = head.next_node
+ print(head.data)
+
+
+def insert_node(head: Node | None, data: int) -> Node:
+ """
+ Insert a new node at the end of a linked list and return the new head.
+
+ Parameters:
+ head (Node | None): The head of the linked list.
+ data (int): The data to be inserted into the new node.
+
+ Returns:
+ Node: The new head of the linked list.
+
+ >>> head = insert_node(None, 10)
+ >>> head = insert_node(head, 9)
+ >>> head = insert_node(head, 8)
+ >>> print_linked_list(head)
+ 10->9->8
+ """
+ new_node = Node(data)
+ # If the linked list is empty, the new_node becomes the head
+ if head is None:
+ return new_node
+
+ temp_node = head
+ while temp_node.next_node:
+ temp_node = temp_node.next_node
+
+ temp_node.next_node = new_node # type: ignore
+ return head
+
+
+def rotate_to_the_right(head: Node, places: int) -> Node:
+ """
+    Rotate a linked list to the right by the given number of places.
+
+ Parameters:
+ head: The head of the linked list.
+ places: The number of places to rotate.
+
+ Returns:
+ Node: The head of the rotated linked list.
+
+ >>> rotate_to_the_right(None, places=1)
+ Traceback (most recent call last):
+ ...
+ ValueError: The linked list is empty.
+ >>> head = insert_node(None, 1)
+ >>> rotate_to_the_right(head, places=1) == head
+ True
+ >>> head = insert_node(None, 1)
+ >>> head = insert_node(head, 2)
+ >>> head = insert_node(head, 3)
+ >>> head = insert_node(head, 4)
+ >>> head = insert_node(head, 5)
+ >>> new_head = rotate_to_the_right(head, places=2)
+ >>> print_linked_list(new_head)
+ 4->5->1->2->3
+ """
+ # Check if the list is empty or has only one element
+ if not head:
+ raise ValueError("The linked list is empty.")
+
+ if head.next_node is None:
+ return head
+
+ # Calculate the length of the linked list
+ length = 1
+ temp_node = head
+ while temp_node.next_node is not None:
+ length += 1
+ temp_node = temp_node.next_node
+
+    # Reduce places modulo the list length to skip redundant full rotations.
+ places %= length
+
+ if places == 0:
+ return head # As no rotation is needed.
+
+ # Find the new head position after rotation.
+ new_head_index = length - places
+
+ # Traverse to the new head position
+ temp_node = head
+ for _ in range(new_head_index - 1):
+ assert temp_node.next_node
+ temp_node = temp_node.next_node
+
+ # Update pointers to perform rotation
+ assert temp_node.next_node
+ new_head = temp_node.next_node
+ temp_node.next_node = None
+ temp_node = new_head
+ while temp_node.next_node:
+ temp_node = temp_node.next_node
+ temp_node.next_node = head
+
+ assert new_head
+ return new_head
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
+ head = insert_node(None, 5)
+ head = insert_node(head, 1)
+ head = insert_node(head, 2)
+ head = insert_node(head, 4)
+ head = insert_node(head, 3)
+
+ print("Original list: ", end="")
+ print_linked_list(head)
+
+ places = 3
+ new_head = rotate_to_the_right(head, places)
+
+ print(f"After {places} iterations: ", end="")
+ print_linked_list(new_head)
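Because places is reduced modulo the length, the whole rotation is O(n) with
at most two passes over the list. A quick cross-check against
collections.deque, whose rotate() performs the same right rotation on a plain
sequence:

from collections import deque

values = deque([1, 2, 3, 4, 5])
values.rotate(2 % len(values))  # rotate right by 2, like places=2 above
print(list(values))             # [4, 5, 1, 2, 3]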
From 18cdbc416504391bc9246f1874bd752ea730c710 Mon Sep 17 00:00:00 2001
From: aryan1165 <111041731+aryan1165@users.noreply.github.com>
Date: Sun, 1 Oct 2023 22:24:05 +0530
Subject: [PATCH 226/808] binary_tree_traversals.py made memory-friendly
 using generators. Fixes #8725 completely. (#9237)
* Made binary tree memory-friendly using generator-based traversals. Fixes
#8725
* Fixed pre-commit errors
---
.../binary_tree/binary_tree_traversals.py | 57 ++++++++-----------
1 file changed, 23 insertions(+), 34 deletions(-)
diff --git a/data_structures/binary_tree/binary_tree_traversals.py b/data_structures/binary_tree/binary_tree_traversals.py
index 5dbbbe623906..2b33cdca4fed 100644
--- a/data_structures/binary_tree/binary_tree_traversals.py
+++ b/data_structures/binary_tree/binary_tree_traversals.py
@@ -1,9 +1,8 @@
from __future__ import annotations
from collections import deque
-from collections.abc import Generator, Sequence
+from collections.abc import Generator
from dataclasses import dataclass
-from typing import Any
# https://en.wikipedia.org/wiki/Tree_traversal
@@ -94,96 +93,86 @@ def height(root: Node | None) -> int:
return (max(height(root.left), height(root.right)) + 1) if root else 0
-def level_order(root: Node | None) -> Sequence[Node | None]:
+def level_order(root: Node | None) -> Generator[int, None, None]:
"""
Returns a list of nodes value from a whole binary tree in Level Order Traverse.
Level Order traverse: Visit nodes of the tree level-by-level.
"""
- output: list[Any] = []
if root is None:
- return output
+ return
process_queue = deque([root])
while process_queue:
node = process_queue.popleft()
- output.append(node.data)
+ yield node.data
if node.left:
process_queue.append(node.left)
if node.right:
process_queue.append(node.right)
- return output
def get_nodes_from_left_to_right(
root: Node | None, level: int
-) -> Sequence[Node | None]:
+) -> Generator[int, None, None]:
"""
Returns a list of nodes value from a particular level:
Left to right direction of the binary tree.
"""
- output: list[Any] = []
- def populate_output(root: Node | None, level: int) -> None:
+ def populate_output(root: Node | None, level: int) -> Generator[int, None, None]:
if not root:
return
if level == 1:
- output.append(root.data)
+ yield root.data
elif level > 1:
- populate_output(root.left, level - 1)
- populate_output(root.right, level - 1)
+ yield from populate_output(root.left, level - 1)
+ yield from populate_output(root.right, level - 1)
- populate_output(root, level)
- return output
+ yield from populate_output(root, level)
def get_nodes_from_right_to_left(
root: Node | None, level: int
-) -> Sequence[Node | None]:
+) -> Generator[int, None, None]:
"""
Returns a list of nodes value from a particular level:
Right to left direction of the binary tree.
"""
- output: list[Any] = []
- def populate_output(root: Node | None, level: int) -> None:
+ def populate_output(root: Node | None, level: int) -> Generator[int, None, None]:
if root is None:
return
if level == 1:
- output.append(root.data)
+ yield root.data
elif level > 1:
- populate_output(root.right, level - 1)
- populate_output(root.left, level - 1)
+ yield from populate_output(root.right, level - 1)
+ yield from populate_output(root.left, level - 1)
- populate_output(root, level)
- return output
+ yield from populate_output(root, level)
-def zigzag(root: Node | None) -> Sequence[Node | None] | list[Any]:
+def zigzag(root: Node | None) -> Generator[int, None, None]:
"""
ZigZag traverse:
Returns a list of nodes value from left to right and right to left, alternatively.
"""
if root is None:
- return []
-
- output: list[Sequence[Node | None]] = []
+ return
flag = 0
height_tree = height(root)
for h in range(1, height_tree + 1):
if not flag:
- output.append(get_nodes_from_left_to_right(root, h))
+ yield from get_nodes_from_left_to_right(root, h)
flag = 1
else:
- output.append(get_nodes_from_right_to_left(root, h))
+ yield from get_nodes_from_right_to_left(root, h)
flag = 0
- return output
-
def main() -> None: # Main function for testing.
# Create binary tree.
@@ -198,15 +187,15 @@ def main() -> None: # Main function for testing.
print(f"Height of Tree: {height(root)}", "\n")
print("Complete Level Order Traversal: ")
- print(level_order(root), "\n")
+ print(f"{list(level_order(root))} \n")
print("Level-wise order Traversal: ")
for level in range(1, height(root) + 1):
- print(f"Level {level}:", get_nodes_from_left_to_right(root, level=level))
+ print(f"Level {level}:", list(get_nodes_from_left_to_right(root, level=level)))
print("\nZigZag order Traversal: ")
- print(zigzag(root))
+ print(f"{list(zigzag(root))}")
if __name__ == "__main__":
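One behavioural consequence worth noting: zigzag previously returned a list of
per-level sequences, while the generator version yields one flat stream of
values. A hedged sketch, assuming the names from this module are in scope:

root = make_tree()              # the 5-node tree used in the doctests
print(list(zigzag(root)))       # [1, 3, 2, 4, 5] -- flat, not [[1], [3, 2], [4, 5]]
print(list(level_order(root)))  # [1, 2, 3, 4, 5]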
From 8d94f7745f81c8f7c33bdd3d0c0740861b9c98e7 Mon Sep 17 00:00:00 2001
From: Kamil <32775019+quant12345@users.noreply.github.com>
Date: Sun, 1 Oct 2023 23:14:58 +0500
Subject: [PATCH 227/808] Euler 072 - use numpy vector operations to reduce
 calculation time (#9229)
* Replacing the generator with numpy vector operations from lu_decomposition.
* Revert "Replacing the generator with numpy vector operations from lu_decomposition."
This reverts commit ad217c66165898d62b76cc89ba09c2d7049b6448.
* Application of vector operations to reduce calculation time and refactoring numpy.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
project_euler/problem_072/sol1.py | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/project_euler/problem_072/sol1.py b/project_euler/problem_072/sol1.py
index a2a0eeeb31c5..5a28be564556 100644
--- a/project_euler/problem_072/sol1.py
+++ b/project_euler/problem_072/sol1.py
@@ -21,6 +21,8 @@
Time: 1 sec
"""
+import numpy as np
+
def solution(limit: int = 1_000_000) -> int:
"""
@@ -33,14 +35,15 @@ def solution(limit: int = 1_000_000) -> int:
304191
"""
- phi = [i - 1 for i in range(limit + 1)]
+    # generate phi = [-1, 0, ..., limit - 1] so that phi[i] == i - 1
+ phi = np.arange(-1, limit)
for i in range(2, limit + 1):
if phi[i] == i - 1:
- for j in range(2 * i, limit + 1, i):
- phi[j] -= phi[j] // i
+            ind = np.arange(2 * i, limit + 1, i)  # indices of multiples of i
+ phi[ind] -= phi[ind] // i
- return sum(phi[2 : limit + 1])
+ return np.sum(phi[2 : limit + 1])
if __name__ == "__main__":
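The loop is a totient sieve: whenever phi[i] is still i - 1, i is prime, and
every multiple j of i loses phi[j] // i. The answer is the sum of phi(d) for
2 <= d <= limit, which counts the reduced proper fractions n/d. A small
brute-force check against the problem's stated example of 21 fractions for
d <= 8:

from math import gcd

count = sum(1 for d in range(2, 9) for n in range(1, d) if gcd(n, d) == 1)
print(count)  # 21 == phi(2) + phi(3) + ... + phi(8)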
From 24e7edbe5bc771023335544a7a9cf7895140c1fe Mon Sep 17 00:00:00 2001
From: Dhruv Manilawala
Date: Mon, 2 Oct 2023 02:48:16 +0530
Subject: [PATCH 228/808] Remove myself from CODEOWNERS (#9325)
---
.github/CODEOWNERS | 2 +-
DIRECTORY.md | 3 +++
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS
index abf99ab227be..05cd709a8f62 100644
--- a/.github/CODEOWNERS
+++ b/.github/CODEOWNERS
@@ -69,7 +69,7 @@
# /other/ @cclauss # TODO: Uncomment this line after Hacktoberfest
-/project_euler/ @dhruvmanila
+# /project_euler/
# /quantum/
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 4ae1c69f7099..7d3ceee144be 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -51,6 +51,7 @@
* [Index Of Rightmost Set Bit](bit_manipulation/index_of_rightmost_set_bit.py)
* [Is Even](bit_manipulation/is_even.py)
* [Is Power Of Two](bit_manipulation/is_power_of_two.py)
+ * [Missing Number](bit_manipulation/missing_number.py)
* [Numbers Different Signs](bit_manipulation/numbers_different_signs.py)
* [Reverse Bits](bit_manipulation/reverse_bits.py)
* [Single Bit Manipulation Operations](bit_manipulation/single_bit_manipulation_operations.py)
@@ -232,6 +233,7 @@
* [Merge Two Lists](data_structures/linked_list/merge_two_lists.py)
* [Middle Element Of Linked List](data_structures/linked_list/middle_element_of_linked_list.py)
* [Print Reverse](data_structures/linked_list/print_reverse.py)
+ * [Rotate To The Right](data_structures/linked_list/rotate_to_the_right.py)
* [Singly Linked List](data_structures/linked_list/singly_linked_list.py)
* [Skip List](data_structures/linked_list/skip_list.py)
* [Swap Nodes](data_structures/linked_list/swap_nodes.py)
@@ -676,6 +678,7 @@
* [Sylvester Sequence](maths/sylvester_sequence.py)
* [Tanh](maths/tanh.py)
* [Test Prime Check](maths/test_prime_check.py)
+ * [Three Sum](maths/three_sum.py)
* [Trapezoidal Rule](maths/trapezoidal_rule.py)
* [Triplet Sum](maths/triplet_sum.py)
* [Twin Prime](maths/twin_prime.py)
From e798e5acdee69416d61c8ab65cea4da8a5c16355 Mon Sep 17 00:00:00 2001
From: Bama Charan Chhandogi
Date: Mon, 2 Oct 2023 05:49:39 +0530
Subject: [PATCH 229/808] Add reverse k group linked list (#9323)
* add reverse k group linkedlist
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* update
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* update
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* update
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* update
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* update
* Update reverse_k_group.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update reverse_k_group.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update reverse_k_group.py
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
.../linked_list/reverse_k_group.py | 118 ++++++++++++++++++
1 file changed, 118 insertions(+)
create mode 100644 data_structures/linked_list/reverse_k_group.py
diff --git a/data_structures/linked_list/reverse_k_group.py b/data_structures/linked_list/reverse_k_group.py
new file mode 100644
index 000000000000..5fc45491a540
--- /dev/null
+++ b/data_structures/linked_list/reverse_k_group.py
@@ -0,0 +1,118 @@
+from __future__ import annotations
+
+from collections.abc import Iterable, Iterator
+from dataclasses import dataclass
+
+
+@dataclass
+class Node:
+ data: int
+ next_node: Node | None = None
+
+
+class LinkedList:
+ def __init__(self, ints: Iterable[int]) -> None:
+ self.head: Node | None = None
+ for i in ints:
+ self.append(i)
+
+ def __iter__(self) -> Iterator[int]:
+ """
+ >>> ints = []
+ >>> list(LinkedList(ints)) == ints
+ True
+ >>> ints = tuple(range(5))
+ >>> tuple(LinkedList(ints)) == ints
+ True
+ """
+ node = self.head
+ while node:
+ yield node.data
+ node = node.next_node
+
+ def __len__(self) -> int:
+ """
+ >>> for i in range(3):
+ ... len(LinkedList(range(i))) == i
+ True
+ True
+ True
+ >>> len(LinkedList("abcdefgh"))
+ 8
+ """
+ return sum(1 for _ in self)
+
+ def __str__(self) -> str:
+ """
+ >>> str(LinkedList([]))
+ ''
+ >>> str(LinkedList(range(5)))
+ '0 -> 1 -> 2 -> 3 -> 4'
+ """
+ return " -> ".join([str(node) for node in self])
+
+ def append(self, data: int) -> None:
+ """
+ >>> ll = LinkedList([1, 2])
+ >>> tuple(ll)
+ (1, 2)
+ >>> ll.append(3)
+ >>> tuple(ll)
+ (1, 2, 3)
+ >>> ll.append(4)
+ >>> tuple(ll)
+ (1, 2, 3, 4)
+ >>> len(ll)
+ 4
+ """
+ if not self.head:
+ self.head = Node(data)
+ return
+ node = self.head
+ while node.next_node:
+ node = node.next_node
+ node.next_node = Node(data)
+
+ def reverse_k_nodes(self, group_size: int) -> None:
+ """
+ reverse nodes within groups of size k
+ >>> ll = LinkedList([1, 2, 3, 4, 5])
+ >>> ll.reverse_k_nodes(2)
+ >>> tuple(ll)
+ (2, 1, 4, 3, 5)
+ >>> str(ll)
+ '2 -> 1 -> 4 -> 3 -> 5'
+ """
+ if self.head is None or self.head.next_node is None:
+ return
+
+ length = len(self)
+ dummy_head = Node(0)
+ dummy_head.next_node = self.head
+ previous_node = dummy_head
+
+ while length >= group_size:
+ current_node = previous_node.next_node
+ assert current_node
+ next_node = current_node.next_node
+ for _ in range(1, group_size):
+ assert next_node, current_node
+ current_node.next_node = next_node.next_node
+ assert previous_node
+ next_node.next_node = previous_node.next_node
+ previous_node.next_node = next_node
+ next_node = current_node.next_node
+ previous_node = current_node
+ length -= group_size
+ self.head = dummy_head.next_node
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
+ ll = LinkedList([1, 2, 3, 4, 5])
+ print(f"Original Linked List: {ll}")
+ k = 2
+ ll.reverse_k_nodes(k)
+ print(f"After reversing groups of size {k}: {ll}")
From 9640a4041a7b331e506daab1b31dd30fb47b228d Mon Sep 17 00:00:00 2001
From: Saksham Chawla <51916697+saksham-chawla@users.noreply.github.com>
Date: Mon, 2 Oct 2023 19:58:36 +0530
Subject: [PATCH 230/808] Add typing to binary_exponentiation_2.py (#9475)
---
maths/binary_exponentiation_2.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/maths/binary_exponentiation_2.py b/maths/binary_exponentiation_2.py
index 51ec4baf2598..af8f776dd266 100644
--- a/maths/binary_exponentiation_2.py
+++ b/maths/binary_exponentiation_2.py
@@ -11,7 +11,7 @@
"""
-def b_expo(a, b):
+def b_expo(a: int, b: int) -> int:
res = 0
while b > 0:
if b & 1:
@@ -23,7 +23,7 @@ def b_expo(a, b):
return res
-def b_expo_mod(a, b, c):
+def b_expo_mod(a: int, b: int, c: int) -> int:
res = 0
while b > 0:
if b & 1:
From 89a65a861724d2eb8c6a60a9e1655d7af9cdc836 Mon Sep 17 00:00:00 2001
From: Saksham Chawla <51916697+saksham-chawla@users.noreply.github.com>
Date: Mon, 2 Oct 2023 19:59:06 +0530
Subject: [PATCH 231/808] Add typing to binary_exponentiation.py (#9471)
* Add typing to binary_exponentiation.py
* Update binary_exponentiation.py
* float to int division change as per review
---
maths/binary_exponentiation.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/maths/binary_exponentiation.py b/maths/binary_exponentiation.py
index 147b4285ffa1..05de939d1bde 100644
--- a/maths/binary_exponentiation.py
+++ b/maths/binary_exponentiation.py
@@ -4,7 +4,7 @@
# Time Complexity : O(logn)
-def binary_exponentiation(a, n):
+def binary_exponentiation(a: int, n: int) -> int:
if n == 0:
return 1
@@ -12,7 +12,7 @@ def binary_exponentiation(a, n):
return binary_exponentiation(a, n - 1) * a
else:
- b = binary_exponentiation(a, n / 2)
+ b = binary_exponentiation(a, n // 2)
return b * b
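The review change from / to // matters for the new annotation: true division
would turn the exponent into a float on every even step, while floor division
keeps it an int as n: int promises.

n = 10
print(type(n / 2).__name__, n / 2)    # float 5.0
print(type(n // 2).__name__, n // 2)  # int 5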
From 97154cfa351e35ddf0727691a92998cfd7be4e5b Mon Sep 17 00:00:00 2001
From: Saksham Chawla <51916697+saksham-chawla@users.noreply.github.com>
Date: Mon, 2 Oct 2023 20:00:34 +0530
Subject: [PATCH 232/808] Add typing to binary_exp_mod.py (#9469)
* Add typing to binary_exp_mod.py
* Update binary_exp_mod.py
* review changes
---
maths/binary_exp_mod.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/maths/binary_exp_mod.py b/maths/binary_exp_mod.py
index df688892d690..8893182a3496 100644
--- a/maths/binary_exp_mod.py
+++ b/maths/binary_exp_mod.py
@@ -1,4 +1,4 @@
-def bin_exp_mod(a, n, b):
+def bin_exp_mod(a: int, n: int, b: int) -> int:
"""
>>> bin_exp_mod(3, 4, 5)
1
@@ -13,7 +13,7 @@ def bin_exp_mod(a, n, b):
if n % 2 == 1:
return (bin_exp_mod(a, n - 1, b) * a) % b
- r = bin_exp_mod(a, n / 2, b)
+ r = bin_exp_mod(a, n // 2, b)
return (r * r) % b
From 73118b9f67f49fae14eb9a39e47ec9127ef1f155 Mon Sep 17 00:00:00 2001
From: Saksham Chawla <51916697+saksham-chawla@users.noreply.github.com>
Date: Mon, 2 Oct 2023 20:11:34 +0530
Subject: [PATCH 233/808] Add typing to binary_exponentiation_3.py (#9477)
---
maths/binary_exponentiation_3.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/maths/binary_exponentiation_3.py b/maths/binary_exponentiation_3.py
index dd4e70e74129..9cd143e09207 100644
--- a/maths/binary_exponentiation_3.py
+++ b/maths/binary_exponentiation_3.py
@@ -11,7 +11,7 @@
"""
-def b_expo(a, b):
+def b_expo(a: int, b: int) -> int:
res = 1
while b > 0:
if b & 1:
@@ -23,7 +23,7 @@ def b_expo(a, b):
return res
-def b_expo_mod(a, b, c):
+def b_expo_mod(a: int, b: int, c: int) -> int:
res = 1
while b > 0:
if b & 1:
From 95345f6f5b0e6ae10f54a33850298634e05766ee Mon Sep 17 00:00:00 2001
From: Saksham Chawla <51916697+saksham-chawla@users.noreply.github.com>
Date: Mon, 2 Oct 2023 20:51:45 +0530
Subject: [PATCH 234/808] Add typing to binomial_coefficient.py (#9480)
---
maths/binomial_coefficient.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/maths/binomial_coefficient.py b/maths/binomial_coefficient.py
index 0d4b3d1a8d9a..6d5b46cb5861 100644
--- a/maths/binomial_coefficient.py
+++ b/maths/binomial_coefficient.py
@@ -1,4 +1,4 @@
-def binomial_coefficient(n, r):
+def binomial_coefficient(n: int, r: int) -> int:
"""
Find binomial coefficient using pascals triangle.
From 8c7bd1c48d1e4029aa115d50fb3034e199bef7f9 Mon Sep 17 00:00:00 2001
From: Varshaa Shetty
Date: Tue, 3 Oct 2023 03:17:10 +0530
Subject: [PATCH 235/808] Deleted minmax.py (#9482)
---
backtracking/minmax.py | 69 ------------------------------------------
1 file changed, 69 deletions(-)
delete mode 100644 backtracking/minmax.py
diff --git a/backtracking/minmax.py b/backtracking/minmax.py
deleted file mode 100644
index 9b87183cfdb7..000000000000
--- a/backtracking/minmax.py
+++ /dev/null
@@ -1,69 +0,0 @@
-"""
-Minimax helps to achieve maximum score in a game by checking all possible moves.
-
-"""
-from __future__ import annotations
-
-import math
-
-
-def minimax(
- depth: int, node_index: int, is_max: bool, scores: list[int], height: float
-) -> int:
- """
- depth is current depth in game tree.
- node_index is index of current node in scores[].
- scores[] contains the leaves of game tree.
- height is maximum height of game tree.
-
- >>> scores = [90, 23, 6, 33, 21, 65, 123, 34423]
- >>> height = math.log(len(scores), 2)
- >>> minimax(0, 0, True, scores, height)
- 65
- >>> minimax(-1, 0, True, scores, height)
- Traceback (most recent call last):
- ...
- ValueError: Depth cannot be less than 0
- >>> minimax(0, 0, True, [], 2)
- Traceback (most recent call last):
- ...
- ValueError: Scores cannot be empty
- >>> scores = [3, 5, 2, 9, 12, 5, 23, 23]
- >>> height = math.log(len(scores), 2)
- >>> minimax(0, 0, True, scores, height)
- 12
- """
-
- if depth < 0:
- raise ValueError("Depth cannot be less than 0")
-
- if not scores:
- raise ValueError("Scores cannot be empty")
-
- if depth == height:
- return scores[node_index]
-
- return (
- max(
- minimax(depth + 1, node_index * 2, False, scores, height),
- minimax(depth + 1, node_index * 2 + 1, False, scores, height),
- )
- if is_max
- else min(
- minimax(depth + 1, node_index * 2, True, scores, height),
- minimax(depth + 1, node_index * 2 + 1, True, scores, height),
- )
- )
-
-
-def main() -> None:
- scores = [90, 23, 6, 33, 21, 65, 123, 34423]
- height = math.log(len(scores), 2)
- print(f"Optimal value : {minimax(0, 0, True, scores, height)}")
-
-
-if __name__ == "__main__":
- import doctest
-
- doctest.testmod()
- main()
From f8fe8fe41f74c8ecc5c8555ca43d65bd12b4f073 Mon Sep 17 00:00:00 2001
From: aryan1165 <111041731+aryan1165@users.noreply.github.com>
Date: Tue, 3 Oct 2023 03:27:00 +0530
Subject: [PATCH 236/808] Removed maths/miller_rabin.py , duplicate
 implementation. #8098 (#9228)
* Removed ciphers/rabin_miller.py as it already exists as maths/miller_rabin.py
* Renamed miller_rabin.py to rabin_miller.py
* Restored ciphers/rabin_miller.py and removed maths/rabin_miller.py
---
maths/miller_rabin.py | 51 -------------------------------------------
1 file changed, 51 deletions(-)
delete mode 100644 maths/miller_rabin.py
diff --git a/maths/miller_rabin.py b/maths/miller_rabin.py
deleted file mode 100644
index 9f2668dbab14..000000000000
--- a/maths/miller_rabin.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import random
-
-from .binary_exp_mod import bin_exp_mod
-
-
-# This is a probabilistic check to test primality, useful for big numbers!
-# if it's a prime, it will return true
-# if it's not a prime, the chance of it returning true is at most 1/4**prec
-def is_prime_big(n, prec=1000):
- """
- >>> from maths.prime_check import is_prime
- >>> # all(is_prime_big(i) == is_prime(i) for i in range(1000)) # 3.45s
- >>> all(is_prime_big(i) == is_prime(i) for i in range(256))
- True
- """
- if n < 2:
- return False
-
- if n % 2 == 0:
- return n == 2
-
- # this means n is odd
- d = n - 1
- exp = 0
- while d % 2 == 0:
- d /= 2
- exp += 1
-
- # n - 1=d*(2**exp)
- count = 0
- while count < prec:
- a = random.randint(2, n - 1)
- b = bin_exp_mod(a, d, n)
- if b != 1:
- flag = True
- for _ in range(exp):
- if b == n - 1:
- flag = False
- break
- b = b * b
- b %= n
- if flag:
- return False
- count += 1
- return True
-
-
-if __name__ == "__main__":
- n = abs(int(input("Enter bound : ").strip()))
- print("Here's the list of primes:")
- print(", ".join(str(i) for i in range(n + 1) if is_prime_big(i)))
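One detail worth recording as this file is removed: the deleted loop divides with d /= 2, which silently turns d into a float before it is passed to bin_exp_mod as an exponent. A conventional integer-only rendering of the same test, using floor division and the built-in three-argument pow (the function name below is illustrative, not from the repository):

    import random

    def is_probably_prime(n: int, prec: int = 30) -> bool:
        # Miller-Rabin: if n is composite, each round errs with probability <= 1/4.
        if n < 2:
            return False
        if n % 2 == 0:
            return n == 2
        d, exp = n - 1, 0
        while d % 2 == 0:
            d //= 2  # floor division keeps d an int
            exp += 1
        for _ in range(prec):
            a = random.randint(2, n - 1)
            b = pow(a, d, n)  # built-in modular exponentiation
            if b in (1, n - 1):
                continue
            for _ in range(exp - 1):
                b = b * b % n
                if b == n - 1:
                    break
            else:
                return False
        return True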
From f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 Mon Sep 17 00:00:00 2001
From: Christian Clauss
Date: Tue, 3 Oct 2023 05:05:43 +0200
Subject: [PATCH 237/808] pre-commit autoupdate && pre-commit run --all-files
(#9516)
* pre-commit autoupdate && pre-commit run --all-files
* updating DIRECTORY.md
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
.pre-commit-config.yaml | 4 ++--
DIRECTORY.md | 1 +
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 809b841d0ea3..dbf7ff341243 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -16,7 +16,7 @@ repos:
- id: auto-walrus
- repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.0.291
+ rev: v0.0.292
hooks:
- id: ruff
@@ -33,7 +33,7 @@ repos:
- tomli
- repo: https://github.com/tox-dev/pyproject-fmt
- rev: "1.1.0"
+ rev: "1.2.0"
hooks:
- id: pyproject-fmt
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 7d3ceee144be..24c68171c9bc 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -233,6 +233,7 @@
* [Merge Two Lists](data_structures/linked_list/merge_two_lists.py)
* [Middle Element Of Linked List](data_structures/linked_list/middle_element_of_linked_list.py)
* [Print Reverse](data_structures/linked_list/print_reverse.py)
+ * [Reverse K Group](data_structures/linked_list/reverse_k_group.py)
* [Rotate To The Right](data_structures/linked_list/rotate_to_the_right.py)
* [Singly Linked List](data_structures/linked_list/singly_linked_list.py)
* [Skip List](data_structures/linked_list/skip_list.py)
From 0f4e51245f33175b4fb311f633d3821210741bdd Mon Sep 17 00:00:00 2001
From: Christian Clauss
Date: Tue, 3 Oct 2023 11:17:10 +0200
Subject: [PATCH 238/808] Upgrade to Python 3.12 (#9576)
* DRAFT: GitHub Actions: Test on Python 3.12
Repeats #8777
* #8777
Some of our dependencies will not be ready yet.
* Python 3.12: Disable qiskit and tensorflow algorithms
* updating DIRECTORY.md
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
.github/workflows/build.yml | 5 +++--
.github/workflows/ruff.yml | 2 +-
CONTRIBUTING.md | 2 +-
DIRECTORY.md | 19 -------------------
backtracking/combination_sum.py | 2 +-
....py => cnn_classification.py.DISABLED.txt} | 0
...ans_clustering_tensorflow.py.DISABLED.txt} | 0
...ns.py => fuzzy_operations.py.DISABLED.txt} | 0
...ion.py => lstm_prediction.py.DISABLED.txt} | 0
maths/maclaurin_series.py | 8 ++++----
quantum/{bb84.py => bb84.py.DISABLED.txt} | 0
...jozsa.py => deutsch_jozsa.py.DISABLED.txt} | 1 +
...lf_adder.py => half_adder.py.DISABLED.txt} | 1 +
.../{not_gate.py => not_gate.py.DISABLED.txt} | 0
..._adder.py => q_full_adder.py.DISABLED.txt} | 0
...y => quantum_entanglement.py.DISABLED.txt} | 0
... => quantum_teleportation.py.DISABLED.txt} | 0
...y => ripple_adder_classic.py.DISABLED.txt} | 0
...y => single_qubit_measure.py.DISABLED.txt} | 0
...g.py => superdense_coding.py.DISABLED.txt} | 0
requirements.txt | 6 +++---
21 files changed, 15 insertions(+), 31 deletions(-)
rename computer_vision/{cnn_classification.py => cnn_classification.py.DISABLED.txt} (100%)
rename dynamic_programming/{k_means_clustering_tensorflow.py => k_means_clustering_tensorflow.py.DISABLED.txt} (100%)
rename fuzzy_logic/{fuzzy_operations.py => fuzzy_operations.py.DISABLED.txt} (100%)
rename machine_learning/lstm/{lstm_prediction.py => lstm_prediction.py.DISABLED.txt} (100%)
rename quantum/{bb84.py => bb84.py.DISABLED.txt} (100%)
rename quantum/{deutsch_jozsa.py => deutsch_jozsa.py.DISABLED.txt} (99%)
mode change 100755 => 100644
rename quantum/{half_adder.py => half_adder.py.DISABLED.txt} (99%)
mode change 100755 => 100644
rename quantum/{not_gate.py => not_gate.py.DISABLED.txt} (100%)
rename quantum/{q_full_adder.py => q_full_adder.py.DISABLED.txt} (100%)
rename quantum/{quantum_entanglement.py => quantum_entanglement.py.DISABLED.txt} (100%)
rename quantum/{quantum_teleportation.py => quantum_teleportation.py.DISABLED.txt} (100%)
rename quantum/{ripple_adder_classic.py => ripple_adder_classic.py.DISABLED.txt} (100%)
rename quantum/{single_qubit_measure.py => single_qubit_measure.py.DISABLED.txt} (100%)
rename quantum/{superdense_coding.py => superdense_coding.py.DISABLED.txt} (100%)
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index fc8cb636979e..60c1d6d119d0 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -9,10 +9,11 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
- uses: actions/setup-python@v4
with:
- python-version: 3.11
+ python-version: 3.12
+ allow-prereleases: true
- uses: actions/cache@v3
with:
path: ~/.cache/pip
diff --git a/.github/workflows/ruff.yml b/.github/workflows/ruff.yml
index e71ac8a4e933..496f1460e074 100644
--- a/.github/workflows/ruff.yml
+++ b/.github/workflows/ruff.yml
@@ -11,6 +11,6 @@ jobs:
ruff:
runs-on: ubuntu-latest
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
- run: pip install --user ruff
- run: ruff --output-format=github .
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 4a1bb652738f..7a67ce33cd62 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -73,7 +73,7 @@ pre-commit run --all-files --show-diff-on-failure
We want your work to be readable by others; therefore, we encourage you to note the following:
-- Please write in Python 3.11+. For instance: `print()` is a function in Python 3 so `print "Hello"` will *not* work but `print("Hello")` will.
+- Please write in Python 3.12+. For instance: `print()` is a function in Python 3 so `print "Hello"` will *not* work but `print("Hello")` will.
- Please focus hard on the naming of functions, classes, and variables. Help your reader by using __descriptive names__ that can help you to remove redundant comments.
- Single letter variable names are *old school* so please avoid them unless their life only spans a few lines.
- Expand acronyms because `gcd()` is hard to understand but `greatest_common_divisor()` is not.
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 24c68171c9bc..9a913aa786e1 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -26,7 +26,6 @@
* [Hamiltonian Cycle](backtracking/hamiltonian_cycle.py)
* [Knight Tour](backtracking/knight_tour.py)
* [Minimax](backtracking/minimax.py)
- * [Minmax](backtracking/minmax.py)
* [N Queens](backtracking/n_queens.py)
* [N Queens Math](backtracking/n_queens_math.py)
* [Power Sum](backtracking/power_sum.py)
@@ -133,7 +132,6 @@
* [Run Length Encoding](compression/run_length_encoding.py)
## Computer Vision
- * [Cnn Classification](computer_vision/cnn_classification.py)
* [Flip Augmentation](computer_vision/flip_augmentation.py)
* [Haralick Descriptors](computer_vision/haralick_descriptors.py)
* [Harris Corner](computer_vision/harris_corner.py)
@@ -321,7 +319,6 @@
* [Floyd Warshall](dynamic_programming/floyd_warshall.py)
* [Integer Partition](dynamic_programming/integer_partition.py)
* [Iterating Through Submasks](dynamic_programming/iterating_through_submasks.py)
- * [K Means Clustering Tensorflow](dynamic_programming/k_means_clustering_tensorflow.py)
* [Knapsack](dynamic_programming/knapsack.py)
* [Longest Common Subsequence](dynamic_programming/longest_common_subsequence.py)
* [Longest Common Substring](dynamic_programming/longest_common_substring.py)
@@ -384,9 +381,6 @@
* [Mandelbrot](fractals/mandelbrot.py)
* [Sierpinski Triangle](fractals/sierpinski_triangle.py)
-## Fuzzy Logic
- * [Fuzzy Operations](fuzzy_logic/fuzzy_operations.py)
-
## Genetic Algorithm
* [Basic String](genetic_algorithm/basic_string.py)
@@ -517,8 +511,6 @@
* Local Weighted Learning
* [Local Weighted Learning](machine_learning/local_weighted_learning/local_weighted_learning.py)
* [Logistic Regression](machine_learning/logistic_regression.py)
- * Lstm
- * [Lstm Prediction](machine_learning/lstm/lstm_prediction.py)
* [Mfcc](machine_learning/mfcc.py)
* [Multilayer Perceptron Classifier](machine_learning/multilayer_perceptron_classifier.py)
* [Polynomial Regression](machine_learning/polynomial_regression.py)
@@ -613,7 +605,6 @@
* [Matrix Exponentiation](maths/matrix_exponentiation.py)
* [Max Sum Sliding Window](maths/max_sum_sliding_window.py)
* [Median Of Two Arrays](maths/median_of_two_arrays.py)
- * [Miller Rabin](maths/miller_rabin.py)
* [Mobius Function](maths/mobius_function.py)
* [Modular Exponential](maths/modular_exponential.py)
* [Monte Carlo](maths/monte_carlo.py)
@@ -1071,17 +1062,7 @@
* [Sol1](project_euler/problem_800/sol1.py)
## Quantum
- * [Bb84](quantum/bb84.py)
- * [Deutsch Jozsa](quantum/deutsch_jozsa.py)
- * [Half Adder](quantum/half_adder.py)
- * [Not Gate](quantum/not_gate.py)
* [Q Fourier Transform](quantum/q_fourier_transform.py)
- * [Q Full Adder](quantum/q_full_adder.py)
- * [Quantum Entanglement](quantum/quantum_entanglement.py)
- * [Quantum Teleportation](quantum/quantum_teleportation.py)
- * [Ripple Adder Classic](quantum/ripple_adder_classic.py)
- * [Single Qubit Measure](quantum/single_qubit_measure.py)
- * [Superdense Coding](quantum/superdense_coding.py)
## Scheduling
* [First Come First Served](scheduling/first_come_first_served.py)
diff --git a/backtracking/combination_sum.py b/backtracking/combination_sum.py
index f555adb751d0..3c6ed81f44f0 100644
--- a/backtracking/combination_sum.py
+++ b/backtracking/combination_sum.py
@@ -47,7 +47,7 @@ def combination_sum(candidates: list, target: int) -> list:
>>> combination_sum([-8, 2.3, 0], 1)
Traceback (most recent call last):
...
- RecursionError: maximum recursion depth exceeded in comparison
+ RecursionError: maximum recursion depth exceeded
"""
path = [] # type: list[int]
answer = [] # type: list[int]
diff --git a/computer_vision/cnn_classification.py b/computer_vision/cnn_classification.py.DISABLED.txt
similarity index 100%
rename from computer_vision/cnn_classification.py
rename to computer_vision/cnn_classification.py.DISABLED.txt
diff --git a/dynamic_programming/k_means_clustering_tensorflow.py b/dynamic_programming/k_means_clustering_tensorflow.py.DISABLED.txt
similarity index 100%
rename from dynamic_programming/k_means_clustering_tensorflow.py
rename to dynamic_programming/k_means_clustering_tensorflow.py.DISABLED.txt
diff --git a/fuzzy_logic/fuzzy_operations.py b/fuzzy_logic/fuzzy_operations.py.DISABLED.txt
similarity index 100%
rename from fuzzy_logic/fuzzy_operations.py
rename to fuzzy_logic/fuzzy_operations.py.DISABLED.txt
diff --git a/machine_learning/lstm/lstm_prediction.py b/machine_learning/lstm/lstm_prediction.py.DISABLED.txt
similarity index 100%
rename from machine_learning/lstm/lstm_prediction.py
rename to machine_learning/lstm/lstm_prediction.py.DISABLED.txt
diff --git a/maths/maclaurin_series.py b/maths/maclaurin_series.py
index e55839bc15ba..806e5f9b0788 100644
--- a/maths/maclaurin_series.py
+++ b/maths/maclaurin_series.py
@@ -17,9 +17,9 @@ def maclaurin_sin(theta: float, accuracy: int = 30) -> float:
>>> all(isclose(maclaurin_sin(x, 50), sin(x)) for x in range(-25, 25))
True
>>> maclaurin_sin(10)
- -0.544021110889369
+ -0.5440211108893691
>>> maclaurin_sin(-10)
- 0.5440211108893703
+ 0.5440211108893704
>>> maclaurin_sin(10, 15)
-0.5440211108893689
>>> maclaurin_sin(-10, 15)
@@ -69,9 +69,9 @@ def maclaurin_cos(theta: float, accuracy: int = 30) -> float:
>>> all(isclose(maclaurin_cos(x, 50), cos(x)) for x in range(-25, 25))
True
>>> maclaurin_cos(5)
- 0.28366218546322675
+ 0.2836621854632268
>>> maclaurin_cos(-5)
- 0.2836621854632266
+ 0.2836621854632265
>>> maclaurin_cos(10, 15)
-0.8390715290764525
>>> maclaurin_cos(-10, 15)
diff --git a/quantum/bb84.py b/quantum/bb84.py.DISABLED.txt
similarity index 100%
rename from quantum/bb84.py
rename to quantum/bb84.py.DISABLED.txt
diff --git a/quantum/deutsch_jozsa.py b/quantum/deutsch_jozsa.py.DISABLED.txt
old mode 100755
new mode 100644
similarity index 99%
rename from quantum/deutsch_jozsa.py
rename to quantum/deutsch_jozsa.py.DISABLED.txt
index 95c3e65b5edf..5c8a379debfc
--- a/quantum/deutsch_jozsa.py
+++ b/quantum/deutsch_jozsa.py.DISABLED.txt
@@ -1,3 +1,4 @@
+# DISABLED!!
#!/usr/bin/env python3
"""
Deutsch-Jozsa Algorithm is one of the first examples of a quantum
diff --git a/quantum/half_adder.py b/quantum/half_adder.py.DISABLED.txt
old mode 100755
new mode 100644
similarity index 99%
rename from quantum/half_adder.py
rename to quantum/half_adder.py.DISABLED.txt
index 21a57ddcf2dd..800d563ec76f
--- a/quantum/half_adder.py
+++ b/quantum/half_adder.py.DISABLED.txt
@@ -1,3 +1,4 @@
+# DISABLED!!
#!/usr/bin/env python3
"""
Build a half-adder quantum circuit that takes two bits as input,
diff --git a/quantum/not_gate.py b/quantum/not_gate.py.DISABLED.txt
similarity index 100%
rename from quantum/not_gate.py
rename to quantum/not_gate.py.DISABLED.txt
diff --git a/quantum/q_full_adder.py b/quantum/q_full_adder.py.DISABLED.txt
similarity index 100%
rename from quantum/q_full_adder.py
rename to quantum/q_full_adder.py.DISABLED.txt
diff --git a/quantum/quantum_entanglement.py b/quantum/quantum_entanglement.py.DISABLED.txt
similarity index 100%
rename from quantum/quantum_entanglement.py
rename to quantum/quantum_entanglement.py.DISABLED.txt
diff --git a/quantum/quantum_teleportation.py b/quantum/quantum_teleportation.py.DISABLED.txt
similarity index 100%
rename from quantum/quantum_teleportation.py
rename to quantum/quantum_teleportation.py.DISABLED.txt
diff --git a/quantum/ripple_adder_classic.py b/quantum/ripple_adder_classic.py.DISABLED.txt
similarity index 100%
rename from quantum/ripple_adder_classic.py
rename to quantum/ripple_adder_classic.py.DISABLED.txt
diff --git a/quantum/single_qubit_measure.py b/quantum/single_qubit_measure.py.DISABLED.txt
similarity index 100%
rename from quantum/single_qubit_measure.py
rename to quantum/single_qubit_measure.py.DISABLED.txt
diff --git a/quantum/superdense_coding.py b/quantum/superdense_coding.py.DISABLED.txt
similarity index 100%
rename from quantum/superdense_coding.py
rename to quantum/superdense_coding.py.DISABLED.txt
diff --git a/requirements.txt b/requirements.txt
index 1128e9d66820..25dba6f5a250 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -9,15 +9,15 @@ opencv-python
pandas
pillow
projectq
-qiskit
-qiskit-aer
+qiskit ; python_version < '3.12'
+qiskit-aer ; python_version < '3.12'
requests
rich
scikit-fuzzy
scikit-learn
statsmodels
sympy
-tensorflow
+tensorflow ; python_version < '3.12'
texttable
tweepy
xgboost
From da03c14d39ec8c6a3c253951541b902172bb92fc Mon Sep 17 00:00:00 2001
From: Christian Clauss
Date: Tue, 3 Oct 2023 11:48:58 +0200
Subject: [PATCH 239/808] Fix accuracy in maclaurin_series on Python 3.12
(#9581)
---
maths/maclaurin_series.py | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/maths/maclaurin_series.py b/maths/maclaurin_series.py
index 806e5f9b0788..d5c3c3ab958b 100644
--- a/maths/maclaurin_series.py
+++ b/maths/maclaurin_series.py
@@ -21,9 +21,9 @@ def maclaurin_sin(theta: float, accuracy: int = 30) -> float:
>>> maclaurin_sin(-10)
0.5440211108893704
>>> maclaurin_sin(10, 15)
- -0.5440211108893689
+ -0.544021110889369
>>> maclaurin_sin(-10, 15)
- 0.5440211108893703
+ 0.5440211108893704
>>> maclaurin_sin("10")
Traceback (most recent call last):
...
@@ -73,7 +73,7 @@ def maclaurin_cos(theta: float, accuracy: int = 30) -> float:
>>> maclaurin_cos(-5)
0.2836621854632265
>>> maclaurin_cos(10, 15)
- -0.8390715290764525
+ -0.8390715290764524
>>> maclaurin_cos(-10, 15)
-0.8390715290764521
>>> maclaurin_cos("10")
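These last-digit changes are consistent with CPython 3.12 switching sum() over floats to Neumaier (compensated) summation, which can move results by an ulp. The quantity being summed is the series sin(theta) = sum of (-1)**r * theta**(2r+1) / (2r+1)!, roughly:

    from math import factorial

    def maclaurin_sin(theta: float, accuracy: int = 30) -> float:
        # Partial Maclaurin sum; the repository version also range-reduces
        # theta modulo 2*pi first (assumed here, it is outside the hunks).
        return sum(
            (-1) ** r * theta ** (2 * r + 1) / factorial(2 * r + 1)
            for r in range(accuracy)
        )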
From e60779c202880275e786f0f857f4261b90a41d51 Mon Sep 17 00:00:00 2001
From: Christian Clauss
Date: Tue, 3 Oct 2023 12:04:59 +0200
Subject: [PATCH 240/808] Upgrade our Devcontainer to Python 3.12 on Debian
bookworm (#9580)
---
.devcontainer/Dockerfile | 2 +-
.devcontainer/README.md | 1 +
.devcontainer/devcontainer.json | 4 ++--
3 files changed, 4 insertions(+), 3 deletions(-)
create mode 100644 .devcontainer/README.md
diff --git a/.devcontainer/Dockerfile b/.devcontainer/Dockerfile
index b5a5347c66b0..6aa0073bf95b 100644
--- a/.devcontainer/Dockerfile
+++ b/.devcontainer/Dockerfile
@@ -1,5 +1,5 @@
# https://github.com/microsoft/vscode-dev-containers/blob/main/containers/python-3/README.md
-ARG VARIANT=3.11-bookworm
+ARG VARIANT=3.12-bookworm
FROM mcr.microsoft.com/vscode/devcontainers/python:${VARIANT}
COPY requirements.txt /tmp/pip-tmp/
RUN python3 -m pip install --upgrade pip \
diff --git a/.devcontainer/README.md b/.devcontainer/README.md
new file mode 100644
index 000000000000..ec3cdb61de7a
--- /dev/null
+++ b/.devcontainer/README.md
@@ -0,0 +1 @@
+https://code.visualstudio.com/docs/devcontainers/tutorial
diff --git a/.devcontainer/devcontainer.json b/.devcontainer/devcontainer.json
index c5a855b2550c..ae1d4fb7494d 100644
--- a/.devcontainer/devcontainer.json
+++ b/.devcontainer/devcontainer.json
@@ -4,10 +4,10 @@
"dockerfile": "Dockerfile",
"context": "..",
"args": {
- // Update 'VARIANT' to pick a Python version: 3, 3.10, 3.9, 3.8, 3.7, 3.6
+ // Update 'VARIANT' to pick a Python version: 3, 3.11, 3.10, 3.9, 3.8
// Append -bullseye or -buster to pin to an OS version.
// Use -bullseye variants on local on arm64/Apple Silicon.
- "VARIANT": "3.11-bookworm",
+ "VARIANT": "3.12-bookworm",
}
},
From b60a94b5b305487ca5f5755ab6de2bf0adeb3d78 Mon Sep 17 00:00:00 2001
From: dekomori_sanae09
Date: Tue, 3 Oct 2023 19:23:27 +0530
Subject: [PATCH 241/808] merge double_factorial (#9431)
* merge double_factorial
* fix ruff error
* fix merge issues
* change test case
* fix import error
---
maths/double_factorial.py | 60 +++++++++++++++++++++++++++++
maths/double_factorial_iterative.py | 33 ----------------
maths/double_factorial_recursive.py | 31 ---------------
3 files changed, 60 insertions(+), 64 deletions(-)
create mode 100644 maths/double_factorial.py
delete mode 100644 maths/double_factorial_iterative.py
delete mode 100644 maths/double_factorial_recursive.py
diff --git a/maths/double_factorial.py b/maths/double_factorial.py
new file mode 100644
index 000000000000..3c3a28304e95
--- /dev/null
+++ b/maths/double_factorial.py
@@ -0,0 +1,60 @@
+def double_factorial_recursive(n: int) -> int:
+ """
+ Compute double factorial using recursive method.
+ Recursion can be costly for large numbers.
+
+ To learn about the theory behind this algorithm:
+ https://en.wikipedia.org/wiki/Double_factorial
+
+ >>> from math import prod
+ >>> all(double_factorial_recursive(i) == prod(range(i, 0, -2)) for i in range(20))
+ True
+ >>> double_factorial_recursive(0.1)
+ Traceback (most recent call last):
+ ...
+ ValueError: double_factorial_recursive() only accepts integral values
+ >>> double_factorial_recursive(-1)
+ Traceback (most recent call last):
+ ...
+ ValueError: double_factorial_recursive() not defined for negative values
+ """
+ if not isinstance(n, int):
+ raise ValueError("double_factorial_recursive() only accepts integral values")
+ if n < 0:
+ raise ValueError("double_factorial_recursive() not defined for negative values")
+ return 1 if n <= 1 else n * double_factorial_recursive(n - 2)
+
+
+def double_factorial_iterative(num: int) -> int:
+ """
+ Compute double factorial using iterative method.
+
+ To learn about the theory behind this algorithm:
+ https://en.wikipedia.org/wiki/Double_factorial
+
+ >>> from math import prod
+ >>> all(double_factorial_iterative(i) == prod(range(i, 0, -2)) for i in range(20))
+ True
+ >>> double_factorial_iterative(0.1)
+ Traceback (most recent call last):
+ ...
+ ValueError: double_factorial_iterative() only accepts integral values
+ >>> double_factorial_iterative(-1)
+ Traceback (most recent call last):
+ ...
+ ValueError: double_factorial_iterative() not defined for negative values
+ """
+ if not isinstance(num, int):
+ raise ValueError("double_factorial_iterative() only accepts integral values")
+ if num < 0:
+ raise ValueError("double_factorial_iterative() not defined for negative values")
+ value = 1
+ for i in range(num, 0, -2):
+ value *= i
+ return value
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
diff --git a/maths/double_factorial_iterative.py b/maths/double_factorial_iterative.py
deleted file mode 100644
index b2b58aa04c28..000000000000
--- a/maths/double_factorial_iterative.py
+++ /dev/null
@@ -1,33 +0,0 @@
-def double_factorial(num: int) -> int:
- """
- Compute double factorial using iterative method.
-
- To learn about the theory behind this algorithm:
- https://en.wikipedia.org/wiki/Double_factorial
-
- >>> import math
- >>> all(double_factorial(i) == math.prod(range(i, 0, -2)) for i in range(20))
- True
- >>> double_factorial(0.1)
- Traceback (most recent call last):
- ...
- ValueError: double_factorial() only accepts integral values
- >>> double_factorial(-1)
- Traceback (most recent call last):
- ...
- ValueError: double_factorial() not defined for negative values
- """
- if not isinstance(num, int):
- raise ValueError("double_factorial() only accepts integral values")
- if num < 0:
- raise ValueError("double_factorial() not defined for negative values")
- value = 1
- for i in range(num, 0, -2):
- value *= i
- return value
-
-
-if __name__ == "__main__":
- import doctest
-
- doctest.testmod()
diff --git a/maths/double_factorial_recursive.py b/maths/double_factorial_recursive.py
deleted file mode 100644
index 05c9b29680a7..000000000000
--- a/maths/double_factorial_recursive.py
+++ /dev/null
@@ -1,31 +0,0 @@
-def double_factorial(n: int) -> int:
- """
- Compute double factorial using recursive method.
- Recursion can be costly for large numbers.
-
- To learn about the theory behind this algorithm:
- https://en.wikipedia.org/wiki/Double_factorial
-
- >>> import math
- >>> all(double_factorial(i) == math.prod(range(i, 0, -2)) for i in range(20))
- True
- >>> double_factorial(0.1)
- Traceback (most recent call last):
- ...
- ValueError: double_factorial() only accepts integral values
- >>> double_factorial(-1)
- Traceback (most recent call last):
- ...
- ValueError: double_factorial() not defined for negative values
- """
- if not isinstance(n, int):
- raise ValueError("double_factorial() only accepts integral values")
- if n < 0:
- raise ValueError("double_factorial() not defined for negative values")
- return 1 if n <= 1 else n * double_factorial(n - 2)
-
-
-if __name__ == "__main__":
- import doctest
-
- doctest.testmod()
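A quick doctest-style check of the merged module, using only the names defined above:

    >>> from maths.double_factorial import double_factorial_iterative
    >>> double_factorial_iterative(7)  # 7 * 5 * 3 * 1
    105
    >>> from maths.double_factorial import double_factorial_recursive
    >>> double_factorial_recursive(8)  # 8 * 6 * 4 * 2
    384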
From 0a84b8f842c4c72f400d96313d992b608d621d07 Mon Sep 17 00:00:00 2001
From: Aasheesh <126905285+AasheeshLikePanner@users.noreply.github.com>
Date: Tue, 3 Oct 2023 21:10:11 +0530
Subject: [PATCH 242/808] Changing Name of file and adding doctests in file.
(#9513)
* Adding doctests and changing file name
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update binary_multiplication.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update binary_multiplication.py
* Changing comment and changing function name
* Changing comment and changing function name
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update binary_multiplication.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update binary_multiplication.py
* Update binary_multiplication.py
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Tianyi Zheng
---
maths/binary_exponentiation_2.py | 50 ---------------
maths/binary_multiplication.py | 101 +++++++++++++++++++++++++++++++
2 files changed, 101 insertions(+), 50 deletions(-)
delete mode 100644 maths/binary_exponentiation_2.py
create mode 100644 maths/binary_multiplication.py
diff --git a/maths/binary_exponentiation_2.py b/maths/binary_exponentiation_2.py
deleted file mode 100644
index af8f776dd266..000000000000
--- a/maths/binary_exponentiation_2.py
+++ /dev/null
@@ -1,50 +0,0 @@
-"""
-* Binary Exponentiation with Multiplication
-* This is a method to find a*b in a time complexity of O(log b)
-* This is one of the most commonly used methods of finding result of multiplication.
-* Also useful in cases where solution to (a*b)%c is required,
-* where a,b,c can be numbers over the computers calculation limits.
-* Done using iteration, can also be done using recursion
-
-* @author chinmoy159
-* @version 1.0 dated 10/08/2017
-"""
-
-
-def b_expo(a: int, b: int) -> int:
- res = 0
- while b > 0:
- if b & 1:
- res += a
-
- a += a
- b >>= 1
-
- return res
-
-
-def b_expo_mod(a: int, b: int, c: int) -> int:
- res = 0
- while b > 0:
- if b & 1:
- res = ((res % c) + (a % c)) % c
-
- a += a
- b >>= 1
-
- return res
-
-
-"""
-* Wondering how this method works !
-* It's pretty simple.
-* Let's say you need to calculate a ^ b
-* RULE 1 : a * b = (a+a) * (b/2) ---- example : 4 * 4 = (4+4) * (4/2) = 8 * 2
-* RULE 2 : IF b is ODD, then ---- a * b = a + (a * (b - 1)) :: where (b - 1) is even.
-* Once b is even, repeat the process to get a * b
-* Repeat the process till b = 1 OR b = 0, because a*1 = a AND a*0 = 0
-*
-* As far as the modulo is concerned,
-* the fact : (a+b) % c = ((a%c) + (b%c)) % c
-* Now apply RULE 1 OR 2, whichever is required.
-"""
diff --git a/maths/binary_multiplication.py b/maths/binary_multiplication.py
new file mode 100644
index 000000000000..0cc5a575f445
--- /dev/null
+++ b/maths/binary_multiplication.py
@@ -0,0 +1,101 @@
+"""
+Binary Multiplication
+This is a method to find a*b in a time complexity of O(log b)
+This is one of the most commonly used methods of finding result of multiplication.
+Also useful in cases where solution to (a*b)%c is required,
+where a,b,c can be numbers over the computers calculation limits.
+Done using iteration, can also be done using recursion
+
+Let's say you need to calculate a * b
+RULE 1 : a * b = (a+a) * (b/2) ---- example : 4 * 4 = (4+4) * (4/2) = 8 * 2
+RULE 2 : IF b is odd, then ---- a * b = a + (a * (b - 1)), where (b - 1) is even.
+Once b is even, repeat the process to get a * b
+Repeat the process until b = 1 or b = 0, because a*1 = a and a*0 = 0
+
+As far as the modulo is concerned,
+the fact : (a+b) % c = ((a%c) + (b%c)) % c
+Now apply RULE 1 or 2, whichever is required.
+
+@author chinmoy159
+"""
+
+
+def binary_multiply(a: int, b: int) -> int:
+ """
+ Multiply 'a' and 'b' using bitwise multiplication.
+
+ Parameters:
+ a (int): The first number.
+ b (int): The second number.
+
+ Returns:
+ int: a * b
+
+ Examples:
+ >>> binary_multiply(2, 3)
+ 6
+ >>> binary_multiply(5, 0)
+ 0
+ >>> binary_multiply(3, 4)
+ 12
+ >>> binary_multiply(10, 5)
+ 50
+ >>> binary_multiply(0, 5)
+ 0
+ >>> binary_multiply(2, 1)
+ 2
+ >>> binary_multiply(1, 10)
+ 10
+ """
+ res = 0
+ while b > 0:
+ if b & 1:
+ res += a
+
+ a += a
+ b >>= 1
+
+ return res
+
+
+def binary_mod_multiply(a: int, b: int, modulus: int) -> int:
+ """
+ Calculate (a * b) % c using binary multiplication and modular arithmetic.
+
+ Parameters:
+ a (int): The first number.
+ b (int): The second number.
+ modulus (int): The modulus.
+
+ Returns:
+ int: (a * b) % modulus.
+
+ Examples:
+ >>> binary_mod_multiply(2, 3, 5)
+ 1
+ >>> binary_mod_multiply(5, 0, 7)
+ 0
+ >>> binary_mod_multiply(3, 4, 6)
+ 0
+ >>> binary_mod_multiply(10, 5, 13)
+ 11
+ >>> binary_mod_multiply(2, 1, 5)
+ 2
+ >>> binary_mod_multiply(1, 10, 3)
+ 1
+ """
+ res = 0
+ while b > 0:
+ if b & 1:
+ res = ((res % modulus) + (a % modulus)) % modulus
+
+ a += a
+ b >>= 1
+
+ return res
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
From 81661bd2d0c34363de7d3e1e802fe2f75b9a1fa4 Mon Sep 17 00:00:00 2001
From: Ayush Yadav <115359450+ayush-yadavv@users.noreply.github.com>
Date: Wed, 4 Oct 2023 05:17:26 +0530
Subject: [PATCH 243/808] Update newtons_law_of_gravitation.py: typo (extra
 space removed) (#9351)
---
physics/newtons_law_of_gravitation.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/physics/newtons_law_of_gravitation.py b/physics/newtons_law_of_gravitation.py
index 4bbeddd61d5b..ae9da2f1e949 100644
--- a/physics/newtons_law_of_gravitation.py
+++ b/physics/newtons_law_of_gravitation.py
@@ -3,7 +3,7 @@
provided that the other three parameters are given.
Description : Newton's Law of Universal Gravitation explains the presence of force of
-attraction between bodies having a definite mass situated at a distance. It is usually
+attraction between bodies having a definite mass situated at a distance. It is usually
stated as that, every particle attracts every other particle in the universe with a
force that is directly proportional to the product of their masses and inversely
proportional to the square of the distance between their centers. The publication of the
From 12431389e32c290aae8c046ce9d8504d698d5f41 Mon Sep 17 00:00:00 2001
From: "Tan Kai Qun, Jeremy"
Date: Wed, 4 Oct 2023 10:47:03 +0900
Subject: [PATCH 244/808] Add typing to topological_sort.py (#9650)
* Add typing
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: Jeremy Tan
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
sorts/topological_sort.py | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/sorts/topological_sort.py b/sorts/topological_sort.py
index 59a0c8571b53..efce8165fcac 100644
--- a/sorts/topological_sort.py
+++ b/sorts/topological_sort.py
@@ -5,11 +5,17 @@
# b c
# / \
# d e
-edges = {"a": ["c", "b"], "b": ["d", "e"], "c": [], "d": [], "e": []}
-vertices = ["a", "b", "c", "d", "e"]
+edges: dict[str, list[str]] = {
+ "a": ["c", "b"],
+ "b": ["d", "e"],
+ "c": [],
+ "d": [],
+ "e": [],
+}
+vertices: list[str] = ["a", "b", "c", "d", "e"]
-def topological_sort(start, visited, sort):
+def topological_sort(start: str, visited: list[str], sort: list[str]) -> list[str]:
"""Perform topological sort on a directed acyclic graph."""
current = start
# add current to visited
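The hunk stops after the first line of the body; a minimal DFS completion consistent with the typed signature (the repository file may differ in detail) is:

    def topological_sort(start: str, visited: list[str], sort: list[str]) -> list[str]:
        # Post-order DFS: a vertex is appended only after all of its
        # neighbours, so dependencies come first in the result.
        visited.append(start)
        for neighbour in edges[start]:
            if neighbour not in visited:
                topological_sort(neighbour, visited, sort)
        sort.append(start)
        return sort

    # topological_sort("a", [], []) returns ["c", "d", "e", "b", "a"];
    # reverse it for the conventional roots-first ordering.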
From 28f1e68f005f99eb628efd1af899bdfe1c1bc99e Mon Sep 17 00:00:00 2001
From: "Tan Kai Qun, Jeremy"
Date: Wed, 4 Oct 2023 11:05:47 +0900
Subject: [PATCH 245/808] Add typing (#9651)
Co-authored-by: Jeremy Tan
---
sorts/stooge_sort.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/sorts/stooge_sort.py b/sorts/stooge_sort.py
index 9a5bedeae21b..767c6a05924f 100644
--- a/sorts/stooge_sort.py
+++ b/sorts/stooge_sort.py
@@ -1,4 +1,4 @@
-def stooge_sort(arr):
+def stooge_sort(arr: list[int]) -> list[int]:
"""
Examples:
>>> stooge_sort([18.1, 0, -7.1, -1, 2, 2])
@@ -11,7 +11,7 @@ def stooge_sort(arr):
return arr
-def stooge(arr, i, h):
+def stooge(arr: list[int], i: int, h: int) -> None:
if i >= h:
return
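Only the signatures and the base case appear in this diff; for reference, the classic recursion behind them (a sketch of the usual definition, not a verbatim copy of the file):

    def stooge_sort(arr: list[int]) -> list[int]:
        stooge(arr, 0, len(arr) - 1)
        return arr

    def stooge(arr: list[int], i: int, h: int) -> None:
        if i >= h:
            return
        # Put the two endpoints in order.
        if arr[i] > arr[h]:
            arr[i], arr[h] = arr[h], arr[i]
        if h - i + 1 > 2:
            t = (h - i + 1) // 3
            stooge(arr, i, h - t)   # first two thirds
            stooge(arr, i + t, h)   # last two thirds
            stooge(arr, i, h - t)   # first two thirds again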
From a7133eca13d312fa729e2872048c7d9a662f6c8c Mon Sep 17 00:00:00 2001
From: "Tan Kai Qun, Jeremy"
Date: Wed, 4 Oct 2023 11:06:52 +0900
Subject: [PATCH 246/808] Add typing (#9652)
Co-authored-by: Jeremy Tan
---
sorts/shell_sort.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sorts/shell_sort.py b/sorts/shell_sort.py
index 10ae9ba407ec..b65609c974b7 100644
--- a/sorts/shell_sort.py
+++ b/sorts/shell_sort.py
@@ -3,7 +3,7 @@
"""
-def shell_sort(collection):
+def shell_sort(collection: list[int]) -> list[int]:
"""Pure implementation of shell sort algorithm in Python
:param collection: Some mutable ordered collection with heterogeneous
comparable items inside
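As with the other sorts in this batch, only the signature changes; the gapped insertion sort behind it runs roughly as follows (the gap sequence is one common choice, not necessarily the file's):

    def shell_sort(collection: list[int]) -> list[int]:
        # Marcin Ciura's gap sequence; any decreasing sequence ending in 1 works.
        gaps = [701, 301, 132, 57, 23, 10, 4, 1]
        for gap in gaps:
            for i in range(gap, len(collection)):
                inserted = collection[i]
                j = i
                # Gapped insertion: shift larger elements right in steps of gap.
                while j >= gap and collection[j - gap] > inserted:
                    collection[j] = collection[j - gap]
                    j -= gap
                collection[j] = inserted
        return collection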
From 8c23cc5117b338ea907045260274ac40301a4e0e Mon Sep 17 00:00:00 2001
From: "Tan Kai Qun, Jeremy"
Date: Wed, 4 Oct 2023 11:07:25 +0900
Subject: [PATCH 247/808] Add typing (#9654)
Co-authored-by: Jeremy Tan
---
sorts/selection_sort.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sorts/selection_sort.py b/sorts/selection_sort.py
index f3beb31b7070..28971a5e1aad 100644
--- a/sorts/selection_sort.py
+++ b/sorts/selection_sort.py
@@ -11,7 +11,7 @@
"""
-def selection_sort(collection):
+def selection_sort(collection: list[int]) -> list[int]:
"""Pure implementation of the selection sort algorithm in Python
:param collection: some mutable ordered collection with heterogeneous
comparable items inside
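For completeness, the algorithm behind this last typing change, as a standard sketch (the body is not shown in the hunk):

    def selection_sort(collection: list[int]) -> list[int]:
        length = len(collection)
        for i in range(length - 1):
            # Find the smallest element in the unsorted tail...
            least = i
            for k in range(i + 1, length):
                if collection[k] < collection[least]:
                    least = k
            # ...and swap it into position i.
            collection[least], collection[i] = collection[i], collection[least]
        return collection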
From 700df39ad446da895d413c0383632871459f0e9f Mon Sep 17 00:00:00 2001
From: aryan1165 <111041731+aryan1165@users.noreply.github.com>
Date: Wed, 4 Oct 2023 09:04:55 +0530
Subject: [PATCH 248/808] Fixed file name in
 transposition_cipher_encrypt_decrypt_file.py, fixing a file-not-found bug.
(#9426)
* Fixed file name in transposition_cipher_encrypt_decrypt_file.py
* Removed Output.txt
* Removed Output.txt
* Fixed build errors
---
ciphers/prehistoric_men.txt | 1196 ++++++++---------
...ansposition_cipher_encrypt_decrypt_file.py | 4 +-
2 files changed, 600 insertions(+), 600 deletions(-)
diff --git a/ciphers/prehistoric_men.txt b/ciphers/prehistoric_men.txt
index a58e533a8405..8d1b2bd8c8d1 100644
--- a/ciphers/prehistoric_men.txt
+++ b/ciphers/prehistoric_men.txt
@@ -40,8 +40,8 @@ Transcriber's note:
version referred to above. One example of this might
occur in the second paragraph under "Choppers and
Adze-like Tools", page 46, which contains the phrase
- an adze cutting edge is ? shaped. The symbol before
- shaped looks like a sharply-italicized sans-serif L.
+ “an adze cutting edge is ? shaped”. The symbol before
+ “shaped” looks like a sharply-italicized sans-serif “L”.
Devices that cannot display that symbol may substitute
a question mark, a square, or other symbol.
@@ -98,7 +98,7 @@ forced or pedantic; at least I have done my very best to tell the story
simply and clearly.
Many friends have aided in the preparation of the book. The whimsical
-charm of Miss Susan Richerts illustrations add enormously to the
+charm of Miss Susan Richert’s illustrations add enormously to the
spirit I wanted. She gave freely of her own time on the drawings and
in planning the book with me. My colleagues at the University of
Chicago, especially Professor Wilton M. Krogman (now of the University
@@ -108,7 +108,7 @@ the Department of Anthropology, gave me counsel in matters bearing on
their special fields, and the Department of Anthropology bore some of
the expense of the illustrations. From Mrs. Irma Hunter and Mr. Arnold
Maremont, who are not archeologists at all and have only an intelligent
-laymans notion of archeology, I had sound advice on how best to tell
+layman’s notion of archeology, I had sound advice on how best to tell
the story. I am deeply indebted to all these friends.
While I was preparing the second edition, I had the great fortune
@@ -117,13 +117,13 @@ Washburn, now of the Department of Anthropology of the University of
California, and the fourth, fifth, and sixth chapters with Professor
Hallum L. Movius, Jr., of the Peabody Museum, Harvard University. The
book has gained greatly in accuracy thereby. In matters of dating,
-Professor Movius and the indications of Professor W. F. Libbys Carbon
+Professor Movius and the indications of Professor W. F. Libby’s Carbon
14 chronology project have both encouraged me to choose the lowest
dates now current for the events of the Pleistocene Ice Age. There is
still no certain way of fixing a direct chronology for most of the
-Pleistocene, but Professor Libbys method appears very promising for
+Pleistocene, but Professor Libby’s method appears very promising for
its end range and for proto-historic dates. In any case, this book
-names periods, and new dates may be written in against mine, if new
+names “periods,” and new dates may be written in against mine, if new
and better dating systems appear.
I wish to thank Dr. Clifford C. Gregg, Director of Chicago Natural
@@ -150,7 +150,7 @@ Clark Howell of the Department of Anthropology of the University of
Chicago in reworking the earlier chapters, and he was very patient in
the matter, which I sincerely appreciate.
-All of Mrs. Susan Richert Allens original drawings appear, but a few
+All of Mrs. Susan Richert Allen’s original drawings appear, but a few
necessary corrections have been made in some of the charts and some new
drawings have been added by Mr. John Pfiffner, Staff Artist, Chicago
Natural History Museum.
@@ -200,7 +200,7 @@ HOW WE LEARN about Prehistoric Men
Prehistory means the time before written history began. Actually, more
-than 99 per cent of mans story is prehistory. Man is at least half a
+than 99 per cent of man’s story is prehistory. Man is at least half a
million years old, but he did not begin to write history (or to write
anything) until about 5,000 years ago.
@@ -216,7 +216,7 @@ The scientists who study the bones and teeth and any other parts
they find of the bodies of prehistoric men, are called _physical
anthropologists_. Physical anthropologists are trained, much like
doctors, to know all about the human body. They study living people,
-too; they know more about the biological facts of human races than
+too; they know more about the biological facts of human “races” than
anybody else. If the police find a badly decayed body in a trunk,
they ask a physical anthropologist to tell them what the person
originally looked like. The physical anthropologists who specialize in
@@ -228,14 +228,14 @@ ARCHEOLOGISTS
There is a kind of scientist who studies the things that prehistoric
men made and did. Such a scientist is called an _archeologist_. It is
-the archeologists business to look for the stone and metal tools, the
+the archeologist’s business to look for the stone and metal tools, the
pottery, the graves, and the caves or huts of the men who lived before
history began.
But there is more to archeology than just looking for things. In
-Professor V. Gordon Childes words, archeology furnishes a sort of
+Professor V. Gordon Childe’s words, archeology “furnishes a sort of
history of human activity, provided always that the actions have
-produced concrete results and left recognizable material traces. You
+produced concrete results and left recognizable material traces.” You
will see that there are at least three points in what Childe says:
1. The archeologists have to find the traces of things left behind by
@@ -245,7 +245,7 @@ will see that there are at least three points in what Childe says:
too soft or too breakable to last through the years. However,
3. The archeologist must use whatever he can find to tell a story--to
- make a sort of history--from the objects and living-places and
+ make a “sort of history”--from the objects and living-places and
graves that have escaped destruction.
What I mean is this: Let us say you are walking through a dump yard,
@@ -253,8 +253,8 @@ and you find a rusty old spark plug. If you want to think about what
the spark plug means, you quickly remember that it is a part of an
automobile motor. This tells you something about the man who threw
the spark plug on the dump. He either had an automobile, or he knew
-or lived near someone who did. He cant have lived so very long ago,
-youll remember, because spark plugs and automobiles are only about
+or lived near someone who did. He can’t have lived so very long ago,
+you’ll remember, because spark plugs and automobiles are only about
sixty years old.
When you think about the old spark plug in this way you have
@@ -264,8 +264,8 @@ It is the same way with the man-made things we archeologists find
and put in museums. Usually, only a few of these objects are pretty
to look at; but each of them has some sort of story to tell. Making
the interpretation of his finds is the most important part of the
-archeologists job. It is the way he gets at the sort of history of
-human activity which is expected of archeology.
+archeologist’s job. It is the way he gets at the “sort of history of
+human activity” which is expected of archeology.
SOME OTHER SCIENTISTS
@@ -274,7 +274,7 @@ There are many other scientists who help the archeologist and the
physical anthropologist find out about prehistoric men. The geologists
help us tell the age of the rocks or caves or gravel beds in which
human bones or man-made objects are found. There are other scientists
-with names which all begin with paleo (the Greek word for old). The
+with names which all begin with “paleo” (the Greek word for “old”). The
_paleontologists_ study fossil animals. There are also, for example,
such scientists as _paleobotanists_ and _paleoclimatologists_, who
study ancient plants and climates. These scientists help us to know
@@ -306,20 +306,20 @@ systems.
The rate of disappearance of radioactivity as time passes.[1]]
[1] It is important that the limitations of the radioactive carbon
- dating system be held in mind. As the statistics involved in
+ “dating” system be held in mind. As the statistics involved in
the system are used, there are two chances in three that the
- date of the sample falls within the range given as plus or
- minus an added number of years. For example, the date for the
- Jarmo village (see chart), given as 6750 200 B.C., really
+ “date” of the sample falls within the range given as plus or
+ minus an added number of years. For example, the “date” for the
+ Jarmo village (see chart), given as 6750 ± 200 B.C., really
means that there are only two chances in three that the real
date of the charcoal sampled fell between 6950 and 6550 B.C.
We have also begun to suspect that there are ways in which the
- samples themselves may have become contaminated, either on
+ samples themselves may have become “contaminated,” either on
the early or on the late side. We now tend to be suspicious of
single radioactive carbon determinations, or of determinations
from one site alone. But as a fabric of consistent
determinations for several or more sites of one archeological
- period, we gain confidence in the dates.
+ period, we gain confidence in the dates.
HOW THE SCIENTISTS FIND OUT
@@ -330,9 +330,9 @@ about prehistoric men. We also need a word about _how_ they find out.
All our finds came by accident until about a hundred years ago. Men
digging wells, or digging in caves for fertilizer, often turned up
ancient swords or pots or stone arrowheads. People also found some odd
+pieces of stone that didn’t look like natural forms, but they also
+didn’t look like any known tool. As a result, the people who found them
+gave them queer names; for example, “thunderbolts.” The people thought
+pieces of stone that didn’t look like natural forms, but they also
+didn’t look like any known tool. As a result, the people who found them
+gave them queer names; for example, “thunderbolts.” The people thought
the strange stones came to earth as bolts of lightning. We know now
that these strange stones were prehistoric stone tools.
@@ -349,7 +349,7 @@ story of cave men on Mount Carmel, in Palestine, began to be known.
Planned archeological digging is only about a century old. Even before
this, however, a few men realized the significance of objects they dug
from the ground; one of these early archeologists was our own Thomas
-Jefferson. The first real mound-digger was a German grocers clerk,
+Jefferson. The first real mound-digger was a German grocer’s clerk,
Heinrich Schliemann. Schliemann made a fortune as a merchant, first
in Europe and then in the California gold-rush of 1849. He became an
American citizen. Then he retired and had both money and time to test
@@ -389,16 +389,16 @@ used had been a soft, unbaked mud-brick, and most of the debris
consisted of fallen or rain-melted mud from these mud-bricks.
This idea of _stratification_, like the cake layers, was already a
+familiar one to the geologists by Schliemann’s time. They could show
+familiar one to the geologists by Schliemann’s time. They could show
that their lowest layer of rock was oldest or earliest, and that the
+overlying layers became more recent as one moved upward. Schliemann’s
+overlying layers became more recent as one moved upward. Schliemann’s
digging proved the same thing at Troy. His first (lowest and earliest)
city had at least nine layers above it; he thought that the second
+layer contained the remains of Homer’s Troy. We now know that Homeric
+layer contained the remains of Homer’s Troy. We now know that Homeric
Troy was layer VIIa from the bottom; also, we count eleven layers or
sub-layers in total.
+Schliemann’s work marks the beginnings of modern archeology. Scholars
+Schliemann’s work marks the beginnings of modern archeology. Scholars
soon set out to dig on ancient sites, from Egypt to Central America.
@@ -410,21 +410,21 @@ Archeologists began to get ideas as to the kinds of objects that
belonged together. If you compared a mail-order catalogue of 1890 with
one of today, you would see a lot of differences. If you really studied
the two catalogues hard, you would also begin to see that certain
-objects go together. Horseshoes and metal buggy tires and pieces of
+objects “go together.” Horseshoes and metal buggy tires and pieces of
harness would begin to fit into a picture with certain kinds of coal
stoves and furniture and china dishes and kerosene lamps. Our friend
the spark plug, and radios and electric refrigerators and light bulbs
would fit into a picture with different kinds of furniture and dishes
-and tools. You wont be old enough to remember the kind of hats that
-women wore in 1890, but youve probably seen pictures of them, and you
-know very well they couldnt be worn with the fashions of today.
+and tools. You won’t be old enough to remember the kind of hats that
+women wore in 1890, but you’ve probably seen pictures of them, and you
+know very well they couldn’t be worn with the fashions of today.
This is one of the ways that archeologists study their materials.
The various tools and weapons and jewelry, the pottery, the kinds
of houses, and even the ways of burying the dead tend to fit into
pictures. Some archeologists call all of the things that go together to
make such a picture an _assemblage_. The assemblage of the first layer
+of Schliemann’s Troy was as different from that of the seventh layer as
+of Schliemann’s Troy was as different from that of the seventh layer as
our 1900 mail-order catalogue is from the one of today.
The archeologists who came after Schliemann began to notice other
@@ -433,23 +433,23 @@ idea that people will buy better mousetraps goes back into very
ancient times. Today, if we make good automobiles or radios, we can
sell some of them in Turkey or even in Timbuktu. This means that a
few present-day types of American automobiles and radios form part
-of present-day assemblages in both Turkey and Timbuktu. The total
-present-day assemblage of Turkey is quite different from that of
+of present-day “assemblages” in both Turkey and Timbuktu. The total
+present-day “assemblage” of Turkey is quite different from that of
Timbuktu or that of America, but they have at least some automobiles
and some radios in common.
Now these automobiles and radios will eventually wear out. Let us
suppose we could go to some remote part of Turkey or to Timbuktu in a
-dream. We dont know what the date is, in our dream, but we see all
+dream. We don’t know what the date is, in our dream, but we see all
sorts of strange things and ways of living in both places. Nobody
tells us what the date is. But suddenly we see a 1936 Ford; so we
know that in our dream it has to be at least the year 1936, and only
as many years after that as we could reasonably expect a Ford to keep
-in running order. The Ford would probably break down in twenty years
-time, so the Turkish or Timbuktu assemblage were seeing in our dream
+in running order. The Ford would probably break down in twenty years’
+time, so the Turkish or Timbuktu “assemblage” we’re seeing in our dream
has to date at about A.D. 1936-56.
-Archeologists not only date their ancient materials in this way; they
+Archeologists not only “date” their ancient materials in this way; they
also see over what distances and between which peoples trading was
done. It turns out that there was a good deal of trading in ancient
times, probably all on a barter and exchange basis.
@@ -480,13 +480,13 @@ site. They find the remains of everything that would last through
time, in several different layers. They know that the assemblage in
the bottom layer was laid down earlier than the assemblage in the next
layer above, and so on up to the topmost layer, which is the latest.
-They look at the results of other digs and find that some other
+They look at the results of other “digs” and find that some other
archeologist 900 miles away has found ax-heads in his lowest layer,
exactly like the ax-heads of their fifth layer. This means that their
fifth layer must have been lived in at about the same time as was the
first layer in the site 200 miles away. It also may mean that the
people who lived in the two layers knew and traded with each other. Or
-it could mean that they didnt necessarily know each other, but simply
+it could mean that they didn’t necessarily know each other, but simply
that both traded with a third group at about the same time.
You can see that the more we dig and find, the more clearly the main
@@ -501,8 +501,8 @@ those of domesticated animals, for instance, sheep or cattle, and
therefore the people must have kept herds.
More important than anything else--as our structure grows more
-complicated and our materials increase--is the fact that a sort
-of history of human activity does begin to appear. The habits or
+complicated and our materials increase--is the fact that “a sort
+of history of human activity” does begin to appear. The habits or
traditions that men formed in the making of their tools and in the
ways they did things, begin to stand out for us. How characteristic
were these habits and traditions? What areas did they spread over?
@@ -519,7 +519,7 @@ method--chemical tests of the bones--that will enable them to discover
what the blood-type may have been. One thing is sure. We have never
found a group of skeletons so absolutely similar among themselves--so
cast from a single mould, so to speak--that we could claim to have a
-pure race. I am sure we never shall.
+“pure” race. I am sure we never shall.
We become particularly interested in any signs of change--when new
materials and tool types and ways of doing things replace old ones. We
@@ -527,7 +527,7 @@ watch for signs of social change and progress in one way or another.
We must do all this without one word of written history to aid us.
Everything we are concerned with goes back to the time _before_ men
-learned to write. That is the prehistorians job--to find out what
+learned to write. That is the prehistorian’s job--to find out what
happened before history began.
@@ -538,9 +538,9 @@ THE CHANGING WORLD in which Prehistoric Men Lived
[Illustration]
-Mankind, well say, is at least a half million years old. It is very
+Mankind, we’ll say, is at least a half million years old. It is very
hard to understand how long a time half a million years really is.
-If we were to compare this whole length of time to one day, wed get
+If we were to compare this whole length of time to one day, we’d get
something like this: The present time is midnight, and Jesus was
born just five minutes and thirty-six seconds ago. Earliest history
began less than fifteen minutes ago. Everything before 11:45 was in
@@ -569,7 +569,7 @@ book; it would mainly affect the dates earlier than 25,000 years ago.
CHANGES IN ENVIRONMENT
-The earth probably hasnt changed much in the last 5,000 years (250
+The earth probably hasn’t changed much in the last 5,000 years (250
generations). Men have built things on its surface and dug into it and
drawn boundaries on maps of it, but the places where rivers, lakes,
seas, and mountains now stand have changed very little.
@@ -605,7 +605,7 @@ the glaciers covered most of Canada and the northern United States and
reached down to southern England and France in Europe. Smaller ice
sheets sat like caps on the Rockies, the Alps, and the Himalayas. The
continental glaciation only happened north of the equator, however, so
-remember that Ice Age is only half true.
+remember that “Ice Age” is only half true.
As you know, the amount of water on and about the earth does not vary.
These large glaciers contained millions of tons of water frozen into
@@ -677,9 +677,9 @@ their dead.
At about the time when the last great glacier was finally melting away,
men in the Near East made the first basic change in human economy.
They began to plant grain, and they learned to raise and herd certain
-animals. This meant that they could store food in granaries and on the
-hoof against the bad times of the year. This first really basic change
-in mans way of living has been called the food-producing revolution.
+animals. This meant that they could store food in granaries and “on the
+hoof” against the bad times of the year. This first really basic change
+in man’s way of living has been called the “food-producing revolution.”
By the time it happened, a modern kind of climate was beginning. Men
had already grown to look as they do now. Know-how in ways of living
had developed and progressed, slowly but surely, up to a point. It was
@@ -698,25 +698,25 @@ Prehistoric Men THEMSELVES
DO WE KNOW WHERE MAN ORIGINATED?
-For a long time some scientists thought the cradle of mankind was in
+For a long time some scientists thought the “cradle of mankind” was in
central Asia. Other scientists insisted it was in Africa, and still
-others said it might have been in Europe. Actually, we dont know
-where it was. We dont even know that there was only _one_ cradle.
-If we had to choose a cradle at this moment, we would probably say
+others said it might have been in Europe. Actually, we don’t know
+where it was. We don’t even know that there was only _one_ “cradle.”
+If we had to choose a “cradle” at this moment, we would probably say
Africa. But the southern portions of Asia and Europe may also have been
included in the general area. The scene of the early development of
-mankind was certainly the Old World. It is pretty certain men didnt
+mankind was certainly the Old World. It is pretty certain men didn’t
reach North or South America until almost the end of the Ice Age--had
they done so earlier we would certainly have found some trace of them
by now.
The earliest tools we have yet found come from central and south
-Africa. By the dating system Im using, these tools must be over
+Africa. By the dating system I’m using, these tools must be over
500,000 years old. There are now reports that a few such early tools
have been found--at the Sterkfontein cave in South Africa--along with
-the bones of small fossil men called australopithecines.
+the bones of small fossil men called “australopithecines.”
-Not all scientists would agree that the australopithecines were men,
+Not all scientists would agree that the australopithecines were “men,”
or would agree that the tools were made by the australopithecines
themselves. For these sticklers, the earliest bones of men come from
the island of Java. The date would be about 450,000 years ago. So far,
@@ -727,12 +727,12 @@ Let me say it another way. How old are the earliest traces of men we
now have? Over half a million years. This was a time when the first
alpine glaciation was happening in the north. What has been found so
far? The tools which the men of those times made, in different parts
-of Africa. It is now fairly generally agreed that the men who made
-the tools were the australopithecines. There is also a more man-like
+of Africa. It is now fairly generally agreed that the “men” who made
+the tools were the australopithecines. There is also a more “man-like”
jawbone at Kanam in Kenya, but its find-spot has been questioned. The
next earliest bones we have were found in Java, and they may be almost
a hundred thousand years younger than the earliest African finds. We
-havent yet found the tools of these early Javanese. Our knowledge of
+haven’t yet found the tools of these early Javanese. Our knowledge of
tool-using in Africa spreads quickly as time goes on: soon after the
appearance of tools in the south we shall have them from as far north
as Algeria.
@@ -758,30 +758,30 @@ prove it.
MEN AND APES
Many people used to get extremely upset at the ill-formed notion
-that man descended from the apes. Such words were much more likely
-to start fights or monkey trials than the correct notion that all
+that “man descended from the apes.” Such words were much more likely
+to start fights or “monkey trials” than the correct notion that all
living animals, including man, ascended or evolved from a single-celled
organism which lived in the primeval seas hundreds of millions of years
-ago. Men are mammals, of the order called Primates, and mans living
-relatives are the great apes. Men didnt descend from the apes or
+ago. Men are mammals, of the order called Primates, and man’s living
+relatives are the great apes. Men didn’t “descend” from the apes or
apes from men, and mankind must have had much closer relatives who have
since become extinct.
Men stand erect. They also walk and run on their two feet. Apes are
happiest in trees, swinging with their arms from branch to branch.
Few branches of trees will hold the mighty gorilla, although he still
-manages to sleep in trees. Apes cant stand really erect in our sense,
+manages to sleep in trees. Apes can’t stand really erect in our sense,
and when they have to run on the ground, they use the knuckles of their
hands as well as their feet.
A key group of fossil bones here are the south African
australopithecines. These are called the _Australopithecinae_ or
-man-apes or sometimes even ape-men. We do not _know_ that they were
+“man-apes” or sometimes even “ape-men.” We do not _know_ that they were
directly ancestral to men but they can hardly have been so to apes.
-Presently Ill describe them a bit more. The reason I mention them
+Presently I’ll describe them a bit more. The reason I mention them
here is that while they had brains no larger than those of apes, their
hipbones were enough like ours so that they must have stood erect.
-There is no good reason to think they couldnt have walked as we do.
+There is no good reason to think they couldn’t have walked as we do.
BRAINS, HANDS, AND TOOLS
@@ -801,12 +801,12 @@ Nobody knows which of these three is most important, or which came
first. Most probably the growth of all three things was very much
blended together. If you think about each of the things, you will see
what I mean. Unless your hand is more flexible than a paw, and your
-thumb will work against (or oppose) your fingers, you cant hold a tool
-very well. But you wouldnt get the idea of using a tool unless you had
+thumb will work against (or oppose) your fingers, you can’t hold a tool
+very well. But you wouldn’t get the idea of using a tool unless you had
enough brain to help you see cause and effect. And it is rather hard to
see how your hand and brain would develop unless they had something to
-practice on--like using tools. In Professor Krogmans words, the hand
-must become the obedient servant of the eye and the brain. It is the
+practice on--like using tools. In Professor Krogman’s words, “the hand
+must become the obedient servant of the eye and the brain.” It is the
_co-ordination_ of these things that counts.
Many other things must have been happening to the bodies of the
@@ -820,17 +820,17 @@ little by little, all together. Men became men very slowly.
WHEN SHALL WE CALL MEN MEN?
-What do I mean when I say men? People who looked pretty much as we
+What do I mean when I say “men”? People who looked pretty much as we
do, and who used different tools to do different things, are men to me.
-Well probably never know whether the earliest ones talked or not. They
+We’ll probably never know whether the earliest ones talked or not. They
probably had vocal cords, so they could make sounds, but did they know
how to make sounds work as symbols to carry meanings? But if the fossil
-bones look like our skeletons, and if we find tools which well agree
-couldnt have been made by nature or by animals, then Id say we had
+bones look like our skeletons, and if we find tools which we’ll agree
+couldn’t have been made by nature or by animals, then I’d say we had
traces of _men_.
The australopithecine finds of the Transvaal and Bechuanaland, in
-south Africa, are bound to come into the discussion here. Ive already
+south Africa, are bound to come into the discussion here. I’ve already
told you that the australopithecines could have stood upright and
walked on their two hind legs. They come from the very base of the
Pleistocene or Ice Age, and a few coarse stone tools have been found
@@ -848,17 +848,17 @@ bones. The doubt as to whether the australopithecines used the tools
themselves goes like this--just suppose some man-like creature (whose
bones we have not yet found) made the tools and used them to kill
and butcher australopithecines. Hence a few experts tend to let
-australopithecines still hang in limbo as man-apes.
+australopithecines still hang in limbo as “man-apes.”
THE EARLIEST MEN WE KNOW
-Ill postpone talking about the tools of early men until the next
+I’ll postpone talking about the tools of early men until the next
chapter. The men whose bones were the earliest of the Java lot have
been given the name _Meganthropus_. The bones are very fragmentary. We
would not understand them very well unless we had the somewhat later
-Javanese lot--the more commonly known _Pithecanthropus_ or Java
-man--against which to refer them for study. One of the less well-known
+Javanese lot--the more commonly known _Pithecanthropus_ or “Java
+man”--against which to refer them for study. One of the less well-known
and earliest fragments, a piece of lower jaw and some teeth, rather
strongly resembles the lower jaws and teeth of the australopithecine
type. Was _Meganthropus_ a sort of half-way point between the
@@ -872,7 +872,7 @@ finds of Java man were made in 1891-92 by Dr. Eugene Dubois, a Dutch
doctor in the colonial service. Finds have continued to be made. There
are now bones enough to account for four skulls. There are also four
jaws and some odd teeth and thigh bones. Java man, generally speaking,
-was about five feet six inches tall, and didnt hold his head very
+was about five feet six inches tall, and didn’t hold his head very
erect. His skull was very thick and heavy and had room for little more
than two-thirds as large a brain as we have. He had big teeth and a big
jaw and enormous eyebrow ridges.
@@ -885,22 +885,22 @@ belonged to his near descendants.
Remember that there are several varieties of men in the whole early
Java lot, at least two of which are earlier than the _Pithecanthropus_,
-Java man. Some of the earlier ones seem to have gone in for
+“Java man.” Some of the earlier ones seem to have gone in for
bigness, in tooth-size at least. _Meganthropus_ is one of these
earlier varieties. As we said, he _may_ turn out to be a link to
the australopithecines, who _may_ or _may not_ be ancestral to men.
_Meganthropus_ is best understandable in terms of _Pithecanthropus_,
who appeared later in the same general area. _Pithecanthropus_ is
pretty well understandable from the bones he left us, and also because
-of his strong resemblance to the fully tool-using cave-dwelling Peking
-man, _Sinanthropus_, about whom we shall talk next. But you can see
+of his strong resemblance to the fully tool-using cave-dwelling “Peking
+man,” _Sinanthropus_, about whom we shall talk next. But you can see
that the physical anthropologists and prehistoric archeologists still
have a lot of work to do on the problem of earliest men.
PEKING MEN AND SOME EARLY WESTERNERS
-The earliest known Chinese are called _Sinanthropus_, or Peking man,
+The earliest known Chinese are called _Sinanthropus_, or “Peking man,”
because the finds were made near that city. In World War II, the United
States Marine guard at our Embassy in Peking tried to help get the
bones out of the city before the Japanese attack. Nobody knows where
@@ -913,9 +913,9 @@ casts of the bones.
Peking man lived in a cave in a limestone hill, made tools, cracked
animal bones to get the marrow out, and used fire. Incidentally, the
bones of Peking man were found because Chinese dig for what they call
-dragon bones and dragon teeth. Uneducated Chinese buy these things
+“dragon bones” and “dragon teeth.” Uneducated Chinese buy these things
in their drug stores and grind them into powder for medicine. The
-dragon teeth and bones are really fossils of ancient animals, and
+“dragon teeth” and “bones” are really fossils of ancient animals, and
sometimes of men. The people who supply the drug stores have learned
where to dig for strange bones and teeth. Paleontologists who get to
China go to the drug stores to buy fossils. In a roundabout way, this
@@ -924,7 +924,7 @@ is how the fallen-in cave of Peking man at Choukoutien was discovered.
Peking man was not quite as tall as Java man but he probably stood
straighter. His skull looked very much like that of the Java skull
except that it had room for a slightly larger brain. His face was less
-brutish than was Java mans face, but this isnt saying much.
+brutish than was Java man’s face, but this isn’t saying much.
Peking man dates from early in the interglacial period following the
second alpine glaciation. He probably lived close to 350,000 years
@@ -946,9 +946,9 @@ big ridges over the eyes. The more fragmentary skull from Swanscombe in
England (p. 11) has been much more carefully studied. Only the top and
back of that skull have been found. Since the skull rounds up nicely,
it has been assumed that the face and forehead must have been quite
-modern. Careful comparison with Steinheim shows that this was not
+“modern.” Careful comparison with Steinheim shows that this was not
necessarily so. This is important because it bears on the question of
-how early truly modern man appeared.
+how early truly “modern” man appeared.
Recently two fragmentary jaws were found at Ternafine in Algeria,
northwest Africa. They look like the jaws of Peking man. Tools were
@@ -971,22 +971,22 @@ modern Australian natives. During parts of the Ice Age there was a land
bridge all the way from Java to Australia.
-TWO ENGLISHMEN WHO WERENT OLD
+TWO ENGLISHMEN WHO WEREN’T OLD
The older textbooks contain descriptions of two English finds which
were thought to be very old. These were called Piltdown (_Eoanthropus
dawsoni_) and Galley Hill. The skulls were very modern in appearance.
In 1948-49, British scientists began making chemical tests which proved
that neither of these finds is very old. It is now known that both
-Piltdown man and the tools which were said to have been found with
+“Piltdown man” and the tools which were said to have been found with
him were part of an elaborate fake!
-TYPICAL CAVE MEN
+TYPICAL “CAVE MEN”
The next men we have to talk about are all members of a related group.
-These are the Neanderthal group. Neanderthal man himself was found in
-the Neander Valley, near Dsseldorf, Germany, in 1856. He was the first
+These are the Neanderthal group. “Neanderthal man” himself was found in
+the Neander Valley, near Düsseldorf, Germany, in 1856. He was the first
human fossil to be recognized as such.
[Illustration: PRINCIPAL KNOWN TYPES OF FOSSIL MEN
@@ -999,7 +999,7 @@ human fossil to be recognized as such.
PITHECANTHROPUS]
Some of us think that the neanderthaloids proper are only those people
-of western Europe who didnt get out before the beginning of the last
+of western Europe who didn’t get out before the beginning of the last
great glaciation, and who found themselves hemmed in by the glaciers
in the Alps and northern Europe. Being hemmed in, they intermarried
a bit too much and developed into a special type. Professor F. Clark
@@ -1010,7 +1010,7 @@ pre-neanderthaloids. There are traces of these pre-neanderthaloids
pretty much throughout Europe during the third interglacial period--say
100,000 years ago. The pre-neanderthaloids are represented by such
finds as the ones at Ehringsdorf in Germany and Saccopastore in Italy.
-I wont describe them for you, since they are simply less extreme than
+I won’t describe them for you, since they are simply less extreme than
the neanderthaloids proper--about half way between Steinheim and the
classic Neanderthal people.
@@ -1019,24 +1019,24 @@ get caught in the pocket of the southwest corner of Europe at the onset
of the last great glaciation became the classic Neanderthalers. Out in
the Near East, Howell thinks, it is possible to see traces of people
evolving from the pre-neanderthaloid type toward that of fully modern
-man. Certainly, we dont see such extreme cases of neanderthaloidism
+man. Certainly, we don’t see such extreme cases of “neanderthaloidism”
outside of western Europe.
There are at least a dozen good examples in the main or classic
Neanderthal group in Europe. They date to just before and in the
earlier part of the last great glaciation (85,000 to 40,000 years ago).
-Many of the finds have been made in caves. The cave men the movies
+Many of the finds have been made in caves. The “cave men” the movies
and the cartoonists show you are probably meant to be Neanderthalers.
-Im not at all sure they dragged their women by the hair; the women
+I’m not at all sure they dragged their women by the hair; the women
were probably pretty tough, too!
Neanderthal men had large bony heads, but plenty of room for brains.
Some had brain cases even larger than the average for modern man. Their
faces were heavy, and they had eyebrow ridges of bone, but the ridges
were not as big as those of Java man. Their foreheads were very low,
-and they didnt have much chin. They were about five feet three inches
-tall, but were heavy and barrel-chested. But the Neanderthalers didnt
-slouch as much as theyve been blamed for, either.
+and they didn’t have much chin. They were about five feet three inches
+tall, but were heavy and barrel-chested. But the Neanderthalers didn’t
+slouch as much as they’ve been blamed for, either.
One important thing about the Neanderthal group is that there is a fair
number of them to study. Just as important is the fact that we know
@@ -1059,10 +1059,10 @@ different-looking people.
EARLY MODERN MEN
-How early is modern man (_Homo sapiens_), the wise man? Some people
+How early is modern man (_Homo sapiens_), the “wise man”? Some people
have thought that he was very early, a few still think so. Piltdown
and Galley Hill, which were quite modern in anatomical appearance and
-_supposedly_ very early in date, were the best evidence for very
+_supposedly_ very early in date, were the best “evidence” for very
early modern men. Now that Piltdown has been liquidated and Galley Hill
is known to be very late, what is left of the idea?
@@ -1073,13 +1073,13 @@ the Ternafine jaws, you might come to the conclusion that the crown of
the Swanscombe head was that of a modern-like man.
Two more skulls, again without faces, are available from a French
-cave site, Fontchevade. They come from the time of the last great
+cave site, Fontéchevade. They come from the time of the last great
interglacial, as did the pre-neanderthaloids. The crowns of the
-Fontchevade skulls also look quite modern. There is a bit of the
+Fontéchevade skulls also look quite modern. There is a bit of the
forehead preserved on one of these skulls and the brow-ridge is not
heavy. Nevertheless, there is a suggestion that the bones belonged to
an immature individual. In this case, his (or even more so, if _her_)
+brow-ridges would have been weak anyway. The case for the Fontéchevade
+brow-ridges would have been weak anyway. The case for the Font�chevade
fossils, as modern type men, is little stronger than that for
Swanscombe, although Professor Vallois believes it a good case.
@@ -1101,8 +1101,8 @@ of the onset of colder weather, when the last glaciation was beginning
in the north--say 75,000 years ago.
The 70 per cent modern group came from only one cave, Mugharet es-Skhul
-(cave of the kids). The other group, from several caves, had bones of
-men of the type weve been calling pre-neanderthaloid which we noted
+(“cave of the kids”). The other group, from several caves, had bones of
+men of the type we’ve been calling pre-neanderthaloid which we noted
were widespread in Europe and beyond. The tools which came with each
of these finds were generally similar, and McCown and Keith, and other
scholars since their study, have tended to assume that both the Skhul
@@ -1131,26 +1131,26 @@ important fossil men of later Europe are shown in the chart on page
DIFFERENCES IN THE EARLY MODERNS
The main early European moderns have been divided into two groups, the
-Cro-Magnon group and the Combe Capelle-Brnn group. Cro-Magnon people
+Cro-Magnon group and the Combe Capelle-Brünn group. Cro-Magnon people
were tall and big-boned, with large, long, and rugged heads. They
must have been built like many present-day Scandinavians. The Combe
-Capelle-Brnn people were shorter; they had narrow heads and faces, and
-big eyebrow-ridges. Of course we dont find the skin or hair of these
-people. But there is little doubt they were Caucasoids (Whites).
+Capelle-Brünn people were shorter; they had narrow heads and faces, and
+big eyebrow-ridges. Of course we don’t find the skin or hair of these
+people. But there is little doubt they were Caucasoids (“Whites”).
Another important find came in the Italian Riviera, near Monte Carlo.
Here, in a cave near Grimaldi, there was a grave containing a woman
and a young boy, buried together. The two skeletons were first called
-Negroid because some features of their bones were thought to resemble
+“Negroid” because some features of their bones were thought to resemble
certain features of modern African Negro bones. But more recently,
Professor E. A. Hooton and other experts questioned the use of the word
-Negroid in describing the Grimaldi skeletons. It is true that nothing
+“Negroid” in describing the Grimaldi skeletons. It is true that nothing
is known of the skin color, hair form, or any other fleshy feature of
-the Grimaldi people, so that the word Negroid in its usual meaning is
+the Grimaldi people, so that the word “Negroid” in its usual meaning is
not proper here. It is also not clear whether the features of the bones
-claimed to be Negroid are really so at all.
+claimed to be “Negroid” are really so at all.
-From a place called Wadjak, in Java, we have proto-Australoid skulls
+From a place called Wadjak, in Java, we have “proto-Australoid” skulls
which closely resemble those of modern Australian natives. Some of
the skulls found in South Africa, especially the Boskop skull, look
like those of modern Bushmen, but are much bigger. The ancestors of
@@ -1159,12 +1159,12 @@ Desert. True African Negroes were forest people who apparently expanded
out of the west central African area only in the last several thousand
years. Although dark in skin color, neither the Australians nor the
Bushmen are Negroes; neither the Wadjak nor the Boskop skulls are
-Negroid.
+“Negroid.”
-As weve already mentioned, Professor Weidenreich believed that Peking
+As we’ve already mentioned, Professor Weidenreich believed that Peking
man was already on the way to becoming a Mongoloid. Anyway, the
-Mongoloids would seem to have been present by the time of the Upper
-Cave at Choukoutien, the _Sinanthropus_ find-spot.
+Mongoloids would seem to have been present by the time of the “Upper
+Cave” at Choukoutien, the _Sinanthropus_ find-spot.
WHAT THE DIFFERENCES MEAN
@@ -1175,14 +1175,14 @@ From area to area, men tended to look somewhat different, just as
they do today. This is all quite natural. People _tended_ to mate
near home; in the anthropological jargon, they made up geographically
localized breeding populations. The simple continental division of
-stocks--black = Africa, yellow = Asia, white = Europe--is too simple
+“stocks”--black = Africa, yellow = Asia, white = Europe--is too simple
a picture to fit the facts. People became accustomed to life in some
-particular area within a continent (we might call it a natural area).
+particular area within a continent (we might call it a “natural area”).
As they went on living there, they evolved towards some particular
physical variety. It would, of course, have been difficult to draw
a clear boundary between two adjacent areas. There must always have
been some mating across the boundaries in every case. One thing human
-beings dont do, and never have done, is to mate for purity. It is
+beings don’t do, and never have done, is to mate for “purity.” It is
self-righteous nonsense when we try to kid ourselves into thinking that
they do.
@@ -1195,28 +1195,28 @@ and they must do the writing about races. I shall, however, give two
modern definitions of race, and then make one comment.
Dr. William G. Boyd, professor of Immunochemistry, School of
- Medicine, Boston University: We may define a human race as a
+ Medicine, Boston University: “We may define a human race as a
population which differs significantly from other human populations
in regard to the frequency of one or more of the genes it
- possesses.
+ possesses.”
Professor Sherwood L. Washburn, professor of Physical Anthropology,
- Department of Anthropology, the University of California: A race
+ Department of Anthropology, the University of California: “A ‘race’
is a group of genetically similar populations, and races intergrade
- because there are always intermediate populations.
+ because there are always intermediate populations.”
My comment is that the ideas involved here are all biological: they
concern groups, _not_ individuals. Boyd and Washburn may differ a bit
-on what they want to consider a population, but a population is a
+on what they want to consider a “population,” but a population is a
group nevertheless, and genetics is biology to the hilt. Now a lot of
people still think of race in terms of how people dress or fix their
food or of other habits or customs they have. The next step is to talk
-about racial purity. None of this has anything whatever to do with
+about racial “purity.” None of this has anything whatever to do with
race proper, which is a matter of the biology of groups.
-Incidentally, Im told that if man very carefully _controls_
+Incidentally, I’m told that if man very carefully _controls_
the breeding of certain animals over generations--dogs, cattle,
-chickens--he might achieve a pure race of animals. But he doesnt do
+chickens--he might achieve a “pure” race of animals. But he doesn’t do
it. Some unfortunate genetic trait soon turns up, so this has just as
carefully to be bred out again, and so on.
@@ -1240,20 +1240,20 @@ date to the second great interglacial period, about 350,000 years ago.
Piltdown and Galley Hill are out, and with them, much of the starch
in the old idea that there were two distinct lines of development
-in human evolution: (1) a line of paleoanthropic development from
+in human evolution: (1) a line of “paleoanthropic” development from
Heidelberg to the Neanderthalers where it became extinct, and (2) a
-very early modern line, through Piltdown, Galley Hill, Swanscombe, to
+very early “modern” line, through Piltdown, Galley Hill, Swanscombe, to
us. Swanscombe, Steinheim, and Ternafine are just as easily cases of
very early pre-neanderthaloids.
The pre-neanderthaloids were very widespread during the third
interglacial: Ehringsdorf, Saccopastore, some of the Mount Carmel
-people, and probably Fontchevade are cases in point. A variety of
+people, and probably Fontéchevade are cases in point. A variety of
their descendants can be seen, from Java (Solo), Africa (Rhodesian
man), and about the Mediterranean and in western Europe. As the acute
cold of the last glaciation set in, the western Europeans found
themselves surrounded by water, ice, or bitter cold tundra. To vastly
-over-simplify it, they bred in and became classic neanderthaloids.
+over-simplify it, they “bred in” and became classic neanderthaloids.
But on Mount Carmel, the Skhul cave-find with its 70 per cent modern
features shows what could happen elsewhere at the same time.
@@ -1263,12 +1263,12 @@ modern skeletons of men. The modern skeletons differ from place to
place, just as different groups of men living in different places still
look different.
-What became of the Neanderthalers? Nobody can tell me for sure. Ive a
-hunch they were simply bred out again when the cold weather was over.
+What became of the Neanderthalers? Nobody can tell me for sure. I’ve a
+hunch they were simply “bred out” again when the cold weather was over.
Many Americans, as the years go by, are no longer ashamed to claim they
-have Indian blood in their veins. Give us a few more generations
+have “Indian blood in their veins.” Give us a few more generations
and there will not be very many other Americans left to whom we can
-brag about it. It certainly isnt inconceivable to me to imagine a
+brag about it. It certainly isn’t inconceivable to me to imagine a
little Cro-Magnon boy bragging to his friends about his tough, strong,
Neanderthaler great-great-great-great-grandfather!
@@ -1281,15 +1281,15 @@ Cultural BEGINNINGS
Men, unlike the lower animals, are made up of much more than flesh and
-blood and bones; for men have culture.
+blood and bones; for men have “culture.”
WHAT IS CULTURE?
-Culture is a word with many meanings. The doctors speak of making a
-culture of a certain kind of bacteria, and ants are said to have a
-culture. Then there is the Emily Post kind of culture--you say a
-person is cultured, or that he isnt, depending on such things as
+“Culture” is a word with many meanings. The doctors speak of making a
+“culture” of a certain kind of bacteria, and ants are said to have a
+“culture.” Then there is the Emily Post kind of “culture”--you say a
+person is “cultured,” or that he isn’t, depending on such things as
whether or not he eats peas with his knife.
The anthropologists use the word too, and argue heatedly over its finer
@@ -1300,7 +1300,7 @@ men from another. In this sense, a CULTURE means the way the members
of a group of people think and believe and live, the tools they make,
and the way they do things. Professor Robert Redfield says a culture
is an organized or formalized body of conventional understandings.
-Conventional understandings means the whole set of rules, beliefs,
+“Conventional understandings” means the whole set of rules, beliefs,
and standards which a group of people lives by. These understandings
show themselves in art, and in the other things a people may make and
do. The understandings continue to last, through tradition, from one
@@ -1325,12 +1325,12 @@ Egyptians. I mean their beliefs as to why grain grew, as well as their
ability to make tools with which to reap the grain. I mean their
beliefs about life after death. What I am thinking about as culture is
a thing which lasted in time. If any one Egyptian, even the Pharaoh,
-died, it didnt affect the Egyptian culture of that particular moment.
+died, it didn’t affect the Egyptian culture of that particular moment.
PREHISTORIC CULTURES
-For that long period of mans history that is all prehistory, we have
+For that long period of man’s history that is all prehistory, we have
no written descriptions of cultures. We find only the tools men made,
the places where they lived, the graves in which they buried their
dead. Fortunately for us, these tools and living places and graves all
@@ -1345,15 +1345,15 @@ of the classic European Neanderthal group of men, we have found few
cave-dwelling places of very early prehistoric men. First, there is the
fallen-in cave where Peking man was found, near Peking. Then there are
two or three other _early_, but not _very early_, possibilities. The
-finds at the base of the French cave of Fontchevade, those in one of
+finds at the base of the French cave of Fontéchevade, those in one of
the Makapan caves in South Africa, and several open sites such as Dr.
-L. S. B. Leakeys Olorgesailie in Kenya doubtless all lie earlier than
+L. S. B. Leakey’s Olorgesailie in Kenya doubtless all lie earlier than
the time of the main European Neanderthal group, but none are so early
as the Peking finds.
You can see that we know very little about the home life of earlier
prehistoric men. We find different kinds of early stone tools, but we
-cant even be really sure which tools may have been used together.
+can’t even be really sure which tools may have been used together.
WHY LITTLE HAS LASTED FROM EARLY TIMES
@@ -1380,11 +1380,11 @@ there first! The front of this enormous sheet of ice moved down over
the country, crushing and breaking and plowing up everything, like a
gigantic bulldozer. You can see what happened to our camp site.
-Everything the glacier couldnt break, it pushed along in front of it
+Everything the glacier couldn’t break, it pushed along in front of it
or plowed beneath it. Rocks were ground to gravel, and soil was caught
into the ice, which afterwards melted and ran off as muddy water. Hard
-tools of flint sometimes remained whole. Human bones werent so hard;
-its a wonder _any_ of them lasted. Gushing streams of melt water
+tools of flint sometimes remained whole. Human bones weren’t so hard;
+it’s a wonder _any_ of them lasted. Gushing streams of melt water
flushed out the debris from underneath the glacier, and water flowed
off the surface and through great crevasses. The hard materials these
waters carried were even more rolled and ground up. Finally, such
@@ -1407,26 +1407,26 @@ all up, and so we cannot say which particular sets of tools belonged
together in the first place.
-EOLITHS
+“EOLITHS”
But what sort of tools do we find earliest? For almost a century,
people have been picking up odd bits of flint and other stone in the
oldest Ice Age gravels in England and France. It is now thought these
-odd bits of stone werent actually worked by prehistoric men. The
-stones were given a name, _eoliths_, or dawn stones. You can see them
+odd bits of stone weren’t actually worked by prehistoric men. The
+stones were given a name, _eoliths_, or “dawn stones.” You can see them
in many museums; but you can be pretty sure that very few of them were
actually fashioned by men.
-It is impossible to pick out eoliths that seem to be made in any
-one _tradition_. By tradition I mean a set of habits for making one
-kind of tool for some particular job. No two eoliths look very much
+It is impossible to pick out “eoliths” that seem to be made in any
+one _tradition_. By “tradition” I mean a set of habits for making one
+kind of tool for some particular job. No two “eoliths” look very much
alike: tools made as part of some one tradition all look much alike.
-Now its easy to suppose that the very earliest prehistoric men picked
-up and used almost any sort of stone. This wouldnt be surprising; you
-and I do it when we go camping. In other words, some of these eoliths
+Now it’s easy to suppose that the very earliest prehistoric men picked
+up and used almost any sort of stone. This wouldn’t be surprising; you
+and I do it when we go camping. In other words, some of these “eoliths”
may actually have been used by prehistoric men. They must have used
anything that might be handy when they needed it. We could have figured
-that out without the eoliths.
+that out without the “eoliths.”
THE ROAD TO STANDARDIZATION
@@ -1434,7 +1434,7 @@ THE ROAD TO STANDARDIZATION
Reasoning from what we know or can easily imagine, there should have
been three major steps in the prehistory of tool-making. The first step
would have been simple _utilization_ of what was at hand. This is the
-step into which the eoliths would fall. The second step would have
+step into which the “eoliths” would fall. The second step would have
been _fashioning_--the haphazard preparation of a tool when there was a
need for it. Probably many of the earlier pebble tools, which I shall
describe next, fall into this group. The third step would have been
@@ -1447,7 +1447,7 @@ tradition appears.
PEBBLE TOOLS
-At the beginning of the last chapter, youll remember that I said there
+At the beginning of the last chapter, you’ll remember that I said there
were tools from very early geological beds. The earliest bones of men
have not yet been found in such early beds although the Sterkfontein
australopithecine cave approaches this early date. The earliest tools
@@ -1467,7 +1467,7 @@ Old World besides Africa; in fact, some prehistorians already claim
to have identified a few. Since the forms and the distinct ways of
making the earlier pebble tools had not yet sufficiently jelled into
a set tradition, they are difficult for us to recognize. It is not
-so difficult, however, if there are great numbers of possibles
+so difficult, however, if there are great numbers of “possibles”
available. A little later in time the tradition becomes more clearly
set, and pebble tools are easier to recognize. So far, really large
collections of pebble tools have only been found and examined in Africa.
@@ -1475,9 +1475,9 @@ collections of pebble tools have only been found and examined in Africa.
CORE-BIFACE TOOLS
-The next tradition well look at is the _core_ or biface one. The tools
+The next tradition we’ll look at is the _core_ or biface one. The tools
are large pear-shaped pieces of stone trimmed flat on the two opposite
-sides or faces. Hence biface has been used to describe these tools.
+sides or “faces.” Hence “biface” has been used to describe these tools.
The front view is like that of a pear with a rather pointed top, and
the back view looks almost exactly the same. Look at them side on, and
you can see that the front and back faces are the same and have been
@@ -1488,7 +1488,7 @@ illustration.
[Illustration: ABBEVILLIAN BIFACE]
We have very little idea of the way in which these core-bifaces were
-used. They have been called hand axes, but this probably gives the
+used. They have been called “hand axes,” but this probably gives the
wrong idea, for an ax, to us, is not a pointed tool. All of these early
tools must have been used for a number of jobs--chopping, scraping,
cutting, hitting, picking, and prying. Since the core-bifaces tend to
@@ -1505,7 +1505,7 @@ a big block of stone. You had to break off the flake in such a way that
it was broad and thin, and also had a good sharp cutting edge. Once you
really got on to the trick of doing it, this was probably a simpler way
to make a good cutting tool than preparing a biface. You have to know
-how, though; Ive tried it and have mashed my fingers more than once.
+how, though; I’ve tried it and have mashed my fingers more than once.
The flake tools look as if they were meant mainly for chopping,
scraping, and cutting jobs. When one made a flake tool, the idea seems
@@ -1535,9 +1535,9 @@ tradition. It probably has its earliest roots in the pebble tool
tradition of African type. There are several kinds of tools in this
tradition, but all differ from the western core-bifaces and flakes.
There are broad, heavy scrapers or cleavers, and tools with an
-adze-like cutting edge. These last-named tools are called hand adzes,
-just as the core-bifaces of the west have often been called hand
-axes. The section of an adze cutting edge is ? shaped; the section of
+adze-like cutting edge. These last-named tools are called “hand adzes,”
+just as the core-bifaces of the west have often been called “hand
+axes.” The section of an adze cutting edge is ? shaped; the section of
an ax is < shaped.
[Illustration: ANYATHIAN ADZE-LIKE TOOL]
@@ -1581,17 +1581,17 @@ stratification.[3]
Soan (India)
Flake:
- Typical Mousterian
+ “Typical Mousterian”
Levalloiso-Mousterian
Levalloisian
Tayacian
Clactonian (localized in England)
Core-biface:
- Some blended elements in Mousterian
+ Some blended elements in “Mousterian”
Micoquian (= Acheulean 6 and 7)
Acheulean
- Abbevillian (once called Chellean)
+ Abbevillian (once called “Chellean”)
Pebble tool:
Oldowan
@@ -1608,8 +1608,8 @@ out of glacial gravels the easiest thing to do first is to isolate
individual types of tools into groups. First you put a bushel-basketful
of tools on a table and begin matching up types. Then you give names to
the groups of each type. The groups and the types are really matters of
-the archeologists choice; in real life, they were probably less exact
-than the archeologists lists of them. We now know pretty well in which
+the archeologists’ choice; in real life, they were probably less exact
+than the archeologists’ lists of them. We now know pretty well in which
of the early traditions the various early groups belong.
@@ -1635,9 +1635,9 @@ production must have been passed on from one generation to another.
I could even guess that the notions of the ideal type of one or the
other of these tools stood out in the minds of men of those times
-somewhat like a symbol of perfect tool for good job. If this were
-so--remember its only a wild guess of mine--then men were already
-symbol users. Now lets go on a further step to the fact that the words
+somewhat like a symbol of “perfect tool for good job.” If this were
+so--remember it’s only a wild guess of mine--then men were already
+symbol users. Now let’s go on a further step to the fact that the words
men speak are simply sounds, each different sound being a symbol for a
different meaning. If standardized tool-making suggests symbol-making,
is it also possible that crude word-symbols were also being made? I
@@ -1650,7 +1650,7 @@ of our second step is more suggestive, although we may not yet feel
sure that many of the earlier pebble tools were man-made products. But
with the step to standardization and the appearance of the traditions,
I believe we must surely be dealing with the traces of culture-bearing
-_men_. The conventional understandings which Professor Redfields
+_men_. The “conventional understandings” which Professor Redfield’s
definition of culture suggests are now evidenced for us in the
persistent habits for the preparation of stone tools. Were we able to
see the other things these prehistoric men must have made--in materials
@@ -1666,19 +1666,19 @@ In the last chapter, I told you that many of the older archeologists
and human paleontologists used to think that modern man was very old.
The supposed ages of Piltdown and Galley Hill were given as evidence
of the great age of anatomically modern man, and some interpretations
-of the Swanscombe and Fontchevade fossils were taken to support
+of the Swanscombe and Fontéchevade fossils were taken to support
this view. The conclusion was that there were two parallel lines or
-phyla of men already present well back in the Pleistocene. The
-first of these, the more primitive or paleoanthropic line, was
+“phyla” of men already present well back in the Pleistocene. The
+first of these, the more primitive or “paleoanthropic” line, was
said to include Heidelberg, the proto-neanderthaloids and classic
-Neanderthal. The more anatomically modern or neanthropic line was
+Neanderthal. The more anatomically modern or “neanthropic” line was
thought to consist of Piltdown and the others mentioned above. The
Neanderthaler or paleoanthropic line was thought to have become extinct
after the first phase of the last great glaciation. Of course, the
modern or neanthropic line was believed to have persisted into the
-present, as the basis for the worlds population today. But with
+present, as the basis for the world’s population today. But with
Piltdown liquidated, Galley Hill known to be very late, and Swanscombe
-and Fontchevade otherwise interpreted, there is little left of the
+and Fontéchevade otherwise interpreted, there is little left of the
so-called parallel phyla theory.
While the theory was in vogue, however, and as long as the European
@@ -1695,9 +1695,9 @@ where they had actually been dropped by the men who made and used
them. The tools came, rather, from the secondary hodge-podge of the
glacial gravels. I tried to give you a picture of the bulldozing action
of glaciers (p. 40) and of the erosion and weathering that were
-side-effects of a glacially conditioned climate on the earths surface.
+side-effects of a glacially conditioned climate on the earth’s surface.
As we said above, if one simply plucks tools out of the redeposited
-gravels, his natural tendency is to type the tools by groups, and to
+gravels, his natural tendency is to “type” the tools by groups, and to
think that the groups stand for something _on their own_.
In 1906, M. Victor Commont actually made a rare find of what seems
@@ -1705,15 +1705,15 @@ to have been a kind of workshop site, on a terrace above the Somme
river in France. Here, Commont realized, flake tools appeared clearly
in direct association with core-biface tools. Few prehistorians paid
attention to Commont or his site, however. It was easier to believe
-that flake tools represented a distinct culture and that this
-culture was that of the Neanderthaler or paleoanthropic line, and
-that the core-bifaces stood for another culture which was that of the
+that flake tools represented a distinct “culture” and that this
+“culture” was that of the Neanderthaler or paleoanthropic line, and
+that the core-bifaces stood for another “culture” which was that of the
supposed early modern or neanthropic line. Of course, I am obviously
skipping many details here. Some later sites with Neanderthal fossils
do seem to have only flake tools, but other such sites have both types
of tools. The flake tools which appeared _with_ the core-bifaces
in the Swanscombe gravels were never made much of, although it
-was embarrassing for the parallel phyla people that Fontchevade
+was embarrassing for the parallel phyla people that Fontéchevade
ran heavily to flake tools. All in all, the parallel phyla theory
flourished because it seemed so neat and easy to understand.
@@ -1722,20 +1722,20 @@ TRADITIONS ARE TOOL-MAKING HABITS, NOT CULTURES
In case you think I simply enjoy beating a dead horse, look in any
standard book on prehistory written twenty (or even ten) years ago, or
-in most encyclopedias. Youll find that each of the individual tool
-types, of the West, at least, was supposed to represent a culture.
-The cultures were believed to correspond to parallel lines of human
+in most encyclopedias. You’ll find that each of the individual tool
+types, of the West, at least, was supposed to represent a “culture.”
+The “cultures” were believed to correspond to parallel lines of human
evolution.
In 1937, Mr. Harper Kelley strongly re-emphasized the importance
-of Commonts workshop site and the presence of flake tools with
-core-bifaces. Next followed Dr. Movius clear delineation of the
+of Commont’s workshop site and the presence of flake tools with
+core-bifaces. Next followed Dr. Movius’ clear delineation of the
chopper-chopping tool tradition of the Far East. This spoiled the nice
symmetry of the flake-tool = paleoanthropic, core-biface = neanthropic
equations. Then came increasing understanding of the importance of
the pebble tools in Africa, and the location of several more workshop
sites there, especially at Olorgesailie in Kenya. Finally came the
-liquidation of Piltdown and the deflation of Galley Hills date. So it
+liquidation of Piltdown and the deflation of Galley Hill’s date. So it
is at last possible to picture an individual prehistoric man making a
flake tool to do one job and a core-biface tool to do another. Commont
showed us this picture in 1906, but few believed him.
@@ -1751,7 +1751,7 @@ that of the cave on Mount Carmel in Palestine, where the blended
pre-neanderthaloid, 70 per cent modern-type skulls were found. Here, in
the same level with the skulls, were 9,784 flint tools. Of these, only
three--doubtless strays--were core-bifaces; all the rest were flake
-tools or flake chips. We noted above how the Fontchevade cave ran to
+tools or flake chips. We noted above how the Fontéchevade cave ran to
flake tools. The only conclusion I would draw from this is that times
and circumstances did exist in which prehistoric men needed only flake
tools. So they only made flake tools for those particular times and
@@ -1773,13 +1773,13 @@ piece of bone. From the gravels which yield the Clactonian flakes of
England comes the fire-hardened point of a wooden spear. There are
also the chance finds of the fossil human bones themselves, of which
we spoke in the last chapter. Aside from the cave of Peking man, none
-of the earliest tools have been found in caves. Open air or workshop
+of the earliest tools have been found in caves. Open air or “workshop”
sites which do not seem to have been disturbed later by some geological
agency are very rare.
The chart on page 65 shows graphically what the situation in
west-central Europe seems to have been. It is not yet certain whether
-there were pebble tools there or not. The Fontchevade cave comes
+there were pebble tools there or not. The Fontéchevade cave comes
into the picture about 100,000 years ago or more. But for the earlier
hundreds of thousands of years--below the red-dotted line on the
chart--the tools we find come almost entirely from the haphazard
@@ -1790,13 +1790,13 @@ kinds of all-purpose tools. Almost any one of them could be used for
hacking, chopping, cutting, and scraping; so the men who used them must
have been living in a rough and ready sort of way. They found or hunted
their food wherever they could. In the anthropological jargon, they
-were food-gatherers, pure and simple.
+were “food-gatherers,” pure and simple.
Because of the mixture in the gravels and in the materials they
-carried, we cant be sure which animals these men hunted. Bones of
+carried, we can’t be sure which animals these men hunted. Bones of
the larger animals turn up in the gravels, but they could just as
well belong to the animals who hunted the men, rather than the other
-way about. We dont know. This is why camp sites like Commonts and
+way about. We don’t know. This is why camp sites like Commont’s and
Olorgesailie in Kenya are so important when we do find them. The animal
bones at Olorgesailie belonged to various mammals of extremely large
size. Probably they were taken in pit-traps, but there are a number of
@@ -1809,18 +1809,18 @@ animal.
Professor F. Clark Howell recently returned from excavating another
important open air site at Isimila in Tanganyika. The site yielded
the bones of many fossil animals and also thousands of core-bifaces,
-flakes, and choppers. But Howells reconstruction of the food-getting
-habits of the Isimila people certainly suggests that the word hunting
-is too dignified for what they did; scavenging would be much nearer
+flakes, and choppers. But Howell’s reconstruction of the food-getting
+habits of the Isimila people certainly suggests that the word “hunting”
+is too dignified for what they did; “scavenging” would be much nearer
the mark.
During a great part of this time the climate was warm and pleasant. The
second interglacial period (the time between the second and third great
alpine glaciations) lasted a long time, and during much of this time
-the climate may have been even better than ours is now. We dont know
+the climate may have been even better than ours is now. We don’t know
that earlier prehistoric men in Europe or Africa lived in caves. They
may not have needed to; much of the weather may have been so nice that
-they lived in the open. Perhaps they didnt wear clothes, either.
+they lived in the open. Perhaps they didn’t wear clothes, either.
WHAT THE PEKING CAVE-FINDS TELL US
@@ -1832,7 +1832,7 @@ were bones of dangerous animals, members of the wolf, bear, and cat
families. Some of the cat bones belonged to beasts larger than tigers.
There were also bones of other wild animals: buffalo, camel, deer,
elephants, horses, sheep, and even ostriches. Seventy per cent of the
-animals Peking man killed were fallow deer. Its much too cold and dry
+animals Peking man killed were fallow deer. It’s much too cold and dry
in north China for all these animals to live there today. So this list
helps us know that the weather was reasonably warm, and that there was
enough rain to grow grass for the grazing animals. The list also helps
@@ -1840,7 +1840,7 @@ the paleontologists to date the find.
Peking man also seems to have eaten plant food, for there are hackberry
seeds in the debris of the cave. His tools were made of sandstone and
-quartz and sometimes of a rather bad flint. As weve already seen, they
+quartz and sometimes of a rather bad flint. As we’ve already seen, they
belong in the chopper-tool tradition. It seems fairly clear that some
of the edges were chipped by right-handed people. There are also many
split pieces of heavy bone. Peking man probably split them so he could
@@ -1850,10 +1850,10 @@ Many of these split bones were the bones of Peking men. Each one of the
skulls had already had the base broken out of it. In no case were any
of the bones resting together in their natural relation to one another.
There is nothing like a burial; all of the bones are scattered. Now
-its true that animals could have scattered bodies that were not cared
+it’s true that animals could have scattered bodies that were not cared
for or buried. But splitting bones lengthwise and carefully removing
the base of a skull call for both the tools and the people to use them.
-Its pretty clear who the people were. Peking man was a cannibal.
+It’s pretty clear who the people were. Peking man was a cannibal.
* * * * *
@@ -1862,8 +1862,8 @@ prehistoric men. In those days life was rough. You evidently had to
watch out not only for dangerous animals but also for your fellow men.
You ate whatever you could catch or find growing. But you had sense
enough to build fires, and you had already formed certain habits for
-making the kinds of stone tools you needed. Thats about all we know.
-But I think well have to admit that cultural beginnings had been made,
+making the kinds of stone tools you needed. That’s about all we know.
+But I think we’ll have to admit that cultural beginnings had been made,
and that these early people were really _men_.
@@ -1876,16 +1876,16 @@ MORE EVIDENCE of Culture
While the dating is not yet sure, the material that we get from caves
in Europe must go back to about 100,000 years ago; the time of the
-classic Neanderthal group followed soon afterwards. We dont know why
+classic Neanderthal group followed soon afterwards. We don’t know why
there is no earlier material in the caves; apparently they were not
used before the last interglacial phase (the period just before the
last great glaciation). We know that men of the classic Neanderthal
group were living in caves from about 75,000 to 45,000 years ago.
New radioactive carbon dates even suggest that some of the traces of
-culture well describe in this chapter may have lasted to about 35,000
+culture we’ll describe in this chapter may have lasted to about 35,000
years ago. Probably some of the pre-neanderthaloid types of men had
also lived in caves. But we have so far found their bones in caves only
-in Palestine and at Fontchevade.
+in Palestine and at Fontéchevade.
THE CAVE LAYERS
@@ -1893,7 +1893,7 @@ THE CAVE LAYERS
In parts of France, some peasants still live in caves. In prehistoric
time, many generations of people lived in them. As a result, many
caves have deep layers of debris. The first people moved in and lived
-on the rock floor. They threw on the floor whatever they didnt want,
+on the rock floor. They threw on the floor whatever they didn’t want,
and they tracked in mud; nobody bothered to clean house in those days.
Their debris--junk and mud and garbage and what not--became packed
into a layer. As time went on, and generations passed, the layer grew
@@ -1910,20 +1910,20 @@ earliest to latest. This is the _stratification_ we talked about (p.
[Illustration: SECTION OF SHELTER ON LOWER TERRACE, LE MOUSTIER]
-While we may find a mix-up in caves, its not nearly as bad as the
+While we may find a mix-up in caves, it’s not nearly as bad as the
mixing up that was done by glaciers. The animal bones and shells, the
fireplaces, the bones of men, and the tools the men made all belong
-together, if they come from one layer. Thats the reason why the cave
+together, if they come from one layer. That’s the reason why the cave
of Peking man is so important. It is also the reason why the caves in
Europe and the Near East are so important. We can get an idea of which
things belong together and which lot came earliest and which latest.
In most cases, prehistoric men lived only in the mouths of caves.
-They didnt like the dark inner chambers as places to live in. They
+They didn’t like the dark inner chambers as places to live in. They
preferred rock-shelters, at the bases of overhanging cliffs, if there
was enough overhang to give shelter. When the weather was good, they no
-doubt lived in the open air as well. Ill go on using the term cave
-since its more familiar, but remember that I really mean rock-shelter,
+doubt lived in the open air as well. I’ll go on using the term “cave”
+since it’s more familiar, but remember that I really mean rock-shelter,
as a place in which people actually lived.
The most important European cave sites are in Spain, France, and
@@ -1933,29 +1933,29 @@ found when the out-of-the-way parts of Europe, Africa, and Asia are
studied.
-AN INDUSTRY DEFINED
+AN “INDUSTRY” DEFINED
We have already seen that the earliest European cave materials are
-those from the cave of Fontchevade. Movius feels certain that the
+those from the cave of Fontéchevade. Movius feels certain that the
lowest materials here date back well into the third interglacial stage,
-that which lay between the Riss (next to the last) and the Wrm I
+that which lay between the Riss (next to the last) and the Würm I
(first stage of the last) alpine glaciations. This material consists
of an _industry_ of stone tools, apparently all made in the flake
-tradition. This is the first time we have used the word industry.
+tradition. This is the first time we have used the word “industry.”
It is useful to call all of the different tools found together in one
layer and made of _one kind of material_ an industry; that is, the
tools must be found together as men left them. Tools taken from the
glacial gravels (or from windswept desert surfaces or river gravels
-or any geological deposit) are not together in this sense. We might
-say the latter have only geological, not archeological context.
+or any geological deposit) are not “together” in this sense. We might
+say the latter have only “geological,” not “archeological” context.
Archeological context means finding things just as men left them. We
-can tell what tools go together in an industrial sense only if we
+can tell what tools go together in an “industrial” sense only if we
have archeological context.
-Up to now, the only things we could have called industries were the
+Up to now, the only things we could have called “industries” were the
worked stone industry and perhaps the worked (?) bone industry of the
Peking cave. We could add some of the very clear cases of open air
-sites, like Olorgesailie. We couldnt use the term for the stone tools
+sites, like Olorgesailie. We couldn’t use the term for the stone tools
from the glacial gravels, because we do not know which tools belonged
together. But when the cave materials begin to appear in Europe, we can
begin to speak of industries. Most of the European caves of this time
@@ -1964,16 +1964,16 @@ contain industries of flint tools alone.
THE EARLIEST EUROPEAN CAVE LAYERS
-Weve just mentioned the industry from what is said to be the oldest
+We’ve just mentioned the industry from what is said to be the oldest
inhabited cave in Europe; that is, the industry from the deepest layer
-of the site at Fontchevade. Apparently it doesnt amount to much. The
+of the site at Fontéchevade. Apparently it doesn’t amount to much. The
tools are made of stone, in the flake tradition, and are very poorly
worked. This industry is called _Tayacian_. Its type tool seems to be
a smallish flake tool, but there are also larger flakes which seem to
have been fashioned for hacking. In fact, the type tool seems to be
simply a smaller edition of the Clactonian tool (pictured on p. 45).
-None of the Fontchevade tools are really good. There are scrapers,
+None of the Fontéchevade tools are really good. There are scrapers,
and more or less pointed tools, and tools that may have been used
for hacking and chopping. Many of the tools from the earlier glacial
gravels are better made than those of this first industry we see in
@@ -2005,7 +2005,7 @@ core-biface and the flake traditions. The core-biface tools usually
make up less than half of all the tools in the industry. However,
the name of the biface type of tool is generally given to the whole
industry. It is called the _Acheulean_, actually a late form of it, as
-Acheulean is also used for earlier core-biface tools taken from the
+“Acheulean” is also used for earlier core-biface tools taken from the
glacial gravels. In western Europe, the name used is _Upper Acheulean_
or _Micoquian_. The same terms have been borrowed to name layers E and
F in the Tabun cave, on Mount Carmel in Palestine.
@@ -2029,7 +2029,7 @@ those used for at least one of the flake industries we shall mention
presently.
There is very little else in these early cave layers. We do not have
-a proper industry of bone tools. There are traces of fire, and of
+a proper “industry” of bone tools. There are traces of fire, and of
animal bones, and a few shells. In Palestine, there are many more
bones of deer than of gazelle in these layers; the deer lives in a
wetter climate than does the gazelle. In the European cave layers, the
@@ -2043,18 +2043,18 @@ bones of fossil men definitely in place with this industry.
FLAKE INDUSTRIES FROM THE CAVES
Two more stone industries--the _Levalloisian_ and the
-_Mousterian_--turn up at approximately the same time in the European
+“_Mousterian_”--turn up at approximately the same time in the European
cave layers. Their tools seem to be mainly in the flake tradition,
but according to some of the authorities their preparation also shows
some combination with the habits by which the core-biface tools were
prepared.
-Now notice that I dont tell you the Levalloisian and the Mousterian
+Now notice that I don’t tell you the Levalloisian and the “Mousterian”
layers are both above the late Acheulean layers. Look at the cave
-section (p. 57) and youll find that some Mousterian of Acheulean
-tradition appears above some typical Mousterian. This means that
+section (p. 57) and you’ll find that some “Mousterian of Acheulean
+tradition” appears above some “typical Mousterian.” This means that
there may be some kinds of Acheulean industries that are later than
-some kinds of Mousterian. The same is true of the Levalloisian.
+some kinds of “Mousterian.” The same is true of the Levalloisian.
There were now several different kinds of habits that men used in
making stone tools. These habits were based on either one or the other
@@ -2072,7 +2072,7 @@ were no patent laws in those days.
The extremely complicated interrelationships of the different habits
used by the tool-makers of this range of time are at last being
-systematically studied. M. Franois Bordes has developed a statistical
+systematically studied. M. François Bordes has developed a statistical
method of great importance for understanding these tool preparation
habits.
@@ -2081,22 +2081,22 @@ THE LEVALLOISIAN AND MOUSTERIAN
The easiest Levalloisian tool to spot is a big flake tool. The trick
in making it was to fashion carefully a big chunk of stone (called
-the Levalloisian tortoise core, because it resembles the shape of
+the Levalloisian “tortoise core,” because it resembles the shape of
a turtle-shell) and then to whack this in such a way that a large
flake flew off. This large thin flake, with sharp cutting edges, is
the finished Levalloisian tool. There were various other tools in a
Levalloisian industry, but this is the characteristic _Levalloisian_
tool.
-There are several typical Mousterian stone tools. Different from
-the tools of the Levalloisian type, these were made from disc-like
-cores. There are medium-sized flake side scrapers. There are also
-some small pointed tools and some small hand axes. The last of these
+There are several “typical Mousterian” stone tools. Different from
+the tools of the Levalloisian type, these were made from “disc-like
+cores.” There are medium-sized flake “side scrapers.” There are also
+some small pointed tools and some small “hand axes.” The last of these
tool types is often a flake worked on both of the flat sides (that
is, bifacially). There are also pieces of flint worked into the form
of crude balls. The pointed tools may have been fixed on shafts to
make short jabbing spears; the round flint balls may have been used as
-bolas. Actually, we dont _know_ what either tool was used for. The
+bolas. Actually, we don’t _know_ what either tool was used for. The
points and side scrapers are illustrated (pp. 64 and 66).
[Illustration: LEVALLOIS FLAKE]
@@ -2108,9 +2108,9 @@ Nowadays the archeologists are less and less sure of the importance
of any one specific tool type and name. Twenty years ago, they used
to speak simply of Acheulean or Levalloisian or Mousterian tools.
Now, more and more, _all_ of the tools from some one layer in a
-cave are called an industry, which is given a mixed name. Thus we
-have Levalloiso-Mousterian, and Acheuleo-Levalloisian, and even
-Acheuleo-Mousterian (or Mousterian of Acheulean tradition). Bordes
+cave are called an “industry,” which is given a mixed name. Thus we
+have “Levalloiso-Mousterian,” and “Acheuleo-Levalloisian,” and even
+“Acheuleo-Mousterian” (or “Mousterian of Acheulean tradition”). Bordes’
systematic work is beginning to clear up some of our confusion.
The time of these late Acheuleo-Levalloiso-Mousterioid industries
@@ -2120,16 +2120,16 @@ phase of the last great glaciation. It was also the time that the
classic group of Neanderthal men was living in Europe. A number of
the Neanderthal fossil finds come from these cave layers. Before the
different habits of tool preparation were understood it used to be
-popular to say Neanderthal man was Mousterian man. I think this is
-wrong. What used to be called Mousterian is now known to be a variety
+popular to say Neanderthal man was “Mousterian man.” I think this is
+wrong. What used to be called “Mousterian” is now known to be a variety
of industries with tools of both core-biface and flake habits, and
-so mixed that the word Mousterian used alone really doesnt mean
+so mixed that the word “Mousterian” used alone really doesn’t mean
anything. The Neanderthalers doubtless understood the tool preparation
habits by means of which Acheulean, Levalloisian and Mousterian type
tools were produced. We also have the more modern-like Mount Carmel
people, found in a cave layer of Palestine with tools almost entirely
-in the flake tradition, called Levalloiso-Mousterian, and the
-Fontchevade-Tayacian (p. 59).
+in the flake tradition, called “Levalloiso-Mousterian,” and the
+Fontéchevade-Tayacian (p. 59).
[Illustration: MOUSTERIAN POINT]
@@ -2165,7 +2165,7 @@ which seem to have served as anvils or chopping blocks, are fairly
common.
Bits of mineral, used as coloring matter, have also been found. We
-dont know what the color was used for.
+don’t know what the color was used for.
[Illustration: MOUSTERIAN SIDE SCRAPER]
@@ -2230,7 +2230,7 @@ might suggest some notion of hoarding up the spirits or the strength of
bears killed in the hunt. Probably the people lived in small groups,
as hunting and food-gathering seldom provide enough food for large
groups of people. These groups probably had some kind of leader or
-chief. Very likely the rude beginnings of rules for community life
+“chief.” Very likely the rude beginnings of rules for community life
and politics, and even law, were being made. But what these were, we
do not know. We can only guess about such things, as we can only guess
about many others; for example, how the idea of a family must have been
@@ -2246,8 +2246,8 @@ small. The mixtures and blendings of the habits used in making stone
tools must mean that there were also mixtures and blends in many of
the other ideas and beliefs of these small groups. And what this
probably means is that there was no one _culture_ of the time. It is
-certainly unlikely that there were simply three cultures, Acheulean,
-Levalloisian, and Mousterian, as has been thought in the past.
+certainly unlikely that there were simply three cultures, “Acheulean,”
+“Levalloisian,” and “Mousterian,” as has been thought in the past.
Rather there must have been a great variety of loosely related cultures
at about the same stage of advancement. We could say, too, that here
we really begin to see, for the first time, that remarkable ability
@@ -2272,7 +2272,7 @@ related habits for the making of tools. But the men who made them must
have looked much like the men of the West. Their tools were different,
but just as useful.
-As to what the men of the West looked like, Ive already hinted at all
+As to what the men of the West looked like, I’ve already hinted at all
we know so far (pp. 29 ff.). The Neanderthalers were present at
the time. Some more modern-like men must have been about, too, since
fossils of them have turned up at Mount Carmel in Palestine, and at
@@ -2306,7 +2306,7 @@ A NEW TRADITION APPEARS
Something new was probably beginning to happen in the
European-Mediterranean area about 40,000 years ago, though all the
rest of the Old World seems to have been going on as it had been. I
-cant be sure of this because the information we are using as a basis
+can’t be sure of this because the information we are using as a basis
for dates is very inaccurate for the areas outside of Europe and the
Mediterranean.
@@ -2325,7 +2325,7 @@ drawing shows. It has sharp cutting edges, and makes a very useful
knife. The real trick is to be able to make one. It is almost
impossible to make a blade out of any stone but flint or a natural
volcanic glass called obsidian. And even if you have flint or obsidian,
-you first have to work up a special cone-shaped blade-core, from
+you first have to work up a special cone-shaped “blade-core,” from
which to whack off blades.
[Illustration: PLAIN BLADE]
@@ -2351,8 +2351,8 @@ found in equally early cave levels in Syria; their popularity there
seems to fluctuate a bit. Some more or less parallel-sided flakes are
known in the Levalloisian industry in France, but they are probably
no earlier than Tabun E. The Tabun blades are part of a local late
-Acheulean industry, which is characterized by core-biface hand
-axes, but which has many flake tools as well. Professor F. E.
+“Acheulean” industry, which is characterized by core-biface “hand
+axes,” but which has many flake tools as well. Professor F. E.
Zeuner believes that this industry may be more than 120,000 years old;
actually its date has not yet been fixed, but it is very old--older
than the fossil finds of modern-like men in the same caves.
@@ -2371,7 +2371,7 @@ We are not sure just where the earliest _persisting_ habits for the
production of blade tools developed. Impressed by the very early
momentary appearance of blades at Tabun on Mount Carmel, Professor
Dorothy A. Garrod first favored the Near East as a center of origin.
-She spoke of some as yet unidentified Asiatic centre, which she
+She spoke of “some as yet unidentified Asiatic centre,” which she
thought might be in the highlands of Iran or just beyond. But more
recent work has been done in this area, especially by Professor Coon,
and the blade tools do not seem to have an early appearance there. When
@@ -2395,21 +2395,21 @@ core (and the striking of the Levalloisian flake from it) might have
followed through to the conical core and punch technique for the
production of blades. Professor Garrod is much impressed with the speed
of change during the later phases of the last glaciation, and its
-probable consequences. She speaks of the greater number of industries
+probable consequences. She speaks of “the greater number of industries
having enough individual character to be classified as distinct ...
-since evolution now starts to outstrip diffusion. Her evolution here
+since evolution now starts to outstrip diffusion.” Her “evolution” here
is of course an industrial evolution rather than a biological one.
Certainly the people of Europe had begun to make blade tools during
the warm spell after the first phase of the last glaciation. By about
40,000 years ago blades were well established. The bones of the blade
-tool makers weve found so far indicate that anatomically modern men
+tool makers we’ve found so far indicate that anatomically modern men
had now certainly appeared. Unfortunately, only a few fossil men have
so far been found from the very beginning of the blade tool range in
Europe (or elsewhere). What I certainly shall _not_ tell you is that
conquering bands of fine, strong, anatomically modern men, armed with
superior blade tools, came sweeping out of the East to exterminate the
-lowly Neanderthalers. Even if we dont know exactly what happened, Id
-lay a good bet it wasnt that simple.
+lowly Neanderthalers. Even if we don’t know exactly what happened, I’d
+lay a good bet it wasn’t that simple.
We do know a good deal about different blade industries in Europe.
Almost all of them come from cave layers. There is a great deal of
@@ -2418,7 +2418,7 @@ this complication; in fact, it doubtless simplifies it too much. But
it may suggest all the complication of industries which is going
on at this time. You will note that the upper portion of my much
simpler chart (p. 65) covers the same material (in the section
-marked Various Blade-Tool Industries). That chart is certainly too
+marked “Various Blade-Tool Industries”). That chart is certainly too
simplified.
You will realize that all this complication comes not only from
@@ -2429,7 +2429,7 @@ a good deal of climatic change at this time. The plants and animals
that men used for food were changing, too. The great variety of tools
and industries we now find reflect these changes and the ability of men
to keep up with the times. Now, for example, is the first time we are
-sure that there are tools to _make_ other tools. They also show mens
+sure that there are tools to _make_ other tools. They also show men’s
increasing ability to adapt themselves.
@@ -2437,15 +2437,15 @@ SPECIAL TYPES OF BLADE TOOLS
The most useful tools that appear at this time were made from blades.
- 1. The backed blade. This is a knife made of a flint blade, with
- one edge purposely blunted, probably to save the users fingers
+ 1. The “backed” blade. This is a knife made of a flint blade, with
+ one edge purposely blunted, probably to save the user’s fingers
from being cut. There are several shapes of backed blades (p.
73).
[Illustration: TWO BURINS]
- 2. The _burin_ or graver. The burin was the original chisel. Its
- cutting edge is _transverse_, like a chisels. Some burins are
+ 2. The _burin_ or “graver.” The burin was the original chisel. Its
+ cutting edge is _transverse_, like a chisel’s. Some burins are
made like a screw-driver, save that burins are sharp. Others have
edges more like the blade of a chisel or a push plane, with
only one bevel. Burins were probably used to make slots in wood
@@ -2456,29 +2456,29 @@ The most useful tools that appear at this time were made from blades.
[Illustration: TANGED POINT]
- 3. The tanged point. These stone points were used to tip arrows or
+ 3. The “tanged” point. These stone points were used to tip arrows or
light spears. They were made from blades, and they had a long tang
at the bottom where they were fixed to the shaft. At the place
where the tang met the main body of the stone point, there was
- a marked shoulder, the beginnings of a barb. Such points had
+ a marked “shoulder,” the beginnings of a barb. Such points had
either one or two shoulders.
[Illustration: NOTCHED BLADE]
- 4. The notched or strangulated blade. Along with the points for
+ 4. The “notched” or “strangulated” blade. Along with the points for
arrows or light spears must go a tool to prepare the arrow or
- spear shaft. Today, such a tool would be called a draw-knife or
- a spoke-shave, and this is what the notched blades probably are.
+ spear shaft. Today, such a tool would be called a “draw-knife” or
+ a “spoke-shave,” and this is what the notched blades probably are.
Our spoke-shaves have sharp straight cutting blades and really
- shave. Notched blades of flint probably scraped rather than cut.
+ “shave.” Notched blades of flint probably scraped rather than cut.
- 5. The awl, drill, or borer. These blade tools are worked out
+ 5. The “awl,” “drill,” or “borer.” These blade tools are worked out
to a spike-like point. They must have been used for making holes
in wood, bone, shell, skin, or other things.
[Illustration: DRILL OR AWL]
- 6. The end-scraper on a blade is a tool with one or both ends
+ 6. The “end-scraper on a blade” is a tool with one or both ends
worked so as to give a good scraping edge. It could have been used
to hollow out wood or bone, scrape hides, remove bark from trees,
and a number of other things (p. 78).
@@ -2489,11 +2489,11 @@ usually made of blades, but the best examples are so carefully worked
on both sides (bifacially) that it is impossible to see the original
blade. This tool is
- 7. The laurel leaf point. Some of these tools were long and
+ 7. The “laurel leaf” point. Some of these tools were long and
dagger-like, and must have been used as knives or daggers. Others
- were small, called willow leaf, and must have been mounted on
+ were small, called “willow leaf,” and must have been mounted on
spear or arrow shafts. Another typical Solutrean tool is the
- shouldered point. Both the laurel leaf and shouldered point
+ “shouldered” point. Both the “laurel leaf” and “shouldered” point
types are illustrated (see above and p. 79).
[Illustration: END-SCRAPER ON A BLADE]
@@ -2507,17 +2507,17 @@ second is a core tool.
[Illustration: SHOULDERED POINT]
- 8. The keel-shaped round scraper is usually small and quite round,
+ 8. The “keel-shaped round scraper” is usually small and quite round,
and has had chips removed up to a peak in the center. It is called
- keel-shaped because it is supposed to look (when upside down)
+ “keel-shaped” because it is supposed to look (when upside down)
like a section through a boat. Actually, it looks more like a tent
or an umbrella. Its outer edges are sharp all the way around, and
it was probably a general purpose scraping tool (see illustration,
p. 81).
- 9. The keel-shaped nosed scraper is a much larger and heavier tool
+ 9. The “keel-shaped nosed scraper” is a much larger and heavier tool
than the round scraper. It was made on a core with a flat bottom,
- and has one nicely worked end or nose. Such tools are usually
+ and has one nicely worked end or “nose.” Such tools are usually
large enough to be easily grasped, and probably were used like
push planes (see illustration, p. 81).
@@ -2530,7 +2530,7 @@ the most easily recognized blade tools, although they show differences
in detail at different times. There are also many other kinds. Not
all of these tools appear in any one industry at one time. Thus the
different industries shown in the chart (p. 72) each have only some
-of the blade tools weve just listed, and also a few flake tools. Some
+of the blade tools we’ve just listed, and also a few flake tools. Some
industries even have a few core tools. The particular types of blade
tools appearing in one cave layer or another, and the frequency of
appearance of the different types, tell which industry we have in each
@@ -2545,15 +2545,15 @@ to appear. There are knives, pins, needles with eyes, and little
double-pointed straight bars of bone that were probably fish-hooks. The
fish-line would have been fastened in the center of the bar; when the
fish swallowed the bait, the bar would have caught cross-wise in the
-fishs mouth.
+fish’s mouth.
One quite special kind of bone tool is a long flat point for a light
spear. It has a deep notch cut up into the breadth of its base, and is
-called a split-based bone point (p. 82). We know examples of bone
+called a “split-based bone point” (p. 82). We know examples of bone
beads from these times, and of bone handles for flint tools. Pierced
teeth of some animals were worn as beads or pendants, but I am not sure
-that elks teeth were worn this early. There are even spool-shaped
-buttons or toggles.
+that elks’ teeth were worn this early. There are even spool-shaped
+“buttons” or toggles.
[Illustration: SPLIT-BASED BONE POINT]
@@ -2595,12 +2595,12 @@ almost to have served as sketch blocks. The surfaces of these various
objects may show animals, or rather abstract floral designs, or
geometric designs.
-[Illustration: VENUS FIGURINE FROM WILLENDORF]
+[Illustration: “VENUS” FIGURINE FROM WILLENDORF]
Some of the movable art is not done on tools. The most remarkable
examples of this class are little figures of women. These women seem to
be pregnant, and their most female characteristics are much emphasized.
-It is thought that these Venus or Mother-goddess figurines may be
+It is thought that these “Venus” or “Mother-goddess” figurines may be
meant to show the great forces of nature--fertility and the birth of
life.
@@ -2616,21 +2616,21 @@ are different styles in the cave art. The really great cave art is
pretty well restricted to southern France and Cantabrian (northwestern)
Spain.
-There are several interesting things about the Franco-Cantabrian cave
+There are several interesting things about the “Franco-Cantabrian” cave
art. It was done deep down in the darkest and most dangerous parts of
the caves, although the men lived only in the openings of caves. If you
think what they must have had for lights--crude lamps of hollowed stone
have been found, which must have burned some kind of oil or grease,
with a matted hair or fiber wick--and of the animals that may have
-lurked in the caves, youll understand the part about danger. Then,
-too, were sure the pictures these people painted were not simply to be
+lurked in the caves, you’ll understand the part about danger. Then,
+too, we’re sure the pictures these people painted were not simply to be
looked at and admired, for they painted one picture right over other
pictures which had been done earlier. Clearly, it was the _act_ of
_painting_ that counted. The painter had to go way down into the most
mysterious depths of the earth and create an animal in paint. Possibly
he believed that by doing this he gained some sort of magic power over
the same kind of animal when he hunted it in the open air. It certainly
-doesnt look as if he cared very much about the picture he painted--as
+doesn’t look as if he cared very much about the picture he painted--as
a finished product to be admired--for he or somebody else soon went
down and painted another animal right over the one he had done.
@@ -2683,10 +2683,10 @@ it.
Their art is another example of the direction the human mind was
taking. And when I say human, I mean it in the fullest sense, for this
is the time in which fully modern man has appeared. On page 34, we
-spoke of the Cro-Magnon group and of the Combe Capelle-Brnn group of
-Caucasoids and of the Grimaldi Negroids, who are no longer believed
+spoke of the Cro-Magnon group and of the Combe Capelle-Brünn group of
+Caucasoids and of the Grimaldi “Negroids,” who are no longer believed
to be Negroid. I doubt that any one of these groups produced most of
-the achievements of the times. Its not yet absolutely sure which
+the achievements of the times. It’s not yet absolutely sure which
particular group produced the great cave art. The artists were almost
certainly a blend of several (no doubt already mixed) groups. The pair
of Grimaldians were buried in a grave with a sprinkling of red ochre,
@@ -2705,9 +2705,9 @@ also found about the shore of the Mediterranean basin, and it moved
into northern Europe as the last glaciation pulled northward. People
began making blade tools of very small size. They learned how to chip
very slender and tiny blades from a prepared core. Then they made these
-little blades into tiny triangles, half-moons (lunates), trapezoids,
+little blades into tiny triangles, half-moons (“lunates”), trapezoids,
and several other geometric forms. These little tools are called
-microliths. They are so small that most of them must have been fixed
+“microliths.” They are so small that most of them must have been fixed
in handles or shafts.
[Illustration: MICROLITHS
@@ -2726,7 +2726,7 @@ One corner of each little triangle stuck out, and the whole thing
made a fine barbed harpoon. In historic times in Egypt, geometric
trapezoidal microliths were still in use as arrowheads. They were
fastened--broad end out--on the end of an arrow shaft. It seems queer
-to give an arrow a point shaped like a T. Actually, the little points
+to give an arrow a point shaped like a “T.” Actually, the little points
were very sharp, and must have pierced the hides of animals very
easily. We also think that the broader cutting edge of the point may
have caused more bleeding than a pointed arrowhead would. In hunting
@@ -2739,7 +2739,7 @@ is some evidence that they appear early in the Near East. Their use
was very common in northwest Africa but this came later. The microlith
makers who reached south Russia and central Europe possibly moved up
out of the Near East. Or it may have been the other way around; we
-simply dont yet know.
+simply don’t yet know.
Remember that the microliths we are talking about here were made from
carefully prepared little blades, and are often geometric in outline.
@@ -2749,7 +2749,7 @@ even some flake scrapers, in most microlithic industries. I emphasize
this bladelet and the geometric character of the microlithic industries
of the western Old World, since there has sometimes been confusion in
the matter. Sometimes small flake chips, utilized as minute pointed
-tools, have been called microliths. They may be _microlithic_ in size
+tools, have been called “microliths.” They may be _microlithic_ in size
in terms of the general meaning of the word, but they do not seem to
belong to the sub-tradition of the blade tool preparation habits which
we have been discussing here.
@@ -2763,10 +2763,10 @@ in western Asia too, and early, although Professor Garrod is no longer
sure that the whole tradition originated in the Near East. If you look
again at my chart (p. 72) you will note that in western Asia I list
some of the names of the western European industries, but with the
-qualification -like (for example, Gravettian-like). The western
+qualification “-like” (for example, “Gravettian-like”). The western
Asiatic blade-tool industries do vaguely recall some aspects of those
of western Europe, but we would probably be better off if we used
-completely local names for them. The Emiran of my chart is such an
+completely local names for them. The “Emiran” of my chart is such an
example; its industry includes a long spike-like blade point which has
no western European counterpart.
@@ -2774,13 +2774,13 @@ When we last spoke of Africa (p. 66), I told you that stone tools
there were continuing in the Levalloisian flake tradition, and were
becoming smaller. At some time during this process, two new tool
types appeared in northern Africa: one was the Aterian point with
-a tang (p. 67), and the other was a sort of laurel leaf point,
-called the Sbaikian. These two tool types were both produced from
+a tang (p. 67), and the other was a sort of “laurel leaf” point,
+called the “Sbaikian.” These two tool types were both produced from
flakes. The Sbaikian points, especially, are roughly similar to some
of the Solutrean points of Europe. It has been suggested that both the
Sbaikian and Aterian points may be seen on their way to France through
their appearance in the Spanish cave deposits of Parpallo, but there is
-also a rival pre-Solutrean in central Europe. We still do not know
+also a rival “pre-Solutrean” in central Europe. We still do not know
whether there was any contact between the makers of these north African
tools and the Solutrean tool-makers. What does seem clear is that the
blade-tool tradition itself arrived late in northern Africa.
@@ -2788,11 +2788,11 @@ blade-tool tradition itself arrived late in northern Africa.
NETHER AFRICA
-Blade tools and laurel leaf points and some other probably late
+Blade tools and “laurel leaf” points and some other probably late
stone tool types also appear in central and southern Africa. There
are geometric microliths on bladelets and even some coarse pottery in
east Africa. There is as yet no good way of telling just where these
-items belong in time; in broad geological terms they are late.
+items belong in time; in broad geological terms they are “late.”
Some people have guessed that they are as early as similar European
and Near Eastern examples, but I doubt it. The makers of small-sized
Levalloisian flake tools occupied much of Africa until very late in
@@ -2823,18 +2823,18 @@ ancestors of the American Indians came from Asia.
The stone-tool traditions of Europe, Africa, the Near and Middle East,
and central Siberia, did _not_ move into the New World. With only a
very few special or late exceptions, there are _no_ core-bifaces,
-flakes, or blade tools of the Old World. Such things just havent been
+flakes, or blade tools of the Old World. Such things just haven’t been
found here.
-This is why I say its a shame we dont know more of the end of the
+This is why I say it’s a shame we don’t know more of the end of the
chopper-tool tradition in the Far East. According to Weidenreich,
the Mongoloids were in the Far East long before the end of the last
glaciation. If the genetics of the blood group types do demand a
non-Mongoloid ancestry for the American Indians, who else may have been
in the Far East 25,000 years ago? We know a little about the habits
for making stone tools which these first people brought with them,
-and these habits dont conform with those of the western Old World.
-Wed better keep our eyes open for whatever happened to the end of
+and these habits don’t conform with those of the western Old World.
+We’d better keep our eyes open for whatever happened to the end of
the chopper-tool tradition in northern China; already there are hints
that it lasted late there. Also we should watch future excavations
in eastern Siberia. Perhaps we shall find the chopper-tool tradition
@@ -2846,13 +2846,13 @@ THE NEW ERA
Perhaps it comes in part from the way I read the evidence and perhaps
in part it is only intuition, but I feel that the materials of this
chapter suggest a new era in the ways of life. Before about 40,000
-years ago, people simply gathered their food, wandering over large
+years ago, people simply “gathered” their food, wandering over large
areas to scavenge or to hunt in a simple sort of way. But here we
-have seen them settling-in more, perhaps restricting themselves in
+have seen them “settling-in” more, perhaps restricting themselves in
their wanderings and adapting themselves to a given locality in more
intensive ways. This intensification might be suggested by the word
-collecting. The ways of life we described in the earlier chapters
-were food-gathering ways, but now an era of food-collecting has
+“collecting.” The ways of life we described in the earlier chapters
+were “food-gathering” ways, but now an era of “food-collecting” has
begun. We shall see further intensifications of it in the next chapter.
@@ -2883,8 +2883,8 @@ The last great glaciation of the Ice Age was a two-part affair, with a
sub-phase at the end of the second part. In Europe the last sub-phase
of this glaciation commenced somewhere around 15,000 years ago. Then
the glaciers began to melt back, for the last time. Remember that
-Professor Antevs (p. 19) isnt sure the Ice Age is over yet! This
-melting sometimes went by fits and starts, and the weather wasnt
+Professor Antevs (p. 19) isn’t sure the Ice Age is over yet! This
+melting sometimes went by fits and starts, and the weather wasn’t
always changing for the better; but there was at least one time when
European weather was even better than it is now.
@@ -2927,16 +2927,16 @@ Sweden. Much of this north European material comes from bogs and swamps
where it had become water-logged and has kept very well. Thus we have
much more complete _assemblages_[4] than for any time earlier.
- [4] Assemblage is a useful word when there are different kinds of
+ [4] “Assemblage” is a useful word when there are different kinds of
archeological materials belonging together, from one area and of
- one time. An assemblage is made up of a number of industries
+ one time. An assemblage is made up of a number of “industries”
(that is, all the tools in chipped stone, all the tools in
bone, all the tools in wood, the traces of houses, etc.) and
everything else that manages to survive, such as the art, the
burials, the bones of the animals used as food, and the traces
of plant foods; in fact, everything that has been left to us
and can be used to help reconstruct the lives of the people to
- whom it once belonged. Our own present-day assemblage would be
+ whom it once belonged. Our own present-day “assemblage” would be
the sum total of all the objects in our mail-order catalogues,
department stores and supply houses of every sort, our churches,
our art galleries and other buildings, together with our roads,
@@ -2976,7 +2976,7 @@ found.
It seems likely that the Maglemosian bog finds are remains of summer
camps, and that in winter the people moved to higher and drier regions.
-Childe calls them the Forest folk; they probably lived much the
+Childe calls them the “Forest folk”; they probably lived much the
same sort of life as did our pre-agricultural Indians of the north
central states. They hunted small game or deer; they did a great deal
of fishing; they collected what plant food they could find. In fact,
@@ -3010,7 +3010,7 @@ South of the north European belt the hunting-food-collecting peoples
were living on as best they could during this time. One interesting
group, which seems to have kept to the regions of sandy soil and scrub
forest, made great quantities of geometric microliths. These are the
-materials called _Tardenoisian_. The materials of the Forest folk of
+materials called _Tardenoisian_. The materials of the “Forest folk” of
France and central Europe generally are called _Azilian_; Dr. Movius
believes the term might best be restricted to the area south of the
Loire River.
@@ -3032,24 +3032,24 @@ to it than this.
Professor Mathiassen of Copenhagen, who knows the archeological remains
of this time very well, poses a question. He speaks of the material
-as being neither rich nor progressive, in fact rather stagnant, but
-he goes on to add that the people had a certain receptiveness and
+as being neither rich nor progressive, in fact “rather stagnant,” but
+he goes on to add that the people had a certain “receptiveness” and
were able to adapt themselves quickly when the next change did come.
-My own understanding of the situation is that the Forest folk made
+My own understanding of the situation is that the “Forest folk” made
nothing as spectacular as had the producers of the earlier Magdalenian
assemblage and the Franco-Cantabrian art. On the other hand, they
_seem_ to have been making many more different kinds of tools for many
more different kinds of tasks than had their Ice Age forerunners. I
-emphasize seem because the preservation in the Maglemosian bogs
+emphasize “seem” because the preservation in the Maglemosian bogs
is very complete; certainly we cannot list anywhere near as many
different things for earlier times as we did for the Maglemosians
(p. 94). I believe this experimentation with all kinds of new tools
and gadgets, this intensification of adaptiveness (p. 91), this
-receptiveness, even if it is still only pointed toward hunting,
+“receptiveness,” even if it is still only pointed toward hunting,
fishing, and food-collecting, is an important thing.
Remember that the only marker we have handy for the _beginning_ of
-this tendency toward receptiveness and experimentation is the
+this tendency toward “receptiveness” and experimentation is the
little microlithic blade tools of various geometric forms. These, we
saw, began before the last ice had melted away, and they lasted on
in use for a very long time. I wish there were a better marker than
@@ -3063,7 +3063,7 @@ CHANGES IN OTHER AREAS?
All this last section was about Europe. How about the rest of the world
when the last glaciers were melting away?
-We simply dont know much about this particular time in other parts
+We simply don’t know much about this particular time in other parts
of the world except in Europe, the Mediterranean basin and the Middle
East. People were certainly continuing to move into the New World by
way of Siberia and the Bering Strait about this time. But for the
@@ -3075,10 +3075,10 @@ clear information.
REAL CHANGE AND PRELUDE IN THE NEAR EAST
The appearance of the microliths and the developments made by the
-Forest folk of northwestern Europe also mark an end. They show us
+“Forest folk” of northwestern Europe also mark an end. They show us
the terminal phase of the old food-collecting way of life. It grows
increasingly clear that at about the same time that the Maglemosian and
-other Forest folk were adapting themselves to hunting, fishing, and
+other “Forest folk” were adapting themselves to hunting, fishing, and
collecting in new ways to fit the post-glacial environment, something
completely new was being made ready in western Asia.
@@ -3098,7 +3098,7 @@ simply gathering or collecting it. When their food-production
became reasonably effective, people could and did settle down in
village-farming communities. With the appearance of the little farming
villages, a new way of life was actually under way. Professor Childe
-has good reason to speak of the food-producing revolution, for it was
+has good reason to speak of the “food-producing revolution,” for it was
indeed a revolution.
@@ -3117,8 +3117,8 @@ before the _how_ and _why_ answers begin to appear. Anthropologically
trained archeologists are fascinated with the cultures of men in times
of great change. About ten or twelve thousand years ago, the general
level of culture in many parts of the world seems to have been ready
-for change. In northwestern Europe, we saw that cultures changed
-just enough so that they would not have to change. We linked this to
+for change. In northwestern Europe, we saw that cultures “changed
+just enough so that they would not have to change.” We linked this to
environmental changes with the coming of post-glacial times.
In western Asia, we archeologists can prove that the food-producing
@@ -3155,7 +3155,7 @@ living as the Maglemosians did? These are the questions we still have
to face.
-CULTURAL RECEPTIVENESS AND PROMISING ENVIRONMENTS
+CULTURAL “RECEPTIVENESS” AND PROMISING ENVIRONMENTS
Until the archeologists and the natural scientists--botanists,
geologists, zoologists, and general ecologists--have spent many more
@@ -3163,15 +3163,15 @@ years on the problem, we shall not have full _how_ and _why_ answers. I
do think, however, that we are beginning to understand what to look for.
We shall have to learn much more of what makes the cultures of men
-receptive and experimental. Did change in the environment alone
-force it? Was it simply a case of Professor Toynbees challenge and
-response? I cannot believe the answer is quite that simple. Were it
-so simple, we should want to know why the change hadnt come earlier,
+“receptive” and experimental. Did change in the environment alone
+force it? Was it simply a case of Professor Toynbee’s “challenge and
+response?” I cannot believe the answer is quite that simple. Were it
+so simple, we should want to know why the change hadn’t come earlier,
along with earlier environmental changes. We shall not know the answer,
however, until we have excavated the traces of many more cultures of
the time in question. We shall doubtless also have to learn more about,
and think imaginatively about, the simpler cultures still left today.
-The mechanics of culture in general will be bound to interest us.
+The “mechanics” of culture in general will be bound to interest us.
It will also be necessary to learn much more of the environments of
10,000 to 12,000 years ago. In which regions of the world were the
@@ -3228,7 +3228,7 @@ THE OLD THEORY TOO SIMPLE FOR THE FACTS
This theory was set up before we really knew anything in detail about
the later prehistory of the Near and Middle East. We now know that
-the facts which have been found dont fit the old theory at all well.
+the facts which have been found don’t fit the old theory at all well.
Also, I have yet to find an American meteorologist who feels that we
know enough about the changes in the weather pattern to say that it can
have been so simple and direct. And, of course, the glacial ice which
@@ -3238,7 +3238,7 @@ of great alpine glaciers, and long periods of warm weather in between.
If the rain belt moved north as the glaciers melted for the last time,
it must have moved in the same direction in earlier times. Thus, the
forced neighborliness of men, plants, and animals in river valleys and
-oases must also have happened earlier. Why didnt domestication happen
+oases must also have happened earlier. Why didn’t domestication happen
earlier, then?
Furthermore, it does not seem to be in the oases and river valleys
@@ -3275,20 +3275,20 @@ archeologists, probably through habit, favor an old scheme of Grecized
names for the subdivisions: paleolithic, mesolithic, neolithic. I
refuse to use these words myself. They have meant too many different
things to too many different people and have tended to hide some pretty
-fuzzy thinking. Probably you havent even noticed my own scheme of
-subdivision up to now, but Id better tell you in general what it is.
+fuzzy thinking. Probably you haven’t even noticed my own scheme of
+subdivision up to now, but I’d better tell you in general what it is.
I think of the earliest great group of archeological materials, from
which we can deduce only a food-gathering way of culture, as the
-_food-gathering stage_. I say stage rather than age, because it
+_food-gathering stage_. I say “stage” rather than “age,” because it
is not quite over yet; there are still a few primitive people in
out-of-the-way parts of the world who remain in the _food-gathering
stage_. In fact, Professor Julian Steward would probably prefer to call
it a food-gathering _level_ of existence, rather than a stage. This
would be perfectly acceptable to me. I also tend to find myself using
_collecting_, rather than _gathering_, for the more recent aspects or
-era of the stage, as the word collecting appears to have more sense
-of purposefulness and specialization than does gathering (see p.
+era of the stage, as the word “collecting” appears to have more sense
+of purposefulness and specialization than does “gathering” (see p.
91).
Now, while I think we could make several possible subdivisions of the
@@ -3297,22 +3297,22 @@ believe the only one which means much to us here is the last or
_terminal sub-era of food-collecting_ of the whole food-gathering
stage. The microliths seem to mark its approach in the northwestern
part of the Old World. It is really shown best in the Old World by
-the materials of the Forest folk, the cultural adaptation to the
+the materials of the “Forest folk,” the cultural adaptation to the
post-glacial environment in northwestern Europe. We talked about
-the Forest folk at the beginning of this chapter, and I used the
+the “Forest folk” at the beginning of this chapter, and I used the
Maglemosian assemblage of Denmark as an example.
[5] It is difficult to find words which have a sequence or gradation
of meaning with respect to both development and a range of time
in the past, or with a range of time from somewhere in the past
which is perhaps not yet ended. One standard Webster definition
- of _stage_ is: One of the steps into which the material
- development of man ... is divided. I cannot find any dictionary
+ of _stage_ is: “One of the steps into which the material
+ development of man ... is divided.” I cannot find any dictionary
definition that suggests which of the words, _stage_ or _era_,
has the meaning of a longer span of time. Therefore, I have
chosen to let my eras be shorter, and to subdivide my stages
- into eras. Webster gives _era_ as: A signal stage of history,
- an epoch. When I want to subdivide my eras, I find myself using
+ into eras. Webster gives _era_ as: “A signal stage of history,
+ an epoch.” When I want to subdivide my eras, I find myself using
_sub-eras_. Thus I speak of the _eras_ within a _stage_ and of
the _sub-eras_ within an _era_; that is, I do so when I feel
that I really have to, and when the evidence is clear enough to
@@ -3328,9 +3328,9 @@ realms of culture. It is rather that for most of prehistoric time the
materials left to the archeologists tend to limit our deductions to
technology and economics.
-Im so soon out of my competence, as conventional ancient history
+I’m so soon out of my competence, as conventional ancient history
begins, that I shall only suggest the earlier eras of the
-food-producing stage to you. This book is about prehistory, and Im not
+food-producing stage to you. This book is about prehistory, and I’m not
a universal historian.
@@ -3339,28 +3339,28 @@ THE TWO EARLIEST ERAS OF THE FOOD-PRODUCING STAGE
The food-producing stage seems to appear in western Asia with really
revolutionary suddenness. It is seen by the relative speed with which
the traces of new crafts appear in the earliest village-farming
-community sites weve dug. It is seen by the spread and multiplication
+community sites we’ve dug. It is seen by the spread and multiplication
of these sites themselves, and the remarkable growth in human
-population we deduce from this increase in sites. Well look at some
+population we deduce from this increase in sites. We’ll look at some
of these sites and the archeological traces they yield in the next
chapter. When such village sites begin to appear, I believe we are in
the _era of the primary village-farming community_. I also believe this
is the second era of the food-producing stage.
The first era of the food-producing stage, I believe, was an _era of
-incipient cultivation and animal domestication_. I keep saying I
-believe because the actual evidence for this earlier era is so slight
+incipient cultivation and animal domestication_. I keep saying “I
+believe” because the actual evidence for this earlier era is so slight
that one has to set it up mainly by playing a hunch for it. The reason
for playing the hunch goes about as follows.
One thing we seem to be able to see, in the food-collecting era in
general, is a tendency for people to begin to settle down. This
settling down seemed to become further intensified in the terminal
-era. How this is connected with Professor Mathiassens receptiveness
+era. How this is connected with Professor Mathiassen’s “receptiveness”
and the tendency to be experimental, we do not exactly know. The
evidence from the New World comes into play here as well as that from
the Old World. With this settling down in one place, the people of the
-terminal era--especially the Forest folk whom we know best--began
+terminal era--especially the “Forest folk” whom we know best--began
making a great variety of new things. I remarked about this earlier in
the chapter. Dr. Robert M. Adams is of the opinion that this atmosphere
of experimentation with new tools--with new ways of collecting food--is
@@ -3368,9 +3368,9 @@ the kind of atmosphere in which one might expect trials at planting
and at animal domestication to have been made. We first begin to find
traces of more permanent life in outdoor camp sites, although caves
were still inhabited at the beginning of the terminal era. It is not
-surprising at all that the Forest folk had already domesticated the
+surprising at all that the “Forest folk” had already domesticated the
dog. In this sense, the whole era of food-collecting was becoming ready
-and almost incipient for cultivation and animal domestication.
+and almost “incipient” for cultivation and animal domestication.
Northwestern Europe was not the place for really effective beginnings
in agriculture and animal domestication. These would have had to take
@@ -3425,13 +3425,13 @@ zone which surrounds the drainage basin of the Tigris and Euphrates
Rivers at elevations of from approximately 2,000 to 5,000 feet. The
lower alluvial land of the Tigris-Euphrates basin itself has very
little rainfall. Some years ago Professor James Henry Breasted called
-the alluvial lands of the Tigris-Euphrates a part of the fertile
-crescent. These alluvial lands are very fertile if irrigated. Breasted
+the alluvial lands of the Tigris-Euphrates a part of the “fertile
+crescent.” These alluvial lands are very fertile if irrigated. Breasted
was most interested in the oriental civilizations of conventional
ancient history, and irrigation had been discovered before they
appeared.
-The country of hilly flanks above Breasteds crescent receives from
+The country of hilly flanks above Breasted’s crescent receives from
10 to 20 or more inches of winter rainfall each year, which is about
what Kansas has. Above the hilly-flanks zone tower the peaks and ridges
of the Lebanon-Amanus chain bordering the coast-line from Palestine
@@ -3440,7 +3440,7 @@ range of the Iraq-Iran borderland. This rugged mountain frame for our
hilly-flanks zone rises to some magnificent alpine scenery, with peaks
of from ten to fifteen thousand feet in elevation. There are several
gaps in the Mediterranean coastal portion of the frame, through which
-the winters rain-bearing winds from the sea may break so as to carry
+the winter’s rain-bearing winds from the sea may break so as to carry
rain to the foothills of the Taurus and the Zagros.
The picture I hope you will have from this description is that of an
@@ -3482,7 +3482,7 @@ hilly-flanks zone in their wild state.
With a single exception--that of the dog--the earliest positive
evidence of domestication includes the two forms of wheat, the barley,
and the goat. The evidence comes from within the hilly-flanks zone.
-However, it comes from a settled village proper, Jarmo (which Ill
+However, it comes from a settled village proper, Jarmo (which I’ll
describe in the next chapter), and is thus from the era of the primary
village-farming community. We are still without positive evidence of
domesticated grain and animals in the first era of the food-producing
@@ -3534,9 +3534,9 @@ and the spread of ideas of people who had passed on into one of the
more developed eras. In many cases, the terminal era of food-collecting
was ended by the incoming of the food-producing peoples themselves.
For example, the practices of food-production were carried into Europe
-by the actual movement of some numbers of peoples (we dont know how
+by the actual movement of some numbers of peoples (we don’t know how
many) who had reached at least the level of the primary village-farming
-community. The Forest folk learned food-production from them. There
+community. The “Forest folk” learned food-production from them. There
was never an era of incipient cultivation and domestication proper in
Europe, if my hunch is right.
@@ -3547,16 +3547,16 @@ The way I see it, two things were required in order that an era of
incipient cultivation and domestication could begin. First, there had
to be the natural environment of a nuclear area, with its whole group
of plants and animals capable of domestication. This is the aspect of
-the matter which weve said is directly given by nature. But it is
+the matter which we’ve said is directly given by nature. But it is
quite possible that such an environment with such a group of plants
and animals in it may have existed well before ten thousand years ago
in the Near East. It is also quite possible that the same promising
condition may have existed in regions which never developed into
nuclear areas proper. Here, again, we come back to the cultural factor.
-I think it was that atmosphere of experimentation weve talked about
-once or twice before. I cant define it for you, other than to say that
+I think it was that “atmosphere of experimentation” we’ve talked about
+once or twice before. I can’t define it for you, other than to say that
by the end of the Ice Age, the general level of many cultures was ready
+for change. Ask me how and why this was so, and I’ll tell you we don’t
+for change. Ask me how and why this was so, and I�ll tell you we don�t
know yet, and that if we did understand this kind of question, there
would be no need for me to go on being a prehistorian!
@@ -3590,7 +3590,7 @@ such collections for the modern wild forms of animals and plants from
some of our nuclear areas. In the nuclear area in the Near East, some
of the wild animals, at least, have already become extinct. There are
no longer wild cattle or wild horses in western Asia. We know they were
-there from the finds weve made in caves of late Ice Age times, and
+there from the finds we’ve made in caves of late Ice Age times, and
from some slightly later sites.
@@ -3601,7 +3601,7 @@ incipient era of cultivation and animal domestication. I am closing
this chapter with descriptions of two of the best Near Eastern examples
I know of. You may not be satisfied that what I am able to describe
makes a full-bodied era of development at all. Remember, however, that
-Ive told you Im largely playing a kind of a hunch, and also that the
+I’ve told you I’m largely playing a kind of a hunch, and also that the
archeological materials of this era will always be extremely difficult
to interpret. At the beginning of any new way of life, there will be a
great tendency for people to make-do, at first, with tools and habits
@@ -3613,7 +3613,7 @@ THE NATUFIAN, AN ASSEMBLAGE OF THE INCIPIENT ERA
The assemblage called the Natufian comes from the upper layers of a
number of caves in Palestine. Traces of its flint industry have also
-turned up in Syria and Lebanon. We dont know just how old it is. I
+turned up in Syria and Lebanon. We don’t know just how old it is. I
guess that it probably falls within five hundred years either way of
about 5000 B.C.
@@ -3662,7 +3662,7 @@ pendants. There were also beads and pendants of pierced teeth and shell.
A number of Natufian burials have been found in the caves; some burials
were grouped together in one grave. The people who were buried within
the Mount Carmel cave were laid on their backs in an extended position,
-while those on the terrace seem to have been flexed (placed in their
+while those on the terrace seem to have been “flexed” (placed in their
graves in a curled-up position). This may mean no more than that it was
easier to dig a long hole in cave dirt than in the hard-packed dirt of
the terrace. The people often had some kind of object buried with them,
@@ -3679,7 +3679,7 @@ beads.
GROUND STONE
BONE]
-The animal bones of the Natufian layers show beasts of a modern type,
+The animal bones of the Natufian layers show beasts of a “modern” type,
but with some differences from those of present-day Palestine. The
bones of the gazelle far outnumber those of the deer; since gazelles
like a much drier climate than deer, Palestine must then have had much
@@ -3692,9 +3692,9 @@ Maglemosian of northern Europe. More recently, it has been reported
that a domesticated goat is also part of the Natufian finds.
The study of the human bones from the Natufian burials is not yet
-complete. Until Professor McCowns study becomes available, we may note
-Professor Coons assessment that these people were of a basically
-Mediterranean type.
+complete. Until Professor McCown’s study becomes available, we may note
+Professor Coon’s assessment that these people were of a “basically
+Mediterranean type.”
THE KARIM SHAHIR ASSEMBLAGE
@@ -3704,11 +3704,11 @@ of a temporary open site or encampment. It lies on the top of a bluff
in the Kurdish hill-country of northeastern Iraq. It was dug by Dr.
Bruce Howe of the expedition I directed in 1950-51 for the Oriental
Institute and the American Schools of Oriental Research. In 1954-55,
-our expedition located another site, Mlefaat, with general resemblance
+our expedition located another site, M’lefaat, with general resemblance
to Karim Shahir, but about a hundred miles north of it. In 1956, Dr.
Ralph Solecki located still another Karim Shahir type of site called
Zawi Chemi Shanidar. The Zawi Chemi site has a radiocarbon date of 8900
- 300 B.C.
+± 300 B.C.
Karim Shahir has evidence of only one very shallow level of occupation.
It was probably not lived on very long, although the people who lived
@@ -3717,7 +3717,7 @@ layer yielded great numbers of fist-sized cracked pieces of limestone,
which had been carried up from the bed of a stream at the bottom of the
bluff. We think these cracked stones had something to do with a kind of
architecture, but we were unable to find positive traces of hut plans.
-At Mlefaat and Zawi Chemi, there were traces of rounded hut plans.
+At M’lefaat and Zawi Chemi, there were traces of rounded hut plans.
As in the Natufian, the great bulk of small objects of the Karim Shahir
assemblage was in chipped flint. A large proportion of the flint tools
@@ -3737,7 +3737,7 @@ clay figurines which seemed to be of animal form.
UNBAKED CLAY
SHELL
BONE
- ARCHITECTURE]
+ “ARCHITECTURE”]
Karim Shahir did not yield direct evidence of the kind of vegetable
food its people ate. The animal bones showed a considerable
@@ -3746,7 +3746,7 @@ domestication--sheep, goat, cattle, horse, dog--as compared with animal
bones from the earlier cave sites of the area, which have a high
proportion of bones of wild forms like deer and gazelle. But we do not
know that any of the Karim Shahir animals were actually domesticated.
-Some of them may have been, in an incipient way, but we have no means
+Some of them may have been, in an “incipient” way, but we have no means
at the moment that will tell us from the bones alone.
@@ -3761,7 +3761,7 @@ goat, and the general animal situation at Karim Shahir to hint at an
incipient approach to food-production. At Karim Shahir, there was the
tendency to settle down out in the open; this is echoed by the new
reports of open air Natufian sites. The large number of cracked stones
-certainly indicates that it was worth the peoples while to have some
+certainly indicates that it was worth the peoples’ while to have some
kind of structure, even if the site as a whole was short-lived.
It is a part of my hunch that these things all point toward
@@ -3771,13 +3771,13 @@ which we shall look at next, are fully food-producing, the Natufian
and Karim Shahir folk had not yet arrived. I think they were part of
a general build-up to full scale food-production. They were possibly
controlling a few animals of several kinds and perhaps one or two
-plants, without realizing the full possibilities of this control as a
+plants, without realizing the full possibilities of this “control” as a
new way of life.
This is why I think of the Karim Shahir and Natufian folk as being at
a level, or in an era, of incipient cultivation and domestication. But
we shall have to do a great deal more excavation in this range of time
-before well get the kind of positive information we need.
+before we’ll get the kind of positive information we need.
SUMMARY
@@ -3798,7 +3798,7 @@ history.
We know the earliest village-farming communities appeared in western
Asia, in a nuclear area. We do not yet know why the Near Eastern
-experiment came first, or why it didnt happen earlier in some other
+experiment came first, or why it didn’t happen earlier in some other
nuclear area. Apparently, the level of culture and the promise of the
natural environment were ready first in western Asia. The next sites
we look at will show a simple but effective food-production already
@@ -3835,7 +3835,7 @@ contrast between food-collecting and food-producing as ways of life.
THE DIFFERENCE BETWEEN FOOD-COLLECTORS AND FOOD-PRODUCERS
-Childe used the word revolution because of the radical change that
+Childe used the word “revolution” because of the radical change that
took place in the habits and customs of man. Food-collectors--that is,
hunters, fishers, berry- and nut-gatherers--had to live in small groups
or bands, for they had to be ready to move wherever their food supply
@@ -3851,7 +3851,7 @@ for clothing beyond the tools that were probably used to dress the
skins of animals; no time to think of much of anything but food and
protection and disposal of the dead when death did come: an existence
which takes nature as it finds it, which does little or nothing to
-modify nature--all in all, a savages existence, and a very tough one.
+modify nature--all in all, a savage’s existence, and a very tough one.
A man who spends his whole life following animals just to kill them to
eat, or moving from one berry patch to another, is really living just
like an animal himself.
@@ -3859,10 +3859,10 @@ like an animal himself.
THE FOOD-PRODUCING ECONOMY
-Against this picture let me try to draw another--that of mans life
-after food-production had begun. His meat was stored on the hoof,
+Against this picture let me try to draw another--that of man’s life
+after food-production had begun. His meat was stored “on the hoof,”
his grain in silos or great pottery jars. He lived in a house: it was
-worth his while to build one, because he couldnt move far from his
+worth his while to build one, because he couldn’t move far from his
fields and flocks. In his neighborhood enough food could be grown
and enough animals bred so that many people were kept busy. They all
lived close to their flocks and fields, in a village. The village was
@@ -3872,7 +3872,7 @@ Children and old men could shepherd the animals by day or help with
the lighter work in the fields. After the crops had been harvested the
younger men might go hunting and some of them would fish, but the food
they brought in was only an addition to the food in the village; the
-villagers wouldnt starve, even if the hunters and fishermen came home
+villagers wouldn�t starve, even if the hunters and fishermen came home
empty-handed.
There was more time to do different things, too. They began to modify
@@ -3885,23 +3885,23 @@ people in the village who were becoming full-time craftsmen.
Other things were changing, too. The villagers must have had
to agree on new rules for living together. The head man of the
village had problems different from those of the chief of the small
-food-collectors band. If somebodys flock of sheep spoiled a wheat
+food-collectors� band. If somebody�s flock of sheep spoiled a wheat
field, the owner wanted payment for the grain he lost. The chief of
the hunters was never bothered with such questions. Even the gods
had changed. The spirits and the magic that had been used by hunters
-werent of any use to the villagers. They needed gods who would watch
+weren�t of any use to the villagers. They needed gods who would watch
over the fields and the flocks, and they eventually began to erect
buildings where their gods might dwell, and where the men who knew most
about the gods might live.
-WAS FOOD-PRODUCTION A REVOLUTION?
+WAS FOOD-PRODUCTION A “REVOLUTION”?
If you can see the difference between these two pictures--between
life in the food-collecting stage and life after food-production
-had begun--youll see why Professor Childe speaks of a revolution.
-By revolution, he doesnt mean that it happened over night or that
-it happened only once. We dont know exactly how long it took. Some
+had begun--you’ll see why Professor Childe speaks of a revolution.
+By revolution, he doesn’t mean that it happened over night or that
+it happened only once. We don’t know exactly how long it took. Some
people think that all these changes may have occurred in less than
500 years, but I doubt that. The incipient era was probably an affair
of some duration. Once the level of the village-farming community had
@@ -3915,7 +3915,7 @@ been achieved with truly revolutionary suddenness.
GAPS IN OUR KNOWLEDGE OF THE NEAR EAST
-If youll look again at the chart (p. 111) youll see that I have
+If you’ll look again at the chart (p. 111) you’ll see that I have
very few sites and assemblages to name in the incipient era of
cultivation and domestication, and not many in the earlier part of
the primary village-farming level either. Thanks in no small part
@@ -3926,20 +3926,20 @@ yard-stick here. But I am far from being able to show you a series of
Sears Roebuck catalogues, even century by century, for any part of
the nuclear area. There is still a great deal of earth to move, and a
great mass of material to recover and interpret before we even begin to
-understand how and why.
+understand “how” and “why.”
Perhaps here, because this kind of archeology is really my specialty,
-youll excuse it if I become personal for a moment. I very much look
+you’ll excuse it if I become personal for a moment. I very much look
forward to having further part in closing some of the gaps in knowledge
-of the Near East. This is not, as Ive told you, the spectacular
+of the Near East. This is not, as I’ve told you, the spectacular
range of Near Eastern archeology. There are no royal tombs, no gold,
no great buildings or sculpture, no writing, in fact nothing to
excite the normal museum at all. Nevertheless it is a range which,
idea-wise, gives the archeologist tremendous satisfaction. The country
of the hilly flanks is an exciting combination of green grasslands
and mountainous ridges. The Kurds, who inhabit the part of the area
-in which Ive worked most recently, are an extremely interesting and
-hospitable people. Archeologists dont become rich, but Ill forego
+in which I’ve worked most recently, are an extremely interesting and
+hospitable people. Archeologists don’t become rich, but I’ll forego
the Cadillac for any bright spring morning in the Kurdish hills, on a
good site with a happy crew of workmen and an interested and efficient
staff. It is probably impossible to convey the full feeling which life
@@ -3965,15 +3965,15 @@ like the use of pottery borrowed from the more developed era of the
same time in the nuclear area. The same general explanation doubtless
holds true for certain materials in Egypt, along the upper Nile and in
the Kharga oasis: these materials, called Sebilian III, the Khartoum
-neolithic, and the Khargan microlithic, are from surface sites,
+“neolithic,” and the Khargan microlithic, are from surface sites,
not from caves. The chart (p. 111) shows where I would place these
materials in era and time.
[Illustration: THE HILLY FLANKS OF THE CRESCENT AND EARLY SITES OF THE
NEAR EAST]
-Both Mlefaat and Dr. Soleckis Zawi Chemi Shanidar site appear to have
-been slightly more settled in than was Karim Shahir itself. But I do
+Both M’lefaat and Dr. Solecki’s Zawi Chemi Shanidar site appear to have
+been slightly more “settled in” than was Karim Shahir itself. But I do
not think they belong to the era of farming-villages proper. The first
site of this era, in the hills of Iraqi Kurdistan, is Jarmo, on which
we have spent three seasons of work. Following Jarmo comes a variety of
@@ -3989,9 +3989,9 @@ times when their various cultures flourished, there must have been
many little villages which shared the same general assemblage. We are
only now beginning to locate them again. Thus, if I speak of Jarmo,
or Jericho, or Sialk as single examples of their particular kinds of
-assemblages, I dont mean that they were unique at all. I think I could
+assemblages, I don’t mean that they were unique at all. I think I could
take you to the sites of at least three more Jarmos, within twenty
-miles of the original one. They are there, but they simply havent yet
+miles of the original one. They are there, but they simply haven’t yet
been excavated. In 1956, a Danish expedition discovered material of
Jarmo type at Shimshara, only two dozen miles northeast of Jarmo, and
below an assemblage of Hassunan type (which I shall describe presently).
@@ -4000,15 +4000,15 @@ below an assemblage of Hassunan type (which I shall describe presently).
THE GAP BETWEEN KARIM SHAHIR AND JARMO
As we see the matter now, there is probably still a gap in the
-available archeological record between the Karim Shahir-Mlefaat-Zawi
+available archeological record between the Karim Shahir-M’lefaat-Zawi
Chemi group (of the incipient era) and that of Jarmo (of the
village-farming era). Although some items of the Jarmo type materials
do reflect the beginnings of traditions set in the Karim Shahir group
(see p. 120), there is not a clear continuity. Moreover--to the
degree that we may trust a few radiocarbon dates--there would appear
to be around two thousand years of difference in time. The single
-available Zawi Chemi date is 8900 300 B.C.; the most reasonable
-group of dates from Jarmo average to about 6750 200 B.C. I am
+available Zawi Chemi “date” is 8900 ± 300 B.C.; the most reasonable
+group of “dates” from Jarmo average to about 6750 ± 200 B.C. I am
uncertain about this two thousand years--I do not think it can have
been so long.
@@ -4021,7 +4021,7 @@ JARMO, IN THE KURDISH HILLS, IRAQ
The site of Jarmo has a depth of deposit of about twenty-seven feet,
and approximately a dozen layers of architectural renovation and
-change. Nevertheless it is a one period site: its assemblage remains
+change. Nevertheless it is a “one period” site: its assemblage remains
essentially the same throughout, although one or two new items are
added in later levels. It covers about four acres of the top of a
bluff, below which runs a small stream. Jarmo lies in the hill country
@@ -4078,7 +4078,7 @@ human beings in clay; one type of human figurine they favored was that
of a markedly pregnant woman, probably the expression of some sort of
fertility spirit. They provided their house floors with baked-in-place
depressions, either as basins or hearths, and later with domed ovens of
-clay. As weve noted, the houses themselves were of clay or mud; one
+clay. As we’ve noted, the houses themselves were of clay or mud; one
could almost say they were built up like a house-sized pot. Then,
finally, the idea of making portable pottery itself appeared, although
I very much doubt that the people of the Jarmo village discovered the
@@ -4095,11 +4095,11 @@ over three hundred miles to the north. Already a bulk carrying trade
had been established--the forerunner of commerce--and the routes were
set by which, in later times, the metal trade was to move.
-There are now twelve radioactive carbon dates from Jarmo. The most
-reasonable cluster of determinations averages to about 6750 200
-B.C., although there is a completely unreasonable range of dates
+There are now twelve radioactive carbon “dates” from Jarmo. The most
+reasonable cluster of determinations averages to about 6750 ± 200
+B.C., although there is a completely unreasonable range of “dates”
running from 3250 to 9250 B.C.! _If_ I am right in what I take to be
-reasonable, the first flush of the food-producing revolution had been
+“reasonable,” the first flush of the food-producing revolution had been
achieved almost nine thousand years ago.
@@ -4117,7 +4117,7 @@ it, but the Hassunan sites seem to cluster at slightly lower elevations
than those we have been talking about so far.
The catalogue of the Hassuna assemblage is of course more full and
-elaborate than that of Jarmo. The Iraqi governments archeologists
+elaborate than that of Jarmo. The Iraqi government’s archeologists
who dug Hassuna itself, exposed evidence of increasing architectural
know-how. The walls of houses were still formed of puddled mud;
sun-dried bricks appear only in later periods. There were now several
@@ -4130,16 +4130,16 @@ largely disappeared by Hassunan times. The flint work of the Hassunan
catalogue is, by and large, a wretched affair. We might guess that the
kinaesthetic concentration of the Hassuna craftsmen now went into other
categories; that is, they suddenly discovered they might have more fun
-working with the newer materials. Its a shame, for example, that none
+working with the newer materials. It’s a shame, for example, that none
of their weaving is preserved for us.
The two available radiocarbon determinations from Hassunan contexts
-stand at about 5100 and 5600 B.C. 250 years.
+stand at about 5100 and 5600 B.C. ± 250 years.
OTHER EARLY VILLAGE SITES IN THE NUCLEAR AREA
-Ill now name and very briefly describe a few of the other early
+I’ll now name and very briefly describe a few of the other early
village assemblages either in or adjacent to the hilly flanks of the
crescent. Unfortunately, we do not have radioactive carbon dates for
many of these materials. We may guess that some particular assemblage,
@@ -4177,7 +4177,7 @@ ecological niche, some seven hundred feet below sea level; it is
geographically within the hilly-flanks zone but environmentally not
part of it.
-Several radiocarbon dates for Jericho fall within the range of those
+Several radiocarbon “dates” for Jericho fall within the range of those
I find reasonable for Jarmo, and their internal statistical consistency
is far better than that for the Jarmo determinations. It is not yet
clear exactly what this means.
@@ -4226,7 +4226,7 @@ how things were made are different; the Sialk assemblage represents
still another cultural pattern. I suspect it appeared a bit later
in time than did that of Hassuna. There is an important new item in
the Sialk catalogue. The Sialk people made small drills or pins of
-hammered copper. Thus the metallurgists specialized craft had made its
+hammered copper. Thus the metallurgist’s specialized craft had made its
appearance.
There is at least one very early Iranian site on the inward slopes
@@ -4246,7 +4246,7 @@ shore of the Fayum lake. The Fayum materials come mainly from grain
bins or silos. Another site, Merimde, in the western part of the Nile
delta, shows the remains of a true village, but it may be slightly
later than the settlement of the Fayum. There are radioactive carbon
-dates for the Fayum materials at about 4275 B.C. 320 years, which
+“dates” for the Fayum materials at about 4275 B.C. ± 320 years, which
is almost fifteen hundred years later than the determinations suggested
for the Hassunan or Syro-Cilician assemblages. I suspect that this
is a somewhat over-extended indication of the time it took for the
@@ -4260,13 +4260,13 @@ the mound called Shaheinab. The Shaheinab catalogue roughly corresponds
to that of the Fayum; the distance between the two places, as the Nile
flows, is roughly 1,500 miles. Thus it took almost a thousand years for
the new way of life to be carried as far south into Africa as Khartoum;
-the two Shaheinab dates average about 3300 B.C. 400 years.
+the two Shaheinab “dates” average about 3300 B.C. ± 400 years.
If the movement was up the Nile (southward), as these dates suggest,
then I suspect that the earliest available village material of middle
Egypt, the so-called Tasian, is also later than that of the Fayum. The
Tasian materials come from a few graves near a village called Deir
-Tasa, and I have an uncomfortable feeling that the Tasian assemblage
+Tasa, and I have an uncomfortable feeling that the Tasian “assemblage”
may be mainly an artificial selection of poor examples of objects which
belong in the following range of time.
@@ -4280,7 +4280,7 @@ spread outward in space from the nuclear area, as time went on. There
is good archeological evidence that both these processes took place.
For the hill country of northeastern Iraq, in the nuclear area, we
have already noticed how the succession (still with gaps) from Karim
-Shahir, through Mlefaat and Jarmo, to Hassuna can be charted (see
+Shahir, through M’lefaat and Jarmo, to Hassuna can be charted (see
chart, p. 111). In the next chapter, we shall continue this charting
and description of what happened in Iraq upward through time. We also
watched traces of the new way of life move through space up the Nile
@@ -4299,7 +4299,7 @@ appearance of the village-farming community there--is still an open
one. In the last chapter, we noted the probability of an independent
nuclear area in southeastern Asia. Professor Carl Sauer strongly
champions the great importance of this area as _the_ original center
-of agricultural pursuits, as a kind of cradle of all incipient eras
+of agricultural pursuits, as a kind of “cradle” of all incipient eras
of the Old World at least. While there is certainly not the slightest
archeological evidence to allow us to go that far, we may easily expect
that an early southeast Asian development would have been felt in
@@ -4311,13 +4311,13 @@ way of life moved well beyond Khartoum in Africa.
THE SPREAD OF THE VILLAGE-FARMING COMMUNITY WAY OF LIFE INTO EUROPE
-How about Europe? I wont give you many details. You can easily imagine
+How about Europe? I won’t give you many details. You can easily imagine
that the late prehistoric prelude to European history is a complicated
affair. We all know very well how complicated an area Europe is now,
with its welter of different languages and cultures. Remember, however,
that a great deal of archeology has been done on the late prehistory of
Europe, and very little on that of further Asia and Africa. If we knew
-as much about these areas as we do of Europe, I expect wed find them
+as much about these areas as we do of Europe, I expect we’d find them
just as complicated.
This much is clear for Europe, as far as the spread of the
@@ -4329,21 +4329,21 @@ in western Asia. I do not, of course, mean that there were traveling
salesmen who carried these ideas and things to Europe with a commercial
gleam in their eyes. The process took time, and the ideas and things
must have been passed on from one group of people to the next. There
-was also some actual movement of peoples, but we dont know the size of
+was also some actual movement of peoples, but we don’t know the size of
the groups that moved.
-The story of the colonization of Europe by the first farmers is
+The story of the “colonization” of Europe by the first farmers is
thus one of (1) the movement from the eastern Mediterranean lands
of some people who were farmers; (2) the spread of ideas and things
beyond the Near East itself and beyond the paths along which the
-colonists moved; and (3) the adaptations of the ideas and things
-by the indigenous Forest folk, about whose receptiveness Professor
+“colonists” moved; and (3) the adaptations of the ideas and things
+by the indigenous “Forest folk”, about whose “receptiveness” Professor
Mathiassen speaks (p. 97). It is important to note that the resulting
cultures in the new European environment were European, not Near
-Eastern. The late Professor Childe remarked that the peoples of the
+Eastern. The late Professor Childe remarked that “the peoples of the
West were not slavish imitators; they adapted the gifts from the East
... into a new and organic whole capable of developing on its own
-original lines.
+original lines.”
THE WAYS TO EUROPE
@@ -4389,19 +4389,19 @@ Hill, the earliest known trace of village-farming communities in
England, is about 2500 B.C. I would expect about 5500 B.C. to be a
safe date to give for the well-developed early village communities of
Syro-Cilicia. We suspect that the spread throughout Europe did not
-proceed at an even rate. Professor Piggott writes that at a date
+proceed at an even rate. Professor Piggott writes that “at a date
probably about 2600 B.C., simple agricultural communities were being
established in Spain and southern France, and from the latter region a
spread northwards can be traced ... from points on the French seaboard
of the [English] Channel ... there were emigrations of a certain number
of these tribes by boat, across to the chalk lands of Wessex and Sussex
[in England], probably not more than three or four generations later
-than the formation of the south French colonies.
+than the formation of the south French colonies.”
New radiocarbon determinations are becoming available all the
time--already several suggest that the food-producing way of life
had reached the lower Rhine and Holland by 4000 B.C. But not all
-prehistorians accept these dates, so I do not show them on my map
+prehistorians accept these “dates,” so I do not show them on my map
(p. 139).
@@ -4427,7 +4427,7 @@ concentric sets of banks and ditches. Traces of oblong timber houses
have been found, but not within the enclosures. The second type of
structure is mine-shafts, dug down into the chalk beds where good
flint for the making of axes or hoes could be found. The third type
-of structure is long simple mounds or unchambered barrows, in one
+of structure is long simple mounds or “unchambered barrows,” in one
end of which burials were made. It has been commonly believed that the
Windmill Hill assemblage belonged entirely to the cultural tradition
which moved up through France to the Channel. Professor Piggott is now
@@ -4443,12 +4443,12 @@ consists mainly of tombs and the contents of tombs, with only very
rare settlement sites. The tombs were of some size and received the
bodies of many people. The tombs themselves were built of stone, heaped
over with earth; the stones enclosed a passage to a central chamber
-(passage graves), or to a simple long gallery, along the sides of
-which the bodies were laid (gallery graves). The general type of
-construction is called megalithic (= great stone), and the whole
+(“passage graves”), or to a simple long gallery, along the sides of
+which the bodies were laid (“gallery graves”). The general type of
+construction is called “megalithic” (= great stone), and the whole
earth-mounded structure is often called a _barrow_. Since many have
-proper chambers, in one sense or another, we used the term unchambered
-barrow above to distinguish those of the Windmill Hill type from these
+proper chambers, in one sense or another, we used the term “unchambered
+barrow” above to distinguish those of the Windmill Hill type from these
megalithic structures. There is some evidence for sacrifice, libations,
and ceremonial fires, and it is clear that some form of community
ritual was focused on the megalithic tombs.
@@ -4466,7 +4466,7 @@ The third early British group of antiquities of this general time
It is not so certain that the people who made this assemblage, called
Peterborough, were actually farmers. While they may on occasion have
practiced a simple agriculture, many items of their assemblage link
-them closely with that of the Forest folk of earlier times in
+them closely with that of the “Forest folk” of earlier times in
England and in the Baltic countries. Their pottery is decorated with
impressions of cords and is quite different from that of Windmill Hill
and the megalithic builders. In addition, the distribution of their
@@ -4479,7 +4479,7 @@ to acquire the raw material for stone axes.
A probably slightly later culture, whose traces are best known from
Skara Brae on Orkney, also had its roots in those cultures of the
-Baltic area which fused out of the meeting of the Forest folk and
+Baltic area which fused out of the meeting of the “Forest folk” and
the peoples who took the eastern way into Europe. Skara Brae is very
well preserved, having been built of thin stone slabs about which
dune-sand drifted after the village died. The individual houses, the
@@ -4498,14 +4498,14 @@ details which I have omitted in order to shorten the story.
I believe some of the difficulty we have in understanding the
establishment of the first farming communities in Europe is with
-the word colonization. We have a natural tendency to think of
-colonization as it has happened within the last few centuries. In the
+the word “colonization.” We have a natural tendency to think of
+“colonization” as it has happened within the last few centuries. In the
case of the colonization of the Americas, for example, the colonists
came relatively quickly, and in increasingly vast numbers. They had
vastly superior technical, political, and war-making skills, compared
with those of the Indians. There was not much mixing with the Indians.
The case in Europe five or six thousand years ago must have been very
-different. I wonder if it is even proper to call people colonists
+different. I wonder if it is even proper to call people “colonists”
who move some miles to a new region, settle down and farm it for some
years, then move on again, generation after generation? The ideas and
the things which these new people carried were only _potentially_
@@ -4521,12 +4521,12 @@ migrants were moving by boat, long distances may have been covered in
a short time. Remember, however, we seem to have about three thousand
years between the early Syro-Cilician villages and Windmill Hill.
-Let me repeat Professor Childe again. The peoples of the West were
+Let me repeat Professor Childe again. “The peoples of the West were
not slavish imitators: they adapted the gifts from the East ... into
a new and organic whole capable of developing on its own original
-lines. Childe is of course completely conscious of the fact that his
-peoples of the West were in part the descendants of migrants who came
-originally from the East, bringing their gifts with them. This
+lines.” Childe is of course completely conscious of the fact that his
+“peoples of the West” were in part the descendants of migrants who came
+originally from the “East,” bringing their “gifts” with them. This
was the late prehistoric achievement of Europe--to take new ideas and
things and some migrant peoples and, by mixing them with the old in its
own environments, to forge a new and unique series of cultures.
@@ -4553,14 +4553,14 @@ things first happened there and also because I know it best.
There is another interesting thing, too. We have seen that the first
experiment in village-farming took place in the Near East. So did
-the first experiment in civilization. Both experiments took. The
+the first experiment in civilization. Both experiments “took.” The
traditions we live by today are based, ultimately, on those ancient
beginnings in food-production and civilization in the Near East.
-WHAT CIVILIZATION MEANS
+WHAT “CIVILIZATION” MEANS
-I shall not try to define civilization for you; rather, I shall
+I shall not try to define “civilization” for you; rather, I shall
tell you what the word brings to my mind. To me civilization means
urbanization: the fact that there are cities. It means a formal
political set-up--that there are kings or governing bodies that the
@@ -4606,7 +4606,7 @@ of Mexico, the Mayas of Yucatan and Guatemala, and the Incas of the
Andes were civilized.
-WHY DIDNT CIVILIZATION COME TO ALL FOOD-PRODUCERS?
+WHY DIDN’T CIVILIZATION COME TO ALL FOOD-PRODUCERS?
Once you have food-production, even at the well-advanced level of
the village-farming community, what else has to happen before you
@@ -4625,13 +4625,13 @@ early civilization, is still an open and very interesting question.
WHERE CIVILIZATION FIRST APPEARED IN THE NEAR EAST
You remember that our earliest village-farming communities lay along
-the hilly flanks of a great crescent. (See map on p. 125.)
-Professor Breasteds fertile crescent emphasized the rich river
+the hilly flanks of a great “crescent.” (See map on p. 125.)
+Professor Breasted’s “fertile crescent” emphasized the rich river
valleys of the Nile and the Tigris-Euphrates Rivers. Our hilly-flanks
area of the crescent zone arches up from Egypt through Palestine and
Syria, along southern Turkey into northern Iraq, and down along the
southwestern fringe of Iran. The earliest food-producing villages we
-know already existed in this area by about 6750 B.C. ( 200 years).
+know already existed in this area by about 6750 B.C. (± 200 years).
Now notice that this hilly-flanks zone does not include southern
Mesopotamia, the alluvial land of the lower Tigris and Euphrates in
@@ -4639,7 +4639,7 @@ Iraq, or the Nile Valley proper. The earliest known villages of classic
Mesopotamia and Egypt seem to appear fifteen hundred or more years
after those of the hilly-flanks zone. For example, the early Fayum
village which lies near a lake west of the Nile Valley proper (see p.
-135) has a radiocarbon date of 4275 B.C. 320 years. It was in the
+135) has a radiocarbon date of 4275 B.C. ± 320 years. It was in the
river lands, however, that the immediate beginnings of civilization
were made.
@@ -4657,8 +4657,8 @@ THE HILLY-FLANKS ZONE VERSUS THE RIVER LANDS
Why did these two civilizations spring up in these two river
lands which apparently were not even part of the area where the
-village-farming community began? Why didnt we have the first
-civilizations in Palestine, Syria, north Iraq, or Iran, where were
+village-farming community began? Why didn’t we have the first
+civilizations in Palestine, Syria, north Iraq, or Iran, where we’re
sure food-production had had a long time to develop? I think the
probable answer gives a clue to the ways in which civilization began in
Egypt and Mesopotamia.
@@ -4669,7 +4669,7 @@ and Syria. There are pleasant mountain slopes, streams running out to
the sea, and rain, at least in the winter months. The rain belt and the
foothills of the Turkish mountains also extend to northern Iraq and on
to the Iranian plateau. The Iranian plateau has its mountain valleys,
-streams, and some rain. These hilly flanks of the crescent, through
+streams, and some rain. These hilly flanks of the “crescent,” through
most of its arc, are almost made-to-order for beginning farmers. The
grassy slopes of the higher hills would be pasture for their herds
and flocks. As soon as the earliest experiments with agriculture and
@@ -4720,10 +4720,10 @@ Obviously, we can no longer find the first dikes or reservoirs of
the Nile Valley, or the first canals or ditches of Mesopotamia. The
same land has been lived on far too long for any traces of the first
attempts to be left; or, especially in Egypt, it has been covered by
-the yearly deposits of silt, dropped by the river floods. But were
+the yearly deposits of silt, dropped by the river floods. But we’re
pretty sure the first food-producers of Egypt and southern Mesopotamia
must have made such dikes, canals, and ditches. In the first place,
-there cant have been enough rain for them to grow things otherwise.
+there can’t have been enough rain for them to grow things otherwise.
In the second place, the patterns for such projects seem to have been
pretty well set by historic times.
@@ -4733,10 +4733,10 @@ CONTROL OF THE RIVERS THE BUSINESS OF EVERYONE
Here, then, is a _part_ of the reason why civilization grew in Egypt
and Mesopotamia first--not in Palestine, Syria, or Iran. In the latter
areas, people could manage to produce their food as individuals. It
-wasnt too hard; there were rain and some streams, and good pasturage
+wasn’t too hard; there were rain and some streams, and good pasturage
for the animals even if a crop or two went wrong. In Egypt and
Mesopotamia, people had to put in a much greater amount of work, and
-this work couldnt be individual work. Whole villages or groups of
+this work couldn’t be individual work. Whole villages or groups of
people had to turn out to fix dikes or dig ditches. The dikes had to be
repaired and the ditches carefully cleared of silt each year, or they
would become useless.
@@ -4745,7 +4745,7 @@ There also had to be hard and fast rules. The person who lived nearest
the ditch or the reservoir must not be allowed to take all the water
and leave none for his neighbors. It was not only a business of
learning to control the rivers and of making their waters do the
-farmers work. It also meant controlling men. But once these men had
+farmer’s work. It also meant controlling men. But once these men had
managed both kinds of controls, what a wonderful yield they had! The
soil was already fertile, and the silt which came in the floods and
ditches kept adding fertile soil.
@@ -4756,7 +4756,7 @@ THE GERM OF CIVILIZATION IN EGYPT AND MESOPOTAMIA
This learning to work together for the common good was the real germ of
the Egyptian and the Mesopotamian civilizations. The bare elements of
civilization were already there: the need for a governing hand and for
-laws to see that the communities work was done and that the water was
+laws to see that the communities’ work was done and that the water was
justly shared. You may object that there is a sort of chicken and egg
paradox in this idea. How could the people set up the rules until they
had managed to get a way to live, and how could they manage to get a
@@ -4781,12 +4781,12 @@ My explanation has been pointed particularly at Egypt and Mesopotamia.
I have already told you that the irrigation and water-control part of
it does not apply to the development of the Aztecs or the Mayas, or
perhaps anybody else. But I think that a fair part of the story of
-Egypt and Mesopotamia must be as Ive just told you.
+Egypt and Mesopotamia must be as I’ve just told you.
I am particularly anxious that you do _not_ understand me to mean that
irrigation _caused_ civilization. I am sure it was not that simple at
all. For, in fact, a complex and highly engineered irrigation system
-proper did not come until later times. Lets say rather that the simple
+proper did not come until later times. Let’s say rather that the simple
beginnings of irrigation allowed and in fact encouraged a great number
of things in the technological, political, social, and moral realms of
culture. We do not yet understand what all these things were or how
@@ -4842,7 +4842,7 @@ the mound which later became the holy Sumerian city of Eridu, Iraqi
archeologists uncovered a handsome painted pottery. Pottery of the same
type had been noticed earlier by German archeologists on the surface
of a small mound, awash in the spring floods, near the remains of the
-Biblical city of Erich (Sumerian = Uruk; Arabic = Warka). This Eridu
+Biblical city of Erich (Sumerian = Uruk; Arabic = Warka). This “Eridu”
pottery, which is about all we have of the assemblage of the people who
once produced it, may be seen as a blend of the Samarran and Halafian
painted pottery styles. This may over-simplify the case, but as yet we
@@ -4864,7 +4864,7 @@ seems to move into place before the Halaf manifestation is finished,
and to blend with it. The Ubaidian assemblage in the south is by far
the more spectacular. The development of the temple has been traced
at Eridu from a simple little structure to a monumental building some
-62 feet long, with a pilaster-decorated faade and an altar in its
+62 feet long, with a pilaster-decorated façade and an altar in its
central chamber. There is painted Ubaidian pottery, but the style is
hurried and somewhat careless and gives the _impression_ of having been
a cheap mass-production means of decoration when compared with the
@@ -4879,7 +4879,7 @@ turtle-like faces are another item in the southern Ubaidian assemblage.
There is a large Ubaid cemetery at Eridu, much of it still awaiting
excavation. The few skeletons so far tentatively studied reveal a
-completely modern type of Mediterraneanoid; the individuals whom the
+completely modern type of “Mediterraneanoid”; the individuals whom the
skeletons represent would undoubtedly blend perfectly into the modern
population of southern Iraq. What the Ubaidian assemblage says to us is
that these people had already adapted themselves and their culture to
@@ -4925,7 +4925,7 @@ woven stuffs must have been the mediums of exchange. Over what area did
the trading net-work of Ubaid extend? We start with the idea that the
Ubaidian assemblage is most richly developed in the south. We assume, I
think, correctly, that it represents a cultural flowering of the south.
-On the basis of the pottery of the still elusive Eridu immigrants
+On the basis of the pottery of the still elusive “Eridu” immigrants
who had first followed the rivers into alluvial Mesopotamia, we get
the notion that the characteristic painted pottery style of Ubaid
was developed in the southland. If this reconstruction is correct
@@ -4935,7 +4935,7 @@ assemblage of (and from the southern point of view, _fairly_ pure)
Ubaidian material in northern Iraq. The pottery appears all along the
Iranian flanks, even well east of the head of the Persian Gulf, and
ends in a later and spectacular flourish in an extremely handsome
-painted style called the Susa style. Ubaidian pottery has been noted
+painted style called the “Susa” style. Ubaidian pottery has been noted
up the valleys of both of the great rivers, well north of the Iraqi
and Syrian borders on the southern flanks of the Anatolian plateau.
It reaches the Mediterranean Sea and the valley of the Orontes in
@@ -4965,10 +4965,10 @@ Mesopotamia.
Next, much to our annoyance, we have what is almost a temporary
black-out. According to the system of terminology I favor, our next
-assemblage after that of Ubaid is called the _Warka_ phase, from
+“assemblage” after that of Ubaid is called the _Warka_ phase, from
the Arabic name for the site of Uruk or Erich. We know it only from
six or seven levels in a narrow test-pit at Warka, and from an even
-smaller hole at another site. This assemblage, so far, is known only
+smaller hole at another site. This “assemblage,” so far, is known only
by its pottery, some of which still bears Ubaidian style painting. The
characteristic Warkan pottery is unpainted, with smoothed red or gray
surfaces and peculiar shapes. Unquestionably, there must be a great
@@ -4979,7 +4979,7 @@ have to excavate it!
THE DAWN OF CIVILIZATION
After our exasperation with the almost unknown Warka interlude,
-following the brilliant false dawn of Ubaid, we move next to an
+following the brilliant “false dawn” of Ubaid, we move next to an
assemblage which yields traces of a preponderance of those elements
which we noted (p. 144) as meaning civilization. This assemblage
is that called _Proto-Literate_; it already contains writing. On
@@ -4988,8 +4988,8 @@ history--and no longer prehistory--the assemblage is named for the
historical implications of its content, and no longer after the name of
the site where it was first found. Since some of the older books used
site-names for this assemblage, I will tell you that the Proto-Literate
-includes the latter half of what used to be called the Uruk period
-_plus_ all of what used to be called the Jemdet Nasr period. It shows
+includes the latter half of what used to be called the “Uruk period”
+_plus_ all of what used to be called the “Jemdet Nasr period.” It shows
a consistent development from beginning to end.
I shall, in fact, leave much of the description and the historic
@@ -5033,18 +5033,18 @@ mental block seems to have been removed.
Clay tablets bearing pictographic signs are the Proto-Literate
forerunners of cuneiform writing. The earliest examples are not well
-understood but they seem to be devices for making accounts and
-for remembering accounts. Different from the later case in Egypt,
+understood but they seem to be “devices for making accounts and
+for remembering accounts.” Different from the later case in Egypt,
where writing appears fully formed in the earliest examples, the
development from simple pictographic signs to proper cuneiform writing
may be traced, step by step, in Mesopotamia. It is most probable
that the development of writing was connected with the temple and
-the need for keeping account of the temples possessions. Professor
+the need for keeping account of the temple’s possessions. Professor
Jacobsen sees writing as a means for overcoming space, time, and the
-increasing complications of human affairs: Literacy, which began
+increasing complications of human affairs: “Literacy, which began
with ... civilization, enhanced mightily those very tendencies in its
development which characterize it as a civilization and mark it off as
-such from other types of culture.
+such from other types of culture.”
[Illustration: RELIEF ON A PROTO-LITERATE STONE VASE, WARKA
@@ -5098,7 +5098,7 @@ civilized way of life.
I suppose you could say that the difference in the approach is that as
a prehistorian I have been looking forward or upward in time, while the
-historians look backward to glimpse what Ive been describing here. My
+historians look backward to glimpse what I’ve been describing here. My
base-line was half a million years ago with a being who had little more
than the capacity to make tools and fire to distinguish him from the
animals about him. Thus my point of view and that of the conventional
@@ -5114,17 +5114,17 @@ End of PREHISTORY
[Illustration]
-Youll doubtless easily recall your general course in ancient history:
+You’ll doubtless easily recall your general course in ancient history:
how the Sumerian dynasties of Mesopotamia were supplanted by those of
Babylonia, how the Hittite kingdom appeared in Anatolian Turkey, and
about the three great phases of Egyptian history. The literate kingdom
of Crete arose, and by 1500 B.C. there were splendid fortified Mycenean
towns on the mainland of Greece. This was the time--about the whole
eastern end of the Mediterranean--of what Professor Breasted called the
-first great internationalism, with flourishing trade, international
+“first great internationalism,” with flourishing trade, international
treaties, and royal marriages between Egyptians, Babylonians, and
-Hittites. By 1200 B.C., the whole thing had fragmented: the peoples of
-the sea were restless in their isles, and the great ancient centers in
+Hittites. By 1200 B.C., the whole thing had fragmented: “the peoples of
+the sea were restless in their isles,” and the great ancient centers in
Egypt, Mesopotamia, and Anatolia were eclipsed. Numerous smaller states
arose--Assyria, Phoenicia, Israel--and the Trojan war was fought.
Finally Assyria became the paramount power of all the Near East,
@@ -5135,7 +5135,7 @@ but casting them with its own tradition into a new mould, arose in
mainland Greece.
I once shocked my Classical colleagues to the core by referring to
-Greece as a second degree derived civilization, but there is much
+Greece as “a second degree derived civilization,” but there is much
truth in this. The principles of bronze- and then of iron-working, of
the alphabet, and of many other elements in Greek culture were borrowed
from western Asia. Our debt to the Greeks is too well known for me even
@@ -5146,7 +5146,7 @@ Greece fell in its turn to Rome, and in 55 B.C. Caesar invaded Britain.
I last spoke of Britain on page 142; I had chosen it as my single
example for telling you something of how the earliest farming
communities were established in Europe. Now I will continue with
-Britains later prehistory, so you may sense something of the end of
+Britain’s later prehistory, so you may sense something of the end of
prehistory itself. Remember that Britain is simply a single example
we select; the same thing could be done for all the other countries
of Europe, and will be possible also, some day, for further Asia and
@@ -5186,20 +5186,20 @@ few Battle-axe folk elements, including, in fact, stone battle-axes,
reached England with the earliest Beaker folk,[6] coming from the
Rhineland.
- [6] The British authors use the term Beaker folk to mean both
+  [6] The British authors use the term “Beaker folk” to mean both
archeological assemblage and human physical type. They speak
- of a ... tall, heavy-boned, rugged, and round-headed strain
+      of a “... tall, heavy-boned, rugged, and round-headed” strain
which they take to have developed, apparently in the Rhineland,
by a mixture of the original (Spanish?) beaker-makers and
the northeast European battle-axe makers. However, since the
science of physical anthropology is very much in flux at the
moment, and since I am not able to assess the evidence for these
- physical types, I _do not_ use the term folk in this book with
+      physical types, I _do not_ use the term “folk” in this book with
its usual meaning of standardized physical type. When I use
- folk here, I mean simply _the makers of a given archeological
+      “folk” here, I mean simply _the makers of a given archeological
assemblage_. The difficulty only comes when assemblages are
named for some item in them; it is too clumsy to make an
- adjective of the item and refer to a beakerian assemblage.
+      adjective of the item and refer to a “beakerian” assemblage.
The Beaker folk settled earliest in the agriculturally fertile south
and east. There seem to have been several phases of Beaker folk
@@ -5211,7 +5211,7 @@ folk are known. They buried their dead singly, sometimes in conspicuous
individual barrows with the dead warrior in his full trappings. The
spectacular element in the assemblage of the Beaker folk is a group
of large circular monuments with ditches and with uprights of wood or
-stone. These henges became truly monumental several hundred years
+stone. These “henges” became truly monumental several hundred years
later; while they were occasionally dedicated with a burial, they were
not primarily tombs. The effect of the invasion of the Beaker folk
seems to cut across the whole fabric of life in Britain.
@@ -5221,7 +5221,7 @@ seems to cut across the whole fabric of life in Britain.
There was, however, a second major element in British life at this
time. It shows itself in the less well understood traces of a group
again called after one of the items in their catalogue, the Food-vessel
-folk. There are many burials in these food-vessel pots in northern
+folk. There are many burials in these “food-vessel” pots in northern
England, Scotland, and Ireland, and the pottery itself seems to
link back to that of the Peterborough assemblage. Like the earlier
Peterborough people in the highland zone before them, the makers of
@@ -5238,8 +5238,8 @@ MORE INVASIONS
About 1500 B.C., the situation became further complicated by the
arrival of new people in the region of southern England anciently
called Wessex. The traces suggest the Brittany coast of France as a
-source, and the people seem at first to have been a small but heroic
-group of aristocrats. Their heroes are buried with wealth and
+source, and the people seem at first to have been a small but “heroic”
+group of aristocrats. Their “heroes” are buried with wealth and
ceremony, surrounded by their axes and daggers of bronze, their gold
ornaments, and amber and jet beads. These rich finds show that the
trade-linkage these warriors patronized spread from the Baltic sources
@@ -5265,10 +5265,10 @@ which must have been necessary before such a great monument could have
been built.
-THIS ENGLAND
+“THIS ENGLAND”
The range from 1900 to about 1400 B.C. includes the time of development
-of the archeological features usually called the Early Bronze Age
+of the archeological features usually called the “Early Bronze Age”
in Britain. In fact, traces of the Wessex warriors persisted down to
about 1200 B.C. The main regions of the island were populated, and the
adjustments to the highland and lowland zones were distinct and well
@@ -5279,7 +5279,7 @@ trading role, separated from the European continent but conveniently
adjacent to it. The tin of Cornwall--so important in the production
of good bronze--as well as the copper of the west and of Ireland,
taken with the gold of Ireland and the general excellence of Irish
-metal work, assured Britain a traders place in the then known world.
+metal work, assured Britain a trader’s place in the then known world.
Contacts with the eastern Mediterranean may have been by sea, with
Cornish tin as the attraction, or may have been made by the Food-vessel
middlemen on their trips to the Baltic coast. There they would have
@@ -5292,9 +5292,9 @@ relative isolation gave some peace and also gave time for a leveling
and further fusion of culture. The separate cultural traditions began
to have more in common. The growing of barley, the herding of sheep and
cattle, and the production of woolen garments were already features
-common to all Britains inhabitants save a few in the remote highlands,
+common to all Britain’s inhabitants save a few in the remote highlands,
the far north, and the distant islands not yet fully touched by
-food-production. The personality of Britain was being formed.
+food-production. The “personality of Britain” was being formed.
CREMATION BURIALS BEGIN
@@ -5325,9 +5325,9 @@ which we shall mention below.
The British cremation-burial-in-urns folk survived a long time in the
highland zone. In the general British scheme, they make up what is
-called the Middle Bronze Age, but in the highland zone they last
+called the “Middle Bronze Age,” but in the highland zone they last
until after 900 B.C. and are considered to be a specialized highland
-Late Bronze Age. In the highland zone, these later cremation-burial
+“Late Bronze Age.” In the highland zone, these later cremation-burial
folk seem to have continued the older Food-vessel tradition of being
middlemen in the metal market.
@@ -5379,12 +5379,12 @@ to get a picture of estate or tribal boundaries which included village
communities; we find a variety of tools in bronze, and even whetstones
which show that iron has been honed on them (although the scarce iron
has not been found). Let me give you the picture in Professor S.
-Piggotts words: The ... Late Bronze Age of southern England was but
+Piggott’s words: “The ... Late Bronze Age of southern England was but
the forerunner of the earliest Iron Age in the same region, not only in
the techniques of agriculture, but almost certainly in terms of ethnic
kinship ... we can with some assurance talk of the Celts ... the great
early Celtic expansion of the Continent is recognized to be that of the
-Urnfield people.
+Urnfield people.”
Thus, certainly by 500 B.C., there were people in Britain, some of
whose descendants we may recognize today in name or language in remote
@@ -5399,11 +5399,11 @@ efficient set of tools than does bronze. Iron tools seem first to
have been made in quantity in Hittite Anatolia about 1500 B.C. In
continental Europe, the earliest, so-called Hallstatt, iron-using
cultures appeared in Germany soon after 750 B.C. Somewhat later,
-Greek and especially Etruscan exports of _objets dart_--which moved
+Greek and especially Etruscan exports of _objets d’art_--which moved
with a flourishing trans-Alpine wine trade--influenced the Hallstatt
iron-working tradition. Still later new classical motifs, together with
older Hallstatt, oriental, and northern nomad motifs, gave rise to a
-new style in metal decoration which characterizes the so-called La Tne
+new style in metal decoration which characterizes the so-called La Tène
phase.
A few iron users reached Britain a little before 400 B.C. Not long
@@ -5422,7 +5422,7 @@ HILL-FORTS AND FARMS
The earliest iron-users seem to have entrenched themselves temporarily
within hill-top forts, mainly in the south. Gradually, they moved
inland, establishing _individual_ farm sites with extensive systems
-of rectangular fields. We recognize these fields by the lynchets or
+of rectangular fields. We recognize these fields by the “lynchets” or
lines of soil-creep which plowing left on the slopes of hills. New
crops appeared; there were now bread wheat, oats, and rye, as well as
barley.
@@ -5434,7 +5434,7 @@ various outbuildings and pits for the storage of grain. Weaving was
done on the farm, but not blacksmithing, which must have been a
specialized trade. Save for the lack of firearms, the place might
almost be taken for a farmstead on the American frontier in the early
-1800s.
+1800’s.
Toward 250 B.C. there seems to have been a hasty attempt to repair the
hill-forts and to build new ones, evidently in response to signs of
@@ -5446,9 +5446,9 @@ THE SECOND PHASE
Perhaps the hill-forts were not entirely effective or perhaps a
compromise was reached. In any case, the newcomers from the Marne
district did establish themselves, first in the southeast and then to
-the north and west. They brought iron with decoration of the La Tne
+the north and west. They brought iron with decoration of the La Tène
type and also the two-wheeled chariot. Like the Wessex warriors of
-over a thousand years earlier, they made heroes graves, with their
+over a thousand years earlier, they made “heroes’” graves, with their
warriors buried in the war-chariots and dressed in full trappings.
[Illustration: CELTIC BUCKLE]
@@ -5457,7 +5457,7 @@ The metal work of these Marnian newcomers is excellent. The peculiar
Celtic art style, based originally on the classic tendril motif,
is colorful and virile, and fits with Greek and Roman descriptions
of Celtic love of color in dress. There is a strong trace of these
-newcomers northward in Yorkshire, linked by Ptolemys description to
+newcomers northward in Yorkshire, linked by Ptolemy’s description to
the Parisii, doubtless part of the Celtic tribe which originally gave
its name to Paris on the Seine. Near Glastonbury, in Somerset, two
villages in swamps have been excavated. They seem to date toward the
@@ -5469,7 +5469,7 @@ villagers.
In Scotland, which yields its first iron tools at a date of about 100
B.C., and in northern Ireland even slightly earlier, the effects of the
-two phases of newcomers tend especially to blend. Hill-forts, brochs
+two phases of newcomers tend especially to blend. Hill-forts, “brochs”
(stone-built round towers) and a variety of other strange structures
seem to appear as the new ideas develop in the comparative isolation of
northern Britain.
@@ -5493,27 +5493,27 @@ at last, we can even begin to speak of dynasties and individuals.
Some time before 55 B.C., the Catuvellauni, originally from the Marne
district in France, had possessed themselves of a large part of
southeastern England. They evidently sailed up the Thames and built a
-town of over a hundred acres in area. Here ruled Cassivellaunus, the
-first man in England whose name we know, and whose town Caesar sacked.
+town of over a hundred acres in area. Here ruled Cassivellaunus, “the
+first man in England whose name we know,” and whose town Caesar sacked.
The town sprang up elsewhere again, however.
THE END OF PREHISTORY
Prehistory, strictly speaking, is now over in southern Britain.
-Claudius effective invasion took place in 43 A.D.; by 83 A.D., a raid
+Claudius’ effective invasion took place in 43 A.D.; by 83 A.D., a raid
had been made as far north as Aberdeen in Scotland. But by 127 A.D.,
Hadrian had completed his wall from the Solway to the Tyne, and the
Romans settled behind it. In Scotland, Romanization can have affected
-the countryside very little. Professor Piggott adds that ... it is
+the countryside very little. Professor Piggott adds that “... it is
when the pressure of Romanization is relaxed by the break-up of the
Dark Ages that we see again the Celtic metal-smiths handling their
material with the same consummate skill as they had before the Roman
Conquest, and with traditional styles that had not even then forgotten
-their Marnian and Belgic heritage.
+their Marnian and Belgic heritage.”
In fact, many centuries go by, in Britain as well as in the rest of
-Europe, before the archeologists task is complete and the historian on
+Europe, before the archeologist’s task is complete and the historian on
his own is able to describe the ways of men in the past.
@@ -5524,7 +5524,7 @@ you will have noticed how often I had to refer to the European
continent itself. Britain, beyond the English Channel for all of her
later prehistory, had a much simpler course of events than did most of
the rest of Europe in later prehistoric times. This holds, in spite
-of all the invasions and reverberations from the continent. Most
+of all the “invasions” and “reverberations” from the continent. Most
of Europe was the scene of an even more complicated ebb and flow of
cultural change, save in some of its more remote mountain valleys and
peninsulas.
@@ -5536,7 +5536,7 @@ accounts and some good general accounts of part of the range from about
3000 B.C. to A.D. 1. I suspect that the difficulty of making a good
book that covers all of its later prehistory is another aspect of what
makes Europe so very complicated a continent today. The prehistoric
-foundations for Europes very complicated set of civilizations,
+foundations for Europe’s very complicated set of civilizations,
cultures, and sub-cultures--which begin to appear as history
proceeds--were in themselves very complicated.
@@ -5552,8 +5552,8 @@ of their journeys. But by the same token, they had had time en route to
take on their characteristic European aspects.
Some time ago, Sir Cyril Fox wrote a famous book called _The
-Personality of Britain_, sub-titled Its Influence on Inhabitant and
-Invader in Prehistoric and Early Historic Times. We have not gone
+Personality of Britain_, sub-titled “Its Influence on Inhabitant and
+Invader in Prehistoric and Early Historic Times.” We have not gone
into the post-Roman early historic period here; there are still the
Anglo-Saxons and Normans to account for as well as the effects of
the Romans. But what I have tried to do was to begin the story of
@@ -5570,7 +5570,7 @@ Summary
In the pages you have read so far, you have been brought through the
-earliest 99 per cent of the story of mans life on this planet. I have
+earliest 99 per cent of the story of man’s life on this planet. I have
left only 1 per cent of the story for the historians to tell.
@@ -5601,7 +5601,7 @@ But I think there may have been a few. Certainly the pace of the
first act accelerated with the swing from simple gathering to more
intensified collecting. The great cave art of France and Spain was
probably an expression of a climax. Even the ideas of burying the dead
-and of the Venus figurines must also point to levels of human thought
+and of the “Venus” figurines must also point to levels of human thought
and activity that were over and above pure food-getting.
@@ -5629,7 +5629,7 @@ five thousand years after the second act began. But it could never have
happened in the first act at all.
There is another curious thing about the first act. Many of the players
-didnt know it was over and they kept on with their roles long after
+didn’t know it was over and they kept on with their roles long after
the second act had begun. On the edges of the stage there are today
some players who are still going on with the first act. The Eskimos,
and the native Australians, and certain tribes in the Amazon jungle are
@@ -5680,20 +5680,20 @@ act may have lessons for us and give depth to our thinking. I know
there are at least _some_ lessons, even in the present incomplete
state of our knowledge. The players who began the second act--that of
food-production--separately, in different parts of the world, were not
-all of one pure race nor did they have pure cultural traditions.
+all of one “pure race” nor did they have “pure” cultural traditions.
Some apparently quite mixed Mediterraneans got off to the first start
on the second act and brought it to its first two climaxes as well.
Peoples of quite different physical type achieved the first climaxes in
China and in the New World.
In our British example of how the late prehistory of Europe worked, we
-listed a continuous series of invasions and reverberations. After
+listed a continuous series of “invasions” and “reverberations.” After
each of these came fusion. Even though the Channel protected Britain
from some of the extreme complications of the mixture and fusion of
continental Europe, you can see how silly it would be to refer to a
-pure British race or a pure British culture. We speak of the United
-States as a melting pot. But this is nothing new. Actually, Britain
-and all the rest of the world have been melting pots at one time or
+“pure” British race or a “pure” British culture. We speak of the United
+States as a “melting pot.” But this is nothing new. Actually, Britain
+and all the rest of the world have been “melting pots” at one time or
another.
By the time the written records of Mesopotamia and Egypt begin to turn
@@ -5703,12 +5703,12 @@ itself, we are thrown back on prehistoric archeology. And this is as
true for China, India, Middle America, and the Andes, as it is for the
Near East.
-There are lessons to be learned from all of mans past, not simply
+There are lessons to be learned from all of man’s past, not simply
lessons of how to fight battles or win peace conferences, but of how
human society evolves from one stage to another. Many of these lessons
can only be looked for in the prehistoric past. So far, we have only
made a beginning. There is much still to do, and many gaps in the story
-are yet to be filled. The prehistorians job is to find the evidence,
+are yet to be filled. The prehistorian’s job is to find the evidence,
to fill the gaps, and to discover the lessons men have learned in the
past. As I see it, this is not only an exciting but a very practical
goal for which to strive.
@@ -5745,7 +5745,7 @@ paperbound books.)
GEOCHRONOLOGY AND THE ICE AGE
-(Two general books. Some Pleistocene geologists disagree with Zeuners
+(Two general books. Some Pleistocene geologists disagree with Zeuner’s
interpretation of the dating evidence, but their points of view appear
in professional journals, in articles too cumbersome to list here.)
@@ -5815,7 +5815,7 @@ GENERAL PREHISTORY
Press.
Movius, Hallam L., Jr.
- Old World Prehistory: Paleolithic in _Anthropology Today_.
+ “Old World Prehistory: Paleolithic” in _Anthropology Today_.
Kroeber, A. L., ed. 1953. University of Chicago Press.
Oakley, Kenneth P.
@@ -5826,7 +5826,7 @@ GENERAL PREHISTORY
_British Prehistory._ 1949. Oxford University Press.
Pittioni, Richard
- _Die Urgeschichtlichen Grundlagen der Europischen Kultur._
+ _Die Urgeschichtlichen Grundlagen der Europäischen Kultur._
1949. Deuticke. (A single book which does attempt to cover the
whole range of European prehistory to ca. 1 A.D.)
@@ -5834,7 +5834,7 @@ GENERAL PREHISTORY
THE NEAR EAST
Adams, Robert M.
- Developmental Stages in Ancient Mesopotamia, _in_ Steward,
+ “Developmental Stages in Ancient Mesopotamia,” _in_ Steward,
Julian, _et al_, _Irrigation Civilizations: A Comparative
Study_. 1955. Pan American Union.
@@ -6000,7 +6000,7 @@ Index
Bolas, 54
- Bordes, Franois, 62
+ Bordes, François, 62
Borer, 77
@@ -6028,7 +6028,7 @@ Index
killed by stampede, 86
Burials, 66, 86;
- in henges, 164;
+ in “henges,” 164;
in urns, 168
Burins, 75
@@ -6085,7 +6085,7 @@ Index
Combe Capelle, 30
- Combe Capelle-Brnn group, 34
+ Combe Capelle-Brünn group, 34
Commont, Victor, 51
@@ -6097,7 +6097,7 @@ Index
Corrals for cattle, 140
- Cradle of mankind, 136
+ “Cradle of mankind,” 136
Cremation, 167
@@ -6123,7 +6123,7 @@ Index
Domestication, of animals, 100, 105, 107;
of plants, 100
- Dragon teeth fossils in China, 28
+ “Dragon teeth” fossils in China, 28
Drill, 77
@@ -6176,9 +6176,9 @@ Index
Fayum, 135;
radiocarbon date, 146
- Fertile Crescent, 107, 146
+ “Fertile Crescent,” 107, 146
- Figurines, Venus, 84;
+ Figurines, “Venus,” 84;
at Jarmo, 128;
at Ubaid, 153
@@ -6197,7 +6197,7 @@ Index
Flint industry, 127
- Fontchevade, 32, 56, 58
+ Fontéchevade, 32, 56, 58
Food-collecting, 104, 121;
end of, 104
@@ -6223,7 +6223,7 @@ Index
Food-vessel folk, 164
- Forest folk, 97, 98, 104, 110
+ “Forest folk,” 97, 98, 104, 110
Fox, Sir Cyril, 174
@@ -6379,7 +6379,7 @@ Index
Land bridges in Mediterranean, 19
- La Tne phase, 170
+ La Tène phase, 170
Laurel leaf point, 78, 89
@@ -6404,7 +6404,7 @@ Index
Mammoth, 93;
in cave art, 85
- Man-apes, 26
+ “Man-apes,” 26
Mango, 107
@@ -6435,7 +6435,7 @@ Index
Microliths, 87;
at Jarmo, 130;
- lunates, 87;
+ “lunates,” 87;
trapezoids, 87;
triangles, 87
@@ -6443,7 +6443,7 @@ Index
Mine-shafts, 140
- Mlefaat, 126, 127
+ M’lefaat, 126, 127
Mongoloids, 29, 90
@@ -6453,9 +6453,9 @@ Index
Mount Carmel, 11, 33, 52, 59, 64, 69, 113, 114
- Mousterian man, 64
+ “Mousterian man,” 64
- Mousterian tools, 61, 62;
+ “Mousterian” tools, 61, 62;
of Acheulean tradition, 62
Movius, H. L., 47
@@ -6471,7 +6471,7 @@ Index
Near East, beginnings of civilization in, 20, 144;
cave sites, 58;
climate in Ice Age, 99;
- Fertile Crescent, 107, 146;
+ “Fertile Crescent,” 107, 146;
food-production in, 99;
Natufian assemblage in, 113-115;
stone tools, 114
@@ -6539,7 +6539,7 @@ Index
Pig, wild, 108
- Piltdown man, 29
+ “Piltdown man,” 29
Pins, 80
@@ -6578,7 +6578,7 @@ Index
Race, 35;
biological, 36;
- pure, 16
+ “pure,” 16
Radioactivity, 9, 10
@@ -6795,7 +6795,7 @@ Index
Writing, 158;
cuneiform, 158
- Wrm I glaciation, 58
+ Würm I glaciation, 58
Zebu cattle, domestication of, 107
@@ -6810,7 +6810,7 @@ Index
-Transcribers note:
+Transcriber’s note:
Punctuation, hyphenation, and spelling were made consistent when a
predominant preference was found in this book; otherwise they were not
diff --git a/ciphers/transposition_cipher_encrypt_decrypt_file.py b/ciphers/transposition_cipher_encrypt_decrypt_file.py
index 6296b1e6d709..b9630243d7f3 100644
--- a/ciphers/transposition_cipher_encrypt_decrypt_file.py
+++ b/ciphers/transposition_cipher_encrypt_decrypt_file.py
@@ -6,8 +6,8 @@
def main() -> None:
- input_file = "Prehistoric Men.txt"
- output_file = "Output.txt"
+ input_file = "./prehistoric_men.txt"
+ output_file = "./Output.txt"
key = int(input("Enter key: "))
mode = input("Encrypt/Decrypt [e/d]: ")
From 24dbdd0b88bdfd4ddb940cf0b681075c66842cc3 Mon Sep 17 00:00:00 2001
From: Raghav <83136390+Raghav-Bell@users.noreply.github.com>
Date: Wed, 4 Oct 2023 11:38:13 +0530
Subject: [PATCH 249/808] Update coulombs_law.py docs (#9667)
* Update coulombs_law.py
distance is a positive, non-zero real number (float type), hence corrected the docs, which previously said integer.
* Update physics/coulombs_law.py
---------
Co-authored-by: Tianyi Zheng
---
physics/coulombs_law.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/physics/coulombs_law.py b/physics/coulombs_law.py
index 252e8ec0f74e..fe2d358f653e 100644
--- a/physics/coulombs_law.py
+++ b/physics/coulombs_law.py
@@ -32,7 +32,7 @@ def coulombs_law(q1: float, q2: float, radius: float) -> float:
17975103584.6
"""
if radius <= 0:
- raise ValueError("The radius is always a positive non zero integer")
+ raise ValueError("The radius is always a positive number")
return round(((8.9875517923 * 10**9) * q1 * q2) / (radius**2), 2)
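
For context, the guard above rejects any non-positive radius before applying
Coulomb's law, F = k * q1 * q2 / r^2 with k ≈ 8.9875517923e9 N·m²/C². A minimal
usage sketch, assuming the module layout above (the import path is illustrative):

    from physics.coulombs_law import coulombs_law

    print(coulombs_law(1, 1, 1))  # 8987551792.3, i.e. k itself, since q1 = q2 = r = 1
    try:
        coulombs_law(1, 1, 0)
    except ValueError as err:
        print(err)  # The radius is always a positive number
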
From 3fd3497f15982a7286326b520b5e7b52767da1f3 Mon Sep 17 00:00:00 2001
From: Siddhant Totade
Date: Wed, 4 Oct 2023 14:55:26 +0530
Subject: [PATCH 250/808] Add Comments (#9668)
* docs : add comment in circular_linked_list.py and swap_nodes.py
* docs : improve comments
* docs : improved docs and tested on pre-commit
* docs : add comment in circular_linked_list.py and swap_nodes.py
* docs : improve comments
* docs : improved docs and tested on pre-commit
* docs : modified comments
* Update circular_linked_list.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* docs : improved
* Update data_structures/linked_list/circular_linked_list.py
Co-authored-by: Christian Clauss
* Update data_structures/linked_list/circular_linked_list.py
Co-authored-by: Christian Clauss
* Update data_structures/linked_list/swap_nodes.py
Co-authored-by: Christian Clauss
* Update data_structures/linked_list/swap_nodes.py
Co-authored-by: Christian Clauss
* Update data_structures/linked_list/swap_nodes.py
Co-authored-by: Christian Clauss
* Update data_structures/linked_list/swap_nodes.py
Co-authored-by: Christian Clauss
* Update requirements.txt
Co-authored-by: Christian Clauss
* Update data_structures/linked_list/circular_linked_list.py
Co-authored-by: Christian Clauss
* Apply suggestions from code review
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update circular_linked_list.py
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
.../linked_list/circular_linked_list.py | 87 ++++++++++++++++---
data_structures/linked_list/swap_nodes.py | 47 ++++++++--
2 files changed, 113 insertions(+), 21 deletions(-)
diff --git a/data_structures/linked_list/circular_linked_list.py b/data_structures/linked_list/circular_linked_list.py
index d9544f4263a6..72212f46be15 100644
--- a/data_structures/linked_list/circular_linked_list.py
+++ b/data_structures/linked_list/circular_linked_list.py
@@ -6,16 +6,29 @@
class Node:
def __init__(self, data: Any):
+ """
+ Initialize a new Node with the given data.
+ Args:
+ data: The data to be stored in the node.
+ """
self.data: Any = data
- self.next: Node | None = None
+ self.next: Node | None = None # Reference to the next node
class CircularLinkedList:
- def __init__(self):
- self.head = None
- self.tail = None
+ def __init__(self) -> None:
+ """
+ Initialize an empty Circular Linked List.
+ """
+ self.head = None # Reference to the head (first node)
+ self.tail = None # Reference to the tail (last node)
def __iter__(self) -> Iterator[Any]:
+ """
+ Iterate through all nodes in the Circular Linked List yielding their data.
+ Yields:
+ The data of each node in the linked list.
+ """
node = self.head
while self.head:
yield node.data
@@ -24,25 +37,48 @@ def __iter__(self) -> Iterator[Any]:
break
def __len__(self) -> int:
+ """
+ Get the length (number of nodes) in the Circular Linked List.
+ """
return sum(1 for _ in self)
- def __repr__(self):
+ def __repr__(self) -> str:
+ """
+ Generate a string representation of the Circular Linked List.
+ Returns:
+ A string of the format "1->2->....->N".
+ """
return "->".join(str(item) for item in iter(self))
def insert_tail(self, data: Any) -> None:
+ """
+ Insert a node with the given data at the end of the Circular Linked List.
+ """
self.insert_nth(len(self), data)
def insert_head(self, data: Any) -> None:
+ """
+ Insert a node with the given data at the beginning of the Circular Linked List.
+ """
self.insert_nth(0, data)
def insert_nth(self, index: int, data: Any) -> None:
+ """
+ Insert the data of the node at the nth position in the Circular Linked List.
+ Args:
+ index: The index at which the data should be inserted.
+ data: The data to be inserted.
+
+ Raises:
+ IndexError: If the index is out of range.
+ """
if index < 0 or index > len(self):
raise IndexError("list index out of range.")
new_node = Node(data)
if self.head is None:
- new_node.next = new_node # first node points itself
+ new_node.next = new_node # First node points to itself
self.tail = self.head = new_node
- elif index == 0: # insert at head
+ elif index == 0: # Insert at the head
new_node.next = self.head
self.head = self.tail.next = new_node
else:
@@ -51,22 +87,43 @@ def insert_nth(self, index: int, data: Any) -> None:
temp = temp.next
new_node.next = temp.next
temp.next = new_node
- if index == len(self) - 1: # insert at tail
+ if index == len(self) - 1: # Insert at the tail
self.tail = new_node
- def delete_front(self):
+ def delete_front(self) -> Any:
+ """
+ Delete and return the data of the node at the front of the Circular Linked List.
+ Raises:
+ IndexError: If the list is empty.
+ """
return self.delete_nth(0)
def delete_tail(self) -> Any:
+ """
+ Delete and return the data of the node at the end of the Circular Linked List.
+ Returns:
+ Any: The data of the deleted node.
+ Raises:
+ IndexError: If the index is out of range.
+ """
return self.delete_nth(len(self) - 1)
def delete_nth(self, index: int = 0) -> Any:
+ """
+ Delete and return the data of the nth node in the Circular Linked List.
+ Args:
+ index (int): The index of the node to be deleted. Defaults to 0.
+ Returns:
+ Any: The data of the deleted node.
+ Raises:
+ IndexError: If the index is out of range.
+ """
if not 0 <= index < len(self):
raise IndexError("list index out of range.")
delete_node = self.head
- if self.head == self.tail: # just one node
+ if self.head == self.tail: # Just one node
self.head = self.tail = None
- elif index == 0: # delete head node
+ elif index == 0: # Delete head node
self.tail.next = self.tail.next.next
self.head = self.head.next
else:
@@ -75,16 +132,22 @@ def delete_nth(self, index: int = 0) -> Any:
temp = temp.next
delete_node = temp.next
temp.next = temp.next.next
- if index == len(self) - 1: # delete at tail
+ if index == len(self) - 1: # Delete at tail
self.tail = temp
return delete_node.data
def is_empty(self) -> bool:
+ """
+ Check if the Circular Linked List is empty.
+ Returns:
+ bool: True if the list is empty, False otherwise.
+ """
return len(self) == 0
def test_circular_linked_list() -> None:
"""
+ Test cases for the CircularLinkedList class.
>>> test_circular_linked_list()
"""
circular_linked_list = CircularLinkedList()
diff --git a/data_structures/linked_list/swap_nodes.py b/data_structures/linked_list/swap_nodes.py
index 3f825756b3d2..da6aa07a79fd 100644
--- a/data_structures/linked_list/swap_nodes.py
+++ b/data_structures/linked_list/swap_nodes.py
@@ -2,30 +2,56 @@
class Node:
- def __init__(self, data: Any):
+ def __init__(self, data: Any) -> None:
+ """
+ Initialize a new Node with the given data.
+
+ Args:
+ data: The data to be stored in the node.
+
+ """
self.data = data
- self.next = None
+ self.next = None # Reference to the next node
class LinkedList:
- def __init__(self):
- self.head = None
+ def __init__(self) -> None:
+ """
+ Initialize an empty Linked List.
+ """
+ self.head = None # Reference to the head (first node)
def print_list(self):
+ """
+ Print the elements of the Linked List in order.
+ """
temp = self.head
while temp is not None:
print(temp.data, end=" ")
temp = temp.next
print()
- # adding nodes
- def push(self, new_data: Any):
+ def push(self, new_data: Any) -> None:
+ """
+ Add a new node with the given data to the beginning of the Linked List.
+ Args:
+ new_data (Any): The data to be added to the new node.
+ """
new_node = Node(new_data)
new_node.next = self.head
self.head = new_node
- # swapping nodes
- def swap_nodes(self, node_data_1, node_data_2):
+ def swap_nodes(self, node_data_1, node_data_2) -> None:
+ """
+ Swap the positions of two nodes in the Linked List based on their data values.
+ Args:
+ node_data_1: Data value of the first node to be swapped.
+ node_data_2: Data value of the second node to be swapped.
+
+
+ Note:
+ If either of the specified data values isn't found, no swapping occurs.
+ """
if node_data_1 == node_data_2:
return
else:
@@ -40,6 +66,7 @@ def swap_nodes(self, node_data_1, node_data_2):
if node_1 is None or node_2 is None:
return
+ # Swap the data values of the two nodes
node_1.data, node_2.data = node_2.data, node_1.data
@@ -48,8 +75,10 @@ def swap_nodes(self, node_data_1, node_data_2):
for i in range(5, 0, -1):
ll.push(i)
+ print("Original Linked List:")
ll.print_list()
ll.swap_nodes(1, 4)
- print("After swapping")
+ print("After swapping the nodes whose data is 1 and 4:")
+
ll.print_list()
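
A minimal interactive sketch of the CircularLinkedList documented above
(illustrative only; it assumes the class exactly as defined in this patch):

    cll = CircularLinkedList()
    for value in (1, 2, 3):
        cll.insert_tail(value)
    cll.insert_head(0)
    print(repr(cll))           # 0->1->2->3
    print(len(cll))            # 4
    print(cll.delete_front())  # 0
    print(cll.delete_tail())   # 3
    print(repr(cll))           # 1->2
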
From dfdd78135df938d948ba3044aca628aca08886e7 Mon Sep 17 00:00:00 2001
From: Tianyi Zheng
Date: Wed, 4 Oct 2023 12:05:00 -0400
Subject: [PATCH 251/808] Fix mypy errors in circular_linked_list.py and
swap_nodes.py (#9707)
* updating DIRECTORY.md
* Fix mypy errors in circular_linked_list.py
* Fix mypy errors in swap_nodes.py
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 5 ++---
.../linked_list/circular_linked_list.py | 22 +++++++++++++------
data_structures/linked_list/swap_nodes.py | 4 ++--
3 files changed, 19 insertions(+), 12 deletions(-)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 9a913aa786e1..4f4cc423d678 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -541,8 +541,8 @@
* [Basic Maths](maths/basic_maths.py)
* [Binary Exp Mod](maths/binary_exp_mod.py)
* [Binary Exponentiation](maths/binary_exponentiation.py)
- * [Binary Exponentiation 2](maths/binary_exponentiation_2.py)
* [Binary Exponentiation 3](maths/binary_exponentiation_3.py)
+ * [Binary Multiplication](maths/binary_multiplication.py)
* [Binomial Coefficient](maths/binomial_coefficient.py)
* [Binomial Distribution](maths/binomial_distribution.py)
* [Bisection](maths/bisection.py)
@@ -557,8 +557,7 @@
* [Decimal Isolate](maths/decimal_isolate.py)
* [Decimal To Fraction](maths/decimal_to_fraction.py)
* [Dodecahedron](maths/dodecahedron.py)
- * [Double Factorial Iterative](maths/double_factorial_iterative.py)
- * [Double Factorial Recursive](maths/double_factorial_recursive.py)
+ * [Double Factorial](maths/double_factorial.py)
* [Dual Number Automatic Differentiation](maths/dual_number_automatic_differentiation.py)
* [Entropy](maths/entropy.py)
* [Euclidean Distance](maths/euclidean_distance.py)
diff --git a/data_structures/linked_list/circular_linked_list.py b/data_structures/linked_list/circular_linked_list.py
index 72212f46be15..ef6658733a95 100644
--- a/data_structures/linked_list/circular_linked_list.py
+++ b/data_structures/linked_list/circular_linked_list.py
@@ -20,8 +20,8 @@ def __init__(self) -> None:
"""
Initialize an empty Circular Linked List.
"""
- self.head = None # Reference to the head (first node)
- self.tail = None # Reference to the tail (last node)
+ self.head: Node | None = None # Reference to the head (first node)
+ self.tail: Node | None = None # Reference to the tail (last node)
def __iter__(self) -> Iterator[Any]:
"""
@@ -30,7 +30,7 @@ def __iter__(self) -> Iterator[Any]:
The data of each node in the linked list.
"""
node = self.head
- while self.head:
+ while node:
yield node.data
node = node.next
if node == self.head:
@@ -74,17 +74,20 @@ def insert_nth(self, index: int, data: Any) -> None:
"""
if index < 0 or index > len(self):
raise IndexError("list index out of range.")
- new_node = Node(data)
+ new_node: Node = Node(data)
if self.head is None:
new_node.next = new_node # First node points to itself
self.tail = self.head = new_node
elif index == 0: # Insert at the head
new_node.next = self.head
+ assert self.tail is not None # List is not empty, tail exists
self.head = self.tail.next = new_node
else:
- temp = self.head
+ temp: Node | None = self.head
for _ in range(index - 1):
+ assert temp is not None
temp = temp.next
+ assert temp is not None
new_node.next = temp.next
temp.next = new_node
if index == len(self) - 1: # Insert at the tail
@@ -120,16 +123,21 @@ def delete_nth(self, index: int = 0) -> Any:
"""
if not 0 <= index < len(self):
raise IndexError("list index out of range.")
- delete_node = self.head
+
+ assert self.head is not None and self.tail is not None
+ delete_node: Node = self.head
if self.head == self.tail: # Just one node
self.head = self.tail = None
elif index == 0: # Delete head node
+ assert self.tail.next is not None
self.tail.next = self.tail.next.next
self.head = self.head.next
else:
- temp = self.head
+ temp: Node | None = self.head
for _ in range(index - 1):
+ assert temp is not None
temp = temp.next
+ assert temp is not None and temp.next is not None
delete_node = temp.next
temp.next = temp.next.next
if index == len(self) - 1: # Delete at tail
diff --git a/data_structures/linked_list/swap_nodes.py b/data_structures/linked_list/swap_nodes.py
index da6aa07a79fd..31dcb02bfa9a 100644
--- a/data_structures/linked_list/swap_nodes.py
+++ b/data_structures/linked_list/swap_nodes.py
@@ -11,7 +11,7 @@ def __init__(self, data: Any) -> None:
"""
self.data = data
- self.next = None # Reference to the next node
+ self.next: Node | None = None # Reference to the next node
class LinkedList:
@@ -19,7 +19,7 @@ def __init__(self) -> None:
"""
Initialize an empty Linked List.
"""
- self.head = None # Reference to the head (first node)
+ self.head: Node | None = None # Reference to the head (first node)
def print_list(self):
"""
From d74349793b613b0948608409a572426a9800c3a1 Mon Sep 17 00:00:00 2001
From: halfhearted <99018821+Arunsiva003@users.noreply.github.com>
Date: Wed, 4 Oct 2023 22:09:28 +0530
Subject: [PATCH 252/808] Arunsiva003 patch 1 flatten tree (#9695)
* infix to prefix missing feature added
* infix to prefix missing feature added
* infix to prefix missing feature added
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* infix to prefix missing feature added (comments)
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* infix to prefix missing feature added (comments)
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* newly updated infix_to_prefix
* newly updated infix_to_prefix_2
* newly updated infix_to_prefix_3
* from the beginning
* Created flatten_binarytree_to_linkedlist.py
* Update flatten_binarytree_to_linkedlist.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update flatten_binarytree_to_linkedlist.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update flatten_binarytree_to_linkedlist.py
* Update flatten_binarytree_to_linkedlist.py
* Update flatten_binarytree_to_linkedlist.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update flatten_binarytree_to_linkedlist.py (space added)
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update flatten_binarytree_to_linkedlist.py space added
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update flatten_binarytree_to_linkedlist.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* flatten binary tree to linked list - 1
* flatten binary tree to linked list final
* flatten binary tree to linked list final
* review updated
* Update flatten_binarytree_to_linkedlist.py
* Update .pre-commit-config.yaml
* Update flatten_binarytree_to_linkedlist.py
* Update flatten_binarytree_to_linkedlist.py
---------
Co-authored-by: ArunSiva
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
.../flatten_binarytree_to_linkedlist.py | 138 ++++++++++++++++++
1 file changed, 138 insertions(+)
create mode 100644 data_structures/binary_tree/flatten_binarytree_to_linkedlist.py
diff --git a/data_structures/binary_tree/flatten_binarytree_to_linkedlist.py b/data_structures/binary_tree/flatten_binarytree_to_linkedlist.py
new file mode 100644
index 000000000000..8820a509ecba
--- /dev/null
+++ b/data_structures/binary_tree/flatten_binarytree_to_linkedlist.py
@@ -0,0 +1,138 @@
+"""
+Binary Tree Flattening Algorithm
+
+This code defines an algorithm to flatten a binary tree into a linked list
+represented using the right pointers of the tree nodes. It uses in-place
+flattening and demonstrates the flattening process along with a display
+function to visualize the flattened linked list.
+https://www.geeksforgeeks.org/flatten-a-binary-tree-into-linked-list
+
+Author: Arunkumar A
+Date: 04/09/2023
+"""
+from __future__ import annotations
+
+
+class TreeNode:
+ """
+ A TreeNode has data variable and pointers to TreeNode objects
+ for its left and right children.
+ """
+
+ def __init__(self, data: int) -> None:
+ self.data = data
+ self.left: TreeNode | None = None
+ self.right: TreeNode | None = None
+
+
+def build_tree() -> TreeNode:
+ """
+ Build and return a sample binary tree.
+
+ Returns:
+ TreeNode: The root of the binary tree.
+
+ Examples:
+ >>> root = build_tree()
+ >>> root.data
+ 1
+ >>> root.left.data
+ 2
+ >>> root.right.data
+ 5
+ >>> root.left.left.data
+ 3
+ >>> root.left.right.data
+ 4
+ >>> root.right.right.data
+ 6
+ """
+ root = TreeNode(1)
+ root.left = TreeNode(2)
+ root.right = TreeNode(5)
+ root.left.left = TreeNode(3)
+ root.left.right = TreeNode(4)
+ root.right.right = TreeNode(6)
+ return root
+
+
+def flatten(root: TreeNode | None) -> None:
+ """
+ Flatten a binary tree into a linked list in-place, where the linked list is
+ represented using the right pointers of the tree nodes.
+
+ Args:
+ root (TreeNode): The root of the binary tree to be flattened.
+
+ Examples:
+ >>> root = TreeNode(1)
+ >>> root.left = TreeNode(2)
+ >>> root.right = TreeNode(5)
+ >>> root.left.left = TreeNode(3)
+ >>> root.left.right = TreeNode(4)
+ >>> root.right.right = TreeNode(6)
+ >>> flatten(root)
+ >>> root.data
+ 1
+ >>> root.right.right is None
+ False
+ >>> root.right.right = TreeNode(3)
+ >>> root.right.right.right is None
+ True
+ """
+ if not root:
+ return
+
+ # Flatten the left subtree
+ flatten(root.left)
+
+ # Save the right subtree
+ right_subtree = root.right
+
+ # Make the left subtree the new right subtree
+ root.right = root.left
+ root.left = None
+
+ # Find the end of the new right subtree
+ current = root
+ while current.right:
+ current = current.right
+
+ # Append the original right subtree to the end
+ current.right = right_subtree
+
+ # Flatten the updated right subtree
+ flatten(right_subtree)
+
+
+def display_linked_list(root: TreeNode | None) -> None:
+ """
+ Display the flattened linked list.
+
+ Args:
+ root (TreeNode | None): The root of the flattened linked list.
+
+ Examples:
+ >>> root = TreeNode(1)
+ >>> root.right = TreeNode(2)
+ >>> root.right.right = TreeNode(3)
+ >>> display_linked_list(root)
+ 1 2 3
+ >>> root = None
+ >>> display_linked_list(root)
+
+ """
+ current = root
+ while current:
+ if current.right is None:
+ print(current.data, end="")
+ break
+ print(current.data, end=" ")
+ current = current.right
+
+
+if __name__ == "__main__":
+ print("Flattened Linked List:")
+ root = build_tree()
+ flatten(root)
+ display_linked_list(root)
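
Because flatten() re-walks to the end of the already-flattened left subtree on
every call, it is O(n^2) on left-skewed trees. A linear-time variant returns
the tail of each flattened sublist instead of re-walking it; this sketch is an
alternative, not part of the patch, and assumes the TreeNode class above:

    def flatten_linear(root: TreeNode | None) -> TreeNode | None:
        """Flatten in place and return the tail of the flattened list."""
        if root is None:
            return None
        left_tail = flatten_linear(root.left)
        right_tail = flatten_linear(root.right)
        if left_tail:  # splice the flattened left subtree between root and right
            left_tail.right = root.right
            root.right = root.left
            root.left = None
        return right_tail or left_tail or root
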
From 922d6a88b3be2ff0dd69dd47d90e40aa95afd105 Mon Sep 17 00:00:00 2001
From: Bama Charan Chhandogi
Date: Wed, 4 Oct 2023 22:51:46 +0530
Subject: [PATCH 253/808] add median of matrix (#9363)
* add median of matrix
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* update
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix formating
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Apply suggestions from code review
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
matrix/median_matrix.py | 38 ++++++++++++++++++++++++++++++++++++++
1 file changed, 38 insertions(+)
create mode 100644 matrix/median_matrix.py
diff --git a/matrix/median_matrix.py b/matrix/median_matrix.py
new file mode 100644
index 000000000000..116e609a587c
--- /dev/null
+++ b/matrix/median_matrix.py
@@ -0,0 +1,38 @@
+"""
+https://en.wikipedia.org/wiki/Median
+"""
+
+
+def median(matrix: list[list[int]]) -> int:
+ """
+ Calculate the median of a sorted matrix.
+
+ Args:
+ matrix: A 2D matrix of integers.
+
+ Returns:
+ The median value of the matrix.
+
+ Examples:
+ >>> matrix = [[1, 3, 5], [2, 6, 9], [3, 6, 9]]
+ >>> median(matrix)
+ 5
+
+ >>> matrix = [[1, 2, 3], [4, 5, 6]]
+ >>> median(matrix)
+ 3
+ """
+ # Flatten the matrix into a sorted 1D list
+ linear = sorted(num for row in matrix for num in row)
+
+ # Calculate the middle index
+ mid = (len(linear) - 1) // 2
+
+ # Return the median
+ return linear[mid]
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
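
Since the docstring requires each row to be sorted, fully re-sorting the
flattened list is not strictly necessary; heapq.merge from the standard
library can do a k-way merge of the sorted rows instead. A sketch of that
variant (the function name is illustrative):

    import heapq

    def median_sorted_rows(matrix: list[list[int]]) -> int:
        """Median of a matrix whose rows are individually sorted."""
        merged = list(heapq.merge(*matrix))  # k-way merge of sorted rows
        return merged[(len(merged) - 1) // 2]

    print(median_sorted_rows([[1, 3, 5], [2, 6, 9], [3, 6, 9]]))  # 5
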
From d5806258d4f9eb0e5652e1edfac0613aacb71fb6 Mon Sep 17 00:00:00 2001
From: Bama Charan Chhandogi
Date: Wed, 4 Oct 2023 23:48:59 +0530
Subject: [PATCH 254/808] add median of two sorted array (#9386)
* add median of two sorted array
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix syntax
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix syntax
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* improve code
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* add documentation
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* update
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
data_structures/arrays/median_two_array.py | 61 ++++++++++++++++++++++
1 file changed, 61 insertions(+)
create mode 100644 data_structures/arrays/median_two_array.py
diff --git a/data_structures/arrays/median_two_array.py b/data_structures/arrays/median_two_array.py
new file mode 100644
index 000000000000..972b0ee44201
--- /dev/null
+++ b/data_structures/arrays/median_two_array.py
@@ -0,0 +1,61 @@
+"""
+https://www.enjoyalgorithms.com/blog/median-of-two-sorted-arrays
+"""
+
+
+def find_median_sorted_arrays(nums1: list[int], nums2: list[int]) -> float:
+ """
+ Find the median of two arrays.
+
+ Args:
+ nums1: The first array.
+ nums2: The second array.
+
+ Returns:
+ The median of the two arrays.
+
+ Examples:
+ >>> find_median_sorted_arrays([1, 3], [2])
+ 2.0
+
+ >>> find_median_sorted_arrays([1, 2], [3, 4])
+ 2.5
+
+ >>> find_median_sorted_arrays([0, 0], [0, 0])
+ 0.0
+
+ >>> find_median_sorted_arrays([], [])
+ Traceback (most recent call last):
+ ...
+ ValueError: Both input arrays are empty.
+
+ >>> find_median_sorted_arrays([], [1])
+ 1.0
+
+ >>> find_median_sorted_arrays([-1000], [1000])
+ 0.0
+
+ >>> find_median_sorted_arrays([-1.1, -2.2], [-3.3, -4.4])
+ -2.75
+ """
+ if not nums1 and not nums2:
+ raise ValueError("Both input arrays are empty.")
+
+ # Merge the arrays into a single sorted array.
+ merged = sorted(nums1 + nums2)
+ total = len(merged)
+
+ if total % 2 == 1: # If the total number of elements is odd
+ return float(merged[total // 2]) # then return the middle element
+
+ # If the total number of elements is even, calculate
+ # the average of the two middle elements as the median.
+ middle1 = merged[total // 2 - 1]
+ middle2 = merged[total // 2]
+ return (float(middle1) + float(middle2)) / 2.0
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
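
Merging with sorted(nums1 + nums2) costs O((m + n) log(m + n)) and ignores the
fact that both inputs are already sorted. The classic alternative partitions
the shorter array with binary search for O(log(min(m, n))). A sketch under that
assumption (the helper name is illustrative, not part of the patch):

    def median_partition(a: list[float], b: list[float]) -> float:
        if len(a) > len(b):
            a, b = b, a  # binary-search the shorter array
        m, n = len(a), len(b)
        if m + n == 0:
            raise ValueError("Both input arrays are empty.")
        lo, hi = 0, m
        half = (m + n + 1) // 2  # size of the combined left partition
        while lo <= hi:
            i = (lo + hi) // 2  # elements taken from a
            j = half - i        # elements taken from b
            a_left = a[i - 1] if i > 0 else float("-inf")
            a_right = a[i] if i < m else float("inf")
            b_left = b[j - 1] if j > 0 else float("-inf")
            b_right = b[j] if j < n else float("inf")
            if a_left <= b_right and b_left <= a_right:  # valid partition
                if (m + n) % 2:
                    return float(max(a_left, b_left))
                return (max(a_left, b_left) + min(a_right, b_right)) / 2.0
            if a_left > b_right:
                hi = i - 1
            else:
                lo = i + 1
        raise ValueError("Input arrays must be sorted.")
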
From c16d2f8865c8ce28ae6d4d815d3f6c3008e94f74 Mon Sep 17 00:00:00 2001
From: Muhammad Umer Farooq <115654418+Muhammadummerr@users.noreply.github.com>
Date: Wed, 4 Oct 2023 23:43:17 +0500
Subject: [PATCH 255/808] UPDATED rat_in_maze.py (#9148)
* UPDATED rat_in_maze.py
* Update reddit.py in Webprogramming because it was causing an error in the pre-commit tests while raising the PR.
* UPDATED rat_in_maze.py
* fixed return type to return only the maze, otherwise raise a ValueError.
* fixed whitespace errors, improved the matrix visuals.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* updated.
* Try
* updated
* updated
* Apply suggestions from code review
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
backtracking/rat_in_maze.py | 181 ++++++++++++++++++++++++++----------
1 file changed, 130 insertions(+), 51 deletions(-)
diff --git a/backtracking/rat_in_maze.py b/backtracking/rat_in_maze.py
index 7bde886dd558..626c83cb4a15 100644
--- a/backtracking/rat_in_maze.py
+++ b/backtracking/rat_in_maze.py
@@ -1,91 +1,164 @@
from __future__ import annotations
-def solve_maze(maze: list[list[int]]) -> bool:
+def solve_maze(
+ maze: list[list[int]],
+ source_row: int,
+ source_column: int,
+ destination_row: int,
+ destination_column: int,
+) -> list[list[int]]:
"""
This method solves the "rat in maze" problem.
- In this problem we have some n by n matrix, a start point and an end point.
- We want to go from the start to the end. In this matrix zeroes represent walls
- and ones paths we can use.
Parameters :
- maze(2D matrix) : maze
+ - maze: A two dimensional matrix of zeros and ones.
+ - source_row: The row index of the starting point.
+ - source_column: The column index of the starting point.
+ - destination_row: The row index of the destination point.
+ - destination_column: The column index of the destination point.
Returns:
- Return: True if the maze has a solution or False if it does not.
+ - solution: A 2D matrix representing the solution path if it exists.
+ Raises:
+ - ValueError: If no solution exists or if the source or
+ destination coordinates are invalid.
+ Description:
+ This method navigates through a maze represented as an n by n matrix,
+ starting from a specified source cell and
+ aiming to reach a destination cell.
+ The maze consists of walls (1s) and open paths (0s).
+ By providing custom row and column values, the source and destination
+ cells can be adjusted.
>>> maze = [[0, 1, 0, 1, 1],
... [0, 0, 0, 0, 0],
... [1, 0, 1, 0, 1],
... [0, 0, 1, 0, 0],
... [1, 0, 0, 1, 0]]
- >>> solve_maze(maze)
- [1, 0, 0, 0, 0]
- [1, 1, 1, 1, 0]
- [0, 0, 0, 1, 0]
- [0, 0, 0, 1, 1]
- [0, 0, 0, 0, 1]
- True
+ >>> solve_maze(maze,0,0,len(maze)-1,len(maze)-1) # doctest: +NORMALIZE_WHITESPACE
+ [[0, 1, 1, 1, 1],
+ [0, 0, 0, 0, 1],
+ [1, 1, 1, 0, 1],
+ [1, 1, 1, 0, 0],
+ [1, 1, 1, 1, 0]]
+
+ Note:
+ In the output maze, the zeros (0s) represent one of the possible
+ paths from the source to the destination.
>>> maze = [[0, 1, 0, 1, 1],
... [0, 0, 0, 0, 0],
... [0, 0, 0, 0, 1],
... [0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0]]
- >>> solve_maze(maze)
- [1, 0, 0, 0, 0]
- [1, 0, 0, 0, 0]
- [1, 0, 0, 0, 0]
- [1, 0, 0, 0, 0]
- [1, 1, 1, 1, 1]
- True
+ >>> solve_maze(maze,0,0,len(maze)-1,len(maze)-1) # doctest: +NORMALIZE_WHITESPACE
+ [[0, 1, 1, 1, 1],
+ [0, 1, 1, 1, 1],
+ [0, 1, 1, 1, 1],
+ [0, 1, 1, 1, 1],
+ [0, 0, 0, 0, 0]]
>>> maze = [[0, 0, 0],
... [0, 1, 0],
... [1, 0, 0]]
- >>> solve_maze(maze)
- [1, 1, 1]
- [0, 0, 1]
- [0, 0, 1]
- True
+ >>> solve_maze(maze,0,0,len(maze)-1,len(maze)-1) # doctest: +NORMALIZE_WHITESPACE
+ [[0, 0, 0],
+ [1, 1, 0],
+ [1, 1, 0]]
- >>> maze = [[0, 1, 0],
+ >>> maze = [[1, 0, 0],
... [0, 1, 0],
... [1, 0, 0]]
- >>> solve_maze(maze)
- No solution exists!
- False
+ >>> solve_maze(maze,0,1,len(maze)-1,len(maze)-1) # doctest: +NORMALIZE_WHITESPACE
+ [[1, 0, 0],
+ [1, 1, 0],
+ [1, 1, 0]]
+
+ >>> maze = [[1, 1, 0, 0, 1, 0, 0, 1],
+ ... [1, 0, 1, 0, 0, 1, 1, 1],
+ ... [0, 1, 0, 1, 0, 0, 1, 0],
+ ... [1, 1, 1, 0, 0, 1, 0, 1],
+ ... [0, 1, 0, 0, 1, 0, 1, 1],
+ ... [0, 0, 0, 1, 1, 1, 0, 1],
+ ... [0, 1, 0, 1, 0, 1, 1, 1],
+ ... [1, 1, 0, 0, 0, 0, 0, 1]]
+ >>> solve_maze(maze,0,2,len(maze)-1,2) # doctest: +NORMALIZE_WHITESPACE
+ [[1, 1, 0, 0, 1, 1, 1, 1],
+ [1, 1, 1, 0, 0, 1, 1, 1],
+ [1, 1, 1, 1, 0, 1, 1, 1],
+ [1, 1, 1, 0, 0, 1, 1, 1],
+ [1, 1, 0, 0, 1, 1, 1, 1],
+ [1, 1, 0, 1, 1, 1, 1, 1],
+ [1, 1, 0, 1, 1, 1, 1, 1],
+ [1, 1, 0, 1, 1, 1, 1, 1]]
+ >>> maze = [[1, 0, 0],
+ ... [0, 1, 1],
+ ... [1, 0, 1]]
+ >>> solve_maze(maze,0,1,len(maze)-1,len(maze)-1)
+ Traceback (most recent call last):
+ ...
+ ValueError: No solution exists!
+
+ >>> maze = [[0, 0],
+ ... [1, 1]]
+ >>> solve_maze(maze,0,0,len(maze)-1,len(maze)-1)
+ Traceback (most recent call last):
+ ...
+ ValueError: No solution exists!
>>> maze = [[0, 1],
... [1, 0]]
- >>> solve_maze(maze)
- No solution exists!
- False
+ >>> solve_maze(maze,2,0,len(maze)-1,len(maze)-1)
+ Traceback (most recent call last):
+ ...
+ ValueError: Invalid source or destination coordinates
+
+ >>> maze = [[1, 0, 0],
+ ... [0, 1, 0],
+ ... [1, 0, 0]]
+ >>> solve_maze(maze,0,1,len(maze),len(maze)-1)
+ Traceback (most recent call last):
+ ...
+ ValueError: Invalid source or destination coordinates
"""
size = len(maze)
+ # Check if source and destination coordinates are invalid.
+ if not (0 <= source_row <= size - 1 and 0 <= source_column <= size - 1) or (
+ not (0 <= destination_row <= size - 1 and 0 <= destination_column <= size - 1)
+ ):
+ raise ValueError("Invalid source or destination coordinates")
# We need to create solution object to save path.
- solutions = [[0 for _ in range(size)] for _ in range(size)]
- solved = run_maze(maze, 0, 0, solutions)
+ solutions = [[1 for _ in range(size)] for _ in range(size)]
+ solved = run_maze(
+ maze, source_row, source_column, destination_row, destination_column, solutions
+ )
if solved:
- print("\n".join(str(row) for row in solutions))
+ return solutions
else:
- print("No solution exists!")
- return solved
+ raise ValueError("No solution exists!")
-def run_maze(maze: list[list[int]], i: int, j: int, solutions: list[list[int]]) -> bool:
+def run_maze(
+ maze: list[list[int]],
+ i: int,
+ j: int,
+ destination_row: int,
+ destination_column: int,
+ solutions: list[list[int]],
+) -> bool:
"""
This method is recursive starting from (i, j) and going in one of four directions:
up, down, left, right.
If a path is found to destination it returns True otherwise it returns False.
- Parameters:
- maze(2D matrix) : maze
+ Parameters:
+ maze: A two dimensional matrix of zeros and ones.
i, j : coordinates of matrix
- solutions(2D matrix) : solutions
+ solutions: A two dimensional matrix of solutions.
Returns:
Boolean if path is found True, Otherwise False.
"""
size = len(maze)
# Final check point.
- if i == j == (size - 1):
- solutions[i][j] = 1
+ if i == destination_row and j == destination_column and maze[i][j] == 0:
+ solutions[i][j] = 0
return True
lower_flag = (not i < 0) and (not j < 0) # Check lower bounds
@@ -93,21 +166,27 @@ def run_maze(maze: list[list[int]], i: int, j: int, solutions: list[list[int]])
if lower_flag and upper_flag:
# check for already visited and block points.
- block_flag = (not solutions[i][j]) and (not maze[i][j])
+ block_flag = (solutions[i][j]) and (not maze[i][j])
if block_flag:
# check visited
- solutions[i][j] = 1
+ solutions[i][j] = 0
# check for directions
if (
- run_maze(maze, i + 1, j, solutions)
- or run_maze(maze, i, j + 1, solutions)
- or run_maze(maze, i - 1, j, solutions)
- or run_maze(maze, i, j - 1, solutions)
+ run_maze(maze, i + 1, j, destination_row, destination_column, solutions)
+ or run_maze(
+ maze, i, j + 1, destination_row, destination_column, solutions
+ )
+ or run_maze(
+ maze, i - 1, j, destination_row, destination_column, solutions
+ )
+ or run_maze(
+ maze, i, j - 1, destination_row, destination_column, solutions
+ )
):
return True
- solutions[i][j] = 0
+ solutions[i][j] = 1
return False
return False
@@ -115,4 +194,4 @@ def run_maze(maze: list[list[int]], i: int, j: int, solutions: list[list[int]])
if __name__ == "__main__":
import doctest
- doctest.testmod()
+ doctest.testmod(optionflags=doctest.NORMALIZE_WHITESPACE)
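
A minimal driver for the reworked API (illustrative; in the returned matrix
the 0s trace one path from the source to the destination, all other cells
are 1). Given the fixed down/right/up/left exploration order, this should
print the path shown in the comments:

    maze = [[0, 1, 0],
            [0, 0, 0],
            [1, 0, 0]]
    for row in solve_maze(maze, 0, 0, 2, 2):
        print(row)
    # [0, 1, 1]
    # [0, 0, 1]
    # [1, 0, 0]
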
From 26d650ec2820e265e69c88608959a3e18f28c5d5 Mon Sep 17 00:00:00 2001
From: piyush-poddar <143445461+piyush-poddar@users.noreply.github.com>
Date: Thu, 5 Oct 2023 01:58:19 +0530
Subject: [PATCH 256/808] Moved relu.py from maths/ to
neural_network/activation_functions (#9753)
* Moved file relu.py from maths/ to neural_network/activation_functions
* Renamed relu.py to rectified_linear_unit.py
* Renamed relu.py to rectified_linear_unit.py in DIRECTORY.md
---
DIRECTORY.md | 2 +-
.../activation_functions/rectified_linear_unit.py | 0
2 files changed, 1 insertion(+), 1 deletion(-)
rename maths/relu.py => neural_network/activation_functions/rectified_linear_unit.py (100%)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 4f4cc423d678..696a059bb4c8 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -639,7 +639,6 @@
* [Quadratic Equations Complex Numbers](maths/quadratic_equations_complex_numbers.py)
* [Radians](maths/radians.py)
* [Radix2 Fft](maths/radix2_fft.py)
- * [Relu](maths/relu.py)
* [Remove Digit](maths/remove_digit.py)
* [Runge Kutta](maths/runge_kutta.py)
* [Segmented Sieve](maths/segmented_sieve.py)
@@ -710,6 +709,7 @@
* [Exponential Linear Unit](neural_network/activation_functions/exponential_linear_unit.py)
* [Leaky Rectified Linear Unit](neural_network/activation_functions/leaky_rectified_linear_unit.py)
* [Scaled Exponential Linear Unit](neural_network/activation_functions/scaled_exponential_linear_unit.py)
+ * [Rectified Linear Unit](neural_network/activation_functions/rectified_linear_unit.py)
* [Back Propagation Neural Network](neural_network/back_propagation_neural_network.py)
* [Convolution Neural Network](neural_network/convolution_neural_network.py)
* [Perceptron](neural_network/perceptron.py)
diff --git a/maths/relu.py b/neural_network/activation_functions/rectified_linear_unit.py
similarity index 100%
rename from maths/relu.py
rename to neural_network/activation_functions/rectified_linear_unit.py
From 6a391d113d8f0efdd69e69c8da7b44766594449a Mon Sep 17 00:00:00 2001
From: Raghav <83136390+Raghav-Bell@users.noreply.github.com>
Date: Thu, 5 Oct 2023 04:46:19 +0530
Subject: [PATCH 257/808] Added Photoelectric effect equation (#9666)
* Added Photoelectric effect equation
The photoelectric effect is one of the demonstrations of the quantization of energy.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fixed doctest
Co-authored-by: Rohan Anand <96521078+rohan472000@users.noreply.github.com>
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Rohan Anand <96521078+rohan472000@users.noreply.github.com>
---
physics/photoelectric_effect.py | 67 +++++++++++++++++++++++++++++++++
1 file changed, 67 insertions(+)
create mode 100644 physics/photoelectric_effect.py
diff --git a/physics/photoelectric_effect.py b/physics/photoelectric_effect.py
new file mode 100644
index 000000000000..3a0138ffe045
--- /dev/null
+++ b/physics/photoelectric_effect.py
@@ -0,0 +1,67 @@
+"""
+The photoelectric effect is the emission of electrons when electromagnetic radiation,
+such as light, hits a material. Electrons emitted in this manner are called
+photoelectrons.
+
+In 1905, Einstein proposed a theory of the photoelectric effect using a concept that
+light consists of tiny packets of energy known as photons or light quanta. Each packet
+carries energy hv that is proportional to the frequency v of the corresponding
+electromagnetic wave. The proportionality constant h has become known as the
+Planck constant. In the range of kinetic energies of the electrons that are removed from
+their varying atomic bindings by the absorption of a photon of energy hv, the highest
+kinetic energy K_max is:
+
+K_max = hv-W
+
+Here, W is the minimum energy required to remove an electron from the surface of the
+material. It is called the work function of the surface.
+
+Reference: https://en.wikipedia.org/wiki/Photoelectric_effect
+
+"""
+
+PLANCK_CONSTANT_JS = 6.6261 * pow(10, -34) # in SI (Js)
+PLANCK_CONSTANT_EVS = 4.1357 * pow(10, -15) # in eVs
+
+
+def maximum_kinetic_energy(
+ frequency: float, work_function: float, in_ev: bool = False
+) -> float:
+ """
+ Calculates the maximum kinetic energy of an electron emitted from the surface.
+ If the maximum kinetic energy is zero, then no electron will be emitted,
+ or the given electromagnetic wave frequency is too small.
+
+ frequency (float): Frequency of electromagnetic wave.
+ work_function (float): Work function of the surface.
+ in_ev (optional)(bool): Pass True if values are in eV.
+
+ Usage example:
+ >>> maximum_kinetic_energy(1000000,2)
+ 0
+ >>> maximum_kinetic_energy(1000000,2,True)
+ 0
+ >>> maximum_kinetic_energy(10000000000000000,2,True)
+ 39.357000000000006
+ >>> maximum_kinetic_energy(-9,20)
+ Traceback (most recent call last):
+ ...
+ ValueError: Frequency can't be negative.
+
+ >>> maximum_kinetic_energy(1000,"a")
+ Traceback (most recent call last):
+ ...
+ TypeError: unsupported operand type(s) for -: 'float' and 'str'
+
+ """
+ if frequency < 0:
+ raise ValueError("Frequency can't be negative.")
+ if in_ev:
+ return max(PLANCK_CONSTANT_EVS * frequency - work_function, 0)
+ return max(PLANCK_CONSTANT_JS * frequency - work_function, 0)
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
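
A quick numeric check of K_max = h*v - W in eV units (the inputs are
illustrative): with v = 3e15 Hz and W = 2 eV,
K_max = 4.1357e-15 * 3e15 - 2 ≈ 10.4071 eV, which the function reproduces:

    print(maximum_kinetic_energy(3e15, 2, in_ev=True))  # ~10.4071
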
From 2fd43c0f7ff1d7f72fa65a528ddabccf90c89a0d Mon Sep 17 00:00:00 2001
From: Tauseef Hilal Tantary
Date: Thu, 5 Oct 2023 05:03:12 +0530
Subject: [PATCH 258/808] [New Algorithm] - Bell Numbers (#9324)
* Add Bell Numbers
* Use descriptive variable names
* Add type hints
* Fix failing tests
---
maths/bell_numbers.py | 78 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 78 insertions(+)
create mode 100644 maths/bell_numbers.py
diff --git a/maths/bell_numbers.py b/maths/bell_numbers.py
new file mode 100644
index 000000000000..660ec6e6aa09
--- /dev/null
+++ b/maths/bell_numbers.py
@@ -0,0 +1,78 @@
+"""
+Bell numbers represent the number of ways to partition a set into non-empty
+subsets. This module provides a function to calculate the Bell numbers for
+all set sizes from 0 to n, in other words the first (n + 1) Bell numbers.
+
+For more information about Bell numbers, refer to:
+https://en.wikipedia.org/wiki/Bell_number
+"""
+
+
+def bell_numbers(max_set_length: int) -> list[int]:
+ """
+ Calculate Bell numbers for the sets of lengths from 0 to max_set_length.
+ In other words, calculate first (max_set_length + 1) Bell numbers.
+
+ Args:
+ max_set_length (int): The maximum length of the sets for which
+ Bell numbers are calculated.
+
+ Returns:
+ list: A list of Bell numbers for sets of lengths from 0 to max_set_length.
+
+ Examples:
+ >>> bell_numbers(0)
+ [1]
+ >>> bell_numbers(1)
+ [1, 1]
+ >>> bell_numbers(5)
+ [1, 1, 2, 5, 15, 52]
+ """
+ if max_set_length < 0:
+ raise ValueError("max_set_length must be non-negative")
+
+ bell = [0] * (max_set_length + 1)
+ bell[0] = 1
+
+ for i in range(1, max_set_length + 1):
+ for j in range(i):
+ bell[i] += _binomial_coefficient(i - 1, j) * bell[j]
+
+ return bell
+
+
+def _binomial_coefficient(total_elements: int, elements_to_choose: int) -> int:
+ """
+ Calculate the binomial coefficient C(total_elements, elements_to_choose)
+
+ Args:
+ total_elements (int): The total number of elements.
+ elements_to_choose (int): The number of elements to choose.
+
+ Returns:
+ int: The binomial coefficient C(total_elements, elements_to_choose).
+
+ Examples:
+ >>> _binomial_coefficient(5, 2)
+ 10
+ >>> _binomial_coefficient(6, 3)
+ 20
+ """
+ if elements_to_choose in {0, total_elements}:
+ return 1
+
+ if elements_to_choose > total_elements - elements_to_choose:
+ elements_to_choose = total_elements - elements_to_choose
+
+ coefficient = 1
+ for i in range(elements_to_choose):
+ coefficient *= total_elements - i
+ coefficient //= i + 1
+
+ return coefficient
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
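
For reference, the double loop above implements the standard recurrence

    B_0 = 1,    B_n = sum_{k=0}^{n-1} C(n-1, k) * B_k

so, for example, B_3 = C(2,0)*B_0 + C(2,1)*B_1 + C(2,2)*B_2 = 1 + 2 + 2 = 5,
matching the doctest output [1, 1, 2, 5, 15, 52].
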
From 1fda96b7044d9fa08c84f09f54a345ebf052b2eb Mon Sep 17 00:00:00 2001
From: Sanket Kittad <86976526+sanketkittad@users.noreply.github.com>
Date: Thu, 5 Oct 2023 05:10:14 +0530
Subject: [PATCH 259/808] Palindromic (#9288)
* added longest palindromic subsequence
* removed
* added longest palindromic subsequence
* added longest palindromic subsequence link
* added comments
---
.../longest_palindromic_subsequence.py | 44 +++++++++++++++++++
1 file changed, 44 insertions(+)
create mode 100644 dynamic_programming/longest_palindromic_subsequence.py
diff --git a/dynamic_programming/longest_palindromic_subsequence.py b/dynamic_programming/longest_palindromic_subsequence.py
new file mode 100644
index 000000000000..a60d95e460e6
--- /dev/null
+++ b/dynamic_programming/longest_palindromic_subsequence.py
@@ -0,0 +1,44 @@
+"""
+author: Sanket Kittad
+Given a string s, find the longest palindromic subsequence's length in s.
+Input: s = "bbbab"
+Output: 4
+Explanation: One possible longest palindromic subsequence is "bbbb".
+Leetcode link: https://leetcode.com/problems/longest-palindromic-subsequence/description/
+"""
+
+
+def longest_palindromic_subsequence(input_string: str) -> int:
+ """
+ This function returns the longest palindromic subsequence in a string
+ >>> longest_palindromic_subsequence("bbbab")
+ 4
+ >>> longest_palindromic_subsequence("bbabcbcab")
+ 7
+ """
+ n = len(input_string)
+ rev = input_string[::-1]
+ m = len(rev)
+ dp = [[-1] * (m + 1) for i in range(n + 1)]
+ for i in range(n + 1):
+ dp[i][0] = 0
+ for i in range(m + 1):
+ dp[0][i] = 0
+
+ # fill the dp table
+ for i in range(1, n + 1):
+ for j in range(1, m + 1):
+ # If characters at i and j are the same
+ # include them in the palindromic subsequence
+ if input_string[i - 1] == rev[j - 1]:
+ dp[i][j] = 1 + dp[i - 1][j - 1]
+ else:
+ dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
+
+ return dp[n][m]
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
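
The table above is the classic longest-common-subsequence recurrence applied
to the string and its reverse, using the identity LPS(s) = LCS(s, reverse(s)).
A quick cross-check with a throwaway LCS helper (illustrative, not part of the
patch):

    def lcs_length(a: str, b: str) -> int:
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, ca in enumerate(a, 1):
            for j, cb in enumerate(b, 1):
                dp[i][j] = (
                    dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
                )
        return dp[-1][-1]

    assert lcs_length("bbbab", "bbbab"[::-1]) == 4  # matches the doctest above
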
From 935d1d3225ede4c0650165d5dfd8f5eb35b54f5e Mon Sep 17 00:00:00 2001
From: Vipin Karthic <143083087+vipinkarthic@users.noreply.github.com>
Date: Thu, 5 Oct 2023 11:27:55 +0530
Subject: [PATCH 260/808] Added Mirror Formulae Equation (#9717)
* Python mirror_formulae.py is added to the repository
* Changes done after reading readme.md
* Changes for running doctest on all platforms
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Change 2 for Doctests
* Changes for doctest 2
* updating DIRECTORY.md
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 9 ++-
physics/mirror_formulae.py | 127 +++++++++++++++++++++++++++++++++++++
2 files changed, 135 insertions(+), 1 deletion(-)
create mode 100644 physics/mirror_formulae.py
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 696a059bb4c8..5f23cbd6c922 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -170,6 +170,7 @@
## Data Structures
* Arrays
+ * [Median Two Array](data_structures/arrays/median_two_array.py)
* [Permutations](data_structures/arrays/permutations.py)
* [Prefix Sum](data_structures/arrays/prefix_sum.py)
* [Product Sum](data_structures/arrays/product_sum.py)
@@ -185,6 +186,7 @@
* [Diff Views Of Binary Tree](data_structures/binary_tree/diff_views_of_binary_tree.py)
* [Distribute Coins](data_structures/binary_tree/distribute_coins.py)
* [Fenwick Tree](data_structures/binary_tree/fenwick_tree.py)
+ * [Flatten Binarytree To Linkedlist](data_structures/binary_tree/flatten_binarytree_to_linkedlist.py)
* [Inorder Tree Traversal 2022](data_structures/binary_tree/inorder_tree_traversal_2022.py)
* [Is Bst](data_structures/binary_tree/is_bst.py)
* [Lazy Segment Tree](data_structures/binary_tree/lazy_segment_tree.py)
@@ -324,6 +326,7 @@
* [Longest Common Substring](dynamic_programming/longest_common_substring.py)
* [Longest Increasing Subsequence](dynamic_programming/longest_increasing_subsequence.py)
* [Longest Increasing Subsequence O(Nlogn)](dynamic_programming/longest_increasing_subsequence_o(nlogn).py)
+ * [Longest Palindromic Subsequence](dynamic_programming/longest_palindromic_subsequence.py)
* [Longest Sub Array](dynamic_programming/longest_sub_array.py)
* [Matrix Chain Order](dynamic_programming/matrix_chain_order.py)
* [Max Non Adjacent Sum](dynamic_programming/max_non_adjacent_sum.py)
@@ -539,6 +542,7 @@
* [Average Mode](maths/average_mode.py)
* [Bailey Borwein Plouffe](maths/bailey_borwein_plouffe.py)
* [Basic Maths](maths/basic_maths.py)
+ * [Bell Numbers](maths/bell_numbers.py)
* [Binary Exp Mod](maths/binary_exp_mod.py)
* [Binary Exponentiation](maths/binary_exponentiation.py)
* [Binary Exponentiation 3](maths/binary_exponentiation_3.py)
@@ -690,6 +694,7 @@
* [Matrix Class](matrix/matrix_class.py)
* [Matrix Operation](matrix/matrix_operation.py)
* [Max Area Of Island](matrix/max_area_of_island.py)
+ * [Median Matrix](matrix/median_matrix.py)
* [Nth Fibonacci Using Matrix Exponentiation](matrix/nth_fibonacci_using_matrix_exponentiation.py)
* [Pascal Triangle](matrix/pascal_triangle.py)
* [Rotate Matrix](matrix/rotate_matrix.py)
@@ -708,8 +713,8 @@
* Activation Functions
* [Exponential Linear Unit](neural_network/activation_functions/exponential_linear_unit.py)
* [Leaky Rectified Linear Unit](neural_network/activation_functions/leaky_rectified_linear_unit.py)
- * [Scaled Exponential Linear Unit](neural_network/activation_functions/scaled_exponential_linear_unit.py)
* [Rectified Linear Unit](neural_network/activation_functions/rectified_linear_unit.py)
+ * [Scaled Exponential Linear Unit](neural_network/activation_functions/scaled_exponential_linear_unit.py)
* [Back Propagation Neural Network](neural_network/back_propagation_neural_network.py)
* [Convolution Neural Network](neural_network/convolution_neural_network.py)
* [Perceptron](neural_network/perceptron.py)
@@ -756,9 +761,11 @@
* [Kinetic Energy](physics/kinetic_energy.py)
* [Lorentz Transformation Four Vector](physics/lorentz_transformation_four_vector.py)
* [Malus Law](physics/malus_law.py)
+ * [Mirror Formulae](physics/mirror_formulae.py)
* [N Body Simulation](physics/n_body_simulation.py)
* [Newtons Law Of Gravitation](physics/newtons_law_of_gravitation.py)
* [Newtons Second Law Of Motion](physics/newtons_second_law_of_motion.py)
+ * [Photoelectric Effect](physics/photoelectric_effect.py)
* [Potential Energy](physics/potential_energy.py)
* [Rms Speed Of Molecule](physics/rms_speed_of_molecule.py)
* [Shear Stress](physics/shear_stress.py)
diff --git a/physics/mirror_formulae.py b/physics/mirror_formulae.py
new file mode 100644
index 000000000000..f1b4ac2c7baf
--- /dev/null
+++ b/physics/mirror_formulae.py
@@ -0,0 +1,127 @@
+"""
+This module contains the functions to calculate the focal length, object distance
+and image distance of a mirror.
+
+The mirror formula is an equation that relates the object distance (u),
+image distance (v), and focal length (f) of a spherical mirror.
+It is commonly used in optics to determine the position and characteristics
+of an image formed by a mirror. It is expressed using the formula:
+
+-------------------
+| 1/f = 1/v + 1/u |
+-------------------
+
+Where,
+f = Focal length of the spherical mirror (metre)
+v = Image distance from the mirror (metre)
+u = Object distance from the mirror (metre)
+
+
+The signs of the distances are taken with respect to the sign convention.
+The sign convention is as follows:
+ 1) Object is always placed to the left of the mirror
+ 2) Distances measured in the direction of the incident ray are positive
+ and the distances measured in the direction opposite to that of the incident
+ rays are negative.
+ 3) All distances are measured from the pole of the mirror.
+
+
+There are a few assumptions that are made while using the mirror formulae.
+They are as follows:
+ 1) Thin Mirror: The mirror is assumed to be thin, meaning its thickness is
+ negligible compared to its radius of curvature. This assumption allows
+ us to treat the mirror as a two-dimensional surface.
+ 2) Spherical Mirror: The mirror is assumed to have a spherical shape. While this
+ assumption may not hold exactly for all mirrors, it is a reasonable approximation
+ for most practical purposes.
+ 3) Small Angles: The angles involved in the derivation are assumed to be small.
+ This assumption allows us to use the small-angle approximation, where the tangent
+ of a small angle is approximately equal to the angle itself. It simplifies the
+ calculations and makes the derivation more manageable.
+ 4) Paraxial Rays: The mirror formula is derived using paraxial rays, which are
+ rays that are close to the principal axis and make small angles with it. This
+ assumption ensures that the rays are close enough to the principal axis, making the
+ calculations more accurate.
+ 5) Reflection and Refraction Laws: The derivation assumes that the laws of
+ reflection and refraction hold.
+ These laws state that the angle of incidence is equal to the angle of reflection
+ for reflection, and the incident and refracted rays lie in the same plane and
+ obey Snell's law for refraction.
+
+(Description and Assumptions adapted from
+https://www.collegesearch.in/articles/mirror-formula-derivation)
+
+(Sign Convention adapted from
+https://www.toppr.com/ask/content/concept/sign-convention-for-mirrors-210189/)
+
+
+"""
+
+
+def focal_length(distance_of_object: float, distance_of_image: float) -> float:
+ """
+ >>> from math import isclose
+ >>> isclose(focal_length(10, 20), 6.66666666666666)
+ True
+ >>> from math import isclose
+ >>> isclose(focal_length(9.5, 6.7), 3.929012346)
+ True
+ >>> focal_length(0, 20)
+ Traceback (most recent call last):
+ ...
+ ValueError: Invalid inputs. Enter non zero values with respect
+ to the sign convention.
+ """
+
+ if distance_of_object == 0 or distance_of_image == 0:
+ raise ValueError(
+ "Invalid inputs. Enter non zero values with respect to the sign convention."
+ )
+ focal_length = 1 / ((1 / distance_of_object) + (1 / distance_of_image))
+ return focal_length
+
+
+def object_distance(focal_length: float, distance_of_image: float) -> float:
+ """
+ >>> from math import isclose
+ >>> isclose(object_distance(30, 20), -60.0)
+ True
+ >>> from math import isclose
+ >>> isclose(object_distance(10.5, 11.7), 102.375)
+ True
+ >>> object_distance(90, 0)
+ Traceback (most recent call last):
+ ...
+ ValueError: Invalid inputs. Enter non zero values with respect
+ to the sign convention.
+ """
+
+ if distance_of_image == 0 or focal_length == 0:
+ raise ValueError(
+ "Invalid inputs. Enter non zero values with respect to the sign convention."
+ )
+ object_distance = 1 / ((1 / focal_length) - (1 / distance_of_image))
+ return object_distance
+
+
+def image_distance(focal_length: float, distance_of_object: float) -> float:
+ """
+ >>> from math import isclose
+ >>> isclose(image_distance(10, 40), 13.33333333)
+ True
+ >>> from math import isclose
+ >>> isclose(image_distance(1.5, 6.7), 1.932692308)
+ True
+ >>> image_distance(0, 0)
+ Traceback (most recent call last):
+ ...
+ ValueError: Invalid inputs. Enter non zero values with respect
+ to the sign convention.
+ """
+
+ if distance_of_object == 0 or focal_length == 0:
+ raise ValueError(
+ "Invalid inputs. Enter non zero values with respect to the sign convention."
+ )
+ image_distance = 1 / ((1 / focal_length) - (1 / distance_of_object))
+ return image_distance
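A worked numeric check of 1/f = 1/v + 1/u for the first doctest above, with u = 10 m and v = 20 m (a sketch, independent of the module):

# 1/f = 1/20 + 1/10 = 3/20, so f = 20/3 ≈ 6.667 m
u, v = 10.0, 20.0
f = 1 / (1 / u + 1 / v)
assert abs(f - 20 / 3) < 1e-12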
From 4b6301d4ce91638d39689f7be7db797f99623964 Mon Sep 17 00:00:00 2001
From: rtang09 <49603415+rtang09@users.noreply.github.com>
Date: Wed, 4 Oct 2023 23:12:08 -0700
Subject: [PATCH 261/808] Fletcher 16 (#9775)
* Add files via upload
* Update fletcher16.py
* Update fletcher16.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update fletcher16.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update fletcher16.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update fletcher16.py
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
hashes/fletcher16.py | 36 ++++++++++++++++++++++++++++++++++++
1 file changed, 36 insertions(+)
create mode 100644 hashes/fletcher16.py
diff --git a/hashes/fletcher16.py b/hashes/fletcher16.py
new file mode 100644
index 000000000000..7c23c98d72c5
--- /dev/null
+++ b/hashes/fletcher16.py
@@ -0,0 +1,36 @@
+"""
+The Fletcher checksum is an algorithm for computing a position-dependent
+checksum devised by John G. Fletcher (1934–2012) at Lawrence Livermore Labs
+in the late 1970s.[1] The objective of the Fletcher checksum was to
+provide error-detection properties approaching those of a cyclic
+redundancy check but with the lower computational effort associated
+with summation techniques.
+
+Source: https://en.wikipedia.org/wiki/Fletcher%27s_checksum
+"""
+
+
+def fletcher16(text: str) -> int:
+ """
+ Loop through every character in the data and add to two sums.
+
+ >>> fletcher16('hello world')
+ 6752
+ >>> fletcher16('onethousandfourhundredthirtyfour')
+ 28347
+ >>> fletcher16('The quick brown fox jumps over the lazy dog.')
+ 5655
+ """
+ data = bytes(text, "ascii")
+ sum1 = 0
+ sum2 = 0
+ for character in data:
+ sum1 = (sum1 + character) % 255
+ sum2 = (sum1 + sum2) % 255
+ return (sum2 << 8) | sum1
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
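A manual trace of the two running sums for a two-byte input (a sketch; the value 9667 is derived below, it is not one of the doctests above):

# b"ab" = (97, 98)
# sum1: 0 -> 97 -> (97 + 98) % 255 = 195
# sum2: 0 -> 97 -> (195 + 97) % 255 = 37
# checksum = (sum2 << 8) | sum1 = (37 << 8) | 195 = 9667
sum1 = sum2 = 0
for byte in b"ab":
    sum1 = (sum1 + byte) % 255
    sum2 = (sum1 + sum2) % 255
assert ((sum2 << 8) | sum1) == 9667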
From 0d324de7ab9c354d958fd93f6046d0111014d95a Mon Sep 17 00:00:00 2001
From: Vipin Karthic <143083087+vipinkarthic@users.noreply.github.com>
Date: Thu, 5 Oct 2023 13:18:15 +0530
Subject: [PATCH 262/808] Doctest Error Correction of mirror_formulae.py
(#9782)
* Python mirror_formulae.py is added to the repository
* Changes done after reading readme.md
* Changes for running doctest on all platforms
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Change 2 for Doctests
* Changes for doctest 2
* updating DIRECTORY.md
* Doctest whitespace error rectification to mirror_formulae.py
* updating DIRECTORY.md
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
DIRECTORY.md | 1 +
physics/mirror_formulae.py | 6 +++---
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index 5f23cbd6c922..b0ba3c3852da 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -469,6 +469,7 @@
* [Djb2](hashes/djb2.py)
* [Elf](hashes/elf.py)
* [Enigma Machine](hashes/enigma_machine.py)
+ * [Fletcher16](hashes/fletcher16.py)
* [Hamming Code](hashes/hamming_code.py)
* [Luhn](hashes/luhn.py)
* [Md5](hashes/md5.py)
diff --git a/physics/mirror_formulae.py b/physics/mirror_formulae.py
index f1b4ac2c7baf..7efc52438140 100644
--- a/physics/mirror_formulae.py
+++ b/physics/mirror_formulae.py
@@ -66,7 +66,7 @@ def focal_length(distance_of_object: float, distance_of_image: float) -> float:
>>> from math import isclose
>>> isclose(focal_length(9.5, 6.7), 3.929012346)
True
- >>> focal_length(0, 20)
+ >>> focal_length(0, 20) # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
ValueError: Invalid inputs. Enter non zero values with respect
@@ -89,7 +89,7 @@ def object_distance(focal_length: float, distance_of_image: float) -> float:
>>> from math import isclose
>>> isclose(object_distance(10.5, 11.7), 102.375)
True
- >>> object_distance(90, 0)
+ >>> object_distance(90, 0) # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
ValueError: Invalid inputs. Enter non zero values with respect
@@ -112,7 +112,7 @@ def image_distance(focal_length: float, distance_of_object: float) -> float:
>>> from math import isclose
>>> isclose(image_distance(1.5, 6.7), 1.932692308)
True
- >>> image_distance(0, 0)
+ >>> image_distance(0, 0) # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
ValueError: Invalid inputs. Enter non zero values with respect
From f3be0ae9e60a0ed2185e55c0758ddf401e604f8c Mon Sep 17 00:00:00 2001
From: Naman <37952726+namansharma18899@users.noreply.github.com>
Date: Thu, 5 Oct 2023 14:07:23 +0530
Subject: [PATCH 263/808] Added largest pow of 2 le num (#9374)
---
bit_manipulation/largest_pow_of_two_le_num.py | 60 +++++++++++++++++++
1 file changed, 60 insertions(+)
create mode 100644 bit_manipulation/largest_pow_of_two_le_num.py
diff --git a/bit_manipulation/largest_pow_of_two_le_num.py b/bit_manipulation/largest_pow_of_two_le_num.py
new file mode 100644
index 000000000000..6ef827312199
--- /dev/null
+++ b/bit_manipulation/largest_pow_of_two_le_num.py
@@ -0,0 +1,60 @@
+"""
+Author : Naman Sharma
+Date : October 2, 2023
+
+Task:
+To Find the largest power of 2 less than or equal to a given number.
+
+Implementation notes: Use bit manipulation.
+We start from 1 & left shift the set bit to check if (res<<1)<=number.
+Each left bit shift represents a pow of 2.
+
+For example:
+number: 15
+res: 1 0b1
+ 2 0b10
+ 4 0b100
+ 8 0b1000
+ 16 0b10000 (Exit)
+"""
+
+
+def largest_pow_of_two_le_num(number: int) -> int:
+ """
+ Return the largest power of two less than or equal to a number.
+
+ >>> largest_pow_of_two_le_num(0)
+ 0
+ >>> largest_pow_of_two_le_num(1)
+ 1
+ >>> largest_pow_of_two_le_num(-1)
+ 0
+ >>> largest_pow_of_two_le_num(3)
+ 2
+ >>> largest_pow_of_two_le_num(15)
+ 8
+ >>> largest_pow_of_two_le_num(99)
+ 64
+ >>> largest_pow_of_two_le_num(178)
+ 128
+ >>> largest_pow_of_two_le_num(999999)
+ 524288
+ >>> largest_pow_of_two_le_num(99.9)
+ Traceback (most recent call last):
+ ...
+ TypeError: Input value must be an 'int' type
+ """
+ if isinstance(number, float):
+ raise TypeError("Input value must be a 'int' type")
+ if number <= 0:
+ return 0
+ res = 1
+ while (res << 1) <= number:
+ res <<= 1
+ return res
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
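For positive integers the same answer can be read directly off int.bit_length(), since the largest power of two less than or equal to a number is its highest set bit (a sketch of an equivalent one-liner, not part of the patch):

def largest_pow_of_two_le_num_alt(number: int) -> int:
    # 1 << (bit_length - 1) isolates the highest set bit
    return 1 << (number.bit_length() - 1) if number > 0 else 0

assert largest_pow_of_two_le_num_alt(15) == 8
assert largest_pow_of_two_le_num_alt(99) == 64
assert largest_pow_of_two_le_num_alt(999999) == 524288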
From e29024d14ade8ff4cdb43d1da6a7738f44685e5e Mon Sep 17 00:00:00 2001
From: Rohan Sardar <77870108+RohanSardar@users.noreply.github.com>
Date: Thu, 5 Oct 2023 14:22:40 +0530
Subject: [PATCH 264/808] Program to convert a given string to Pig Latin
(#9712)
* Program to convert a given string to Pig Latin
This is a program to convert a user given string to its respective Pig Latin form
As per wikipedia (link: https://en.wikipedia.org/wiki/Pig_Latin#Rules)
For words that begin with consonant sounds, all letters before the initial vowel are placed at the end of the word sequence. Then, "ay" is added, as in the following examples:
"pig" = "igpay"
"latin" = "atinlay"
"banana" = "ananabay"
When words begin with consonant clusters (multiple consonants that form one sound), the whole sound is added to the end when speaking or writing.
"friends" = "iendsfray"
"smile" = "ilesmay"
"string" = "ingstray"
For words that begin with vowel sounds, one just adds "hay", "way" or "yay" to the end. Examples are:
"eat" = "eatway"
"omelet" = "omeletway"
"are" = "areway"
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update pig_latin.py
Added f-string
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update pig_latin.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update pig_latin.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update pig_latin.py
* Update pig_latin.py
* Update pig_latin.py
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
strings/pig_latin.py | 44 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 44 insertions(+)
create mode 100644 strings/pig_latin.py
diff --git a/strings/pig_latin.py b/strings/pig_latin.py
new file mode 100644
index 000000000000..457dbb5a6cf6
--- /dev/null
+++ b/strings/pig_latin.py
@@ -0,0 +1,44 @@
+def pig_latin(word: str) -> str:
+ """Compute the piglatin of a given string.
+
+ https://en.wikipedia.org/wiki/Pig_Latin
+
+ Usage examples:
+ >>> pig_latin("pig")
+ 'igpay'
+ >>> pig_latin("latin")
+ 'atinlay'
+ >>> pig_latin("banana")
+ 'ananabay'
+ >>> pig_latin("friends")
+ 'iendsfray'
+ >>> pig_latin("smile")
+ 'ilesmay'
+ >>> pig_latin("string")
+ 'ingstray'
+ >>> pig_latin("eat")
+ 'eatway'
+ >>> pig_latin("omelet")
+ 'omeletway'
+ >>> pig_latin("are")
+ 'areway'
+ >>> pig_latin(" ")
+ ''
+ >>> pig_latin(None)
+ ''
+ """
+ if not (word or "").strip():
+ return ""
+ word = word.lower()
+ if word[0] in "aeiou":
+ return f"{word}way"
+ for i, char in enumerate(word): # noqa: B007
+ if char in "aeiou":
+ break
+ return f"{word[i:]}{word[:i]}ay"
+
+
+if __name__ == "__main__":
+ print(f"{pig_latin('friends') = }")
+ word = input("Enter a word: ")
+ print(f"{pig_latin(word) = }")
From dffbe458c07d492b9c599376233f9f6295527339 Mon Sep 17 00:00:00 2001
From: Chris O <46587501+ChrisO345@users.noreply.github.com>
Date: Fri, 6 Oct 2023 00:26:33 +1300
Subject: [PATCH 265/808] Update contributing guidelines to say not to open new
issues for algorithms (#9760)
* updated CONTRIBUTING.md with markdown anchors and issues
* removed testing header from previous PR
---
CONTRIBUTING.md | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 7a67ce33cd62..bf3420185c1a 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -25,8 +25,12 @@ We appreciate any contribution, from fixing a grammar mistake in a comment to im
Your contribution will be tested by our [automated testing on GitHub Actions](https://github.com/TheAlgorithms/Python/actions) to save time and mental energy. After you have submitted your pull request, you should see the GitHub Actions tests start to run at the bottom of your submission page. If those tests fail, then click on the ___details___ button and try to read through the GitHub Actions output to understand the failure. If you do not understand, please leave a comment on your submission page and a community member will try to help.
+#### Issues
+
If you are interested in resolving an [open issue](https://github.com/TheAlgorithms/Python/issues), simply make a pull request with your proposed fix. __We do not assign issues in this repo__ so please do not ask for permission to work on an issue.
+__Do not__ create an issue to contribute an algorithm. Please submit a pull request instead.
+
Please help us keep our issue list small by adding `Fixes #{$ISSUE_NUMBER}` to the description of pull requests that resolve open issues.
For example, if your pull request fixes issue #10, then please add the following to its description:
```
From 0e3ea3fbab0297f38ed48b9e2f694cc43f8af567 Mon Sep 17 00:00:00 2001
From: Kamil <32775019+quant12345@users.noreply.github.com>
Date: Thu, 5 Oct 2023 16:30:39 +0500
Subject: [PATCH 266/808] Fermat_little_theorem type annotation (#9794)
* Replacing the generator with numpy vector operations from lu_decomposition.
* Revert "Replacing the generator with numpy vector operations from lu_decomposition."
This reverts commit ad217c66165898d62b76cc89ba09c2d7049b6448.
* Added type annotation.
* Update fermat_little_theorem.py
Used other syntax.
* Update fermat_little_theorem.py
* Update maths/fermat_little_theorem.py
---------
Co-authored-by: Tianyi Zheng
---
maths/fermat_little_theorem.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/maths/fermat_little_theorem.py b/maths/fermat_little_theorem.py
index eea03be245cb..4a3ecd05ce91 100644
--- a/maths/fermat_little_theorem.py
+++ b/maths/fermat_little_theorem.py
@@ -5,7 +5,7 @@
# Wikipedia reference: https://en.wikipedia.org/wiki/Fermat%27s_little_theorem
-def binary_exponentiation(a, n, mod):
+def binary_exponentiation(a: int, n: float, mod: int) -> int:
if n == 0:
return 1
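For context, modular binary exponentiation can also be written iteratively as square-and-multiply (a standalone sketch with an illustrative name; the file's own recursive implementation is not reproduced in this diff):

def bin_exp_mod(a: int, n: int, mod: int) -> int:
    # square-and-multiply: O(log n) multiplications
    result = 1
    a %= mod
    while n > 0:
        if n & 1:
            result = result * a % mod
        a = a * a % mod
        n >>= 1
    return result

assert bin_exp_mod(3, 20, 5) == pow(3, 20, 5)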
From 1b6c5cc2713743b8a74fd9c92e0a1b6442d63a7f Mon Sep 17 00:00:00 2001
From: Kamil <32775019+quant12345@users.noreply.github.com>
Date: Thu, 5 Oct 2023 17:30:43 +0500
Subject: [PATCH 267/808] Karatsuba type annotation (#9800)
* Replacing the generator with numpy vector operations from lu_decomposition.
* Revert "Replacing the generator with numpy vector operations from lu_decomposition."
This reverts commit ad217c66165898d62b76cc89ba09c2d7049b6448.
* Added type annotation.
---
maths/karatsuba.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/maths/karatsuba.py b/maths/karatsuba.py
index 4bf4aecdc068..3d29e31d2107 100644
--- a/maths/karatsuba.py
+++ b/maths/karatsuba.py
@@ -1,7 +1,7 @@
""" Multiply two numbers using Karatsuba algorithm """
-def karatsuba(a, b):
+def karatsuba(a: int, b: int) -> int:
"""
>>> karatsuba(15463, 23489) == 15463 * 23489
True
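For reference, a compact recursive sketch of the Karatsuba split for non-negative integers (an illustration under that assumption; the patched file's own implementation is not reproduced in this diff):

def karatsuba_sketch(a: int, b: int) -> int:
    if a < 10 or b < 10:
        return a * b
    half = max(len(str(a)), len(str(b))) // 2
    a_high, a_low = divmod(a, 10**half)
    b_high, b_low = divmod(b, 10**half)
    low = karatsuba_sketch(a_low, b_low)
    high = karatsuba_sketch(a_high, b_high)
    # one multiplication recovers the cross terms: (a_low + a_high)(b_low + b_high)
    mid = karatsuba_sketch(a_low + a_high, b_low + b_high) - low - high
    return high * 10 ** (2 * half) + mid * 10**half + low

assert karatsuba_sketch(15463, 23489) == 15463 * 23489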
From f159a3350650843e0b3e856e612cda56eabb4237 Mon Sep 17 00:00:00 2001
From: Abul Hasan <33129246+haxkd@users.noreply.github.com>
Date: Thu, 5 Oct 2023 18:09:14 +0530
Subject: [PATCH 268/808] convert to the base minus 2 of a number (#9748)
* Fix: Issue 9588
* Fix: Issue 9588
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix: Issue 9588
* Fix: Issue #9588
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix: Issue #9588
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix: Issue #9588
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix: issue #9793
* fix: issue #9793
* fix: issue #9588
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
maths/base_neg2_conversion.py | 37 +++++++++++++++++++++++++++++++++++
1 file changed, 37 insertions(+)
create mode 100644 maths/base_neg2_conversion.py
diff --git a/maths/base_neg2_conversion.py b/maths/base_neg2_conversion.py
new file mode 100644
index 000000000000..81d40d37e79d
--- /dev/null
+++ b/maths/base_neg2_conversion.py
@@ -0,0 +1,37 @@
+def decimal_to_negative_base_2(num: int) -> int:
+ """
+ This function returns the negative base 2 representation
+ of the given decimal number.
+
+ Args:
+ num (int): The decimal number to convert.
+
+ Returns:
+ int: The negative base 2 number.
+
+ Examples:
+ >>> decimal_to_negative_base_2(0)
+ 0
+ >>> decimal_to_negative_base_2(-19)
+ 111101
+ >>> decimal_to_negative_base_2(4)
+ 100
+ >>> decimal_to_negative_base_2(7)
+ 11011
+ """
+ if num == 0:
+ return 0
+ ans = ""
+ while num != 0:
+ num, rem = divmod(num, -2)
+ if rem < 0:
+ rem += 2
+ num += 1
+ ans = str(rem) + ans
+ return int(ans)
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
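The digits returned above can be verified by evaluating them against powers of -2 (a sketch): for -19 the representation 111101 means 1*(-32) + 1*16 + 1*(-8) + 1*4 + 0*(-2) + 1*1 = -19.

digits = "111101"
# evaluate right to left: digit i multiplies (-2) ** i
value = sum(int(d) * (-2) ** i for i, d in enumerate(reversed(digits)))
assert value == -19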
From 9bfc314e878e36a5f5d8974ec188ad7f0db8c5a1 Mon Sep 17 00:00:00 2001
From: Kamil <32775019+quant12345@users.noreply.github.com>
Date: Thu, 5 Oct 2023 17:39:29 +0500
Subject: [PATCH 269/808] hardy_ramanujanalgo type annotation (#9799)
* Replacing the generator with numpy vector operations from lu_decomposition.
* Revert "Replacing the generator with numpy vector operations from lu_decomposition."
This reverts commit ad217c66165898d62b76cc89ba09c2d7049b6448.
* Added type annotation.
---
maths/hardy_ramanujanalgo.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/maths/hardy_ramanujanalgo.py b/maths/hardy_ramanujanalgo.py
index 6929533fc389..31ec76fbe10b 100644
--- a/maths/hardy_ramanujanalgo.py
+++ b/maths/hardy_ramanujanalgo.py
@@ -4,7 +4,7 @@
import math
-def exact_prime_factor_count(n):
+def exact_prime_factor_count(n: int) -> int:
"""
>>> exact_prime_factor_count(51242183)
3
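For context, counting distinct prime factors can be cross-checked by plain trial division (a standalone sketch with an illustrative name; the file's Hardy-Ramanujan based implementation is not reproduced in this diff):

def prime_factor_count(n: int) -> int:
    # count distinct prime factors by trial division up to sqrt(n)
    count = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:  # whatever remains is a prime factor
        count += 1
    return count

assert prime_factor_count(51242183) == 3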
From 6643c955376174c307c982b1d5cc39778c40bea1 Mon Sep 17 00:00:00 2001
From: Adebisi Ahmed
Date: Thu, 5 Oct 2023 14:18:54 +0100
Subject: [PATCH 270/808] add gas station (#9446)
* feat: add gas station
* make code more readable
make code more readable
* update test
* Update gas_station.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* tuple[GasStation, ...]
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: Christian Clauss
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
greedy_methods/gas_station.py | 97 +++++++++++++++++++++++++++++++++++
1 file changed, 97 insertions(+)
create mode 100644 greedy_methods/gas_station.py
diff --git a/greedy_methods/gas_station.py b/greedy_methods/gas_station.py
new file mode 100644
index 000000000000..2427375d2664
--- /dev/null
+++ b/greedy_methods/gas_station.py
@@ -0,0 +1,97 @@
+"""
+Task:
+There are n gas stations along a circular route, where the amount of gas
+at the ith station is gas_quantities[i].
+
+You have a car with an unlimited gas tank and it costs costs[i] of gas
+to travel from the ith station to its next (i + 1)th station.
+You begin the journey with an empty tank at one of the gas stations.
+
+Given two integer arrays gas_quantities and costs, return the starting
+gas station's index if you can travel around the circuit once
+in the clockwise direction otherwise, return -1.
+If there exists a solution, it is guaranteed to be unique
+
+Reference: https://leetcode.com/problems/gas-station/description
+
+Implementation notes:
+First, check whether the total gas is enough to complete the journey. If not, return -1.
+However, if there is enough gas, it is guaranteed that there is a valid
+starting index to reach the end of the journey.
+Greedily calculate the net gain (gas_quantity - cost) at each station.
+If the net gain ever goes below 0 while iterating through the stations,
+start checking from the next station.
+
+"""
+from dataclasses import dataclass
+
+
+@dataclass
+class GasStation:
+ gas_quantity: int
+ cost: int
+
+
+def get_gas_stations(
+ gas_quantities: list[int], costs: list[int]
+) -> tuple[GasStation, ...]:
+ """
+ This function returns a tuple of gas stations.
+
+ Args:
+ gas_quantities: Amount of gas available at each station
+ costs: The cost of gas required to move from one station to the next
+
+ Returns:
+ A tuple of gas stations
+
+ >>> gas_stations = get_gas_stations([1, 2, 3, 4, 5], [3, 4, 5, 1, 2])
+ >>> len(gas_stations)
+ 5
+ >>> gas_stations[0]
+ GasStation(gas_quantity=1, cost=3)
+ >>> gas_stations[-1]
+ GasStation(gas_quantity=5, cost=2)
+ """
+ return tuple(
+ GasStation(quantity, cost) for quantity, cost in zip(gas_quantities, costs)
+ )
+
+
+def can_complete_journey(gas_stations: tuple[GasStation, ...]) -> int:
+ """
+ This function returns the index from which to start the journey
+ in order to reach the end.
+
+ Args:
+ gas_stations: A tuple of GasStation objects, each holding the gas
+ available at that station and the cost to travel to the next station
+
+ Returns:
+ start [int]: start index needed to complete the journey, or -1 if it cannot be done
+
+ Examples:
+ >>> can_complete_journey(get_gas_stations([1, 2, 3, 4, 5], [3, 4, 5, 1, 2]))
+ 3
+ >>> can_complete_journey(get_gas_stations([2, 3, 4], [3, 4, 3]))
+ -1
+ """
+ total_gas = sum(gas_station.gas_quantity for gas_station in gas_stations)
+ total_cost = sum(gas_station.cost for gas_station in gas_stations)
+ if total_gas < total_cost:
+ return -1
+
+ start = 0
+ net = 0
+ for i, gas_station in enumerate(gas_stations):
+ net += gas_station.gas_quantity - gas_station.cost
+ if net < 0:
+ start = i + 1
+ net = 0
+ return start
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
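The greedy single pass above can be cross-checked by simulating the drive from every start index (a self-contained O(n^2) sketch; brute_force_start is an illustrative name, not part of the patch):

def brute_force_start(gas_quantities: list[int], costs: list[int]) -> int:
    n = len(gas_quantities)
    for start in range(n):
        tank = 0
        for step in range(n):
            i = (start + step) % n
            tank += gas_quantities[i] - costs[i]
            if tank < 0:
                break  # ran dry; this start index fails
        else:
            return start
    return -1

assert brute_force_start([1, 2, 3, 4, 5], [3, 4, 5, 1, 2]) == 3
assert brute_force_start([2, 3, 4], [3, 4, 3]) == -1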
From 55ee273419ae76ddeda250374921644615b88393 Mon Sep 17 00:00:00 2001
From: Wei Jiang <42140605+Jiang15@users.noreply.github.com>
Date: Thu, 5 Oct 2023 16:00:48 +0200
Subject: [PATCH 271/808] [bug fixing] Edge case of the double ended queue
(#9823)
* fix the edge case of the double ended queue pop the last element
* refactoring doc
---------
Co-authored-by: Jiang15
---
data_structures/queue/double_ended_queue.py | 62 +++++++++++++++------
1 file changed, 45 insertions(+), 17 deletions(-)
diff --git a/data_structures/queue/double_ended_queue.py b/data_structures/queue/double_ended_queue.py
index 44dc863b9a4e..17a23038d288 100644
--- a/data_structures/queue/double_ended_queue.py
+++ b/data_structures/queue/double_ended_queue.py
@@ -242,12 +242,20 @@ def pop(self) -> Any:
Removes the last element of the deque and returns it.
Time complexity: O(1)
@returns topop.val: the value of the node to pop.
- >>> our_deque = Deque([1, 2, 3, 15182])
- >>> our_popped = our_deque.pop()
- >>> our_popped
+ >>> our_deque1 = Deque([1])
+ >>> our_popped1 = our_deque1.pop()
+ >>> our_popped1
+ 1
+ >>> our_deque1
+ []
+
+ >>> our_deque2 = Deque([1, 2, 3, 15182])
+ >>> our_popped2 = our_deque2.pop()
+ >>> our_popped2
15182
- >>> our_deque
+ >>> our_deque2
[1, 2, 3]
+
>>> from collections import deque
>>> deque_collections = deque([1, 2, 3, 15182])
>>> collections_popped = deque_collections.pop()
@@ -255,18 +263,24 @@ def pop(self) -> Any:
15182
>>> deque_collections
deque([1, 2, 3])
- >>> list(our_deque) == list(deque_collections)
+ >>> list(our_deque2) == list(deque_collections)
True
- >>> our_popped == collections_popped
+ >>> our_popped2 == collections_popped
True
"""
# make sure the deque has elements to pop
assert not self.is_empty(), "Deque is empty."
topop = self._back
- self._back = self._back.prev_node # set new back
- # drop the last node - python will deallocate memory automatically
- self._back.next_node = None
+ # if only one element in the queue: point the front and back to None
+ # else remove one element from back
+ if self._front == self._back:
+ self._front = None
+ self._back = None
+ else:
+ self._back = self._back.prev_node # set new back
+ # drop the last node, python will deallocate memory automatically
+ self._back.next_node = None
self._len -= 1
@@ -277,11 +291,17 @@ def popleft(self) -> Any:
Removes the first element of the deque and returns it.
Time complexity: O(1)
@returns topop.val: the value of the node to pop.
- >>> our_deque = Deque([15182, 1, 2, 3])
- >>> our_popped = our_deque.popleft()
- >>> our_popped
+ >>> our_deque1 = Deque([1])
+ >>> our_popped1 = our_deque1.pop()
+ >>> our_popped1
+ 1
+ >>> our_deque1
+ []
+ >>> our_deque2 = Deque([15182, 1, 2, 3])
+ >>> our_popped2 = our_deque2.popleft()
+ >>> our_popped2
15182
- >>> our_deque
+ >>> our_deque2
[1, 2, 3]
>>> from collections import deque
>>> deque_collections = deque([15182, 1, 2, 3])
@@ -290,17 +310,23 @@ def popleft(self) -> Any:
15182
>>> deque_collections
deque([1, 2, 3])
- >>> list(our_deque) == list(deque_collections)
+ >>> list(our_deque2) == list(deque_collections)
True
- >>> our_popped == collections_popped
+ >>> our_popped2 == collections_popped
True
"""
# make sure the deque has elements to pop
assert not self.is_empty(), "Deque is empty."
topop = self._front
- self._front = self._front.next_node # set new front and drop the first node
- self._front.prev_node = None
+ # if only one element in the queue: point the front and back to None
+ # else remove one element from front
+ if self._front == self._back:
+ self._front = None
+ self._back = None
+ else:
+ self._front = self._front.next_node # set new front and drop the first node
+ self._front.prev_node = None
self._len -= 1
@@ -432,3 +458,5 @@ def __repr__(self) -> str:
import doctest
doctest.testmod()
+ dq = Deque([3])
+ dq.pop()
From deb0480b3a07e50b93f88d4351d1fce000574d05 Mon Sep 17 00:00:00 2001
From: Aasheesh <126905285+AasheeshLikePanner@users.noreply.github.com>
Date: Thu, 5 Oct 2023 19:37:44 +0530
Subject: [PATCH 272/808] Changing the directory of sigmoid_linear_unit.py
(#9824)
* Changing the directory of sigmoid_linear_unit.py
* Delete neural_network/activation_functions/__init__.py
---------
Co-authored-by: Tianyi Zheng
---
.../activation_functions}/sigmoid_linear_unit.py | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename {maths => neural_network/activation_functions}/sigmoid_linear_unit.py (100%)
diff --git a/maths/sigmoid_linear_unit.py b/neural_network/activation_functions/sigmoid_linear_unit.py
similarity index 100%
rename from maths/sigmoid_linear_unit.py
rename to neural_network/activation_functions/sigmoid_linear_unit.py
From 87494f1fa1022368d154477bdc035fd01f9e4382 Mon Sep 17 00:00:00 2001
From: Parth <100679824+pa-kh039@users.noreply.github.com>
Date: Thu, 5 Oct 2023 21:51:28 +0530
Subject: [PATCH 273/808] largest divisible subset (#9825)
* largest divisible subset
* minor tweaks
* adding more test cases
Co-authored-by: Christian Clauss
* improving code for better readability
Co-authored-by: Christian Clauss
* update
Co-authored-by: Christian Clauss
* update
Co-authored-by: Christian Clauss
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* suggested changes done, and further modfications
* final update
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update largest_divisible_subset.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update largest_divisible_subset.py
---------
Co-authored-by: Christian Clauss
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
.../largest_divisible_subset.py | 74 +++++++++++++++++++
1 file changed, 74 insertions(+)
create mode 100644 dynamic_programming/largest_divisible_subset.py
diff --git a/dynamic_programming/largest_divisible_subset.py b/dynamic_programming/largest_divisible_subset.py
new file mode 100644
index 000000000000..db38636e29db
--- /dev/null
+++ b/dynamic_programming/largest_divisible_subset.py
@@ -0,0 +1,74 @@
+from __future__ import annotations
+
+
+def largest_divisible_subset(items: list[int]) -> list[int]:
+ """
+ Algorithm to find the biggest subset in the given array such that for any 2 elements
+ x and y in the subset, either x divides y or y divides x.
+ >>> largest_divisible_subset([1, 16, 7, 8, 4])
+ [16, 8, 4, 1]
+ >>> largest_divisible_subset([1, 2, 3])
+ [2, 1]
+ >>> largest_divisible_subset([-1, -2, -3])
+ [-3]
+ >>> largest_divisible_subset([1, 2, 4, 8])
+ [8, 4, 2, 1]
+ >>> largest_divisible_subset((1, 2, 4, 8))
+ [8, 4, 2, 1]
+ >>> largest_divisible_subset([1, 1, 1])
+ [1, 1, 1]
+ >>> largest_divisible_subset([0, 0, 0])
+ [0, 0, 0]
+ >>> largest_divisible_subset([-1, -1, -1])
+ [-1, -1, -1]
+ >>> largest_divisible_subset([])
+ []
+ """
+ # Sort the array in ascending order; the original order does not matter
+ # because we only need to pick a subset.
+ items = sorted(items)
+
+ number_of_items = len(items)
+
+ # Initialize memo with 1s and hash with increasing numbers
+ memo = [1] * number_of_items
+ hash_array = list(range(number_of_items))
+
+ # Iterate through the array
+ for i, item in enumerate(items):
+ for prev_index in range(i):
+ # a zero can only chain with another zero; otherwise test divisibility
+ divides = (
+ item % items[prev_index] == 0 if items[prev_index] != 0 else item == 0
+ )
+ if divides and 1 + memo[prev_index] > memo[i]:
+ memo[i] = 1 + memo[prev_index]
+ hash_array[i] = prev_index
+
+ ans = -1
+ last_index = -1
+
+ # Find the maximum length and its corresponding index
+ for i, memo_item in enumerate(memo):
+ if memo_item > ans:
+ ans = memo_item
+ last_index = i
+
+ # Reconstruct the divisible subset
+ if last_index == -1:
+ return []
+ result = [items[last_index]]
+ while hash_array[last_index] != last_index:
+ last_index = hash_array[last_index]
+ result.append(items[last_index])
+
+ return result
+
+
+if __name__ == "__main__":
+ from doctest import testmod
+
+ testmod()
+
+ items = [1, 16, 7, 8, 4]
+ print(
+ f"The longest divisible subset of {items} is {largest_divisible_subset(items)}."
+ )
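An exhaustive cross-check for small non-negative inputs (a sketch, exponential time; brute_force_lds_length is an illustrative name). In a sorted subset of non-negative numbers, checking divisibility between adjacent elements suffices because divisibility is transitive:

from itertools import combinations

def brute_force_lds_length(items: list[int]) -> int:
    for r in range(len(items), 0, -1):
        for subset in combinations(sorted(items), r):
            if all(
                (x != 0 and y % x == 0) or x == y == 0
                for x, y in zip(subset, subset[1:])
            ):
                return r
    return 0

assert brute_force_lds_length([1, 16, 7, 8, 4]) == 4  # e.g. {1, 4, 8, 16}
assert brute_force_lds_length([1, 2, 3]) == 2         # e.g. {1, 2}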
From b76115e8d184fbad1d6c400fcdd964e821f09e9b Mon Sep 17 00:00:00 2001
From: Pronay Debnath
Date: Thu, 5 Oct 2023 23:03:05 +0530
Subject: [PATCH 274/808] Updated check_bipartite_graph_dfs.py (#9525)
* Create dijkstra_algorithm.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update dijkstra_algorithm.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update dijkstra_algorithm.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update dijkstra_algorithm.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Delete greedy_methods/dijkstra_algorithm.py
* Update check_bipartite_graph_dfs.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update check_bipartite_graph_dfs.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update graphs/check_bipartite_graph_dfs.py
Co-authored-by: Christian Clauss
* Update graphs/check_bipartite_graph_dfs.py
Co-authored-by: Christian Clauss
* Update check_bipartite_graph_dfs.py
* Update check_bipartite_graph_dfs.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update check_bipartite_graph_dfs.py
* Update check_bipartite_graph_dfs.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update check_bipartite_graph_dfs.py
* Update check_bipartite_graph_dfs.py
* Update check_bipartite_graph_dfs.py
* Let's use self-documenting variable names
This is complex code so let's use self-documenting function and variable names to help readers understand.
We should not shorten names to simplify the code formatting but use understandable names and leave the code formatting to psf/black.
I am not sure if `nbor` was supposed to be `neighbour`. ;-)
* Update check_bipartite_graph_dfs.py
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
graphs/check_bipartite_graph_dfs.py | 73 +++++++++++++++++++----------
1 file changed, 47 insertions(+), 26 deletions(-)
diff --git a/graphs/check_bipartite_graph_dfs.py b/graphs/check_bipartite_graph_dfs.py
index fd644230449c..b13a9eb95afb 100644
--- a/graphs/check_bipartite_graph_dfs.py
+++ b/graphs/check_bipartite_graph_dfs.py
@@ -1,34 +1,55 @@
-# Check whether Graph is Bipartite or Not using DFS
+from collections import defaultdict
-# A Bipartite Graph is a graph whose vertices can be divided into two independent sets,
-# U and V such that every edge (u, v) either connects a vertex from U to V or a vertex
-# from V to U. In other words, for every edge (u, v), either u belongs to U and v to V,
-# or u belongs to V and v to U. We can also say that there is no edge that connects
-# vertices of same set.
-def check_bipartite_dfs(graph):
- visited = [False] * len(graph)
- color = [-1] * len(graph)
+def is_bipartite(graph: defaultdict[int, list[int]]) -> bool:
+ """
+ Check whether a graph is Bipartite or not using Depth-First Search (DFS).
- def dfs(v, c):
- visited[v] = True
- color[v] = c
- for u in graph[v]:
- if not visited[u]:
- dfs(u, 1 - c)
+ A Bipartite Graph is a graph whose vertices can be divided into two independent
+ sets, U and V such that every edge (u, v) either connects a vertex from
+ U to V or a vertex from V to U. In other words, for every edge (u, v),
+ either u belongs to U and v to V, or u belongs to V and v to U. There is
+ no edge that connects vertices of the same set.
- for i in range(len(graph)):
- if not visited[i]:
- dfs(i, 0)
+ Args:
+ graph: An adjacency list representing the graph.
- for i in range(len(graph)):
- for j in graph[i]:
- if color[i] == color[j]:
- return False
+ Returns:
+ True if there's no edge that connects vertices of the same set, False otherwise.
- return True
+ Examples:
+ >>> is_bipartite(
+ ... defaultdict(list, {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1], 4: [2]})
+ ... )
+ True
+ >>> is_bipartite(defaultdict(list, {0: [1, 2], 1: [0, 2], 2: [0, 1]}))
+ False
+ """
+ def depth_first_search(node: int, color: int) -> bool:
+ visited[node] = color
+ return any(
+ visited[neighbour] == color
+ or (
+ visited[neighbour] == -1
+ and not depth_first_search(neighbour, 1 - color)
+ )
+ for neighbour in graph[node]
+ )
-# Adjacency list of graph
-graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2], 4: []}
-print(check_bipartite_dfs(graph))
+ visited: defaultdict[int, int] = defaultdict(lambda: -1)
+
+ return all(
+ visited[node] != -1 or depth_first_search(node, 0) for node in graph
+ )
+
+
+if __name__ == "__main__":
+ import doctest
+
+ result = doctest.testmod()
+
+ if result.failed:
+ print(f"{result.failed} test(s) failed.")
+ else:
+ print("All tests passed!")
From cffdf99c55dcda89a5ce0fb2bf3cb685d168d136 Mon Sep 17 00:00:00 2001
From: Muhammad Umer Farooq <115654418+Muhammadummerr@users.noreply.github.com>
Date: Thu, 5 Oct 2023 23:44:55 +0500
Subject: [PATCH 275/808] Updated prime_numbers.py testcases. (#9851)
* Updated prime_numbers.py testcases.
* revert __main__ code.
---
maths/prime_numbers.py | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/maths/prime_numbers.py b/maths/prime_numbers.py
index c5297ed9264c..38cc6670385d 100644
--- a/maths/prime_numbers.py
+++ b/maths/prime_numbers.py
@@ -17,8 +17,8 @@ def slow_primes(max_n: int) -> Generator[int, None, None]:
[2, 3, 5, 7, 11]
>>> list(slow_primes(33))
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31]
- >>> list(slow_primes(10000))[-1]
- 9973
+ >>> list(slow_primes(1000))[-1]
+ 997
"""
numbers: Generator = (i for i in range(1, (max_n + 1)))
for i in (n for n in numbers if n > 1):
@@ -44,8 +44,8 @@ def primes(max_n: int) -> Generator[int, None, None]:
[2, 3, 5, 7, 11]
>>> list(primes(33))
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31]
- >>> list(primes(10000))[-1]
- 9973
+ >>> list(primes(1000))[-1]
+ 997
"""
numbers: Generator = (i for i in range(1, (max_n + 1)))
for i in (n for n in numbers if n > 1):
@@ -73,8 +73,8 @@ def fast_primes(max_n: int) -> Generator[int, None, None]:
[2, 3, 5, 7, 11]
>>> list(fast_primes(33))
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31]
- >>> list(fast_primes(10000))[-1]
- 9973
+ >>> list(fast_primes(1000))[-1]
+ 997
"""
numbers: Generator = (i for i in range(1, (max_n + 1), 2))
# It's useless to test even numbers as they will not be prime
From 5869fda74245b55a3bda4ccc5ac62a84ab40766f Mon Sep 17 00:00:00 2001
From: Christian Clauss
Date: Thu, 5 Oct 2023 23:55:13 +0200
Subject: [PATCH 276/808] print reverse: A LinkedList with a tail pointer
(#9875)
* print reverse: A LinkedList with a tail pointer
* updating DIRECTORY.md
---------
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
---
DIRECTORY.md | 7 +-
data_structures/linked_list/print_reverse.py | 134 +++++++++++++------
2 files changed, 101 insertions(+), 40 deletions(-)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index b0ba3c3852da..c199a4329202 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -50,6 +50,7 @@
* [Index Of Rightmost Set Bit](bit_manipulation/index_of_rightmost_set_bit.py)
* [Is Even](bit_manipulation/is_even.py)
* [Is Power Of Two](bit_manipulation/is_power_of_two.py)
+ * [Largest Pow Of Two Le Num](bit_manipulation/largest_pow_of_two_le_num.py)
* [Missing Number](bit_manipulation/missing_number.py)
* [Numbers Different Signs](bit_manipulation/numbers_different_signs.py)
* [Reverse Bits](bit_manipulation/reverse_bits.py)
@@ -322,6 +323,7 @@
* [Integer Partition](dynamic_programming/integer_partition.py)
* [Iterating Through Submasks](dynamic_programming/iterating_through_submasks.py)
* [Knapsack](dynamic_programming/knapsack.py)
+ * [Largest Divisible Subset](dynamic_programming/largest_divisible_subset.py)
* [Longest Common Subsequence](dynamic_programming/longest_common_subsequence.py)
* [Longest Common Substring](dynamic_programming/longest_common_substring.py)
* [Longest Increasing Subsequence](dynamic_programming/longest_increasing_subsequence.py)
@@ -460,6 +462,7 @@
## Greedy Methods
* [Fractional Knapsack](greedy_methods/fractional_knapsack.py)
* [Fractional Knapsack 2](greedy_methods/fractional_knapsack_2.py)
+ * [Gas Station](greedy_methods/gas_station.py)
* [Minimum Waiting Time](greedy_methods/minimum_waiting_time.py)
* [Optimal Merge Pattern](greedy_methods/optimal_merge_pattern.py)
@@ -542,6 +545,7 @@
* [Average Median](maths/average_median.py)
* [Average Mode](maths/average_mode.py)
* [Bailey Borwein Plouffe](maths/bailey_borwein_plouffe.py)
+ * [Base Neg2 Conversion](maths/base_neg2_conversion.py)
* [Basic Maths](maths/basic_maths.py)
* [Bell Numbers](maths/bell_numbers.py)
* [Binary Exp Mod](maths/binary_exp_mod.py)
@@ -657,7 +661,6 @@
* [P Series](maths/series/p_series.py)
* [Sieve Of Eratosthenes](maths/sieve_of_eratosthenes.py)
* [Sigmoid](maths/sigmoid.py)
- * [Sigmoid Linear Unit](maths/sigmoid_linear_unit.py)
* [Signum](maths/signum.py)
* [Simpson Rule](maths/simpson_rule.py)
* [Simultaneous Linear Equation Solver](maths/simultaneous_linear_equation_solver.py)
@@ -716,6 +719,7 @@
* [Leaky Rectified Linear Unit](neural_network/activation_functions/leaky_rectified_linear_unit.py)
* [Rectified Linear Unit](neural_network/activation_functions/rectified_linear_unit.py)
* [Scaled Exponential Linear Unit](neural_network/activation_functions/scaled_exponential_linear_unit.py)
+ * [Sigmoid Linear Unit](neural_network/activation_functions/sigmoid_linear_unit.py)
* [Back Propagation Neural Network](neural_network/back_propagation_neural_network.py)
* [Convolution Neural Network](neural_network/convolution_neural_network.py)
* [Perceptron](neural_network/perceptron.py)
@@ -1180,6 +1184,7 @@
* [Naive String Search](strings/naive_string_search.py)
* [Ngram](strings/ngram.py)
* [Palindrome](strings/palindrome.py)
+ * [Pig Latin](strings/pig_latin.py)
* [Prefix Function](strings/prefix_function.py)
* [Rabin Karp](strings/rabin_karp.py)
* [Remove Duplicate](strings/remove_duplicate.py)
diff --git a/data_structures/linked_list/print_reverse.py b/data_structures/linked_list/print_reverse.py
index f83d5607ffdd..a023745dee69 100644
--- a/data_structures/linked_list/print_reverse.py
+++ b/data_structures/linked_list/print_reverse.py
@@ -1,22 +1,91 @@
from __future__ import annotations
+from collections.abc import Iterable, Iterator
+from dataclasses import dataclass
+
+@dataclass
class Node:
- def __init__(self, data=None):
- self.data = data
- self.next = None
+ data: int
+ next_node: Node | None = None
+
+
+class LinkedList:
+ """A class to represent a Linked List.
+ Use a tail pointer to speed up the append() operation.
+ """
+
+ def __init__(self) -> None:
+ """Initialize a LinkedList with the head node set to None.
+ >>> linked_list = LinkedList()
+ >>> (linked_list.head, linked_list.tail)
+ (None, None)
+ """
+ self.head: Node | None = None
+ self.tail: Node | None = None # Speeds up the append() operation
+
+ def __iter__(self) -> Iterator[int]:
+ """Iterate the LinkedList yielding each Node's data.
+ >>> linked_list = LinkedList()
+ >>> items = (1, 2, 3, 4, 5)
+ >>> linked_list.extend(items)
+ >>> tuple(linked_list) == items
+ True
+ """
+ node = self.head
+ while node:
+ yield node.data
+ node = node.next_node
+
+ def __repr__(self) -> str:
+ """Returns a string representation of the LinkedList.
+ >>> linked_list = LinkedList()
+ >>> str(linked_list)
+ ''
+ >>> linked_list.append(1)
+ >>> str(linked_list)
+ '1'
+ >>> linked_list.extend([2, 3, 4, 5])
+ >>> str(linked_list)
+ '1 -> 2 -> 3 -> 4 -> 5'
+ """
+ return " -> ".join([str(data) for data in self])
- def __repr__(self):
- """Returns a visual representation of the node and all its following nodes."""
- string_rep = []
- temp = self
- while temp:
- string_rep.append(f"{temp.data}")
- temp = temp.next
- return "->".join(string_rep)
+ def append(self, data: int) -> None:
+ """Appends a new node with the given data to the end of the LinkedList.
+ >>> linked_list = LinkedList()
+ >>> str(linked_list)
+ ''
+ >>> linked_list.append(1)
+ >>> str(linked_list)
+ '1'
+ >>> linked_list.append(2)
+ >>> str(linked_list)
+ '1 -> 2'
+ """
+ if self.tail:
+ self.tail.next_node = self.tail = Node(data)
+ else:
+ self.head = self.tail = Node(data)
+ def extend(self, items: Iterable[int]) -> None:
+ """Appends each item to the end of the LinkedList.
+ >>> linked_list = LinkedList()
+ >>> linked_list.extend([])
+ >>> str(linked_list)
+ ''
+ >>> linked_list.extend([1, 2])
+ >>> str(linked_list)
+ '1 -> 2'
+ >>> linked_list.extend([3,4])
+ >>> str(linked_list)
+ '1 -> 2 -> 3 -> 4'
+ """
+ for item in items:
+ self.append(item)
-def make_linked_list(elements_list: list):
+
+def make_linked_list(elements_list: Iterable[int]) -> LinkedList:
"""Creates a Linked List from the elements of the given sequence
(list/tuple) and returns the head of the Linked List.
>>> make_linked_list([])
@@ -28,43 +97,30 @@ def make_linked_list(elements_list: list):
>>> make_linked_list(['abc'])
abc
>>> make_linked_list([7, 25])
- 7->25
+ 7 -> 25
"""
if not elements_list:
raise Exception("The Elements List is empty")
- current = head = Node(elements_list[0])
- for i in range(1, len(elements_list)):
- current.next = Node(elements_list[i])
- current = current.next
- return head
+ linked_list = LinkedList()
+ linked_list.extend(elements_list)
+ return linked_list
-def print_reverse(head_node: Node) -> None:
+def in_reverse(linked_list: LinkedList) -> str:
"""Prints the elements of the given Linked List in reverse order
- >>> print_reverse([])
- >>> linked_list = make_linked_list([69, 88, 73])
- >>> print_reverse(linked_list)
- 73
- 88
- 69
+ >>> in_reverse(LinkedList())
+ ''
+ >>> in_reverse(make_linked_list([69, 88, 73]))
+ '73 <- 88 <- 69'
"""
- if head_node is not None and isinstance(head_node, Node):
- print_reverse(head_node.next)
- print(head_node.data)
+ return " <- ".join(str(line) for line in reversed(tuple(linked_list)))
-def main():
+if __name__ == "__main__":
from doctest import testmod
testmod()
-
- linked_list = make_linked_list([14, 52, 14, 12, 43])
- print("Linked List:")
- print(linked_list)
- print("Elements in Reverse:")
- print_reverse(linked_list)
-
-
-if __name__ == "__main__":
- main()
+ linked_list = make_linked_list((14, 52, 14, 12, 43))
+ print(f"Linked List: {linked_list}")
+ print(f"Reverse List: {in_reverse(linked_list)}")
From 7f94a73eec45edfd215e8f07148c9c657b4e4b89 Mon Sep 17 00:00:00 2001
From: Marek Mazij <112333347+Mrk-Mzj@users.noreply.github.com>
Date: Fri, 6 Oct 2023 00:05:23 +0200
Subject: [PATCH 277/808] camelCase to snake_case conversion - Fixes #9726
(#9727)
* First commit
camel case to snake case conversion algorithm, including numbers
* code modified to not use regex
---
strings/camel_case_to_snake_case.py | 60 +++++++++++++++++++++++++++++
1 file changed, 60 insertions(+)
create mode 100644 strings/camel_case_to_snake_case.py
diff --git a/strings/camel_case_to_snake_case.py b/strings/camel_case_to_snake_case.py
new file mode 100644
index 000000000000..582907be2edb
--- /dev/null
+++ b/strings/camel_case_to_snake_case.py
@@ -0,0 +1,60 @@
+def camel_to_snake_case(input_str: str) -> str:
+ """
+ Transforms a camelCase (or PascalCase) string to snake_case
+
+ >>> camel_to_snake_case("someRandomString")
+ 'some_random_string'
+
+ >>> camel_to_snake_case("SomeRandomStr#ng")
+ 'some_random_str_ng'
+
+ >>> camel_to_snake_case("123someRandom123String123")
+ '123_some_random_123_string_123'
+
+ >>> camel_to_snake_case("123SomeRandom123String123")
+ '123_some_random_123_string_123'
+
+ >>> camel_to_snake_case(123)
+ Traceback (most recent call last):
+ ...
+ ValueError: Expected string as input, found <class 'int'>
+
+ """
+
+ # check for invalid input type
+ if not isinstance(input_str, str):
+ msg = f"Expected string as input, found {type(input_str)}"
+ raise ValueError(msg)
+
+ snake_str = ""
+
+ for index, char in enumerate(input_str):
+ if char.isupper():
+ snake_str += "_" + char.lower()
+
+ # if char is lowercase but preceded by a digit:
+ elif input_str[index - 1].isdigit() and char.islower():
+ snake_str += "_" + char
+
+ # if char is a digit preceded by a letter:
+ elif input_str[index - 1].isalpha() and char.isnumeric():
+ snake_str += "_" + char.lower()
+
+ # if char is not alphanumeric:
+ elif not char.isalnum():
+ snake_str += "_"
+
+ else:
+ snake_str += char
+
+ # remove leading underscore; startswith() also guards the empty string
+ if snake_str.startswith("_"):
+ snake_str = snake_str[1:]
+
+ return snake_str
+
+
+if __name__ == "__main__":
+ from doctest import testmod
+
+ testmod()
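For plain camelCase without digits or symbols, the conversion can also be done with a single regex substitution (a sketch with an illustrative name; the character-by-character version above additionally handles digits and non-alphanumeric characters):

import re

def camel_to_snake_case_regex(input_str: str) -> str:
    # insert "_" before every uppercase letter that is not at the start
    return re.sub(r"(?<!^)(?=[A-Z])", "_", input_str).lower()

assert camel_to_snake_case_regex("someRandomString") == "some_random_string"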
From 13317e4f7f260f59e6e53595f802c9d12ec0db4a Mon Sep 17 00:00:00 2001
From: Akshay B Shetty <107768228+NinjaSoulPirate@users.noreply.github.com>
Date: Fri, 6 Oct 2023 03:57:13 +0530
Subject: [PATCH 278/808] feat: :sparkles: calculating the resistance of a
 resistor using color codes (#9874)
---
electronics/resistor_color_code.py | 373 +++++++++++++++++++++++++++++
1 file changed, 373 insertions(+)
create mode 100644 electronics/resistor_color_code.py
diff --git a/electronics/resistor_color_code.py b/electronics/resistor_color_code.py
new file mode 100644
index 000000000000..b0534b813def
--- /dev/null
+++ b/electronics/resistor_color_code.py
@@ -0,0 +1,373 @@
+"""
+Title : Calculating the resistance of an n-band resistor using the color codes
+
+Description :
+ Resistors resist the flow of electrical current. Each one has a value that tells how
+ strongly it resists current flow. This value's unit is the ohm, often noted with the
+ Greek letter omega: Ω.
+
+ The colored bands on a resistor can tell you everything you need to know about its
+ value and tolerance, as long as you understand how to read them. The order in which
+ the colors are arranged is very important, and each value of resistor has its own
+ unique combination.
+
+ The color coding for resistors is an international standard that is defined in IEC
+ 60062.
+
+ The number of bands present in a resistor varies from three to six. These represent
+ significant figures, multiplier, tolerance, reliability, and temperature coefficient.
+ Each color used for a type of band has a value assigned to it. It is read from left
+ to right.
+ All resistors will have significant figures and multiplier bands. In a three band
+ resistor, the first two bands from the left represent significant figures and the third
+ represents the multiplier band.
+
+ Significant figures - The number of significant figure bands in a resistor can vary
+ from two to three.
+ Colors and values associated with significant figure bands -
+ (Black = 0, Brown = 1, Red = 2, Orange = 3, Yellow = 4, Green = 5, Blue = 6,
+ Violet = 7, Grey = 8, White = 9)
+
+ Multiplier - There will be one multiplier band in a resistor. It is multiplied with
+ the significant figures obtained from previous bands.
+ Colors and values associated with multiplier band -
+ (Black = 10^0, Brown = 10^1, Red = 10^2, Orange = 10^3, Yellow = 10^4, Green = 10^5,
+ Blue = 10^6, Violet = 10^7, Grey = 10^8, White = 10^9, Gold = 10^-1, Silver = 10^-2)
+ Note that multiplier bands use Gold and Silver which are not used for significant
+ figure bands.
+
+ Tolerance - The tolerance band is not always present. It can be seen in four band
+ resistors and above. This is a percentage by which the resistor value can vary.
+ Colors and values associated with tolerance band -
+ (Brown = 1%, Red = 2%, Orange = 0.05%, Yellow = 0.02%, Green = 0.5%,Blue = 0.25%,
+ Violet = 0.1%, Grey = 0.01%, Gold = 5%, Silver = 10%)
+ If no color is mentioned then by default tolerance is 20%
+ Note that tolerance band does not use Black and White colors.
+
+ Temperature Coefficient - Indicates the change in resistance of the component as
+ a function of ambient temperature in terms of ppm/K.
+ It is present in six band resistors.
+ Colors and values associated with Temperature coeffecient -
+ (Black = 250 ppm/K, Brown = 100 ppm/K, Red = 50 ppm/K, Orange = 15 ppm/K,
+ Yellow = 25 ppm/K, Green = 20 ppm/K, Blue = 10 ppm/K, Violet = 5 ppm/K,
+ Grey = 1 ppm/K)
+ Note that temperature coeffecient band does not use White, Gold, Silver colors.
+
+Sources :
+ https://www.calculator.net/resistor-calculator.html
+ https://learn.parallax.com/support/reference/resistor-color-codes
+ https://byjus.com/physics/resistor-colour-codes/
+"""
+valid_colors: list[str] = [
+ "Black",
+ "Brown",
+ "Red",
+ "Orange",
+ "Yellow",
+ "Green",
+ "Blue",
+ "Violet",
+ "Grey",
+ "White",
+ "Gold",
+ "Silver",
+]
+
+significant_figures_color_values: dict[str, int] = {
+ "Black": 0,
+ "Brown": 1,
+ "Red": 2,
+ "Orange": 3,
+ "Yellow": 4,
+ "Green": 5,
+ "Blue": 6,
+ "Violet": 7,
+ "Grey": 8,
+ "White": 9,
+}
+
+multiplier_color_values: dict[str, float] = {
+ "Black": 10**0,
+ "Brown": 10**1,
+ "Red": 10**2,
+ "Orange": 10**3,
+ "Yellow": 10**4,
+ "Green": 10**5,
+ "Blue": 10**6,
+ "Violet": 10**7,
+ "Grey": 10**8,
+ "White": 10**9,
+ "Gold": 10**-1,
+ "Silver": 10**-2,
+}
+
+tolerance_color_values: dict[str, float] = {
+ "Brown": 1,
+ "Red": 2,
+ "Orange": 0.05,
+ "Yellow": 0.02,
+ "Green": 0.5,
+ "Blue": 0.25,
+ "Violet": 0.1,
+ "Grey": 0.01,
+ "Gold": 5,
+ "Silver": 10,
+}
+
+temperature_coeffecient_color_values: dict[str, int] = {
+ "Black": 250,
+ "Brown": 100,
+ "Red": 50,
+ "Orange": 15,
+ "Yellow": 25,
+ "Green": 20,
+ "Blue": 10,
+ "Violet": 5,
+ "Grey": 1,
+}
+
+band_types: dict[int, dict[str, int]] = {
+ 3: {"significant": 2, "multiplier": 1},
+ 4: {"significant": 2, "multiplier": 1, "tolerance": 1},
+ 5: {"significant": 3, "multiplier": 1, "tolerance": 1},
+ 6: {"significant": 3, "multiplier": 1, "tolerance": 1, "temp_coeffecient": 1},
+}
+
+
+def get_significant_digits(colors: list) -> str:
+ """
+ Function returns the digit associated with the color. Function takes a
+ list containing colors as input and returns digits as string
+
+ >>> get_significant_digits(['Black','Blue'])
+ '06'
+
+ >>> get_significant_digits(['Aqua','Blue'])
+ Traceback (most recent call last):
+ ...
+ ValueError: Aqua is not a valid color for significant figure bands
+
+ """
+ digit = ""
+ for color in colors:
+ if color not in significant_figures_color_values:
+ msg = f"{color} is not a valid color for significant figure bands"
+ raise ValueError(msg)
+ digit = digit + str(significant_figures_color_values[color])
+    return digit
+
+
+def get_multiplier(color: str) -> float:
+ """
+ Function returns the multiplier value associated with the color.
+ Function takes color as input and returns multiplier value
+
+ >>> get_multiplier('Gold')
+ 0.1
+
+ >>> get_multiplier('Ivory')
+ Traceback (most recent call last):
+ ...
+ ValueError: Ivory is not a valid color for multiplier band
+
+ """
+ if color not in multiplier_color_values:
+ msg = f"{color} is not a valid color for multiplier band"
+ raise ValueError(msg)
+ return multiplier_color_values[color]
+
+
+def get_tolerance(color: str) -> float:
+ """
+ Function returns the tolerance value associated with the color.
+ Function takes color as input and returns tolerance value.
+
+ >>> get_tolerance('Green')
+ 0.5
+
+ >>> get_tolerance('Indigo')
+ Traceback (most recent call last):
+ ...
+ ValueError: Indigo is not a valid color for tolerance band
+
+ """
+ if color not in tolerance_color_values:
+ msg = f"{color} is not a valid color for tolerance band"
+ raise ValueError(msg)
+ return tolerance_color_values[color]
+
+
+def get_temperature_coeffecient(color: str) -> int:
+ """
+ Function returns the temperature coeffecient value associated with the color.
+ Function takes color as input and returns temperature coeffecient value.
+
+ >>> get_temperature_coeffecient('Yellow')
+ 25
+
+ >>> get_temperature_coeffecient('Cyan')
+ Traceback (most recent call last):
+ ...
+ ValueError: Cyan is not a valid color for temperature coeffecient band
+
+ """
+ if color not in temperature_coeffecient_color_values:
+ msg = f"{color} is not a valid color for temperature coeffecient band"
+ raise ValueError(msg)
+ return temperature_coeffecient_color_values[color]
+
+
+def get_band_type_count(total_number_of_bands: int, type_of_band: str) -> int:
+ """
+ Function returns the number of bands of a given type in a resistor with n bands
+ Function takes total_number_of_bands and type_of_band as input and returns
+ number of bands belonging to that type in the given resistor
+
+ >>> get_band_type_count(3,'significant')
+ 2
+
+ >>> get_band_type_count(2,'significant')
+ Traceback (most recent call last):
+ ...
+ ValueError: 2 is not a valid number of bands
+
+ >>> get_band_type_count(3,'sign')
+ Traceback (most recent call last):
+ ...
+ ValueError: sign is not valid for a 3 band resistor
+
+ >>> get_band_type_count(3,'tolerance')
+ Traceback (most recent call last):
+ ...
+ ValueError: tolerance is not valid for a 3 band resistor
+
+ >>> get_band_type_count(5,'temp_coeffecient')
+ Traceback (most recent call last):
+ ...
+ ValueError: temp_coeffecient is not valid for a 5 band resistor
+
+ """
+ if total_number_of_bands not in band_types:
+ msg = f"{total_number_of_bands} is not a valid number of bands"
+ raise ValueError(msg)
+ if type_of_band not in band_types[total_number_of_bands]:
+ msg = f"{type_of_band} is not valid for a {total_number_of_bands} band resistor"
+ raise ValueError(msg)
+ return band_types[total_number_of_bands][type_of_band]
+
+
+def check_validity(number_of_bands: int, colors: list) -> bool:
+ """
+ Function checks if the input provided is valid or not.
+ Function takes number_of_bands and colors as input and returns
+ True if it is valid
+
+ >>> check_validity(3, ["Black","Blue","Orange"])
+ True
+
+ >>> check_validity(4, ["Black","Blue","Orange"])
+ Traceback (most recent call last):
+ ...
+ ValueError: Expecting 4 colors, provided 3 colors
+
+ >>> check_validity(3, ["Cyan","Red","Yellow"])
+ Traceback (most recent call last):
+ ...
+ ValueError: Cyan is not a valid color
+
+ """
+    if 3 <= number_of_bands <= 6:
+ if number_of_bands == len(colors):
+ for color in colors:
+ if color not in valid_colors:
+ msg = f"{color} is not a valid color"
+ raise ValueError(msg)
+ return True
+ else:
+ msg = f"Expecting {number_of_bands} colors, provided {len(colors)} colors"
+ raise ValueError(msg)
+ else:
+ msg = "Invalid number of bands. Resistor bands must be 3 to 6"
+ raise ValueError(msg)
+
+
+def calculate_resistance(number_of_bands: int, color_code_list: list) -> dict:
+ """
+ Function calculates the total resistance of the resistor using the color codes.
+ Function takes number_of_bands, color_code_list as input and returns
+ resistance
+
+ >>> calculate_resistance(3, ["Black","Blue","Orange"])
+ {'resistance': '6000Ω ±20% '}
+
+ >>> calculate_resistance(4, ["Orange","Green","Blue","Gold"])
+ {'resistance': '35000000Ω ±5% '}
+
+ >>> calculate_resistance(5, ["Violet","Brown","Grey","Silver","Green"])
+ {'resistance': '7.18Ω ±0.5% '}
+
+ >>> calculate_resistance(6, ["Red","Green","Blue","Yellow","Orange","Grey"])
+ {'resistance': '2560000Ω ±0.05% 1 ppm/K'}
+
+ >>> calculate_resistance(0, ["Violet","Brown","Grey","Silver","Green"])
+ Traceback (most recent call last):
+ ...
+ ValueError: Invalid number of bands. Resistor bands must be 3 to 6
+
+ >>> calculate_resistance(4, ["Violet","Brown","Grey","Silver","Green"])
+ Traceback (most recent call last):
+ ...
+ ValueError: Expecting 4 colors, provided 5 colors
+
+ >>> calculate_resistance(4, ["Violet","Silver","Brown","Grey"])
+ Traceback (most recent call last):
+ ...
+ ValueError: Silver is not a valid color for significant figure bands
+
+ >>> calculate_resistance(4, ["Violet","Blue","Lime","Grey"])
+ Traceback (most recent call last):
+ ...
+ ValueError: Lime is not a valid color
+
+ """
+ is_valid = check_validity(number_of_bands, color_code_list)
+ if is_valid:
+ number_of_significant_bands = get_band_type_count(
+ number_of_bands, "significant"
+ )
+ significant_colors = color_code_list[:number_of_significant_bands]
+ significant_digits = int(get_significant_digits(significant_colors))
+ multiplier_color = color_code_list[number_of_significant_bands]
+ multiplier = get_multiplier(multiplier_color)
+ if number_of_bands == 3:
+ tolerance_color = None
+ else:
+ tolerance_color = color_code_list[number_of_significant_bands + 1]
+ tolerance = (
+ 20 if tolerance_color is None else get_tolerance(str(tolerance_color))
+ )
+ if number_of_bands != 6:
+ temperature_coeffecient_color = None
+ else:
+ temperature_coeffecient_color = color_code_list[
+ number_of_significant_bands + 2
+ ]
+ temperature_coeffecient = (
+ 0
+ if temperature_coeffecient_color is None
+ else get_temperature_coeffecient(str(temperature_coeffecient_color))
+ )
+        resistance = significant_digits * multiplier
+        if temperature_coeffecient == 0:
+            answer = f"{resistance}Ω ±{tolerance}% "
+        else:
+            answer = f"{resistance}Ω ±{tolerance}% {temperature_coeffecient} ppm/K"
+ return {"resistance": answer}
+ else:
+ raise ValueError("Input is invalid")
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
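As a quick sanity check, the tables above can be applied by hand and compared with calculate_resistance(); a minimal sketch, assuming the module is importable from the repository root:

    from electronics.resistor_color_code import calculate_resistance

    # 4-band resistor: Orange (3) and Green (5) give significant digits 35,
    # Blue is a 10**6 multiplier, Gold is a 5% tolerance.
    print(calculate_resistance(4, ["Orange", "Green", "Blue", "Gold"]))
    # {'resistance': '35000000Ω ±5% '}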
From b316a9612826905b963a465f0f02febaed761ccc Mon Sep 17 00:00:00 2001
From: Abul Hasan <33129246+haxkd@users.noreply.github.com>
Date: Fri, 6 Oct 2023 04:15:10 +0530
Subject: [PATCH 279/808] Match a pattern and String using backtracking (#9861)
* Fix: Issue 9588
* Fix: Issue 9588
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix: Issue 9588
* Fix: Issue #9588
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix: Issue #9588
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix: Issue #9588
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix: issue #9793
* fix: issue #9793
* fix: issue #9588
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix: issue #9844
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix: issue #9844
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix: issue #9844
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix: issue #9844
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
backtracking/match_word_pattern.py | 61 ++++++++++++++++++++++++++++++
1 file changed, 61 insertions(+)
create mode 100644 backtracking/match_word_pattern.py
diff --git a/backtracking/match_word_pattern.py b/backtracking/match_word_pattern.py
new file mode 100644
index 000000000000..bfa9b1354d51
--- /dev/null
+++ b/backtracking/match_word_pattern.py
@@ -0,0 +1,61 @@
+def match_word_pattern(pattern: str, input_string: str) -> bool:
+ """
+ Determine if a given pattern matches a string using backtracking.
+
+ pattern: The pattern to match.
+ input_string: The string to match against the pattern.
+ return: True if the pattern matches the string, False otherwise.
+
+ >>> match_word_pattern("aba", "GraphTreesGraph")
+ True
+
+ >>> match_word_pattern("xyx", "PythonRubyPython")
+ True
+
+ >>> match_word_pattern("GG", "PythonJavaPython")
+ False
+ """
+
+ def backtrack(pattern_index: int, str_index: int) -> bool:
+ """
+ >>> backtrack(0, 0)
+ True
+
+ >>> backtrack(0, 1)
+ True
+
+ >>> backtrack(0, 4)
+ False
+ """
+ if pattern_index == len(pattern) and str_index == len(input_string):
+ return True
+ if pattern_index == len(pattern) or str_index == len(input_string):
+ return False
+ char = pattern[pattern_index]
+ if char in pattern_map:
+ mapped_str = pattern_map[char]
+ if input_string.startswith(mapped_str, str_index):
+ return backtrack(pattern_index + 1, str_index + len(mapped_str))
+ else:
+ return False
+ for end in range(str_index + 1, len(input_string) + 1):
+ substr = input_string[str_index:end]
+ if substr in str_map:
+ continue
+ pattern_map[char] = substr
+ str_map[substr] = char
+ if backtrack(pattern_index + 1, end):
+ return True
+ del pattern_map[char]
+ del str_map[substr]
+ return False
+
+ pattern_map: dict[str, str] = {}
+ str_map: dict[str, str] = {}
+ return backtrack(0, 0)
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
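A short usage sketch of the matcher above. str_map is what enforces the bijection: once a substring is assigned to one pattern letter, no other letter may claim it:

    from backtracking.match_word_pattern import match_word_pattern

    # "Graph" is the only prefix of the string that is also its suffix,
    # so the search settles on a -> "Graph", b -> "Trees".
    print(match_word_pattern("aba", "GraphTreesGraph"))  # True
    # Both G's would need the same 8-character half, but the halves differ.
    print(match_word_pattern("GG", "PythonJavaPython"))  # False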
From cd684fd94762c4df5529d19d1ede6fc927428815 Mon Sep 17 00:00:00 2001
From: Dean Bring
Date: Thu, 5 Oct 2023 15:45:40 -0700
Subject: [PATCH 280/808] Added algorithm to deeply clone a graph (#9765)
* Added algorithm to deeply clone a graph
* Fixed file name and removed a function call
* Removed nested function and fixed class parameter types
* Fixed doctests
* bug fix
* Added class decorator
* Updated doctests and fixed precommit errors
* Cleaned up code
* Simplified doctest
* Added doctests
* Code simplification
---
graphs/deep_clone_graph.py | 77 ++++++++++++++++++++++++++++++++++++++
1 file changed, 77 insertions(+)
create mode 100644 graphs/deep_clone_graph.py
diff --git a/graphs/deep_clone_graph.py b/graphs/deep_clone_graph.py
new file mode 100644
index 000000000000..55678b4c01ec
--- /dev/null
+++ b/graphs/deep_clone_graph.py
@@ -0,0 +1,77 @@
+"""
+LeetCode 133. Clone Graph
+https://leetcode.com/problems/clone-graph/
+
+Given a reference of a node in a connected undirected graph.
+
+Return a deep copy (clone) of the graph.
+
+Each node in the graph contains a value (int) and a list (List[Node]) of its
+neighbors.
+"""
+from dataclasses import dataclass
+
+
+@dataclass
+class Node:
+ value: int = 0
+ neighbors: list["Node"] | None = None
+
+ def __post_init__(self) -> None:
+ """
+ >>> Node(3).neighbors
+ []
+ """
+ self.neighbors = self.neighbors or []
+
+ def __hash__(self) -> int:
+ """
+ >>> hash(Node(3)) != 0
+ True
+ """
+ return id(self)
+
+
+def clone_graph(node: Node | None) -> Node | None:
+ """
+ This function returns a clone of a connected undirected graph.
+ >>> clone_graph(Node(1))
+ Node(value=1, neighbors=[])
+ >>> clone_graph(Node(1, [Node(2)]))
+ Node(value=1, neighbors=[Node(value=2, neighbors=[])])
+ >>> clone_graph(None) is None
+ True
+ """
+ if not node:
+ return None
+
+ originals_to_clones = {} # map nodes to clones
+
+ stack = [node]
+
+ while stack:
+ original = stack.pop()
+
+ if original in originals_to_clones:
+ continue
+
+ originals_to_clones[original] = Node(original.value)
+
+ stack.extend(original.neighbors or [])
+
+ for original, clone in originals_to_clones.items():
+ for neighbor in original.neighbors or []:
+ cloned_neighbor = originals_to_clones[neighbor]
+
+ if not clone.neighbors:
+ clone.neighbors = []
+
+ clone.neighbors.append(cloned_neighbor)
+
+ return originals_to_clones[node]
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
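Because Node hashes by id(), the originals_to_clones dictionary works even when the graph contains cycles; a minimal sketch (import path assumed from the repository layout):

    from graphs.deep_clone_graph import Node, clone_graph

    a, b = Node(1), Node(2)
    a.neighbors.append(b)  # build a two-node cycle: a <-> b
    b.neighbors.append(a)

    clone = clone_graph(a)
    assert clone is not a and clone.value == 1
    # the cycle is reproduced among the clones, not shared with the originals
    assert clone.neighbors[0].neighbors[0] is clone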
From 9200c64464492117bff792f1f43b19050070af4a Mon Sep 17 00:00:00 2001
From: Aroson <74296409+Aroson1@users.noreply.github.com>
Date: Fri, 6 Oct 2023 04:46:51 +0530
Subject: [PATCH 281/808] Added Wheatstone Bridge Algorithm (#9872)
* Add files via upload
* Update wheatstone_bridge.py
* Update wheatstone_bridge.py
---
electronics/wheatstone_bridge.py | 41 ++++++++++++++++++++++++++++++++
1 file changed, 41 insertions(+)
create mode 100644 electronics/wheatstone_bridge.py
diff --git a/electronics/wheatstone_bridge.py b/electronics/wheatstone_bridge.py
new file mode 100644
index 000000000000..3529a09339c4
--- /dev/null
+++ b/electronics/wheatstone_bridge.py
@@ -0,0 +1,41 @@
+# https://en.wikipedia.org/wiki/Wheatstone_bridge
+from __future__ import annotations
+
+
+def wheatstone_solver(
+ resistance_1: float, resistance_2: float, resistance_3: float
+) -> float:
+ """
+    This function can calculate the unknown resistance in a Wheatstone network,
+    given that the three other resistances in the network are known.
+ The formula to calculate the same is:
+
+ ---------------
+ |Rx=(R2/R1)*R3|
+ ---------------
+
+ Usage examples:
+ >>> wheatstone_solver(resistance_1=2, resistance_2=4, resistance_3=5)
+ 10.0
+ >>> wheatstone_solver(resistance_1=356, resistance_2=234, resistance_3=976)
+ 641.5280898876405
+ >>> wheatstone_solver(resistance_1=2, resistance_2=-1, resistance_3=2)
+ Traceback (most recent call last):
+ ...
+ ValueError: All resistance values must be positive
+ >>> wheatstone_solver(resistance_1=0, resistance_2=0, resistance_3=2)
+ Traceback (most recent call last):
+ ...
+ ValueError: All resistance values must be positive
+ """
+
+ if resistance_1 <= 0 or resistance_2 <= 0 or resistance_3 <= 0:
+ raise ValueError("All resistance values must be positive")
+ else:
+ return float((resistance_2 / resistance_1) * resistance_3)
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
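The identity behind the formula is the balance condition R2/R1 = Rx/R3; a quick numeric check of the solver against it:

    from electronics.wheatstone_bridge import wheatstone_solver

    r1, r2, r3 = 2.0, 4.0, 5.0
    rx = wheatstone_solver(r1, r2, r3)  # 10.0
    assert r2 / r1 == rx / r3  # the bridge is balanced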
From 19fc788197474f75c56cc3755582cc583be9e52f Mon Sep 17 00:00:00 2001
From: ojas wani <52542740+ojas-wani@users.noreply.github.com>
Date: Thu, 5 Oct 2023 16:43:45 -0700
Subject: [PATCH 282/808] added laplacian_filter file (#9783)
* added laplacian_filter file
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* updated laplacian.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* updated laplacian_py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* updated laplacian_filter.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* updated laplacian_filter.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* updated laplacian_filter.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* required changes to laplacian file
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* changed laplacian_filter.py
* changed laplacian_filter.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* changed laplacian_filter.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* changed laplacian_filter.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* updated laplacian_filter.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* update laplacian_filter.py
* update laplacian_filter.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* update laplacian_filter.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* changed laplacian_filter.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* changed laplacian_filter.py
* changed laplacian_filter.py
* changed laplacian_filter.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update laplacian_filter.py
* Add a test
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
.../filters/laplacian_filter.py | 81 +++++++++++++++++++
1 file changed, 81 insertions(+)
create mode 100644 digital_image_processing/filters/laplacian_filter.py
diff --git a/digital_image_processing/filters/laplacian_filter.py b/digital_image_processing/filters/laplacian_filter.py
new file mode 100644
index 000000000000..69b9616e4d30
--- /dev/null
+++ b/digital_image_processing/filters/laplacian_filter.py
@@ -0,0 +1,81 @@
+# @Author : ojas-wani
+# @File : laplacian_filter.py
+# @Date : 10/04/2023
+
+import numpy as np
+from cv2 import (
+ BORDER_DEFAULT,
+ COLOR_BGR2GRAY,
+ CV_64F,
+ cvtColor,
+ filter2D,
+ imread,
+ imshow,
+ waitKey,
+)
+
+from digital_image_processing.filters.gaussian_filter import gaussian_filter
+
+
+def my_laplacian(src: np.ndarray, ksize: int) -> np.ndarray:
+ """
+ :param src: the source image, which should be a grayscale or color image.
+ :param ksize: the size of the kernel used to compute the Laplacian filter,
+ which can be 1, 3, 5, or 7.
+
+ >>> my_laplacian(src=np.array([]), ksize=0)
+ Traceback (most recent call last):
+ ...
+ ValueError: ksize must be in (1, 3, 5, 7)
+ """
+ kernels = {
+ 1: np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]]),
+ 3: np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]]),
+ 5: np.array(
+ [
+ [0, 0, -1, 0, 0],
+ [0, -1, -2, -1, 0],
+ [-1, -2, 16, -2, -1],
+ [0, -1, -2, -1, 0],
+ [0, 0, -1, 0, 0],
+ ]
+ ),
+ 7: np.array(
+ [
+ [0, 0, 0, -1, 0, 0, 0],
+ [0, 0, -2, -3, -2, 0, 0],
+ [0, -2, -7, -10, -7, -2, 0],
+ [-1, -3, -10, 68, -10, -3, -1],
+ [0, -2, -7, -10, -7, -2, 0],
+ [0, 0, -2, -3, -2, 0, 0],
+ [0, 0, 0, -1, 0, 0, 0],
+ ]
+ ),
+ }
+ if ksize not in kernels:
+ msg = f"ksize must be in {tuple(kernels)}"
+ raise ValueError(msg)
+
+ # Apply the Laplacian kernel using convolution
+ return filter2D(
+ src, CV_64F, kernels[ksize], 0, borderType=BORDER_DEFAULT, anchor=(0, 0)
+ )
+
+
+if __name__ == "__main__":
+ # read original image
+ img = imread(r"../image_data/lena.jpg")
+
+ # turn image in gray scale value
+ gray = cvtColor(img, COLOR_BGR2GRAY)
+
+ # Applying gaussian filter
+ blur_image = gaussian_filter(gray, 3, sigma=1)
+
+ # Apply multiple Kernel to detect edges
+ laplacian_image = my_laplacian(ksize=3, src=blur_image)
+
+ imshow("Original image", img)
+ imshow("Detected edges using laplacian filter", laplacian_image)
+
+ waitKey(0)
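The __main__ block above depends on a sample image on disk. For a self-contained check of the same technique, the ksize=3 kernel can be applied directly with cv2.filter2D (default anchor and border) to a synthetic step edge:

    import numpy as np
    from cv2 import CV_64F, filter2D

    kernel_3 = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])  # the ksize=3 kernel above
    step = np.zeros((8, 8), dtype=np.float64)
    step[:, 4:] = 255.0  # vertical step edge

    edges = filter2D(step, CV_64F, kernel_3)
    # the Laplacian response is zero in flat regions and nonzero across the edge
    print(np.count_nonzero(edges))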
From 17af6444497a64dbe803904e2ef27d0e2a280f8c Mon Sep 17 00:00:00 2001
From: JeevaRamanathan <64531160+JeevaRamanathan@users.noreply.github.com>
Date: Fri, 6 Oct 2023 05:30:58 +0530
Subject: [PATCH 283/808] Symmetric tree (#9871)
* symmetric tree
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* removed trailing spaces
* escape sequence fix
* added return type
* added class
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* wordings fix
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* added static method
* added type
* added static method
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* wordings fix
* testcase added
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* testcase added for mirror function
* testcase added for mirror function
* made the requested changes
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* made the requested changes
* doc test added for symmetric, asymmetric
* Update symmetric_tree.py
---------
Co-authored-by: jeevaramanthan.m
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
data_structures/binary_tree/symmetric_tree.py | 101 ++++++++++++++++++
1 file changed, 101 insertions(+)
create mode 100644 data_structures/binary_tree/symmetric_tree.py
diff --git a/data_structures/binary_tree/symmetric_tree.py b/data_structures/binary_tree/symmetric_tree.py
new file mode 100644
index 000000000000..331a25849c1c
--- /dev/null
+++ b/data_structures/binary_tree/symmetric_tree.py
@@ -0,0 +1,101 @@
+"""
+Given the root of a binary tree, check whether it is a mirror of itself
+(i.e., symmetric around its center).
+
+Leetcode reference: https://leetcode.com/problems/symmetric-tree/
+"""
+from __future__ import annotations
+
+from dataclasses import dataclass
+
+
+@dataclass
+class Node:
+ """
+ A Node has data variable and pointers to Nodes to its left and right.
+ """
+
+ data: int
+ left: Node | None = None
+ right: Node | None = None
+
+
+def make_symmetric_tree() -> Node:
+ r"""
+ Create a symmetric tree for testing.
+ The tree looks like this:
+ 1
+ / \
+ 2 2
+ / \ / \
+ 3 4 4 3
+ """
+ root = Node(1)
+ root.left = Node(2)
+ root.right = Node(2)
+ root.left.left = Node(3)
+ root.left.right = Node(4)
+ root.right.left = Node(4)
+ root.right.right = Node(3)
+ return root
+
+
+def make_asymmetric_tree() -> Node:
+ r"""
+    Create an asymmetric tree for testing.
+ The tree looks like this:
+ 1
+ / \
+ 2 2
+ / \ / \
+ 3 4 3 4
+ """
+ root = Node(1)
+ root.left = Node(2)
+ root.right = Node(2)
+ root.left.left = Node(3)
+ root.left.right = Node(4)
+ root.right.left = Node(3)
+ root.right.right = Node(4)
+ return root
+
+
+def is_symmetric_tree(tree: Node) -> bool:
+ """
+ Test cases for is_symmetric_tree function
+ >>> is_symmetric_tree(make_symmetric_tree())
+ True
+ >>> is_symmetric_tree(make_asymmetric_tree())
+ False
+ """
+ if tree:
+ return is_mirror(tree.left, tree.right)
+ return True # An empty tree is considered symmetric.
+
+
+def is_mirror(left: Node | None, right: Node | None) -> bool:
+ """
+ >>> tree1 = make_symmetric_tree()
+ >>> tree1.right.right = Node(3)
+ >>> is_mirror(tree1.left, tree1.right)
+ True
+ >>> tree2 = make_asymmetric_tree()
+ >>> is_mirror(tree2.left, tree2.right)
+ False
+ """
+ if left is None and right is None:
+ # Both sides are empty, which is symmetric.
+ return True
+ if left is None or right is None:
+ # One side is empty while the other is not, which is not symmetric.
+ return False
+ if left.data == right.data:
+ # The values match, so check the subtree
+ return is_mirror(left.left, right.right) and is_mirror(left.right, right.left)
+ return False
+
+
+if __name__ == "__main__":
+ from doctest import testmod
+
+ testmod()
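is_mirror() compares values as well as shape; a small sketch where the tree is structurally symmetric but the data differs (import path assumed from the repository layout):

    from data_structures.binary_tree.symmetric_tree import Node, is_symmetric_tree

    root = Node(1, left=Node(2), right=Node(3))  # mirror shape, mismatched data
    print(is_symmetric_tree(root))  # False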
From d0c54acd75cedf14cff353869482a0487fea1697 Mon Sep 17 00:00:00 2001
From: Christian Clauss
Date: Fri, 6 Oct 2023 04:31:11 +0200
Subject: [PATCH 284/808] Use dataclasses in singly_linked_list.py (#9886)
---
DIRECTORY.md | 7 +
.../linked_list/singly_linked_list.py | 151 ++++++++++--------
2 files changed, 93 insertions(+), 65 deletions(-)
diff --git a/DIRECTORY.md b/DIRECTORY.md
index c199a4329202..a975b9264be0 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -25,6 +25,7 @@
* [Combination Sum](backtracking/combination_sum.py)
* [Hamiltonian Cycle](backtracking/hamiltonian_cycle.py)
* [Knight Tour](backtracking/knight_tour.py)
+ * [Match Word Pattern](backtracking/match_word_pattern.py)
* [Minimax](backtracking/minimax.py)
* [N Queens](backtracking/n_queens.py)
* [N Queens Math](backtracking/n_queens_math.py)
@@ -199,6 +200,7 @@
* [Red Black Tree](data_structures/binary_tree/red_black_tree.py)
* [Segment Tree](data_structures/binary_tree/segment_tree.py)
* [Segment Tree Other](data_structures/binary_tree/segment_tree_other.py)
+ * [Symmetric Tree](data_structures/binary_tree/symmetric_tree.py)
* [Treap](data_structures/binary_tree/treap.py)
* [Wavelet Tree](data_structures/binary_tree/wavelet_tree.py)
* Disjoint Set
@@ -277,6 +279,7 @@
* [Convolve](digital_image_processing/filters/convolve.py)
* [Gabor Filter](digital_image_processing/filters/gabor_filter.py)
* [Gaussian Filter](digital_image_processing/filters/gaussian_filter.py)
+ * [Laplacian Filter](digital_image_processing/filters/laplacian_filter.py)
* [Local Binary Pattern](digital_image_processing/filters/local_binary_pattern.py)
* [Median Filter](digital_image_processing/filters/median_filter.py)
* [Sobel Filter](digital_image_processing/filters/sobel_filter.py)
@@ -365,8 +368,10 @@
* [Ind Reactance](electronics/ind_reactance.py)
* [Ohms Law](electronics/ohms_law.py)
* [Real And Reactive Power](electronics/real_and_reactive_power.py)
+ * [Resistor Color Code](electronics/resistor_color_code.py)
* [Resistor Equivalence](electronics/resistor_equivalence.py)
* [Resonant Frequency](electronics/resonant_frequency.py)
+ * [Wheatstone Bridge](electronics/wheatstone_bridge.py)
## File Transfer
* [Receive File](file_transfer/receive_file.py)
@@ -415,6 +420,7 @@
* [Check Bipartite Graph Dfs](graphs/check_bipartite_graph_dfs.py)
* [Check Cycle](graphs/check_cycle.py)
* [Connected Components](graphs/connected_components.py)
+ * [Deep Clone Graph](graphs/deep_clone_graph.py)
* [Depth First Search](graphs/depth_first_search.py)
* [Depth First Search 2](graphs/depth_first_search_2.py)
* [Dijkstra](graphs/dijkstra.py)
@@ -1159,6 +1165,7 @@
* [Autocomplete Using Trie](strings/autocomplete_using_trie.py)
* [Barcode Validator](strings/barcode_validator.py)
* [Boyer Moore Search](strings/boyer_moore_search.py)
+ * [Camel Case To Snake Case](strings/camel_case_to_snake_case.py)
* [Can String Be Rearranged As Palindrome](strings/can_string_be_rearranged_as_palindrome.py)
* [Capitalize](strings/capitalize.py)
* [Check Anagrams](strings/check_anagrams.py)
diff --git a/data_structures/linked_list/singly_linked_list.py b/data_structures/linked_list/singly_linked_list.py
index f4b2ddce12d7..2c6713a47ad9 100644
--- a/data_structures/linked_list/singly_linked_list.py
+++ b/data_structures/linked_list/singly_linked_list.py
@@ -1,27 +1,38 @@
+from __future__ import annotations
+
+from collections.abc import Iterator
+from dataclasses import dataclass
from typing import Any
+@dataclass
class Node:
- def __init__(self, data: Any):
- """
- Create and initialize Node class instance.
- >>> Node(20)
- Node(20)
- >>> Node("Hello, world!")
- Node(Hello, world!)
- >>> Node(None)
- Node(None)
- >>> Node(True)
- Node(True)
- """
- self.data = data
- self.next = None
+ """
+ Create and initialize Node class instance.
+ >>> Node(20)
+ Node(20)
+ >>> Node("Hello, world!")
+ Node(Hello, world!)
+ >>> Node(None)
+ Node(None)
+ >>> Node(True)
+ Node(True)
+ """
+
+ data: Any
+ next_node: Node | None = None
def __repr__(self) -> str:
"""
Get the string representation of this node.
>>> Node(10).__repr__()
'Node(10)'
+ >>> repr(Node(10))
+ 'Node(10)'
+ >>> str(Node(10))
+ 'Node(10)'
+ >>> Node(10)
+ Node(10)
"""
return f"Node({self.data})"
@@ -31,10 +42,12 @@ def __init__(self):
"""
Create and initialize LinkedList class instance.
>>> linked_list = LinkedList()
+ >>> linked_list.head is None
+ True
"""
self.head = None
- def __iter__(self) -> Any:
+ def __iter__(self) -> Iterator[Any]:
"""
This function is intended for iterators to access
and iterate through data inside linked list.
@@ -51,7 +64,7 @@ def __iter__(self) -> Any:
node = self.head
while node:
yield node.data
- node = node.next
+ node = node.next_node
def __len__(self) -> int:
"""
@@ -81,9 +94,16 @@ def __repr__(self) -> str:
>>> linked_list.insert_tail(1)
>>> linked_list.insert_tail(3)
>>> linked_list.__repr__()
- '1->3'
+ '1 -> 3'
+ >>> repr(linked_list)
+ '1 -> 3'
+ >>> str(linked_list)
+ '1 -> 3'
+ >>> linked_list.insert_tail(5)
+ >>> f"{linked_list}"
+ '1 -> 3 -> 5'
"""
- return "->".join([str(item) for item in self])
+ return " -> ".join([str(item) for item in self])
def __getitem__(self, index: int) -> Any:
"""
@@ -134,7 +154,7 @@ def __setitem__(self, index: int, data: Any) -> None:
raise ValueError("list index out of range.")
current = self.head
for _ in range(index):
- current = current.next
+ current = current.next_node
current.data = data
def insert_tail(self, data: Any) -> None:
@@ -146,10 +166,10 @@ def insert_tail(self, data: Any) -> None:
tail
>>> linked_list.insert_tail("tail_2")
>>> linked_list
- tail->tail_2
+ tail -> tail_2
>>> linked_list.insert_tail("tail_3")
>>> linked_list
- tail->tail_2->tail_3
+ tail -> tail_2 -> tail_3
"""
self.insert_nth(len(self), data)
@@ -162,10 +182,10 @@ def insert_head(self, data: Any) -> None:
head
>>> linked_list.insert_head("head_2")
>>> linked_list
- head_2->head
+ head_2 -> head
>>> linked_list.insert_head("head_3")
>>> linked_list
- head_3->head_2->head
+ head_3 -> head_2 -> head
"""
self.insert_nth(0, data)
@@ -177,13 +197,13 @@ def insert_nth(self, index: int, data: Any) -> None:
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
- first->second->third
+ first -> second -> third
>>> linked_list.insert_nth(1, "fourth")
>>> linked_list
- first->fourth->second->third
+ first -> fourth -> second -> third
>>> linked_list.insert_nth(3, "fifth")
>>> linked_list
- first->fourth->second->fifth->third
+ first -> fourth -> second -> fifth -> third
"""
if not 0 <= index <= len(self):
raise IndexError("list index out of range")
@@ -191,14 +211,14 @@ def insert_nth(self, index: int, data: Any) -> None:
if self.head is None:
self.head = new_node
elif index == 0:
- new_node.next = self.head # link new_node to head
+ new_node.next_node = self.head # link new_node to head
self.head = new_node
else:
temp = self.head
for _ in range(index - 1):
- temp = temp.next
- new_node.next = temp.next
- temp.next = new_node
+ temp = temp.next_node
+ new_node.next_node = temp.next_node
+ temp.next_node = new_node
def print_list(self) -> None: # print every node data
"""
@@ -208,7 +228,7 @@ def print_list(self) -> None: # print every node data
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
- first->second->third
+ first -> second -> third
"""
print(self)
@@ -221,11 +241,11 @@ def delete_head(self) -> Any:
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
- first->second->third
+ first -> second -> third
>>> linked_list.delete_head()
'first'
>>> linked_list
- second->third
+ second -> third
>>> linked_list.delete_head()
'second'
>>> linked_list
@@ -248,11 +268,11 @@ def delete_tail(self) -> Any: # delete from tail
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
- first->second->third
+ first -> second -> third
>>> linked_list.delete_tail()
'third'
>>> linked_list
- first->second
+ first -> second
>>> linked_list.delete_tail()
'second'
>>> linked_list
@@ -275,11 +295,11 @@ def delete_nth(self, index: int = 0) -> Any:
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
- first->second->third
+ first -> second -> third
>>> linked_list.delete_nth(1) # delete middle
'second'
>>> linked_list
- first->third
+ first -> third
>>> linked_list.delete_nth(5) # this raises error
Traceback (most recent call last):
...
@@ -293,13 +313,13 @@ def delete_nth(self, index: int = 0) -> Any:
raise IndexError("List index out of range.")
delete_node = self.head # default first node
if index == 0:
- self.head = self.head.next
+ self.head = self.head.next_node
else:
temp = self.head
for _ in range(index - 1):
- temp = temp.next
- delete_node = temp.next
- temp.next = temp.next.next
+ temp = temp.next_node
+ delete_node = temp.next_node
+ temp.next_node = temp.next_node.next_node
return delete_node.data
def is_empty(self) -> bool:
@@ -322,22 +342,22 @@ def reverse(self) -> None:
>>> linked_list.insert_tail("second")
>>> linked_list.insert_tail("third")
>>> linked_list
- first->second->third
+ first -> second -> third
>>> linked_list.reverse()
>>> linked_list
- third->second->first
+ third -> second -> first
"""
prev = None
current = self.head
while current:
# Store the current node's next node.
- next_node = current.next
- # Make the current node's next point backwards
- current.next = prev
+ next_node = current.next_node
+ # Make the current node's next_node point backwards
+ current.next_node = prev
# Make the previous node be the current node
prev = current
- # Make the current node the next node (to progress iteration)
+ # Make the current node the next_node node (to progress iteration)
current = next_node
# Return prev in order to put the head at the end
self.head = prev
@@ -366,17 +386,17 @@ def test_singly_linked_list() -> None:
for i in range(10):
assert len(linked_list) == i
linked_list.insert_nth(i, i + 1)
- assert str(linked_list) == "->".join(str(i) for i in range(1, 11))
+ assert str(linked_list) == " -> ".join(str(i) for i in range(1, 11))
linked_list.insert_head(0)
linked_list.insert_tail(11)
- assert str(linked_list) == "->".join(str(i) for i in range(12))
+ assert str(linked_list) == " -> ".join(str(i) for i in range(12))
assert linked_list.delete_head() == 0
assert linked_list.delete_nth(9) == 10
assert linked_list.delete_tail() == 11
assert len(linked_list) == 9
- assert str(linked_list) == "->".join(str(i) for i in range(1, 10))
+ assert str(linked_list) == " -> ".join(str(i) for i in range(1, 10))
assert all(linked_list[i] == i + 1 for i in range(9)) is True
@@ -385,7 +405,7 @@ def test_singly_linked_list() -> None:
assert all(linked_list[i] == -i for i in range(9)) is True
linked_list.reverse()
- assert str(linked_list) == "->".join(str(i) for i in range(-8, 1))
+ assert str(linked_list) == " -> ".join(str(i) for i in range(-8, 1))
def test_singly_linked_list_2() -> None:
@@ -417,56 +437,57 @@ def test_singly_linked_list_2() -> None:
# Check if it's empty or not
assert linked_list.is_empty() is False
assert (
- str(linked_list) == "-9->100->Node(77345112)->dlrow olleH->7->5555->0->"
- "-192.55555->Hello, world!->77.9->Node(10)->None->None->12.2"
+ str(linked_list)
+ == "-9 -> 100 -> Node(77345112) -> dlrow olleH -> 7 -> 5555 -> "
+ "0 -> -192.55555 -> Hello, world! -> 77.9 -> Node(10) -> None -> None -> 12.2"
)
# Delete the head
result = linked_list.delete_head()
assert result == -9
assert (
- str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
- "Hello, world!->77.9->Node(10)->None->None->12.2"
+ str(linked_list) == "100 -> Node(77345112) -> dlrow olleH -> 7 -> 5555 -> 0 -> "
+ "-192.55555 -> Hello, world! -> 77.9 -> Node(10) -> None -> None -> 12.2"
)
# Delete the tail
result = linked_list.delete_tail()
assert result == 12.2
assert (
- str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
- "Hello, world!->77.9->Node(10)->None->None"
+ str(linked_list) == "100 -> Node(77345112) -> dlrow olleH -> 7 -> 5555 -> 0 -> "
+ "-192.55555 -> Hello, world! -> 77.9 -> Node(10) -> None -> None"
)
# Delete a node in specific location in linked list
result = linked_list.delete_nth(10)
assert result is None
assert (
- str(linked_list) == "100->Node(77345112)->dlrow olleH->7->5555->0->-192.55555->"
- "Hello, world!->77.9->Node(10)->None"
+ str(linked_list) == "100 -> Node(77345112) -> dlrow olleH -> 7 -> 5555 -> 0 -> "
+ "-192.55555 -> Hello, world! -> 77.9 -> Node(10) -> None"
)
# Add a Node instance to its head
linked_list.insert_head(Node("Hello again, world!"))
assert (
str(linked_list)
- == "Node(Hello again, world!)->100->Node(77345112)->dlrow olleH->"
- "7->5555->0->-192.55555->Hello, world!->77.9->Node(10)->None"
+ == "Node(Hello again, world!) -> 100 -> Node(77345112) -> dlrow olleH -> "
+ "7 -> 5555 -> 0 -> -192.55555 -> Hello, world! -> 77.9 -> Node(10) -> None"
)
# Add None to its tail
linked_list.insert_tail(None)
assert (
str(linked_list)
- == "Node(Hello again, world!)->100->Node(77345112)->dlrow olleH->"
- "7->5555->0->-192.55555->Hello, world!->77.9->Node(10)->None->None"
+ == "Node(Hello again, world!) -> 100 -> Node(77345112) -> dlrow olleH -> 7 -> "
+ "5555 -> 0 -> -192.55555 -> Hello, world! -> 77.9 -> Node(10) -> None -> None"
)
# Reverse the linked list
linked_list.reverse()
assert (
str(linked_list)
- == "None->None->Node(10)->77.9->Hello, world!->-192.55555->0->5555->"
- "7->dlrow olleH->Node(77345112)->100->Node(Hello again, world!)"
+ == "None -> None -> Node(10) -> 77.9 -> Hello, world! -> -192.55555 -> 0 -> "
+ "5555 -> 7 -> dlrow olleH -> Node(77345112) -> 100 -> Node(Hello again, world!)"
)
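A short usage sketch of the updated API, with the ' -> ' separator and the next_node attribute introduced by this patch (import path assumed from the repository layout):

    from data_structures.linked_list.singly_linked_list import LinkedList

    ll = LinkedList()
    for value in ("first", "second", "third"):
        ll.insert_tail(value)
    print(ll)                # first -> second -> third
    ll.reverse()
    print(ll)                # third -> second -> first
    print(ll.delete_head())  # third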
From 795e97e87f6760a693769097613ace56a6addc8d Mon Sep 17 00:00:00 2001
From: Sarvjeet Singh <63469455+aazad20@users.noreply.github.com>
Date: Fri, 6 Oct 2023 19:19:34 +0530
Subject: [PATCH 285/808] Added Majority Voting Algorithm (#9866)
* Create MajorityVoteAlgorithm.py
* Update and rename MajorityVoteAlgorithm.py to majorityvotealgorithm.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update and rename majorityvotealgorithm.py to majority_vote_algorithm.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update majority_vote_algorithm.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update majority_vote_algorithm.py
* Update majority_vote_algorithm.py
* Update other/majority_vote_algorithm.py
Co-authored-by: Christian Clauss
* renaming variables majority_vote_algorithm.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update majority_vote_algorithm.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update majority_vote_algorithm.py
* Update majority_vote_algorithm.py
* Update majority_vote_algorithm.py
* Update majority_vote_algorithm.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update other/majority_vote_algorithm.py
Co-authored-by: Christian Clauss
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update other/majority_vote_algorithm.py
Co-authored-by: Christian Clauss
* adding more testcases majority_vote_algorithm.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update majority_vote_algorithm.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update majority_vote_algorithm.py
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
other/majority_vote_algorithm.py | 37 ++++++++++++++++++++++++++++++++
1 file changed, 37 insertions(+)
create mode 100644 other/majority_vote_algorithm.py
diff --git a/other/majority_vote_algorithm.py b/other/majority_vote_algorithm.py
new file mode 100644
index 000000000000..ab8b386dd2e5
--- /dev/null
+++ b/other/majority_vote_algorithm.py
@@ -0,0 +1,37 @@
+"""
+This is the Boyer-Moore majority vote algorithm. The problem statement goes like this:
+Given an integer array of size n, find all elements that appear more than ⌊ n/k ⌋ times.
+We have to solve this in O(n) time and O(1) space.
+URL : https://en.wikipedia.org/wiki/Boyer%E2%80%93Moore_majority_vote_algorithm
+"""
+from collections import Counter
+
+
+def majority_vote(votes: list[int], votes_needed_to_win: int) -> list[int]:
+ """
+ >>> majority_vote([1, 2, 2, 3, 1, 3, 2], 3)
+ [2]
+ >>> majority_vote([1, 2, 2, 3, 1, 3, 2], 2)
+ []
+ >>> majority_vote([1, 2, 2, 3, 1, 3, 2], 4)
+ [1, 2, 3]
+ """
+ majority_candidate_counter: Counter[int] = Counter()
+ for vote in votes:
+ majority_candidate_counter[vote] += 1
+ if len(majority_candidate_counter) == votes_needed_to_win:
+ majority_candidate_counter -= Counter(set(majority_candidate_counter))
+ majority_candidate_counter = Counter(
+ vote for vote in votes if vote in majority_candidate_counter
+ )
+ return [
+ vote
+ for vote in majority_candidate_counter
+ if majority_candidate_counter[vote] > len(votes) / votes_needed_to_win
+ ]
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
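The votes_needed_to_win parameter k sets the n/k threshold, so the same votes can produce different winner sets; a quick sketch grounded in the doctests above:

    from other.majority_vote_algorithm import majority_vote

    votes = [1, 2, 2, 3, 1, 3, 2]  # n = 7
    print(majority_vote(votes, 3))  # [2]: only 2 appears more than 7/3 ≈ 2.33 times
    print(majority_vote(votes, 4))  # [1, 2, 3]: each appears more than 7/4 = 1.75 times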
From 995c5533c645250c120b11f0eddc53909fc3d012 Mon Sep 17 00:00:00 2001
From: fxdup <47389903+fxdup@users.noreply.github.com>
Date: Fri, 6 Oct 2023 14:46:58 -0400
Subject: [PATCH 286/808] Consolidate gamma (#9769)
* refactor(gamma): Append _iterative to func name
* refactor(gamma): Consolidate implementations
* refactor(gamma): Redundant test function removal
* Update maths/gamma.py
---------
Co-authored-by: Tianyi Zheng
---
maths/gamma.py | 91 ++++++++++++++++++++++++++++++++++------
maths/gamma_recursive.py | 77 ----------------------------------
2 files changed, 79 insertions(+), 89 deletions(-)
delete mode 100644 maths/gamma_recursive.py
diff --git a/maths/gamma.py b/maths/gamma.py
index d5debc58764b..822bbc74456f 100644
--- a/maths/gamma.py
+++ b/maths/gamma.py
@@ -1,35 +1,43 @@
+"""
+Gamma function is a very useful tool in math and physics.
+It helps to calculate complex integrals in a convenient way.
+For more info: https://en.wikipedia.org/wiki/Gamma_function
+In mathematics, the gamma function is one commonly
+used extension of the factorial function to complex numbers.
+The gamma function is defined for all complex numbers except
+the non-positive integers
+Python's Standard Library math.gamma() function overflows around gamma(171.624).
+"""
import math
from numpy import inf
from scipy.integrate import quad
-def gamma(num: float) -> float:
+def gamma_iterative(num: float) -> float:
"""
- https://en.wikipedia.org/wiki/Gamma_function
- In mathematics, the gamma function is one commonly
- used extension of the factorial function to complex numbers.
- The gamma function is defined for all complex numbers except the non-positive
- integers
- >>> gamma(-1)
+ Calculates the value of Gamma function of num
+ where num is either an integer (1, 2, 3..) or a half-integer (0.5, 1.5, 2.5 ...).
+
+ >>> gamma_iterative(-1)
Traceback (most recent call last):
...
ValueError: math domain error
- >>> gamma(0)
+ >>> gamma_iterative(0)
Traceback (most recent call last):
...
ValueError: math domain error
- >>> gamma(9)
+ >>> gamma_iterative(9)
40320.0
>>> from math import gamma as math_gamma
- >>> all(.99999999 < gamma(i) / math_gamma(i) <= 1.000000001
+ >>> all(.99999999 < gamma_iterative(i) / math_gamma(i) <= 1.000000001
... for i in range(1, 50))
True
- >>> gamma(-1)/math_gamma(-1) <= 1.000000001
+ >>> gamma_iterative(-1)/math_gamma(-1) <= 1.000000001
Traceback (most recent call last):
...
ValueError: math domain error
- >>> gamma(3.3) - math_gamma(3.3) <= 0.00000001
+ >>> gamma_iterative(3.3) - math_gamma(3.3) <= 0.00000001
True
"""
if num <= 0:
@@ -42,7 +50,66 @@ def integrand(x: float, z: float) -> float:
return math.pow(x, z - 1) * math.exp(-x)
+def gamma_recursive(num: float) -> float:
+ """
+ Calculates the value of Gamma function of num
+ where num is either an integer (1, 2, 3..) or a half-integer (0.5, 1.5, 2.5 ...).
+ Implemented using recursion
+ Examples:
+ >>> from math import isclose, gamma as math_gamma
+ >>> gamma_recursive(0.5)
+ 1.7724538509055159
+ >>> gamma_recursive(1)
+ 1.0
+ >>> gamma_recursive(2)
+ 1.0
+ >>> gamma_recursive(3.5)
+ 3.3233509704478426
+ >>> gamma_recursive(171.5)
+ 9.483367566824795e+307
+ >>> all(isclose(gamma_recursive(num), math_gamma(num))
+ ... for num in (0.5, 2, 3.5, 171.5))
+ True
+ >>> gamma_recursive(0)
+ Traceback (most recent call last):
+ ...
+ ValueError: math domain error
+ >>> gamma_recursive(-1.1)
+ Traceback (most recent call last):
+ ...
+ ValueError: math domain error
+ >>> gamma_recursive(-4)
+ Traceback (most recent call last):
+ ...
+ ValueError: math domain error
+ >>> gamma_recursive(172)
+ Traceback (most recent call last):
+ ...
+ OverflowError: math range error
+ >>> gamma_recursive(1.1)
+ Traceback (most recent call last):
+ ...
+ NotImplementedError: num must be an integer or a half-integer
+ """
+ if num <= 0:
+ raise ValueError("math domain error")
+ if num > 171.5:
+ raise OverflowError("math range error")
+ elif num - int(num) not in (0, 0.5):
+ raise NotImplementedError("num must be an integer or a half-integer")
+ elif num == 0.5:
+ return math.sqrt(math.pi)
+ else:
+ return 1.0 if num == 1 else (num - 1) * gamma_recursive(num - 1)
+
+
if __name__ == "__main__":
from doctest import testmod
testmod()
+ num = 1.0
+ while num:
+ num = float(input("Gamma of: "))
+ print(f"gamma_iterative({num}) = {gamma_iterative(num)}")
+ print(f"gamma_recursive({num}) = {gamma_recursive(num)}")
+ print("\nEnter 0 to exit...")
diff --git a/maths/gamma_recursive.py b/maths/gamma_recursive.py
deleted file mode 100644
index 3d6b8c5e8138..000000000000
--- a/maths/gamma_recursive.py
+++ /dev/null
@@ -1,77 +0,0 @@
-"""
-Gamma function is a very useful tool in math and physics.
-It helps calculating complex integral in a convenient way.
-for more info: https://en.wikipedia.org/wiki/Gamma_function
-Python's Standard Library math.gamma() function overflows around gamma(171.624).
-"""
-from math import pi, sqrt
-
-
-def gamma(num: float) -> float:
- """
- Calculates the value of Gamma function of num
- where num is either an integer (1, 2, 3..) or a half-integer (0.5, 1.5, 2.5 ...).
- Implemented using recursion
- Examples:
- >>> from math import isclose, gamma as math_gamma
- >>> gamma(0.5)
- 1.7724538509055159
- >>> gamma(2)
- 1.0
- >>> gamma(3.5)
- 3.3233509704478426
- >>> gamma(171.5)
- 9.483367566824795e+307
- >>> all(isclose(gamma(num), math_gamma(num)) for num in (0.5, 2, 3.5, 171.5))
- True
- >>> gamma(0)
- Traceback (most recent call last):
- ...
- ValueError: math domain error
- >>> gamma(-1.1)
- Traceback (most recent call last):
- ...
- ValueError: math domain error
- >>> gamma(-4)
- Traceback (most recent call last):
- ...
- ValueError: math domain error
- >>> gamma(172)
- Traceback (most recent call last):
- ...
- OverflowError: math range error
- >>> gamma(1.1)
- Traceback (most recent call last):
- ...
- NotImplementedError: num must be an integer or a half-integer
- """
- if num <= 0:
- raise ValueError("math domain error")
- if num > 171.5:
- raise OverflowError("math range error")
- elif num - int(num) not in (0, 0.5):
- raise NotImplementedError("num must be an integer or a half-integer")
- elif num == 0.5:
- return sqrt(pi)
- else:
- return 1.0 if num == 1 else (num - 1) * gamma(num - 1)
-
-
-def test_gamma() -> None:
- """
- >>> test_gamma()
- """
- assert gamma(0.5) == sqrt(pi)
- assert gamma(1) == 1.0
- assert gamma(2) == 1.0
-
-
-if __name__ == "__main__":
- from doctest import testmod
-
- testmod()
- num = 1.0
- while num:
- num = float(input("Gamma of: "))
- print(f"gamma({num}) = {gamma(num)}")
- print("\nEnter 0 to exit...")
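Both implementations can be cross-checked against math.gamma on the integer/half-integer domain they share; a sketch, assuming the consolidated module path:

    from math import gamma as math_gamma, isclose
    from maths.gamma import gamma_iterative, gamma_recursive

    for x in (0.5, 2, 3.5, 9):
        assert isclose(gamma_recursive(x), math_gamma(x))
        # the quadrature-based version is accurate to well under one part per million
        assert isclose(gamma_iterative(x), math_gamma(x), rel_tol=1e-6)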
From c6ec99d57140cbf8b54077d379dfffeb6c7ad280 Mon Sep 17 00:00:00 2001
From: Kausthub Kannan
Date: Sat, 7 Oct 2023 00:53:05 +0530
Subject: [PATCH 287/808] Added Mish Activation Function (#9942)
* Added Mish Activation Function
* Apply suggestions from code review
---------
Co-authored-by: Tianyi Zheng
---
neural_network/activation_functions/mish.py | 39 +++++++++++++++++++++
1 file changed, 39 insertions(+)
create mode 100644 neural_network/activation_functions/mish.py
diff --git a/neural_network/activation_functions/mish.py b/neural_network/activation_functions/mish.py
new file mode 100644
index 000000000000..e4f98307f2ba
--- /dev/null
+++ b/neural_network/activation_functions/mish.py
@@ -0,0 +1,39 @@
+"""
+Mish Activation Function
+
+Use Case: Improved version of the ReLU activation function used in Computer Vision.
+For more detailed information, you can refer to the following link:
+https://en.wikipedia.org/wiki/Rectifier_(neural_networks)#Mish
+"""
+
+import numpy as np
+
+
+def mish(vector: np.ndarray) -> np.ndarray:
+ """
+ Implements the Mish activation function.
+
+ Parameters:
+ vector (np.ndarray): The input array for Mish activation.
+
+ Returns:
+ np.ndarray: The input array after applying the Mish activation.
+
+ Formula:
+ f(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + e^x))
+
+ Examples:
+ >>> mish(vector=np.array([2.3,0.6,-2,-3.8]))
+ array([ 2.26211893, 0.46613649, -0.25250148, -0.08405831])
+
+ >>> mish(np.array([-9.2, -0.3, 0.45, -4.56]))
+ array([-0.00092952, -0.15113318, 0.33152014, -0.04745745])
+
+ """
+ return vector * np.tanh(np.log(1 + np.exp(vector)))
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
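One numerical caveat: np.log(1 + np.exp(x)) overflows for large positive x, while np.logaddexp(0, x) computes the same softplus stably. A hedged alternative sketch, not the patch's implementation:

    import numpy as np

    def mish_stable(vector: np.ndarray) -> np.ndarray:
        # logaddexp(0, v) == log(1 + exp(v)) without overflowing exp()
        return vector * np.tanh(np.logaddexp(0.0, vector))

    print(mish_stable(np.array([2.3, 0.6, -2.0, -3.8])))
    # matches the patch's mish() for moderate inputs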
From 80a2087e0aa349b81fb6bbc5d73dae920f560e75 Mon Sep 17 00:00:00 2001
From: Kausthub Kannan
Date: Sat, 7 Oct 2023 01:56:09 +0530
Subject: [PATCH 288/808] Added Softplus activation function (#9944)
---
.../activation_functions/softplus.py | 37 +++++++++++++++++++
1 file changed, 37 insertions(+)
create mode 100644 neural_network/activation_functions/softplus.py
diff --git a/neural_network/activation_functions/softplus.py b/neural_network/activation_functions/softplus.py
new file mode 100644
index 000000000000..35fdf41afc96
--- /dev/null
+++ b/neural_network/activation_functions/softplus.py
@@ -0,0 +1,37 @@
+"""
+Softplus Activation Function
+
+Use Case: The Softplus function is a smooth approximation of the ReLU function.
+For more detailed information, you can refer to the following link:
+https://en.wikipedia.org/wiki/Rectifier_(neural_networks)#Softplus
+"""
+
+import numpy as np
+
+
+def softplus(vector: np.ndarray) -> np.ndarray:
+ """
+ Implements the Softplus activation function.
+
+ Parameters:
+ vector (np.ndarray): The input array for the Softplus activation.
+
+ Returns:
+ np.ndarray: The input array after applying the Softplus activation.
+
+ Formula: f(x) = ln(1 + e^x)
+
+ Examples:
+ >>> softplus(np.array([2.3, 0.6, -2, -3.8]))
+ array([2.39554546, 1.03748795, 0.12692801, 0.02212422])
+
+ >>> softplus(np.array([-9.2, -0.3, 0.45, -4.56]))
+ array([1.01034298e-04, 5.54355244e-01, 9.43248946e-01, 1.04077103e-02])
+ """
+ return np.log(1 + np.exp(vector))
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
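The derivative of softplus is the logistic sigmoid, which gives a quick numerical self-check; a sketch assuming the module path:

    import numpy as np
    from neural_network.activation_functions.softplus import softplus

    x = np.linspace(-4.0, 4.0, 9)
    sigmoid = 1.0 / (1.0 + np.exp(-x))
    numeric_grad = np.gradient(softplus(x), x)
    # interior central differences track the sigmoid closely
    assert np.allclose(numeric_grad[1:-1], sigmoid[1:-1], atol=0.05)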
From 2122474e41f2b85500e1f9347d98c9efc15aba4e Mon Sep 17 00:00:00 2001
From: Kamil <32775019+quant12345@users.noreply.github.com>
Date: Sat, 7 Oct 2023 14:09:39 +0500
Subject: [PATCH 289/808] Segmented sieve - doctests (#9945)
* Replacing the generator with numpy vector operations from lu_decomposition.
* Revert "Replacing the generator with numpy vector operations from lu_decomposition."
This reverts commit ad217c66165898d62b76cc89ba09c2d7049b6448.
* Added doctests.
* Update segmented_sieve.py
Removed unnecessary check.
* Update segmented_sieve.py
Added checks for 0 and negative numbers.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update segmented_sieve.py
* Update segmented_sieve.py
Added float number check.
* Update segmented_sieve.py
* Update segmented_sieve.py
simplified verification
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update segmented_sieve.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update segmented_sieve.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* ValueError: Number 22.2 must instead be a positive integer
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
maths/segmented_sieve.py | 38 ++++++++++++++++++++++++++++++++++++--
1 file changed, 36 insertions(+), 2 deletions(-)
diff --git a/maths/segmented_sieve.py b/maths/segmented_sieve.py
index e950a83b752a..125390edc588 100644
--- a/maths/segmented_sieve.py
+++ b/maths/segmented_sieve.py
@@ -4,7 +4,36 @@
def sieve(n: int) -> list[int]:
- """Segmented Sieve."""
+ """
+ Segmented Sieve.
+
+ Examples:
+ >>> sieve(8)
+ [2, 3, 5, 7]
+
+ >>> sieve(27)
+ [2, 3, 5, 7, 11, 13, 17, 19, 23]
+
+ >>> sieve(0)
+ Traceback (most recent call last):
+ ...
+ ValueError: Number 0 must instead be a positive integer
+
+ >>> sieve(-1)
+ Traceback (most recent call last):
+ ...
+ ValueError: Number -1 must instead be a positive integer
+
+ >>> sieve(22.2)
+ Traceback (most recent call last):
+ ...
+ ValueError: Number 22.2 must instead be a positive integer
+ """
+
+ if n <= 0 or isinstance(n, float):
+ msg = f"Number {n} must instead be a positive integer"
+ raise ValueError(msg)
+
in_prime = []
start = 2
end = int(math.sqrt(n)) # Size of every segment
@@ -42,4 +71,9 @@ def sieve(n: int) -> list[int]:
return prime
-print(sieve(10**6))
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
+
+ print(f"{sieve(10**6) = }")
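The segmented sieve can be validated against a plain Sieve of Eratosthenes; a minimal cross-check sketch, assuming both return the primes up to and including n:

    from maths.segmented_sieve import sieve

    def naive_sieve(n: int) -> list[int]:
        is_prime = [True] * (n + 1)
        is_prime[0] = is_prime[1] = False
        for p in range(2, int(n**0.5) + 1):
            if is_prime[p]:
                for multiple in range(p * p, n + 1, p):
                    is_prime[multiple] = False
        return [i for i, flag in enumerate(is_prime) if flag]

    assert sieve(1_000) == naive_sieve(1_000)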
From 678e0aa8cfdaae1d17536fdcf489bebe1e12cfc6 Mon Sep 17 00:00:00 2001
From: Saahil Mahato <115351000+saahil-mahato@users.noreply.github.com>
Date: Sat, 7 Oct 2023 15:20:23 +0545
Subject: [PATCH 290/808] Mention square matrices in strassen docs and make it
more clear (#9839)
* refactor: fix strassen matrix multiplication docs
* refactor: make docs more clear
---
divide_and_conquer/strassen_matrix_multiplication.py | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/divide_and_conquer/strassen_matrix_multiplication.py b/divide_and_conquer/strassen_matrix_multiplication.py
index 1d03950ef9fe..f529a255d2ef 100644
--- a/divide_and_conquer/strassen_matrix_multiplication.py
+++ b/divide_and_conquer/strassen_matrix_multiplication.py
@@ -74,7 +74,7 @@ def print_matrix(matrix: list) -> None:
def actual_strassen(matrix_a: list, matrix_b: list) -> list:
"""
Recursive function to calculate the product of two matrices, using the Strassen
- Algorithm. It only supports even length matrices.
+ Algorithm. It only supports square matrices of any size that is a power of 2.
"""
if matrix_dimensions(matrix_a) == (2, 2):
return default_matrix_multiplication(matrix_a, matrix_b)
@@ -129,8 +129,8 @@ def strassen(matrix1: list, matrix2: list) -> list:
new_matrix1 = matrix1
new_matrix2 = matrix2
- # Adding zeros to the matrices so that the arrays dimensions are the same and also
- # power of 2
+ # Adding zeros to the matrices to convert them both into square matrices of equal
+ # dimensions that are a power of 2
for i in range(maxim):
if i < dimension1[0]:
for _ in range(dimension1[1], maxim):
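The reworded comment describes zero-padding both inputs up to a common square dimension that is a power of 2, so the recursion can always split evenly. A minimal sketch of that size computation, with illustrative names not taken from the file:

    def next_power_of_2(n: int) -> int:
        """Smallest power of 2 greater than or equal to n, e.g. 5 -> 8, 8 -> 8."""
        power = 1
        while power < n:
            power *= 2
        return power

    def padded_dimension(rows1: int, cols1: int, rows2: int, cols2: int) -> int:
        # Both matrices are padded with zeros to a square of this size
        return next_power_of_2(max(rows1, cols1, rows2, cols2))

For example, multiplying a 2x3 matrix by a 3x4 matrix pads both to 4x4 before recursing.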
From 78af0c43c623332029c9ad1d240d81577aac5d72 Mon Sep 17 00:00:00 2001
From: Pronay Debnath
Date: Sat, 7 Oct 2023 21:21:30 +0530
Subject: [PATCH 291/808] Create fractional_cover_problem.py (#9973)
* Create fractional_cover_problem.py
* Update fractional_cover_problem.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update fractional_cover_problem.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update fractional_cover_problem.py
* Update fractional_cover_problem.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update fractional_cover_problem.py
* Update fractional_cover_problem.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update fractional_cover_problem.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Lose __eq__()
* Update fractional_cover_problem.py
* Define Item property ratio
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
greedy_methods/fractional_cover_problem.py | 102 +++++++++++++++++++++
1 file changed, 102 insertions(+)
create mode 100644 greedy_methods/fractional_cover_problem.py
diff --git a/greedy_methods/fractional_cover_problem.py b/greedy_methods/fractional_cover_problem.py
new file mode 100644
index 000000000000..e37c363f1db9
--- /dev/null
+++ b/greedy_methods/fractional_cover_problem.py
@@ -0,0 +1,102 @@
+# https://en.wikipedia.org/wiki/Set_cover_problem
+
+from dataclasses import dataclass
+from operator import attrgetter
+
+
+@dataclass
+class Item:
+ weight: int
+ value: int
+
+ @property
+ def ratio(self) -> float:
+ """
+ Return the value-to-weight ratio for the item.
+
+ Returns:
+ float: The value-to-weight ratio for the item.
+
+ Examples:
+ >>> Item(10, 65).ratio
+ 6.5
+
+ >>> Item(20, 100).ratio
+ 5.0
+
+ >>> Item(30, 120).ratio
+ 4.0
+ """
+ return self.value / self.weight
+
+
+def fractional_cover(items: list[Item], capacity: int) -> float:
+ """
+ Solve the Fractional Cover Problem.
+
+ Args:
+ items: A list of items, where each item has weight and value attributes.
+ capacity: The maximum weight capacity of the knapsack.
+
+ Returns:
+ The maximum value that can be obtained by selecting fractions of items to cover
+ the knapsack's capacity.
+
+ Raises:
+ ValueError: If capacity is negative.
+
+ Examples:
+ >>> fractional_cover((Item(10, 60), Item(20, 100), Item(30, 120)), capacity=50)
+ 240.0
+
+ >>> fractional_cover([Item(20, 100), Item(30, 120), Item(10, 60)], capacity=25)
+ 135.0
+
+ >>> fractional_cover([Item(10, 60), Item(20, 100), Item(30, 120)], capacity=60)
+ 280.0
+
+ >>> fractional_cover(items=[Item(5, 30), Item(10, 60), Item(15, 90)], capacity=30)
+ 180.0
+
+ >>> fractional_cover(items=[], capacity=50)
+ 0.0
+
+ >>> fractional_cover(items=[Item(10, 60)], capacity=5)
+ 30.0
+
+ >>> fractional_cover(items=[Item(10, 60)], capacity=1)
+ 6.0
+
+ >>> fractional_cover(items=[Item(10, 60)], capacity=0)
+ 0.0
+
+ >>> fractional_cover(items=[Item(10, 60)], capacity=-1)
+ Traceback (most recent call last):
+ ...
+ ValueError: Capacity cannot be negative
+ """
+ if capacity < 0:
+ raise ValueError("Capacity cannot be negative")
+
+ total_value = 0.0
+ remaining_capacity = capacity
+
+ # Sort the items by their value-to-weight ratio in descending order
+ for item in sorted(items, key=attrgetter("ratio"), reverse=True):
+ if remaining_capacity == 0:
+ break
+
+ weight_taken = min(item.weight, remaining_capacity)
+ total_value += weight_taken * item.ratio
+ remaining_capacity -= weight_taken
+
+ return total_value
+
+
+if __name__ == "__main__":
+ import doctest
+
+ if result := doctest.testmod().failed:
+ print(f"{result} test(s) failed")
+ else:
+ print("All tests passed")
From 112daddc4de91d60bbdd3201fc9a6a4afc60f57a Mon Sep 17 00:00:00 2001
From: dhruvtrigotra <72982592+dhruvtrigotra@users.noreply.github.com>
Date: Sun, 8 Oct 2023 00:34:24 +0530
Subject: [PATCH 292/808] charging_capacitor (#10016)
* charging_capacitor
* charging_capacitor
* Final edits
---------
Co-authored-by: Christian Clauss
---
electronics/charging_capacitor.py | 71 +++++++++++++++++++++++++++++++
1 file changed, 71 insertions(+)
create mode 100644 electronics/charging_capacitor.py
diff --git a/electronics/charging_capacitor.py b/electronics/charging_capacitor.py
new file mode 100644
index 000000000000..4029b0ecf267
--- /dev/null
+++ b/electronics/charging_capacitor.py
@@ -0,0 +1,71 @@
+# source - The ARRL Handbook for Radio Communications
+# https://en.wikipedia.org/wiki/RC_time_constant
+
+"""
+Description
+-----------
+When a capacitor is connected to a potential source (AC or DC), it starts to charge.
+When a resistor is connected in series with the capacitor, the capacitor charges more
+slowly, taking more time than it otherwise would. While the capacitor is being
+charged, its voltage is an exponential function of time.
+
+'resistance (ohms) * capacitance (farads)' is called the RC time constant, which may
+also be represented as τ (tau). Using this RC time constant, we can find the voltage
+at any time 't' after charging (or discharging) of the capacitor begins, with the
+help of the exponential function containing RC.
+"""
+from math import exp # value of exp = 2.718281828459…
+
+
+def charging_capacitor(
+ source_voltage: float, # voltage in volts.
+ resistance: float, # resistance in ohms.
+ capacitance: float, # capacitance in farads.
+ time_sec: float, # time in seconds after charging initiation of capacitor.
+) -> float:
+ """
+ Find capacitor voltage at any nth second after initiating its charging.
+
+ Examples
+ --------
+ >>> charging_capacitor(source_voltage=.2,resistance=.9,capacitance=8.4,time_sec=.5)
+ 0.013
+
+ >>> charging_capacitor(source_voltage=2.2,resistance=3.5,capacitance=2.4,time_sec=9)
+ 1.446
+
+ >>> charging_capacitor(source_voltage=15,resistance=200,capacitance=20,time_sec=2)
+ 0.007
+
+ >>> charging_capacitor(20, 2000, 30*pow(10,-5), 4)
+ 19.975
+
+ >>> charging_capacitor(source_voltage=0,resistance=10.0,capacitance=.30,time_sec=3)
+ Traceback (most recent call last):
+ ...
+ ValueError: Source voltage must be positive.
+
+ >>> charging_capacitor(source_voltage=20,resistance=-2000,capacitance=30,time_sec=4)
+ Traceback (most recent call last):
+ ...
+ ValueError: Resistance must be positive.
+
+ >>> charging_capacitor(source_voltage=30,resistance=1500,capacitance=0,time_sec=4)
+ Traceback (most recent call last):
+ ...
+ ValueError: Capacitance must be positive.
+ """
+
+ if source_voltage <= 0:
+ raise ValueError("Source voltage must be positive.")
+ if resistance <= 0:
+ raise ValueError("Resistance must be positive.")
+ if capacitance <= 0:
+ raise ValueError("Capacitance must be positive.")
+ return round(source_voltage * (1 - exp(-time_sec / (resistance * capacitance))), 3)
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
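As a sanity check on the formula in this patch: after one time constant (t = R*C), a charging capacitor reaches 1 - 1/e, about 63.2%, of the source voltage. A small sketch using arbitrary example values (not from the doctests):

    from math import exp

    source_voltage, resistance, capacitance = 10.0, 1000.0, 0.001  # tau = R*C = 1 s
    tau = resistance * capacitance
    voltage = source_voltage * (1 - exp(-tau / (resistance * capacitance)))
    print(round(voltage, 3))  # 6.321, i.e. ~63.2% of the 10 V source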
From 60291738d2552999545c414bb8a8e90f86c69678 Mon Sep 17 00:00:00 2001
From: Kosuri L Indu <118645569+kosuri-indu@users.noreply.github.com>
Date: Sun, 8 Oct 2023 00:38:38 +0530
Subject: [PATCH 293/808] add : trapped water program under dynamic programming
(#10027)
* to add the trapped water program
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* to make changes for error : B006
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* to make changes for error : B006
* to make changes for error : B006
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* to make changes in doctest
* to make changes in doctest
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update dynamic_programming/trapped_water.py
Co-authored-by: Christian Clauss
* Update dynamic_programming/trapped_water.py
Co-authored-by: Christian Clauss
* to make changes in parameters
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* to make changes in parameters
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update dynamic_programming/trapped_water.py
Co-authored-by: Christian Clauss
* to make changes in parameters
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* for negative heights
* Update dynamic_programming/trapped_water.py
Co-authored-by: Christian Clauss
* to remove falsy
* Final edits
* tuple[int, ...]
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
dynamic_programming/trapped_water.py | 60 ++++++++++++++++++++++++++++
1 file changed, 60 insertions(+)
create mode 100644 dynamic_programming/trapped_water.py
diff --git a/dynamic_programming/trapped_water.py b/dynamic_programming/trapped_water.py
new file mode 100644
index 000000000000..8bec9fac5fef
--- /dev/null
+++ b/dynamic_programming/trapped_water.py
@@ -0,0 +1,60 @@
+"""
+Given an array of non-negative integers representing an elevation map where the width
+of each bar is 1, this program calculates how much rainwater can be trapped.
+
+Example - height = (0, 1, 0, 2, 1, 0, 1, 3, 2, 1, 2, 1)
+Output: 6
+This problem can be solved using the concept of "DYNAMIC PROGRAMMING".
+
+We calculate the maximum height of bars on the left and right of every bar in the
+array. Then we iterate over the width of the structure: at each index, the amount of
+water that will be stored is equal to the minimum of the maximum bar heights on both
+sides minus the height of the bar at the current position.
+"""
+
+
+def trapped_rainwater(heights: tuple[int, ...]) -> int:
+ """
+ The trapped_rainwater function calculates the total amount of rainwater that can be
+ trapped given an array of bar heights.
+ It uses a dynamic programming approach, determining the maximum height of bars on
+ both sides for each bar, and then computing the trapped water above each bar.
+ The function returns the total trapped water.
+
+ >>> trapped_rainwater((0, 1, 0, 2, 1, 0, 1, 3, 2, 1, 2, 1))
+ 6
+ >>> trapped_rainwater((7, 1, 5, 3, 6, 4))
+ 9
+ >>> trapped_rainwater((7, 1, 5, 3, 6, -1))
+ Traceback (most recent call last):
+ ...
+ ValueError: No height can be negative
+ """
+ if not heights:
+ return 0
+ if any(h < 0 for h in heights):
+ raise ValueError("No height can be negative")
+ length = len(heights)
+
+ left_max = [0] * length
+ left_max[0] = heights[0]
+ for i, height in enumerate(heights[1:], start=1):
+ left_max[i] = max(height, left_max[i - 1])
+
+ right_max = [0] * length
+ right_max[-1] = heights[-1]
+ for i in range(length - 2, -1, -1):
+ right_max[i] = max(heights[i], right_max[i + 1])
+
+ return sum(
+ min(left, right) - height
+ for left, right, height in zip(left_max, right_max, heights)
+ )
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
+ print(f"{trapped_rainwater((0, 1, 0, 2, 1, 0, 1, 3, 2, 1, 2, 1)) = }")
+ print(f"{trapped_rainwater((7, 1, 5, 3, 6, 4)) = }")
From 895dffb412d80f29c65a062bf6d91fd2a70d8818 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]"
<66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Sat, 7 Oct 2023 21:32:28 +0200
Subject: [PATCH 294/808] [pre-commit.ci] pre-commit autoupdate (#9543)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.291 → v0.0.292](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.291...v0.0.292)
- [github.com/codespell-project/codespell: v2.2.5 → v2.2.6](https://github.com/codespell-project/codespell/compare/v2.2.5...v2.2.6)
- [github.com/tox-dev/pyproject-fmt: 1.1.0 → 1.2.0](https://github.com/tox-dev/pyproject-fmt/compare/1.1.0...1.2.0)
* updating DIRECTORY.md
* Fix typos in test_min_spanning_tree_prim.py
* Fix typos
* codespell --ignore-words-list=manuel
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: Tianyi Zheng
Co-authored-by: Christian Clauss
---
.pre-commit-config.yaml | 2 +-
.../cnn_classification.py.DISABLED.txt | 4 +--
computer_vision/mosaic_augmentation.py | 2 +-
dynamic_programming/min_distance_up_bottom.py | 11 +++---
graphs/tests/test_min_spanning_tree_prim.py | 8 ++---
hashes/sha1.py | 36 ++++++++++---------
maths/pi_generator.py | 31 +++++++---------
maths/radians.py | 4 +--
maths/square_root.py | 7 ++--
neural_network/convolution_neural_network.py | 8 ++---
neural_network/gan.py_tf | 2 +-
other/graham_scan.py | 8 ++---
other/linear_congruential_generator.py | 4 +--
other/password.py | 12 +++----
physics/speed_of_sound.py | 30 +++++++---------
project_euler/problem_035/sol1.py | 12 +++----
project_euler/problem_135/sol1.py | 30 +++++++---------
project_euler/problem_493/sol1.py | 2 +-
pyproject.toml | 2 +-
19 files changed, 97 insertions(+), 118 deletions(-)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index dbf7ff341243..8a88dcc07622 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -26,7 +26,7 @@ repos:
- id: black
- repo: https://github.com/codespell-project/codespell
- rev: v2.2.5
+ rev: v2.2.6
hooks:
- id: codespell
additional_dependencies:
diff --git a/computer_vision/cnn_classification.py.DISABLED.txt b/computer_vision/cnn_classification.py.DISABLED.txt
index 9b5f8c95eebf..b813b71033f3 100644
--- a/computer_vision/cnn_classification.py.DISABLED.txt
+++ b/computer_vision/cnn_classification.py.DISABLED.txt
@@ -11,10 +11,10 @@ Download dataset from :
https://lhncbc.nlm.nih.gov/LHC-publications/pubs/TuberculosisChestXrayImageDataSets.html
1. Download the dataset folder and create two folder training set and test set
-in the parent dataste folder
+in the parent dataset folder
2. Move 30-40 image from both TB positive and TB Negative folder
in the test set folder
-3. The labels of the iamges will be extracted from the folder name
+3. The labels of the images will be extracted from the folder name
the image is present in.
"""
diff --git a/computer_vision/mosaic_augmentation.py b/computer_vision/mosaic_augmentation.py
index c150126d6bfb..cd923dfe095f 100644
--- a/computer_vision/mosaic_augmentation.py
+++ b/computer_vision/mosaic_augmentation.py
@@ -8,7 +8,7 @@
import cv2
import numpy as np
-# Parrameters
+# Parameters
OUTPUT_SIZE = (720, 1280) # Height, Width
SCALE_RANGE = (0.4, 0.6) # if height or width lower than this scale, drop it.
FILTER_TINY_SCALE = 1 / 100
diff --git a/dynamic_programming/min_distance_up_bottom.py b/dynamic_programming/min_distance_up_bottom.py
index 4870c7ef4499..6b38a41a1c0a 100644
--- a/dynamic_programming/min_distance_up_bottom.py
+++ b/dynamic_programming/min_distance_up_bottom.py
@@ -1,11 +1,8 @@
"""
Author : Alexander Pantyukhin
Date : October 14, 2022
-This is implementation Dynamic Programming up bottom approach
-to find edit distance.
-The aim is to demonstate up bottom approach for solving the task.
-The implementation was tested on the
-leetcode: https://leetcode.com/problems/edit-distance/
+This is an implementation of the up-bottom approach to find edit distance.
+The implementation was tested on Leetcode: https://leetcode.com/problems/edit-distance/
Levinstein distance
Dynamic Programming: up -> down.
@@ -30,10 +27,10 @@ def min_distance_up_bottom(word1: str, word2: str) -> int:
@functools.cache
def min_distance(index1: int, index2: int) -> int:
- # if first word index is overflow - delete all from the second word
+ # if first word index overflows - delete all from the second word
if index1 >= len_word1:
return len_word2 - index2
- # if second word index is overflow - delete all from the first word
+ # if second word index overflows - delete all from the first word
if index2 >= len_word2:
return len_word1 - index1
diff = int(word1[index1] != word2[index2]) # current letters not identical
diff --git a/graphs/tests/test_min_spanning_tree_prim.py b/graphs/tests/test_min_spanning_tree_prim.py
index 91feab28fc81..66e5706dadb1 100644
--- a/graphs/tests/test_min_spanning_tree_prim.py
+++ b/graphs/tests/test_min_spanning_tree_prim.py
@@ -22,12 +22,12 @@ def test_prim_successful_result():
[1, 7, 11],
]
- adjancency = defaultdict(list)
+ adjacency = defaultdict(list)
for node1, node2, cost in edges:
- adjancency[node1].append([node2, cost])
- adjancency[node2].append([node1, cost])
+ adjacency[node1].append([node2, cost])
+ adjacency[node2].append([node1, cost])
- result = mst(adjancency)
+ result = mst(adjacency)
expected = [
[7, 6, 1],
diff --git a/hashes/sha1.py b/hashes/sha1.py
index 8a03673f3c9f..a0fa688f863e 100644
--- a/hashes/sha1.py
+++ b/hashes/sha1.py
@@ -1,26 +1,28 @@
"""
-Demonstrates implementation of SHA1 Hash function in a Python class and gives utilities
-to find hash of string or hash of text from a file.
+Implementation of the SHA1 hash function with utilities to find the hash of a string
+or of the text from a file. Also contains a Test class to verify that the generated
+hash matches what is returned by the hashlib library
+
Usage: python sha1.py --string "Hello World!!"
python sha1.py --file "hello_world.txt"
When run without any arguments, it prints the hash of the string "Hello World!!
Welcome to Cryptography"
-Also contains a Test class to verify that the generated Hash is same as that
-returned by the hashlib library
-SHA1 hash or SHA1 sum of a string is a cryptographic function which means it is easy
+SHA1 hash or SHA1 sum of a string is a cryptographic function, which means it is easy
to calculate forwards but extremely difficult to calculate backwards. What this means
-is, you can easily calculate the hash of a string, but it is extremely difficult to
-know the original string if you have its hash. This property is useful to communicate
-securely, send encrypted messages and is very useful in payment systems, blockchain
-and cryptocurrency etc.
-The Algorithm as described in the reference:
+is you can easily calculate the hash of a string, but it is extremely difficult to know
+the original string if you have its hash. This property is useful for communicating
+securely, send encrypted messages and is very useful in payment systems, blockchain and
+cryptocurrency etc.
+
+The algorithm as described in the reference:
First we start with a message. The message is padded and the length of the message
is added to the end. It is then split into blocks of 512 bits or 64 bytes. The blocks
are then processed one at a time. Each block must be expanded and compressed.
-The value after each compression is added to a 160bit buffer called the current hash
-state. After the last block is processed the current hash state is returned as
+The value after each compression is added to a 160-bit buffer called the current hash
+state. After the last block is processed, the current hash state is returned as
the final hash.
+
Reference: https://deadhacker.com/2006/02/21/sha-1-illustrated/
"""
import argparse
@@ -30,18 +32,18 @@
class SHA1Hash:
"""
- Class to contain the entire pipeline for SHA1 Hashing Algorithm
+ Class to contain the entire pipeline for SHA1 hashing algorithm
>>> SHA1Hash(bytes('Allan', 'utf-8')).final_hash()
'872af2d8ac3d8695387e7c804bf0e02c18df9e6e'
"""
def __init__(self, data):
"""
- Inititates the variables data and h. h is a list of 5 8-digit Hexadecimal
+ Initiates the variables data and h. h is a list of 5 8-digit hexadecimal
numbers corresponding to
(1732584193, 4023233417, 2562383102, 271733878, 3285377520)
respectively. We will start with this as a message digest. 0x is how you write
- Hexadecimal numbers in Python
+ hexadecimal numbers in Python
"""
self.data = data
self.h = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0]
@@ -90,7 +92,7 @@ def final_hash(self):
For each block, the variable h that was initialized is copied to a,b,c,d,e
and these 5 variables a,b,c,d,e undergo several changes. After all the blocks
are processed, these 5 variables are pairwise added to h ie a to h[0], b to h[1]
- and so on. This h becomes our final hash which is returned.
+ and so on. This h becomes our final hash which is returned.
"""
self.padded_data = self.padding()
self.blocks = self.split_blocks()
@@ -135,7 +137,7 @@ def test_sha1_hash():
def main():
"""
Provides option 'string' or 'file' to take input and prints the calculated SHA1
- hash. unittest.main() has been commented because we probably don't want to run
+ hash. unittest.main() has been commented out because we probably don't want to run
the test each time.
"""
# unittest.main()
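The docstring's expected hash can be cross-checked against the standard library, which is exactly what the module's Test class does:

    import hashlib

    # same value as the SHA1Hash doctest above
    assert hashlib.sha1(b"Allan").hexdigest() == "872af2d8ac3d8695387e7c804bf0e02c18df9e6e"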
diff --git a/maths/pi_generator.py b/maths/pi_generator.py
index dcd218aae309..addd921747ba 100644
--- a/maths/pi_generator.py
+++ b/maths/pi_generator.py
@@ -3,60 +3,53 @@ def calculate_pi(limit: int) -> str:
https://en.wikipedia.org/wiki/Leibniz_formula_for_%CF%80
Leibniz Formula for Pi
- The Leibniz formula is the special case arctan 1 = 1/4 Pi .
+ The Leibniz formula is the special case arctan(1) = pi / 4.
Leibniz's formula converges extremely slowly: it exhibits sublinear convergence.
Convergence (https://en.wikipedia.org/wiki/Leibniz_formula_for_%CF%80#Convergence)
We cannot try to prove against an interrupted, uncompleted generation.
https://en.wikipedia.org/wiki/Leibniz_formula_for_%CF%80#Unusual_behaviour
- The errors can in fact be predicted;
- but those calculations also approach infinity for accuracy.
+ The errors can in fact be predicted, but those calculations also approach infinity
+ for accuracy.
- Our output will always be a string since we can defintely store all digits in there.
- For simplicity' sake, let's just compare against known values and since our outpit
- is a string, we need to convert to float.
+ Our output will be a string so that we can definitely store all digits.
>>> import math
>>> float(calculate_pi(15)) == math.pi
True
- Since we cannot predict errors or interrupt any infinite alternating
- series generation since they approach infinity,
- or interrupt any alternating series, we are going to need math.isclose()
+ Since we cannot predict errors in, or interrupt, the generation of an infinite
+ alternating series (it approaches infinity), we'll need math.isclose()
>>> math.isclose(float(calculate_pi(50)), math.pi)
True
-
>>> math.isclose(float(calculate_pi(100)), math.pi)
True
- Since math.pi-constant contains only 16 digits, here some test with preknown values:
+ Since math.pi contains only 16 digits, here are some tests with known values:
>>> calculate_pi(50)
'3.14159265358979323846264338327950288419716939937510'
>>> calculate_pi(80)
'3.14159265358979323846264338327950288419716939937510582097494459230781640628620899'
-
- To apply the Leibniz formula for calculating pi,
- the variables q, r, t, k, n, and l are used for the iteration process.
"""
+ # Variables used for the iteration process
q = 1
r = 0
t = 1
k = 1
n = 3
l = 3
+
decimal = limit
counter = 0
result = ""
- """
- We will avoid using yield since we otherwise get a Generator-Object,
- which we can't just compare against anything. We would have to make a list out of it
- after the generation, so we will just stick to plain return logic:
- """
+ # We can't compare against anything if we make a generator,
+ # so we'll stick with plain return logic
while counter != decimal + 1:
if 4 * q + r - t < n * t:
result += str(n)
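The sublinear convergence mentioned in the docstring is easy to observe with a plain partial sum of the Leibniz series pi/4 = 1 - 1/3 + 1/5 - ..., independent of the spigot algorithm in this file (a hedged sketch, not the file's code):

    import math

    def leibniz_partial(terms: int) -> float:
        return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

    print(abs(leibniz_partial(1_000) - math.pi))    # ~1e-3 after 1,000 terms
    print(abs(leibniz_partial(100_000) - math.pi))  # ~1e-5 after 100,000 terms

Each extra digit of precision costs roughly 10x more terms, which is why the file uses a digit spigot instead of the raw series.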
diff --git a/maths/radians.py b/maths/radians.py
index 465467a3ba08..b8ac61cb135c 100644
--- a/maths/radians.py
+++ b/maths/radians.py
@@ -3,7 +3,7 @@
def radians(degree: float) -> float:
"""
- Coverts the given angle from degrees to radians
+ Converts the given angle from degrees to radians
https://en.wikipedia.org/wiki/Radian
>>> radians(180)
@@ -16,7 +16,7 @@ def radians(degree: float) -> float:
1.9167205845401725
>>> from math import radians as math_radians
- >>> all(abs(radians(i)-math_radians(i)) <= 0.00000001 for i in range(-2, 361))
+ >>> all(abs(radians(i) - math_radians(i)) <= 1e-8 for i in range(-2, 361))
True
"""
diff --git a/maths/square_root.py b/maths/square_root.py
index 2cbf14beae18..4462ccb75261 100644
--- a/maths/square_root.py
+++ b/maths/square_root.py
@@ -19,14 +19,13 @@ def get_initial_point(a: float) -> float:
def square_root_iterative(
- a: float, max_iter: int = 9999, tolerance: float = 0.00000000000001
+ a: float, max_iter: int = 9999, tolerance: float = 1e-14
) -> float:
"""
- Square root is aproximated using Newtons method.
+ Square root approximated using Newton's method.
https://en.wikipedia.org/wiki/Newton%27s_method
- >>> all(abs(square_root_iterative(i)-math.sqrt(i)) <= .00000000000001
- ... for i in range(500))
+ >>> all(abs(square_root_iterative(i) - math.sqrt(i)) <= 1e-14 for i in range(500))
True
>>> square_root_iterative(-1)
diff --git a/neural_network/convolution_neural_network.py b/neural_network/convolution_neural_network.py
index f5ec156f3593..f2e88fe7bd88 100644
--- a/neural_network/convolution_neural_network.py
+++ b/neural_network/convolution_neural_network.py
@@ -2,7 +2,7 @@
- - - - - -- - - - - - - - - - - - - - - - - - - - - - -
Name - - CNN - Convolution Neural Network For Photo Recognizing
Goal - - Recognize Handing Writing Word Photo
- Detail:Total 5 layers neural network
+ Detail: Total 5 layers neural network
* Convolution layer
* Pooling layer
* Input layer layer of BP
@@ -24,7 +24,7 @@ def __init__(
self, conv1_get, size_p1, bp_num1, bp_num2, bp_num3, rate_w=0.2, rate_t=0.2
):
"""
- :param conv1_get: [a,c,d],size, number, step of convolution kernel
+ :param conv1_get: [a,c,d], size, number, step of convolution kernel
:param size_p1: pooling size
:param bp_num1: units number of flatten layer
:param bp_num2: units number of hidden layer
@@ -71,7 +71,7 @@ def save_model(self, save_path):
with open(save_path, "wb") as f:
pickle.dump(model_dic, f)
- print(f"Model saved: {save_path}")
+ print(f"Model saved: {save_path}")
@classmethod
def read_model(cls, model_path):
@@ -210,7 +210,7 @@ def _calculate_gradient_from_pool(
def train(
self, patterns, datas_train, datas_teach, n_repeat, error_accuracy, draw_e=bool
):
- # model traning
+ # model training
print("----------------------Start Training-------------------------")
print((" - - Shape: Train_Data ", np.shape(datas_train)))
print((" - - Shape: Teach_Data ", np.shape(datas_teach)))
diff --git a/neural_network/gan.py_tf b/neural_network/gan.py_tf
index deb062c48dc7..9c6e1c05b8b4 100644
--- a/neural_network/gan.py_tf
+++ b/neural_network/gan.py_tf
@@ -158,7 +158,7 @@ if __name__ == "__main__":
# G_b2 = np.random.normal(size=(784),scale=(1. / np.sqrt(784 / 2.))) *0.002
G_b7 = np.zeros(784)
- # 3. For Adam Optimzier
+ # 3. For Adam Optimizer
v1, m1 = 0, 0
v2, m2 = 0, 0
v3, m3 = 0, 0
diff --git a/other/graham_scan.py b/other/graham_scan.py
index 2eadb4e56668..3f11d40f141c 100644
--- a/other/graham_scan.py
+++ b/other/graham_scan.py
@@ -1,5 +1,5 @@
"""
-This is a pure Python implementation of the merge-insertion sort algorithm
+This is a pure Python implementation of the Graham scan algorithm
Source: https://en.wikipedia.org/wiki/Graham_scan
For doctests run following command:
@@ -142,8 +142,8 @@ def graham_scan(points: list[tuple[int, int]]) -> list[tuple[int, int]]:
stack.append(sorted_points[0])
stack.append(sorted_points[1])
stack.append(sorted_points[2])
- # In any ways, the first 3 points line are towards left.
- # Because we sort them the angle from minx, miny.
+ # The first 3 points turn towards the left because we sorted them by their angle
+ # from minx, miny.
current_direction = Direction.left
for i in range(3, len(sorted_points)):
@@ -164,7 +164,7 @@ def graham_scan(points: list[tuple[int, int]]) -> list[tuple[int, int]]:
break
elif current_direction == Direction.right:
# If the straight line is towards right,
- # every previous points on those straigh line is not convex hull.
+ # every previous point on that straight line is not on the convex hull.
stack.pop()
if next_direction == Direction.right:
stack.pop()
diff --git a/other/linear_congruential_generator.py b/other/linear_congruential_generator.py
index c016310f9cfa..c7de15b94bbd 100644
--- a/other/linear_congruential_generator.py
+++ b/other/linear_congruential_generator.py
@@ -8,9 +8,9 @@ class LinearCongruentialGenerator:
A pseudorandom number generator.
"""
- # The default value for **seed** is the result of a function call which is not
+ # The default value for **seed** is the result of a function call, which is not
# normally recommended and causes ruff to raise a B008 error. However, in this case,
- # it is accptable because `LinearCongruentialGenerator.__init__()` will only be
+ # it is acceptable because `LinearCongruentialGenerator.__init__()` will only be
# called once per instance and it ensures that each instance will generate a unique
# sequence of numbers.
diff --git a/other/password.py b/other/password.py
index 9a6161af87d7..1ce0d52316e6 100644
--- a/other/password.py
+++ b/other/password.py
@@ -63,11 +63,12 @@ def random_characters(chars_incl, i):
pass # Put your code here...
-# This Will Check Whether A Given Password Is Strong Or Not
-# It Follows The Rule that Length Of Password Should Be At Least 8 Characters
-# And At Least 1 Lower, 1 Upper, 1 Number And 1 Special Character
def is_strong_password(password: str, min_length: int = 8) -> bool:
"""
+ This will check whether a given password is strong or not. The password must be at
+ least as long as the provided minimum length, and it must contain at least 1
+ lowercase letter, 1 uppercase letter, 1 number and 1 special character.
+
>>> is_strong_password('Hwea7$2!')
True
>>> is_strong_password('Sh0r1')
@@ -81,7 +82,6 @@ def is_strong_password(password: str, min_length: int = 8) -> bool:
"""
if len(password) < min_length:
- # Your Password must be at least 8 characters long
return False
upper = any(char in ascii_uppercase for char in password)
@@ -90,8 +90,6 @@ def is_strong_password(password: str, min_length: int = 8) -> bool:
spec_char = any(char in punctuation for char in password)
return upper and lower and num and spec_char
- # Passwords should contain UPPERCASE, lowerase
- # numbers, and special characters
def main():
@@ -104,7 +102,7 @@ def main():
"Alternative Password generated:",
alternative_password_generator(chars_incl, length),
)
- print("[If you are thinking of using this passsword, You better save it.]")
+ print("[If you are thinking of using this password, You better save it.]")
if __name__ == "__main__":
diff --git a/physics/speed_of_sound.py b/physics/speed_of_sound.py
index a4658366a36c..3fa952cdb411 100644
--- a/physics/speed_of_sound.py
+++ b/physics/speed_of_sound.py
@@ -2,39 +2,35 @@
Title : Calculating the speed of sound
Description :
- The speed of sound (c) is the speed that a sound wave travels
- per unit time (m/s). During propagation, the sound wave propagates
- through an elastic medium. Its SI unit is meter per second (m/s).
+ The speed of sound (c) is the speed that a sound wave travels per unit time (m/s).
+ During propagation, the sound wave propagates through an elastic medium.
- Only longitudinal waves can propagate in liquids and gas other then
- solid where they also travel in transverse wave. The following Algo-
- rithem calculates the speed of sound in fluid depanding on the bulk
- module and the density of the fluid.
+ Sound propagates as longitudinal waves in liquids and gases and as transverse waves
+ in solids. This file calculates the speed of sound in a fluid based on its bulk
+ module and density.
- Equation for calculating speed od sound in fluid:
- c_fluid = (K_s*p)**0.5
+ Equation for the speed of sound in a fluid:
+ c_fluid = sqrt(K_s / p)
c_fluid: speed of sound in fluid
K_s: isentropic bulk modulus
p: density of fluid
-
-
Source : https://en.wikipedia.org/wiki/Speed_of_sound
"""
def speed_of_sound_in_a_fluid(density: float, bulk_modulus: float) -> float:
"""
- This method calculates the speed of sound in fluid -
- This is calculated from the other two provided values
+ Calculates the speed of sound in a fluid from its density and bulk modulus
+
Examples:
- Example 1 --> Water 20°C: bulk_moduls= 2.15MPa, density=998kg/m³
- Example 2 --> Murcery 20°: bulk_moduls= 28.5MPa, density=13600kg/m³
+ Example 1 --> Water 20°C: bulk_modulus= 2.15MPa, density=998kg/m³
+ Example 2 --> Mercury 20°C: bulk_modulus= 28.5MPa, density=13600kg/m³
- >>> speed_of_sound_in_a_fluid(bulk_modulus=2.15*10**9, density=998)
+ >>> speed_of_sound_in_a_fluid(bulk_modulus=2.15e9, density=998)
1467.7563207952705
- >>> speed_of_sound_in_a_fluid(bulk_modulus=28.5*10**9, density=13600)
+ >>> speed_of_sound_in_a_fluid(bulk_modulus=28.5e9, density=13600)
1447.614670861731
"""
diff --git a/project_euler/problem_035/sol1.py b/project_euler/problem_035/sol1.py
index 17a4e9088ae2..644c992ed8a5 100644
--- a/project_euler/problem_035/sol1.py
+++ b/project_euler/problem_035/sol1.py
@@ -11,18 +11,18 @@
How many circular primes are there below one million?
To solve this problem in an efficient manner, we will first mark all the primes
-below 1 million using the Seive of Eratosthenes. Then, out of all these primes,
-we will rule out the numbers which contain an even digit. After this we will
+below 1 million using the Sieve of Eratosthenes. Then, out of all these primes,
+we will rule out the numbers which contain an even digit. After this we will
generate each circular combination of the number and check if all are prime.
"""
from __future__ import annotations
-seive = [True] * 1000001
+sieve = [True] * 1000001
i = 2
while i * i <= 1000000:
- if seive[i]:
+ if sieve[i]:
for j in range(i * i, 1000001, i):
- seive[j] = False
+ sieve[j] = False
i += 1
@@ -36,7 +36,7 @@ def is_prime(n: int) -> bool:
>>> is_prime(25363)
False
"""
- return seive[n]
+ return sieve[n]
def contains_an_even_digit(n: int) -> bool:
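"Generate each circular combination" means rotating the digits of the number. A hedged helper showing the idea (the name digit_rotations is illustrative, not from this file):

    def digit_rotations(n: int) -> list[int]:
        """All rotations of n's digits, e.g. 197 -> [197, 971, 719]."""
        digits = str(n)
        return [int(digits[i:] + digits[:i]) for i in range(len(digits))]

    print(digit_rotations(197))  # [197, 971, 719] -- all prime, so 197 is circular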
diff --git a/project_euler/problem_135/sol1.py b/project_euler/problem_135/sol1.py
index d71a0439c7e9..ac91fa4e2b9d 100644
--- a/project_euler/problem_135/sol1.py
+++ b/project_euler/problem_135/sol1.py
@@ -1,28 +1,22 @@
"""
Project Euler Problem 135: https://projecteuler.net/problem=135
-Given the positive integers, x, y, and z,
-are consecutive terms of an arithmetic progression,
-the least value of the positive integer, n,
-for which the equation,
+Given the positive integers, x, y, and z, are consecutive terms of an arithmetic
+progression, the least value of the positive integer, n, for which the equation,
x^2 − y^2 − z^2 = n, has exactly two solutions is n = 27:
34^2 − 27^2 − 20^2 = 12^2 − 9^2 − 6^2 = 27
-It turns out that n = 1155 is the least value
-which has exactly ten solutions.
+It turns out that n = 1155 is the least value which has exactly ten solutions.
-How many values of n less than one million
-have exactly ten distinct solutions?
+How many values of n less than one million have exactly ten distinct solutions?
-Taking x,y,z of the form a+d,a,a-d respectively,
-the given equation reduces to a*(4d-a)=n.
-Calculating no of solutions for every n till 1 million by fixing a
-,and n must be multiple of a.
-Total no of steps=n*(1/1+1/2+1/3+1/4..+1/n)
-,so roughly O(nlogn) time complexity.
-
+Taking x, y, z of the form a + d, a, a - d respectively, the given equation reduces to
+a * (4d - a) = n.
+Calculating no of solutions for every n till 1 million by fixing a, and n must be a
+multiple of a. Total no of steps = n * (1/1 + 1/2 + 1/3 + 1/4 + ... + 1/n), so roughly
+O(nlogn) time complexity.
"""
@@ -42,15 +36,15 @@ def solution(limit: int = 1000000) -> int:
for first_term in range(1, limit):
for n in range(first_term, limit, first_term):
common_difference = first_term + n / first_term
- if common_difference % 4: # d must be divisble by 4
+ if common_difference % 4: # d must be divisible by 4
continue
else:
common_difference /= 4
if (
first_term > common_difference
and first_term < 4 * common_difference
- ): # since x,y,z are positive integers
- frequency[n] += 1 # so z>0 and a>d ,also 4d<a
+ ): # since x, y, z are positive integers
+ frequency[n] += 1 # so z > 0, a > d and 4d < a
count = sum(1 for x in frequency[1:limit] if x == 10)
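The reduction quoted in the docstring is one line of algebra. With x = a + d, y = a, z = a - d:

    x^2 - y^2 - z^2 = (a + d)^2 - a^2 - (a - d)^2
                    = (a^2 + 2ad + d^2) - a^2 - (a^2 - 2ad + d^2)
                    = 4ad - a^2
                    = a(4d - a) = n

so for each candidate a it suffices to step through the multiples n of a, which is exactly what the nested loops do.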
diff --git a/project_euler/problem_493/sol1.py b/project_euler/problem_493/sol1.py
index c9879a528230..4d96c6c3207e 100644
--- a/project_euler/problem_493/sol1.py
+++ b/project_euler/problem_493/sol1.py
@@ -9,7 +9,7 @@
This combinatorial problem can be solved by decomposing the problem into the
following steps:
-1. Calculate the total number of possible picking cominations
+1. Calculate the total number of possible picking combinations
[combinations := binom_coeff(70, 20)]
2. Calculate the number of combinations with one colour missing
[missing := binom_coeff(60, 20)]
diff --git a/pyproject.toml b/pyproject.toml
index f9091fb8578d..75da7a04513e 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -130,5 +130,5 @@ omit = [".env/*"]
sort = "Cover"
[tool.codespell]
-ignore-words-list = "3rt,ans,crate,damon,fo,followings,hist,iff,kwanza,mater,secant,som,sur,tim,zar"
+ignore-words-list = "3rt,ans,crate,damon,fo,followings,hist,iff,kwanza,manuel,mater,secant,som,sur,tim,zar"
skip = "./.*,*.json,ciphers/prehistoric_men.txt,project_euler/problem_022/p022_names.txt,pyproject.toml,strings/dictionary.txt,strings/words.txt"
From fa077e6703758afcae4f19347a4388b9230d568f Mon Sep 17 00:00:00 2001
From: hollowcrust <72879387+hollowcrust@users.noreply.github.com>
Date: Sun, 8 Oct 2023 16:58:48 +0800
Subject: [PATCH 295/808] Add doctests, type hints; fix bug for
dynamic_programming/minimum_partition.py (#10012)
* Add doctests, type hints; fix bug
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
dynamic_programming/minimum_partition.py | 24 +++++++++++++++++++++---
1 file changed, 21 insertions(+), 3 deletions(-)
diff --git a/dynamic_programming/minimum_partition.py b/dynamic_programming/minimum_partition.py
index 3daa9767fde4..e6188cb33b3a 100644
--- a/dynamic_programming/minimum_partition.py
+++ b/dynamic_programming/minimum_partition.py
@@ -3,13 +3,25 @@
"""
-def find_min(arr):
+def find_min(arr: list[int]) -> int:
+ """
+ >>> find_min([1, 2, 3, 4, 5])
+ 1
+ >>> find_min([5, 5, 5, 5, 5])
+ 5
+ >>> find_min([5, 5, 5, 5])
+ 0
+ >>> find_min([3])
+ 3
+ >>> find_min([])
+ 0
+ """
n = len(arr)
s = sum(arr)
dp = [[False for x in range(s + 1)] for y in range(n + 1)]
- for i in range(1, n + 1):
+ for i in range(n + 1):
dp[i][0] = True
for i in range(1, s + 1):
@@ -17,7 +29,7 @@ def find_min(arr):
for i in range(1, n + 1):
for j in range(1, s + 1):
- dp[i][j] = dp[i][j - 1]
+ dp[i][j] = dp[i - 1][j]
if arr[i - 1] <= j:
dp[i][j] = dp[i][j] or dp[i - 1][j - arr[i - 1]]
@@ -28,3 +40,9 @@ def find_min(arr):
break
return diff
+
+
+if __name__ == "__main__":
+ from doctest import testmod
+
+ testmod()
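The one-character fix above matters because dp[i][j] means "some subset of the first i elements of arr sums to exactly j". Excluding element i must copy the answer for the same sum with one fewer element, dp[i - 1][j], not dp[i][j - 1], which is a different sum. Restated as comments:

    # dp[i][j]: can some subset of arr[:i] sum to exactly j?
    # dp[i][j] = dp[i - 1][j]                      (exclude arr[i - 1])
    #            or dp[i - 1][j - arr[i - 1]]      (include arr[i - 1], if it fits)

With the old dp[i][j - 1], every dp[i][j] becomes True because dp[i][0] is True, so for example find_min([3]) would have returned 1 instead of 3.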
From 937ce83b150f0a217c7fa63c75a095534ae8bfeb Mon Sep 17 00:00:00 2001
From: Om Alve
Date: Sun, 8 Oct 2023 16:35:01 +0530
Subject: [PATCH 296/808] Added fractionated_morse_cipher (#9442)
* Added fractionated_morse_cipher
* Added return type hint for main function
* Added doctest for main
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Replaced main function
* changed the references section
Co-authored-by: Christian Clauss
* removed repetitive datatype hint in the docstring
Co-authored-by: Christian Clauss
* changed dictionary comprehension variable names to something more compact
Co-authored-by: Christian Clauss
* Update fractionated_morse_cipher.py
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
---
ciphers/fractionated_morse_cipher.py | 167 +++++++++++++++++++++++++++
1 file changed, 167 insertions(+)
create mode 100644 ciphers/fractionated_morse_cipher.py
diff --git a/ciphers/fractionated_morse_cipher.py b/ciphers/fractionated_morse_cipher.py
new file mode 100644
index 000000000000..c1d5dc6d50aa
--- /dev/null
+++ b/ciphers/fractionated_morse_cipher.py
@@ -0,0 +1,167 @@
+"""
+Python program for the Fractionated Morse Cipher.
+
+The Fractionated Morse cipher first converts the plaintext to Morse code,
+then enciphers fixed-size blocks of Morse code back to letters.
+This procedure means plaintext letters are mixed into the ciphertext letters,
+making it more secure than substitution ciphers.
+
+http://practicalcryptography.com/ciphers/fractionated-morse-cipher/
+"""
+import string
+
+MORSE_CODE_DICT = {
+ "A": ".-",
+ "B": "-...",
+ "C": "-.-.",
+ "D": "-..",
+ "E": ".",
+ "F": "..-.",
+ "G": "--.",
+ "H": "....",
+ "I": "..",
+ "J": ".---",
+ "K": "-.-",
+ "L": ".-..",
+ "M": "--",
+ "N": "-.",
+ "O": "---",
+ "P": ".--.",
+ "Q": "--.-",
+ "R": ".-.",
+ "S": "...",
+ "T": "-",
+ "U": "..-",
+ "V": "...-",
+ "W": ".--",
+ "X": "-..-",
+ "Y": "-.--",
+ "Z": "--..",
+ " ": "",
+}
+
+# Define possible trigrams of Morse code
+MORSE_COMBINATIONS = [
+ "...",
+ "..-",
+ "..x",
+ ".-.",
+ ".--",
+ ".-x",
+ ".x.",
+ ".x-",
+ ".xx",
+ "-..",
+ "-.-",
+ "-.x",
+ "--.",
+ "---",
+ "--x",
+ "-x.",
+ "-x-",
+ "-xx",
+ "x..",
+ "x.-",
+ "x.x",
+ "x-.",
+ "x--",
+ "x-x",
+ "xx.",
+ "xx-",
+ "xxx",
+]
+
+# Create a reverse dictionary for Morse code
+REVERSE_DICT = {value: key for key, value in MORSE_CODE_DICT.items()}
+
+
+def encode_to_morse(plaintext: str) -> str:
+ """Encode a plaintext message into Morse code.
+
+ Args:
+ plaintext: The plaintext message to encode.
+
+ Returns:
+ The Morse code representation of the plaintext message.
+
+ Example:
+ >>> encode_to_morse("defend the east")
+ '-..x.x..-.x.x-.x-..xx-x....x.xx.x.-x...x-'
+ """
+ return "x".join([MORSE_CODE_DICT.get(letter.upper(), "") for letter in plaintext])
+
+
+def encrypt_fractionated_morse(plaintext: str, key: str) -> str:
+ """Encrypt a plaintext message using Fractionated Morse Cipher.
+
+ Args:
+ plaintext: The plaintext message to encrypt.
+ key: The encryption key.
+
+ Returns:
+ The encrypted ciphertext.
+
+ Example:
+ >>> encrypt_fractionated_morse("defend the east","Roundtable")
+ 'ESOAVVLJRSSTRX'
+
+ """
+ morse_code = encode_to_morse(plaintext)
+ key = key.upper() + string.ascii_uppercase
+ key = "".join(sorted(set(key), key=key.find))
+
+ # Ensure morse_code length is a multiple of 3
+ padding_length = 3 - (len(morse_code) % 3)
+ morse_code += "x" * padding_length
+
+ fractionated_morse_dict = {v: k for k, v in zip(key, MORSE_COMBINATIONS)}
+ fractionated_morse_dict["xxx"] = ""
+ encrypted_text = "".join(
+ [
+ fractionated_morse_dict[morse_code[i : i + 3]]
+ for i in range(0, len(morse_code), 3)
+ ]
+ )
+ return encrypted_text
+
+
+def decrypt_fractionated_morse(ciphertext: str, key: str) -> str:
+ """Decrypt a ciphertext message encrypted with Fractionated Morse Cipher.
+
+ Args:
+ ciphertext: The ciphertext message to decrypt.
+ key: The decryption key.
+
+ Returns:
+ The decrypted plaintext message.
+
+ Example:
+ >>> decrypt_fractionated_morse("ESOAVVLJRSSTRX","Roundtable")
+ 'DEFEND THE EAST'
+ """
+ key = key.upper() + string.ascii_uppercase
+ key = "".join(sorted(set(key), key=key.find))
+
+ inverse_fractionated_morse_dict = dict(zip(key, MORSE_COMBINATIONS))
+ morse_code = "".join(
+ [inverse_fractionated_morse_dict.get(letter, "") for letter in ciphertext]
+ )
+ decrypted_text = "".join(
+ [REVERSE_DICT[code] for code in morse_code.split("x")]
+ ).strip()
+ return decrypted_text
+
+
+if __name__ == "__main__":
+ """
+ Example usage of Fractionated Morse Cipher.
+ """
+ plaintext = "defend the east"
+ print("Plain Text:", plaintext)
+ key = "ROUNDTABLE"
+
+ ciphertext = encrypt_fractionated_morse(plaintext, key)
+ print("Encrypted:", ciphertext)
+
+ decrypted_text = decrypt_fractionated_morse(ciphertext, key)
+ print("Decrypted:", decrypted_text)
From 08d394126c9d46fc9d227a0dc1e343ad1fa70679 Mon Sep 17 00:00:00 2001
From: Kausthub Kannan
Date: Sun, 8 Oct 2023 21:18:22 +0530
Subject: [PATCH 297/808] Changed Mish Activation Function to use Softplus
(#10111)
---
neural_network/activation_functions/mish.py | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/neural_network/activation_functions/mish.py b/neural_network/activation_functions/mish.py
index e4f98307f2ba..e51655df8a3f 100644
--- a/neural_network/activation_functions/mish.py
+++ b/neural_network/activation_functions/mish.py
@@ -7,6 +7,7 @@
"""
import numpy as np
+from softplus import softplus
def mish(vector: np.ndarray) -> np.ndarray:
@@ -30,7 +31,7 @@ def mish(vector: np.ndarray) -> np.ndarray:
array([-0.00092952, -0.15113318, 0.33152014, -0.04745745])
"""
- return vector * np.tanh(np.log(1 + np.exp(vector)))
+ return vector * np.tanh(softplus(vector))
if __name__ == "__main__":
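The new import assumes a sibling softplus module in the same activation_functions directory. For reference, the conventional definition is exactly the expression the old line inlined (a sketch under that assumption):

    import numpy as np

    def softplus(vector: np.ndarray) -> np.ndarray:
        """Softplus activation: ln(1 + e^x), a smooth approximation of ReLU."""
        return np.log(1 + np.exp(vector))

so the computed mish(x) = x * tanh(softplus(x)) is numerically unchanged by this patch.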
From 6860daea60a512b202481bd5dd00d6534e162b77 Mon Sep 17 00:00:00 2001
From: Aarya Balwadkar <142713127+AaryaBalwadkar@users.noreply.github.com>
Date: Sun, 8 Oct 2023 21:23:38 +0530
Subject: [PATCH 298/808] Made Changes shifted CRT, modular division to maths
directory (#10084)
---
{blockchain => maths}/chinese_remainder_theorem.py | 0
{blockchain => maths}/modular_division.py | 0
2 files changed, 0 insertions(+), 0 deletions(-)
rename {blockchain => maths}/chinese_remainder_theorem.py (100%)
rename {blockchain => maths}/modular_division.py (100%)
diff --git a/blockchain/chinese_remainder_theorem.py b/maths/chinese_remainder_theorem.py
similarity index 100%
rename from blockchain/chinese_remainder_theorem.py
rename to maths/chinese_remainder_theorem.py
diff --git a/blockchain/modular_division.py b/maths/modular_division.py
similarity index 100%
rename from blockchain/modular_division.py
rename to maths/modular_division.py
From 81b29066d206217cb689fe2c9c8d530a1aa66cbe Mon Sep 17 00:00:00 2001
From: Arnav Kohli <95236897+THEGAMECHANGER416@users.noreply.github.com>
Date: Sun, 8 Oct 2023 21:34:43 +0530
Subject: [PATCH 299/808] Created folder for losses in Machine_Learning (#9969)
* Created folder for losses in Machine_Learning
* Update binary_cross_entropy.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update mean_squared_error.py
* Update binary_cross_entropy.py
* Update mean_squared_error.py
* Update binary_cross_entropy.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update mean_squared_error.py
* Update binary_cross_entropy.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update mean_squared_error.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update binary_cross_entropy.py
* Update mean_squared_error.py
* Update binary_cross_entropy.py
* Update mean_squared_error.py
* Update machine_learning/losses/binary_cross_entropy.py
Co-authored-by: Christian Clauss
* Update machine_learning/losses/mean_squared_error.py
Co-authored-by: Christian Clauss
* Update machine_learning/losses/binary_cross_entropy.py
Co-authored-by: Christian Clauss
* Update mean_squared_error.py
* Update machine_learning/losses/mean_squared_error.py
Co-authored-by: Tianyi Zheng
* Update binary_cross_entropy.py
* Update mean_squared_error.py
* Update binary_cross_entropy.py
* Update mean_squared_error.py
* Update mean_squared_error.py
* Update binary_cross_entropy.py
* renamed: losses -> loss_functions
* updated 2 files
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update mean_squared_error.py
* Update mean_squared_error.py
* Update binary_cross_entropy.py
* Update mean_squared_error.py
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Christian Clauss
Co-authored-by: Tianyi Zheng