[pull] master from TheAlgorithms:master #17


Merged: 2 commits, Mar 12, 2024
5 changes: 5 additions & 0 deletions DIRECTORY.md
@@ -134,6 +134,7 @@
* [Run Length Encoding](compression/run_length_encoding.py)

## Computer Vision
+* [Cnn Classification](computer_vision/cnn_classification.py)
* [Flip Augmentation](computer_vision/flip_augmentation.py)
* [Haralick Descriptors](computer_vision/haralick_descriptors.py)
* [Harris Corner](computer_vision/harris_corner.py)
@@ -344,6 +345,7 @@
* [Floyd Warshall](dynamic_programming/floyd_warshall.py)
* [Integer Partition](dynamic_programming/integer_partition.py)
* [Iterating Through Submasks](dynamic_programming/iterating_through_submasks.py)
+* [K Means Clustering Tensorflow](dynamic_programming/k_means_clustering_tensorflow.py)
* [Knapsack](dynamic_programming/knapsack.py)
* [Largest Divisible Subset](dynamic_programming/largest_divisible_subset.py)
* [Longest Common Subsequence](dynamic_programming/longest_common_subsequence.py)
@@ -571,6 +573,8 @@
* [Local Weighted Learning](machine_learning/local_weighted_learning/local_weighted_learning.py)
* [Logistic Regression](machine_learning/logistic_regression.py)
* [Loss Functions](machine_learning/loss_functions.py)
+* Lstm
+  * [Lstm Prediction](machine_learning/lstm/lstm_prediction.py)
* [Mfcc](machine_learning/mfcc.py)
* [Multilayer Perceptron Classifier](machine_learning/multilayer_perceptron_classifier.py)
* [Polynomial Regression](machine_learning/polynomial_regression.py)
@@ -801,6 +805,7 @@
* [Swish](neural_network/activation_functions/swish.py)
* [Back Propagation Neural Network](neural_network/back_propagation_neural_network.py)
* [Convolution Neural Network](neural_network/convolution_neural_network.py)
+* [Input Data](neural_network/input_data.py)
* [Simple Neural Network](neural_network/simple_neural_network.py)

## Other
16 changes: 6 additions & 10 deletions ciphers/rsa_cipher.py
@@ -76,11 +76,9 @@ def encrypt_and_write_to_file(
key_size, n, e = read_key_file(key_filename)
if key_size < block_size * 8:
sys.exit(
-"ERROR: Block size is {} bits and key size is {} bits. The RSA cipher "
-"requires the block size to be equal to or greater than the key size. "
-"Either decrease the block size or use different keys.".format(
-    block_size * 8, key_size
-)
+f"ERROR: Block size is {block_size * 8} bits and key size is {key_size} "
+"bits. The RSA cipher requires the block size to be equal to or greater "
+"than the key size. Either decrease the block size or use different keys."
)

encrypted_blocks = [str(i) for i in encrypt_message(message, (n, e), block_size)]
@@ -102,11 +100,9 @@ def read_from_file_and_decrypt(message_filename: str, key_filename: str) -> str:

if key_size < block_size * 8:
sys.exit(
-"ERROR: Block size is {} bits and key size is {} bits. The RSA cipher "
-"requires the block size to be equal to or greater than the key size. "
-"Did you specify the correct key file and encrypted file?".format(
-    block_size * 8, key_size
-)
+f"ERROR: Block size is {block_size * 8} bits and key size is {key_size} "
+"bits. The RSA cipher requires the block size to be equal to or greater "
+"than the key size. Were the correct key file and encrypted file specified?"
)

encrypted_blocks = []
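The two hunks above replace `str.format` calls with f-strings. A minimal sketch of the equivalence, using made-up sizes rather than values from the actual cipher:

```python
# Hypothetical values for illustration only.
block_size = 1024
key_size = 1024

old_style = "ERROR: Block size is {} bits and key size is {} bits.".format(
    block_size * 8, key_size
)
new_style = f"ERROR: Block size is {block_size * 8} bits and key size is {key_size} bits."

# The rewrite is purely stylistic; both render the same message.
assert old_style == new_style
```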
4 changes: 2 additions & 2 deletions machine_learning/astar.py
@@ -57,7 +57,7 @@ def __init__(self, world_size=(5, 5)):
def show(self):
print(self.w)

-def get_neigbours(self, cell):
+def get_neighbours(self, cell):
"""
Return the neighbours of cell
"""
@@ -110,7 +110,7 @@ def astar(world, start, goal):
_closed.append(_open.pop(min_f))
if current == goal:
break
-for n in world.get_neigbours(current):
+for n in world.get_neighbours(current):
for c in _closed:
if c == n:
continue
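The change above only fixes the `get_neigbours` spelling. For context, a standalone sketch of what a bounded-grid neighbour lookup of this shape can look like (a 4-connected version written for illustration; the class's actual connectivity may differ):

```python
def get_neighbours(cell, world_size=(5, 5)):
    """Return the in-bounds 4-connected neighbours of cell.

    Hypothetical free-function sketch, not the PR's method.
    """
    x, y = cell
    candidates = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return [
        (cx, cy)
        for cx, cy in candidates
        if 0 <= cx < world_size[0] and 0 <= cy < world_size[1]
    ]
```

Corner cells get two neighbours, edge cells three, interior cells four.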
@@ -17,11 +17,11 @@
make sure you set the price column on line number 21. Here we
use a dataset which have the price on 3rd column.
"""
-df = pd.read_csv("sample_data.csv", header=None)
-len_data = df.shape[:1][0]
+sample_data = pd.read_csv("sample_data.csv", header=None)
+len_data = sample_data.shape[:1][0]
# If you're using some other dataset input the target column
-actual_data = df.iloc[:, 1:2]
-actual_data = actual_data.values.reshape(len_data, 1)
+actual_data = sample_data.iloc[:, 1:2]
+actual_data = actual_data.to_numpy().reshape(len_data, 1)
actual_data = MinMaxScaler().fit_transform(actual_data)
look_back = 10
forward_days = 5
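Besides renaming `df`, the hunk above swaps the `.values` attribute for `DataFrame.to_numpy()`, the accessor pandas documentation recommends. A self-contained sketch of that extraction step on a dummy frame (the real script reads `sample_data.csv`):

```python
import pandas as pd

# Dummy three-column frame standing in for sample_data.csv.
sample_data = pd.DataFrame([[0, 1.0, 9.0], [1, 2.0, 8.0], [2, 3.0, 7.0]])
len_data = sample_data.shape[0]

# Slice column 1 as a DataFrame, then convert to an (n, 1) NumPy array,
# mirroring the updated lines of the script.
actual_data = sample_data.iloc[:, 1:2].to_numpy().reshape(len_data, 1)
assert actual_data.shape == (3, 1)
```

`.to_numpy()` behaves like `.values` here; it simply makes the DataFrame-to-array conversion explicit.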
@@ -18,17 +18,22 @@
"""


-import collections
import gzip
import os
+import typing
import urllib

import numpy
from tensorflow.python.framework import dtypes, random_seed
from tensorflow.python.platform import gfile
from tensorflow.python.util.deprecation import deprecated

-_Datasets = collections.namedtuple("_Datasets", ["train", "validation", "test"])

+class _Datasets(typing.NamedTuple):
+    train: "_DataSet"
+    validation: "_DataSet"
+    test: "_DataSet"


# CVDF mirror of http://yann.lecun.com/exdb/mnist/
DEFAULT_SOURCE_URL = "https://storage.googleapis.com/cvdf-datasets/mnist/"
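The hunk above replaces a `collections.namedtuple` factory with a `typing.NamedTuple` subclass, which keeps the same runtime tuple behaviour while adding per-field type annotations. A minimal illustration with dummy field values (string fields stand in for the module's `_DataSet` objects):

```python
import collections
import typing

# Old style: fields are named but untyped.
DatasetsOld = collections.namedtuple("DatasetsOld", ["train", "validation", "test"])

# New style: same fields, now with annotations that type checkers can use.
class DatasetsNew(typing.NamedTuple):
    train: str
    validation: str
    test: str

old = DatasetsOld("a", "b", "c")
new = DatasetsNew("a", "b", "c")

# Both behave as tuples with named attribute access.
assert tuple(old) == tuple(new)
assert new.validation == "b"
```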
5 changes: 3 additions & 2 deletions other/lfu_cache.py
@@ -24,8 +24,9 @@ def __init__(self, key: T | None, val: U | None):
self.prev: DoubleLinkedListNode[T, U] | None = None

def __repr__(self) -> str:
-return "Node: key: {}, val: {}, freq: {}, has next: {}, has prev: {}".format(
-    self.key, self.val, self.freq, self.next is not None, self.prev is not None
+return (
+    f"Node: key: {self.key}, val: {self.val}, freq: {self.freq}, "
+    f"has next: {self.next is not None}, has prev: {self.prev is not None}"
)


2 changes: 1 addition & 1 deletion requirements.txt
@@ -17,7 +17,7 @@ rich
scikit-learn
statsmodels
sympy
-tensorflow ; python_version < '3.12'
+tensorflow
tweepy
# yulewalker # uncomment once audio_filters/equal_loudness_filter.py is fixed
typing_extensions