40 Practical Python Scripts for AI Beginners


Welcome back to your favorite section on https://aizonex.blogspot.com. Today we're sharing 40 Practical Python Scripts for AI Beginners that you can start using in your own work. They're simple examples for now, but this is just the beginning: over the next month the site will fill up with complete projects you can rely on. And don't forget to follow our Facebook page at https://www.facebook.com/profile.php?id=61579052772784, because what's coming next is truly amazing!


1. Print Current Time

from datetime import datetime
print(datetime.now())

Explanation: Useful for logging when your AI program runs.


2. Create a Sequential Array

import numpy as np
arr = np.arange(10)
print(arr)

Explanation: Quickly generates test data for AI projects.


3. Calculate Standard Deviation

import numpy as np
data = [1,2,3,4,5]
print(np.std(data))

Explanation: Measures data spread, important for preprocessing.


4. Generate Identity Matrix

import numpy as np
print(np.eye(4))

Explanation: Common in linear algebra for AI algorithms.


5. Load Iris Dataset

from sklearn.datasets import load_iris
iris = load_iris()
print(iris.data[:5])

Explanation: Classic dataset for learning classification.


6. Plot Iris Dataset

import matplotlib.pyplot as plt
plt.scatter(iris.data[:,0], iris.data[:,1])
plt.show()

Explanation: Visualizes feature relationships.


7. Decision Tree Classifier

from sklearn.tree import DecisionTreeClassifier
model = DecisionTreeClassifier()
model.fit(iris.data, iris.target)

Explanation: Train a basic decision tree classifier.


8. Predict with Decision Tree

print(model.predict([[5.1,3.5,1.4,0.2]]))

Explanation: Test the trained model on new input.
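The prediction above is a numeric class index. As a small follow-up (rebuilding the model here so the snippet runs on its own), you can map that index back to a species name:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
model = DecisionTreeClassifier().fit(iris.data, iris.target)

# Map the numeric class prediction back to a species name
pred = model.predict([[5.1, 3.5, 1.4, 0.2]])
print(iris.target_names[pred][0])  # this measurement matches a setosa sample
```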


9. View Decision Tree Structure

from sklearn import tree
print(tree.export_text(model))

Explanation: Understand how your model makes decisions.


10. Confusion Matrix

from sklearn.metrics import confusion_matrix
print(confusion_matrix(iris.target, model.predict(iris.data)))

Explanation: Evaluate classification accuracy.


11. Confusion Matrix Heatmap

import seaborn as sns
sns.heatmap(confusion_matrix(iris.target, model.predict(iris.data)), annot=True)
plt.show()

Explanation: Easier-to-read evaluation results.


12. PCA for Dimensionality Reduction

from sklearn.decomposition import PCA
pca = PCA(n_components=2)
reduced = pca.fit_transform(iris.data)
print(reduced[:5])

Explanation: Speeds up training by reducing features.
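To judge how much information the two components keep, you can inspect `explained_variance_ratio_`. A self-contained sketch:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

iris = load_iris()
pca = PCA(n_components=2)
reduced = pca.fit_transform(iris.data)

# Fraction of the total variance each component captures
print(pca.explained_variance_ratio_)
print(pca.explained_variance_ratio_.sum())
```

For Iris, the first two components retain most of the variance, which is why the 2-D plot in the next script still shows clear clusters.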


13. Plot PCA Results

plt.scatter(reduced[:,0], reduced[:,1], c=iris.target)
plt.show()

Explanation: Shows data clusters in fewer dimensions.


14. Load MNIST Dataset

from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', as_frame=False)  # as_frame=False returns NumPy arrays
print(mnist.data.shape)

Explanation: Famous dataset for digit recognition.


15. Train SVM on MNIST

from sklearn.svm import SVC
clf = SVC()
clf.fit(mnist.data[:1000], mnist.target[:1000])

Explanation: Classify images with a powerful algorithm.


16. Test SVM Model

print(clf.predict(mnist.data[1001:1002]))

Explanation: Predict on unseen handwritten digit.


17. One-Hot Encoding

import pandas as pd
df = pd.DataFrame({'color': ['red','blue','green']})
print(pd.get_dummies(df))

Explanation: Convert text categories into numbers.


18. Normalize Data

from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
scaled = scaler.fit_transform([[10],[20],[30]])
print(scaled)

Explanation: Scale values between 0 and 1.
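A fitted scaler can also undo its own transformation, which is handy when you need values back in their original units. A short self-contained sketch with the same numbers:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

data = np.array([[10], [20], [30]])
scaler = MinMaxScaler()
scaled = scaler.fit_transform(data)
print(scaled.ravel())  # values rescaled into [0, 1]

# inverse_transform recovers the original values
print(scaler.inverse_transform(scaled).ravel())
```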


19. Standardize Data

from sklearn.preprocessing import StandardScaler
std_scaled = StandardScaler().fit_transform([[10],[20],[30]])
print(std_scaled)

Explanation: Makes data have mean 0 and variance 1.


20. Histogram Plot

plt.hist(iris.data[:,0], bins=10)
plt.show()

Explanation: See how values are distributed.


21. Simple Neural Network

import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

Explanation: Foundation for deep learning projects.


22. Compile TensorFlow Model

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

Explanation: Prepares the model for training.


23. Train Neural Network

# Scale pixel values to [0, 1] and cast the string labels to integers
model.fit(mnist.data[:5000] / 255.0, mnist.target[:5000].astype(int), epochs=5)

Explanation: Improves accuracy with training cycles.


24. Evaluate Neural Network

print(model.evaluate(mnist.data[5000:6000] / 255.0, mnist.target[5000:6000].astype(int)))

Explanation: Tests model on unseen data.


25. Add Dropout Layer

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation='softmax')
])

Explanation: Prevents overfitting during training.


26. Load & Preprocess Image

import tensorflow as tf
import numpy as np
img = tf.keras.preprocessing.image.load_img('image.jpg', target_size=(28,28), color_mode='grayscale')
arr = np.array(img)/255.0
print(arr.shape)

Explanation: Prepare images for neural networks.


27. Early Stopping

callback = tf.keras.callbacks.EarlyStopping(patience=3)

Explanation: Stops training when performance stops improving.
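The callback only takes effect once you pass it to `fit()`. Here is a small self-contained sketch on synthetic data; the layer sizes, epoch count, and `monitor='loss'` choice are illustrative assumptions:

```python
import numpy as np
import tensorflow as tf

# Tiny synthetic dataset: 100 samples, 4 features, 3 classes
X = np.random.rand(100, 4)
y = np.random.randint(0, 3, 100)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# Stop if the training loss fails to improve for 3 consecutive epochs
early_stop = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)
history = model.fit(X, y, epochs=50, callbacks=[early_stop], verbose=0)

# Training may end well before epoch 50 if the loss plateaus
print(len(history.history['loss']))
```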


28. Save Best Model Only

checkpoint = tf.keras.callbacks.ModelCheckpoint('best_model.h5', save_best_only=True)

Explanation: Keeps only the top-performing model.


29. Convert Array to DataFrame

df = pd.DataFrame(np.random.rand(5,3), columns=list('ABC'))
print(df)

Explanation: Organizes raw data into table format.


30. Group Data

df = pd.DataFrame({'team': ['x', 'x', 'y'], 'score': [1, 2, 3]})
print(df.groupby('team').mean())

Explanation: Summarizes data by key feature.


31. Covariance Matrix

print(np.cov(np.random.rand(3,5)))

Explanation: Understand variable relationships.


32. Boxplot

import seaborn as sns
sns.boxplot(x=iris.data[:,0])
plt.show()

Explanation: Detects outliers in data.


33. Hyperparameter Tuning (GridSearchCV)

from sklearn.model_selection import GridSearchCV
param_grid = {'max_depth': [3,5,7]}
grid = GridSearchCV(DecisionTreeClassifier(), param_grid)
grid.fit(iris.data, iris.target)
print(grid.best_params_)

Explanation: Finds the best model parameters.


34. Cross-Validation

from sklearn.model_selection import cross_val_score
print(cross_val_score(DecisionTreeClassifier(), iris.data, iris.target, cv=5))

Explanation: Reliable model performance testing.
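The five fold scores are easier to report as a single mean with a spread. A self-contained sketch (`random_state=0` is an illustrative choice for reproducibility):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
scores = cross_val_score(DecisionTreeClassifier(random_state=0),
                         iris.data, iris.target, cv=5)

# Summarize the five fold scores as one estimate with its variability
print(f"{scores.mean():.3f} +/- {scores.std():.3f}")
```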


35. ROC Curve

from sklearn.metrics import roc_curve
fpr, tpr, _ = roc_curve([0,0,1,1],[0.1,0.4,0.35,0.8])
plt.plot(fpr, tpr)
plt.show()

Explanation: Evaluates binary classifiers.


36. AUC Score

from sklearn.metrics import roc_auc_score
print(roc_auc_score([0,0,1,1],[0.1,0.4,0.35,0.8]))

Explanation: Single score for classifier quality.


37. Heatmap of Large Data

sns.heatmap(np.random.rand(10,10), cmap='coolwarm')
plt.show()

Explanation: Visualizes data correlations.


38. Random Forest Classifier

from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier()
rf.fit(iris.data, iris.target)

Explanation: Strong ensemble classifier.


39. Feature Importance

print(rf.feature_importances_)

Explanation: Shows which features matter most.
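The raw importance array is hard to read without labels. A self-contained sketch pairing each value with its feature name:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
rf = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)

# Pair each importance score with its feature name for readability
for name, score in zip(iris.feature_names, rf.feature_importances_):
    print(f"{name}: {score:.3f}")
```

The importances sum to 1, so each value can be read as the feature's share of the model's decisions.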


40. Save Model as Pickle

import pickle
with open('model.pkl', 'wb') as f:
    pickle.dump(rf, f)

Explanation: Save and reload models easily.
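The explanation mentions reloading, so here is the matching load step as a self-contained sketch (it trains a small model first so it runs on its own, and writes `model.pkl` to the current directory):

```python
import pickle
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
rf = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)

with open('model.pkl', 'wb') as f:
    pickle.dump(rf, f)

# Reload the model later and use it without retraining
with open('model.pkl', 'rb') as f:
    restored = pickle.load(f)

print(restored.predict(iris.data[:1]))
```

Note that pickle files should only be loaded from sources you trust, since unpickling can execute arbitrary code.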
