Machine Learning: the basic code template that can be reused for any model implementation, so the programmer does not have to rethink it from scratch every time.
Importing Libraries
1. import numpy as np:
This library is used for mathematical calculations, so many equations can be computed easily.
2. import pandas as pd:
This library is used for importing datasets and for operations on datasets. It also contains built-in functions that make it easy to transform data into a specific format.
3. import matplotlib.pyplot as plot:
This library is used for scientific plotting. It is used to represent data in specific formats such as trees, pie charts, graphs, etc.
4. Library sklearn:
This library is used for formatting data and transforming it into a specific format. Some of its classes and modules are OneHotEncoder and preprocessing. Sklearn is used for data transformation, scaling ranges, and finding missing data. (A minimal usage sketch of these four libraries follows this list.)
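As a rough sketch only, here is how the four libraries are typically used together. The ages and salaries below are invented for illustration; a real dataset would normally be imported with pd.read_csv('data.csv').
import numpy as np
import pandas as pd
import matplotlib.pyplot as plot
from sklearn.preprocessing import StandardScaler
# numpy: mathematical calculations on arrays
ages = np.array([25, 32, 47, 51])
print(np.mean(ages))                        # average age
# pandas: datasets are normally imported with pd.read_csv('data.csv');
# here a small DataFrame is built in place so the sketch runs on its own
dataset = pd.DataFrame({'Age': ages, 'Salary': [48000, 54000, 61000, 72000]})
# matplotlib: represent the data, e.g. as a simple graph
plot.plot(dataset['Age'], dataset['Salary'])
plot.show()
# sklearn: transform the data into a limited range
scaled = StandardScaler().fit_transform(dataset[['Age', 'Salary']])
print(scaled)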
Data Processing
Pandas:
Used to import the data
Sklearn:
One of the best libraries for preprocessing. It is used for creating dummy variables by passing a column index, for categorical data, and for missing-data conditions (a short sketch of these tasks follows below; the full template comes later in these notes).
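A quick sketch of those preprocessing tasks on tiny made-up arrays (the values are invented for illustration; the full template below does the same on data.csv):
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder
# missing data: replace NaN with the column mean
ages = np.array([[25.0], [np.nan], [47.0]])
ages = SimpleImputer(strategy='mean').fit_transform(ages)
# categorical data / dummy variables: one-hot encode a text column
countries = np.array([['France'], ['Spain'], ['France']])
dummies = OneHotEncoder(categories='auto').fit_transform(countries).toarray()
print(ages, dummies)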
For Learning Sklearn
Sklearn : https://scikit-learn.org/stable/tutorial/index.html
Tensorflow:
Used for many machine learning models, such as face detection, and many other things (a minimal sketch follows below).
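As a rough illustration only, a minimal TensorFlow/Keras model sketch; the layer sizes, input shape, and binary-classification setup are assumptions, not taken from the notes above.
import tensorflow as tf
# a tiny feed-forward model; shapes and layer sizes are placeholder assumptions
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(X_train, Y_train, epochs=10)  # train once preprocessed data is available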
# this template is heavily needed; feature scaling is not needed most of the time, but we use it when required
import numpy as np
import matplotlib.pyplot as plot
import pandas as pd
#import the data sets
dataset = pd.read_csv('data.csv')
X = dataset.iloc[: , :-1].values
Y = dataset.iloc[:, 3].values   # column 3 (the last column) is the target
# split the dataset into training and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)  # test_size sets the fraction of the data held out for testing
# scaling (this basic template assumes all feature columns are numeric at this point)
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
# scaling the dummy variables is optional
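The template assumes a file named data.csv. As an illustration only, the snippet below writes a hypothetical file with invented columns and values (a categorical country column, two numeric columns with missing entries, and a target column) that the fuller template below can process end to end.
import pandas as pd
# a hypothetical data.csv; the columns and values are invented for illustration
pd.DataFrame({
    'Country': ['France', 'Spain', 'Germany', 'France'],
    'Age': [44, 27, 30, None],
    'Salary': [72000, 48000, None, 61000],
    'Purchased': ['No', 'Yes', 'No', 'Yes'],
}).to_csv('data.csv', index=False)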
Feature Scaling :
# feature scaling: features such as ages and salaries are given on very different ranges, and the Euclidean distance is dominated by the larger one, so we shrink the values into a limited range
# two ways of scaling: (standardization) X_stand = (X - mean(X)) / standard_deviation(X)  and  (normalization) X_norm = (X - min(X)) / (max(X) - min(X))  (a NumPy sketch of both follows after this template)
import numpy as np
import matplotlib.pyplot as plot
import pandas as pd
#data processing
dataset = pd.read_csv('data.csv')
X = dataset.iloc[:,:-1].values
Y = dataset.iloc[:,3].values
#missing data conditions
from sklearn.impute import SimpleImputer
simp = SimpleImputer(missing_values=np.nan, strategy='mean')   # replace NaN with the column mean
simp = simp.fit(X[:, 1:3])
X[: ,1:3] = simp.transform(X[: ,1:3])
print(X)
# categorical data
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from sklearn.compose import ColumnTransformer
labelencoder_X = LabelEncoder()
X[:, 0] = labelencoder_X.fit_transform(X[:, 0])
# dummy variables are created by passing the index of the categorical column
onehotencoder = ColumnTransformer([('encoder', OneHotEncoder(categories='auto'), [0])],
                                  remainder='passthrough', sparse_threshold=0)
X = onehotencoder.fit_transform(X)
labelencoder_Y = LabelEncoder()
Y = labelencoder_Y.fit_transform(Y)
# split the dataset into training and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)  # test_size sets the fraction of the data held out for testing
#scaling
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
print(X_train)
print(X_test)
# scaling the dummy variables is optional
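As referenced in the scaling comment above, here is a small NumPy sketch of the two formulas; the salary values are invented for illustration.
import numpy as np
salaries = np.array([48000.0, 61000.0, 72000.0])   # hypothetical values
# standardization: X_stand = (X - mean(X)) / std(X)
salaries_stand = (salaries - salaries.mean()) / salaries.std()
# normalization: X_norm = (X - min(X)) / (max(X) - min(X))
salaries_norm = (salaries - salaries.min()) / (salaries.max() - salaries.min())
print(salaries_stand)   # centered around 0
print(salaries_norm)    # squeezed into [0, 1]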
I am GR, with 3+ years of experience in SEO, content writing, and keyword research, and a software developer with skills in OS, web development, Flask, Python, C++, data structures, and algorithms; I also write reviews.