Machine Learning
- Introduction and Main Principles
- Machine learning
- Data analysis
- Occam's razor
- Curse of dimensionality
- No free lunch theorem
- Accuracy paradox
- Overfitting
- Regularization (machine learning)
- Inductive bias
- Data dredging
- Ugly duckling theorem
- Uncertain data
- Background and Preliminaries
- Knowledge discovery in databases
- Knowledge discovery
- Data mining
- Predictive analytics
- Predictive modelling
- Business intelligence
- Reactive business intelligence
- Business analytics
- Pattern recognition
- Reasoning
- Abductive reasoning
- Inductive reasoning
- First-order logic
- Inductive logic programming
- Reasoning system
- Case-based reasoning
- Textual case-based reasoning
- Causality
- Search Methods
- Nearest neighbor search
- Stochastic gradient descent
- Beam search
- Best-first search
- Breadth-first search
- Hill climbing
- Grid search
- Brute-force search
- Depth-first search
- Tabu search
- Anytime algorithm
- Statistics
- Exploratory data analysis
- Covariate
- Statistical inference
- Algorithmic inference
- Bayesian inference
- Base rate
- Bias (statistics)
- Gibbs sampling
- Cross-entropy method
- Latent variable
- Maximum likelihood
- Maximum a posteriori estimation
- Expectation–maximization algorithm
- Expectation propagation
- Kullback–Leibler divergence
- Generative model
- Main Learning Paradigms
- Supervised learning
- Unsupervised learning
- Active learning (machine learning)
- Reinforcement learning
- Multi-task learning
- Transduction
- Explanation-based learning
- Offline learning
- Online learning model
- Online machine learning
- Hyperparameter optimization
- Classification Tasks
- Classification in machine learning
- Concept class
- Features (pattern recognition)
- Feature vector
- Feature space
- Concept learning
- Binary classification
- Decision boundary
- Multiclass classification
- Class membership probabilities
- Calibration (statistics)
- Concept drift
- Prior knowledge for pattern recognition
- Online Learning
- Margin-infused relaxed algorithm
- Semi-supervised learning
- One-class classification
- Coupled pattern learner
- Lazy Learning and Nearest Neighbors
- Lazy learning
- Eager learning
- Instance-based learning
- Cluster assumption
- K-nearest neighbor algorithm
- iDistance
- Large margin nearest neighbor
- Decision Trees
- Decision tree learning
- Decision stump
- Pruning (decision trees)
- Mutual information
- Adjusted mutual information
- Information gain ratio
- Information gain in decision trees
- ID3 algorithm
- C4.5 algorithm
- CHAID
- Information Fuzzy Networks
- Grafting (decision trees)
- Incremental decision tree
- Alternating decision tree
- Logistic model tree
- Random forest
- Linear Classifiers
- Linear classifier
- Margin (machine learning)
- Margin classifier
- Soft independent modelling of class analogies
- Statistical classification
- Probability matching
- Discriminative model
- Linear discriminant analysis
- Multiclass LDA
- Multiple discriminant analysis
- Optimal discriminant analysis
- Fisher kernel
- Discriminant function analysis
- Multilinear subspace learning
- Quadratic classifier
- Variable kernel density estimation
- Category utility
- Evaluation of Classification Models
- Data classification (business intelligence)
- Training set
- Test set
- Synthetic data
- Cross-validation (statistics)
- Loss function
- Hinge loss
- Generalization error
- Type I and type II errors
- Sensitivity and specificity
- Precision and recall
- F1 score
- Confusion matrix
- Matthews correlation coefficient
- Receiver operating characteristic
- Lift (data mining)
- Stability in learning
- Feature Selection and Feature Extraction
- Data pre-processing
- Discretization of continuous features
- Feature selection
- Feature extraction
- Dimension reduction
- Principal component analysis
- Multilinear principal-component analysis
- Multifactor dimensionality reduction
- Targeted projection pursuit
- Multidimensional scaling
- Nonlinear dimensionality reduction
- Kernel principal component analysis
- Kernel eigenvoice
- Gramian matrix
- Gaussian process
- Kernel adaptive filter
- Isomap
- Manifold alignment
- Diffusion map
- Elastic map
- Locality-sensitive hashing
- Spectral clustering
- Minimum redundancy feature selection
- Clustering
- Cluster analysis
- K-means clustering
- K-means++
- K-medians clustering
- K-medoids
- DBSCAN
- Fuzzy clustering
- BIRCH (data clustering)
- Canopy clustering algorithm
- Cluster-weighted modeling
- Clustering high-dimensional data
- Cobweb (clustering)
- Complete-linkage clustering
- Constrained clustering
- Correlation clustering
- CURE data clustering algorithm
- Data stream clustering
- Dendrogram
- Determining the number of clusters in a data set
- FLAME clustering
- Hierarchical clustering
- Information bottleneck method
- Lloyd's algorithm
- Nearest-neighbor chain algorithm
- Neighbor joining
- OPTICS algorithm
- Pitman–Yor process
- Single-linkage clustering
- SUBCLU
- Thresholding (image processing)
- UPGMA