This course covers the fundamental topics in machine learning and prepares the audience for more advanced topics (for example, image processing, natural language processing (NLP), and deep learning) and for practical courses, such as the training and application of machine learning algorithms in R and Python.
The course combines graphical presentation and intuition with the essential mathematical notation.
- Provide a thorough introduction to probability theory and statistical inference, including maximum-likelihood and Bayesian approaches;
- Introduce supervised learning methods: linear and nonlinear regression and classification algorithms;
- Introduce unsupervised learning methods: clustering and dimensionality reduction;
- Provide a brief introduction to directed graphical models, with a case study.
- Be able to describe the difference between frequentist and Bayesian statistics;
- Understand the fundamentals of probability theory, Bayes' rule and Bayesian inference, and the characteristics of the major probability distributions;
- Gain a good understanding of the major supervised learning algorithms, specifically linear-in-parameters regression, Bayesian linear regression, and classification methods;
- Gain a good understanding of the main unsupervised learning algorithms, specifically clustering and dimensionality reduction;
- Become familiar with directed graphical models as a technique for combining supervised and unsupervised learning in a single modelling framework;
- Be prepared to build on their current knowledge or take more advanced courses, such as the application of machine learning techniques in natural language processing;
- Be prepared to apply their knowledge by formulating machine learning problems and coding with standard libraries (e.g. in R and Python).
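As a taste of the probability and inference material above, the following is a minimal sketch of a Bayes' rule calculation in Python. The diagnostic-test scenario and all numbers are hypothetical illustrations, not course material.

```python
# Illustrative only: a minimal Bayes' rule calculation of the kind
# covered in the probability and inference portion of the course.
# The diagnostic-test scenario and its numbers are hypothetical.

def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' rule."""
    # P(positive) = P(pos | condition) P(condition)
    #             + P(pos | no condition) P(no condition)
    evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / evidence

# A rare condition (1% prevalence) and a 95%-sensitive test
# with a 5% false positive rate:
p = posterior(prior=0.01, sensitivity=0.95, false_positive_rate=0.05)
print(round(p, 3))  # the posterior is far below the sensitivity: 0.161
```

Exercises of this kind show why a positive result from an accurate test can still leave the posterior probability low when the prior is small.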
E-learning – Available
Self-learning – Not available
Face-to-face – Available
Some basic knowledge of linear algebra and statistics is expected.
To discuss booking this course for remote delivery, please contact the Data Science Campus Faculty.