
Lecture 1 Recap - Welcome to CS181

Date: January 25, 2022

Relevant Textbook Sections: Ch 1

Lecture Video

Slide Deck



Lecture Recaps

Welcome to CS 181! The teaching staff will be writing "Lecture Recaps" for each lecture. The purpose of these lecture recaps is to serve as a resource and reference for you when you wish to review or revisit material as taught in class. These lecture recaps are not scribe notes, and their focus will be on the mathematical content.

Lecture 1 Summary

Course Introduction

This course is about giving you the core fundamentals of how machine learning actually works. Although the Friday "Beyond 181" sections will discuss cutting-edge techniques in machine learning, the main lectures will focus on current (and older) mainstream systems.

It is important to keep an eye on the ethics. Machine learning algorithms seem to promise the luxury of offloading important tasks entirely onto a machine. This, unsurprisingly, can be problematic. Amazon, for example, had a hiring tool that showed bias against women. How did this happen? They trained their algorithm on their existing database of resumes, and that database had biases. The software learned to replicate this bias.

Machine learning is not just math. When the math goes out there into the world, we have a huge responsibility to make sure that math is integrated properly and morally.

The issues that follow machine learning are not even just ethical; sometimes, the issue is rigor. Machine learning systems may not be robust or mathematically sound, leading you to make the wrong investments or to recommend improper drug dosages.

In the real world, one might want to predict drug dosages for HIV treatment. This is difficult because we want to sequence the drugs so that if the virus develops resistance to a certain set of treatments, other options are not knocked out as well. There is a sequential and difficult nature to this problem, but that does not stop us from trying a simple solution first: many people have been on this path of HIV treatment before you. If we have data on patients similar to you, we can prescribe for you according to these similar patients, or "neighbors" (a small sketch of this idea follows below). As good engineers, we also need to consider the scenario where there are no neighbors near you. In that situation, we can work towards a more complex solution, like a reinforcement learning model.
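
To make the "neighbors" idea concrete, here is a minimal k-nearest-neighbors sketch. It is not the system described in lecture; the patient features, dosages, and the function `knn_predict_dosage` are all invented for illustration.

```python
# A minimal k-nearest-neighbors sketch of "prescribe like your similar patients".
# All numbers below are made-up toy values, not clinical data.
import numpy as np

def knn_predict_dosage(patient, X, y, k=3):
    """Predict a dosage as the average dosage given to the k most similar patients.

    patient: 1-D array of features for the new patient
    X: 2-D array, one row of features per previously treated patient
    y: 1-D array of the dosages those patients received
    """
    distances = np.linalg.norm(X - patient, axis=1)  # similarity = Euclidean distance
    nearest = np.argsort(distances)[:k]              # indices of the k closest patients
    return y[nearest].mean()

# Toy data: columns might be (age, weight, viral load) -- invented here.
X = np.array([[34, 70, 2.1], [29, 65, 3.0], [51, 80, 1.2], [45, 75, 2.5]])
y = np.array([300.0, 350.0, 250.0, 320.0])           # dosages in hypothetical units

print(knn_predict_dosage(np.array([40, 72, 2.3]), X, y, k=2))
```

If the new patient has no close neighbors (all distances are large), the average becomes unreliable, which is exactly the failure case that motivates a more complex solution.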

After we do all this math and beep-boop-machinery, we have to pass information on to a clinician (after all, the robot is not the one treating the patient). The clinician will always know something the robot cannot; some knowledge cannot be codified. Therefore, we need to realize that our algorithm should provide actionable information that aids the clinician and does not try to make the decision for them. This is why it is very popular to have machine learning models estimate the uncertainty associated with every prediction they give humans.

Through experience, we have realized that if we provide too much information to medical professionals, they will trust the systems too much and disregard their own judgement. This is obviously not ideal, and it is an example of machine learning systems very easily being unintentionally misused. We're going to begin the course with a short discussion about Deepfake videos. Before we get too technical, we're going to think through potential societal implications of ML. One example is a video that very convincingly looks like Obama, but was created using ML.



Some questions one might consider are:

  • How might you detect a Deepfake video? Were there characteristics of either video that were giveaways that it wasn't real?
  • Under what conditions might you have issues with these Deepfake videos? What are the ethical concerns?
Generative adversarial networks are used to create Deepfakes. It is somewhat of a "race" between the folks detecting Deepfakes, who find ways to distinguish or identify them, and the Deepfake designers, who specifically improve their algorithms to evade those detection methods.
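
As a preview of the idea (not covered in detail in this lecture), the original GAN formulation pits a generator $G$ against a discriminator $D$ in a min-max game; here $z$ is random noise and $p_{\text{data}}$ is the distribution of real images (these symbols are our notation, not the lecture's):

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p(z)}\big[\log\big(1 - D(G(z))\big)\big]$$

The generator improves by fooling the discriminator, and the discriminator improves by catching the generator, which mirrors the detection "race" described above.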

There are other forms of impersonation and manipulation, too, that are enabled by technology. But impersonation in itself is not a new phenomenon: photos and media have been faked for years. What is it that makes Deepfakes so concerning? There has been a large amount of media coverage of Deepfakes as a political problem. But one of the most pressing and prevalent uses of Deepfakes is in revenge porn. A lot of the social consequences are not necessarily political, but deeply interpersonal, shaping the fabric of our relationships.

So how do we make machine learning models? Much of machine learning is about perfecting the "zen", or gaining the wisdom through experience to:

  • Make appropriate modeling choices
  • Have sufficient understanding to be able to apply new techniques
  • Anticipate and identify potential sources of error
  • Evaluate carefully
Yet to get there, we also need to do a lot of "push-ups". By doing math and deriving popular methods, we'll develop a better understanding of ML.

ML Taxonomy

In CS 181, the methods we study can be split into 3 groups: supervised, unsupervised, and reinforcement learning.

Supervised learning

Supervised learning is defined by using labels $y$ during training. At run-time, an ML model is given a new input $x$, and predicts a label $y$.

There are two variations of supervised learning:

  • Regression, where labels $y$ are continuous and numeric, or real numbers. Example: Virtu Financial uses regression to predict a stock's future price.
  • Classification, where labels $y$ are discrete and categorical. Example: "Swipe typing" uses a language model to predict which word is intended from one's typing.
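
To make the regression vs. classification distinction concrete, here is a small sketch using scikit-learn. The data is synthetic and invented here; it does not reproduce the stock-price or swipe-typing examples above.

```python
# A small sketch contrasting regression (continuous y) and classification
# (discrete y). The data below is synthetic, generated only for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)

# Regression: labels y are real numbers (e.g., a future price).
X_reg = rng.normal(size=(100, 3))
y_reg = X_reg @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)
reg = LinearRegression().fit(X_reg, y_reg)
print("regression prediction:", reg.predict(X_reg[:1]))

# Classification: labels y are discrete categories (e.g., which word was intended).
X_clf = rng.normal(size=(100, 3))
y_clf = (X_clf[:, 0] + X_clf[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X_clf, y_clf)
print("classification prediction:", clf.predict(X_clf[:1]))
```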

Unsupervised learning

The crucial difference between unsupervised and supervised learning is that in unsupervised learning, there are no labels $y$ available when training. All that is available is data $x$.

Two types of unsupervised learning we'll discuss in depth are clustering and embedding. Clustering is used to find natural groupings of examples in the data; one popular example is Google News, which delivers groupings of stories about the same topic. Embedding techniques are used to embed a high-dimensional dataset in a low-dimensional space. One example application is point-of-sale data from supermarkets: if we take time series for 1073 products in different locations and embed those time series in a lower-dimensional space using a technique called Principal Component Analysis (PCA), one of the components clearly illustrates the economic effect of the 2008 financial crisis.
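
For a sense of what these two operations look like in code, here is a brief sketch using scikit-learn on synthetic data (the blobs below are invented; this is not the supermarket dataset from lecture).

```python
# A brief sketch of the two unsupervised ideas above: clustering with k-means
# and embedding with PCA. The data is synthetic, invented for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Unlabeled data x only -- no labels y: three blobs in 10 dimensions.
centers = rng.normal(scale=5.0, size=(3, 10))
X = np.vstack([c + rng.normal(size=(50, 10)) for c in centers])

# Clustering: recover natural groupings of the examples.
cluster_ids = KMeans(n_clusters=3, random_state=0).fit_predict(X)

# Embedding: project the 10-dimensional data down to 2 dimensions.
X_2d = PCA(n_components=2).fit_transform(X)

print(cluster_ids[:10])
print(X_2d.shape)  # (150, 2)
```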

Reinforcement learning

In reinforcement learning, the data is a sequence of triples: states, actions, and rewards. If a robot is rolling around Cambridge, its state is its current location, its action is which way it moves, and its reward is based on what happens to it after it takes the action (for example, whether it falls into a grate or accomplishes its goal).
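
Here is a minimal sketch of that interaction loop. The GridWorld environment, the goal position, and the random policy are hypothetical stand-ins invented for illustration, not the robot example from lecture.

```python
# A minimal sketch of collecting (state, action, reward) triples by interacting
# with a toy environment. Everything here is invented for illustration.
import random

class GridWorld:
    """Toy environment: the agent walks along a line; reaching position 5 is the goal."""
    def __init__(self):
        self.state = 0

    def step(self, action):                      # action is -1 (left) or +1 (right)
        self.state = max(0, self.state + action)
        reward = 1.0 if self.state == 5 else 0.0
        done = self.state == 5
        return self.state, reward, done

env = GridWorld()
state, trajectory = 0, []
for _ in range(1000):                            # cap the episode length
    action = random.choice([-1, +1])             # a (poor) random policy
    next_state, reward, done = env.step(action)
    trajectory.append((state, action, reward))   # the data: (state, action, reward)
    state = next_state
    if done:
        break

print(trajectory[:5])
```

A reinforcement learning algorithm would use trajectories like this one to learn a better policy than choosing actions at random.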