Question
These models are memory efficient because the Lagrange multipliers in their objective function are zero for most of the points in the training set. For 10 points each:
[10m] Name these supervised learning models that solve an optimization problem to find the maximum-margin hyperplane that divides a linearly separable dataset.
ANSWER: support vector machines [or SVMs; reject “(vector) machines” or “support vector”]
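The sparsity the leadin describes (most Lagrange multipliers equal to zero, so only the support vectors matter) can be illustrated with a short scikit-learn sketch; the toy data and parameter choices here are illustrative assumptions, not part of the question:

```python
import numpy as np
from sklearn.svm import SVC

# Two well-separated Gaussian clusters: a linearly separable toy dataset.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# Only the points with nonzero dual coefficients are stored as support
# vectors; the rest of the training set does not affect the hyperplane.
print(f"training points: {len(X)}, support vectors: {len(clf.support_)}")
```

Because the maximum-margin hyperplane depends only on the handful of points nearest the boundary, the fitted model can discard the rest of the training set, which is the memory efficiency the clue refers to.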
[10e] The support vector machine, or SVM, “kernel trick” can be used because the dual problem’s objective function is written in terms of this operation between two vectors. This operation generalizes the dot product.
ANSWER: inner product
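A minimal NumPy sketch of why the kernel trick works: a degree-2 polynomial kernel evaluated on the raw vectors equals an inner product of explicitly mapped feature vectors, without ever constructing the feature map. The feature map `phi` below is the standard one for this kernel in two dimensions:

```python
import numpy as np

def phi(v):
    # Explicit degree-2 feature map for a 2-D input:
    # (x1, x2) -> (x1^2, sqrt(2)*x1*x2, x2^2)
    return np.array([v[0] ** 2, np.sqrt(2) * v[0] * v[1], v[1] ** 2])

x = np.array([1.0, 2.0])
z = np.array([3.0, 0.5])

kernel = np.dot(x, z) ** 2           # kernel trick: work in the input space
explicit = np.dot(phi(x), phi(z))    # inner product in the feature space

assert np.isclose(kernel, explicit)  # both equal 16.0 here
```

Since the dual objective touches the data only through such inner products, replacing the dot product with a kernel implicitly trains the SVM in the higher-dimensional feature space.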
[10h] The misclassification penalty and the spread of the kernel function for an SVM can be tuned using this method. This hyperparameter tuning method uses a cross-validation score to select the best combination of explicitly enumerated values.
ANSWER: grid search [accept GridSearchCV]
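A brief scikit-learn sketch of the answer: `GridSearchCV` cross-validates every combination of explicitly enumerated values for the misclassification penalty `C` and the RBF kernel spread `gamma`. The dataset and candidate values are arbitrary choices for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, random_state=0)

# Explicitly enumerated candidate values for each hyperparameter.
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}

# 5-fold cross-validation scores all 9 combinations and keeps the best.
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5).fit(X, y)
print(search.best_params_)
```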
<Other Science>
Summary
2023 ACF Nationals | 04/22/2023 | Y | 5 | 18.00 | 80% | 60% | 40% |
Data
Team | Opponent | Part 1 | Part 2 | Part 3 | Total
Brown A | Florida A | 0 | 10 | 0 | 10
Chicago B | Chicago A | 10 | 10 | 0 | 20
Vanderbilt A | Johns Hopkins A | 0 | 0 | 0 | 0
Georgia Tech A | Stanford A | 10 | 10 | 10 | 30
UC Berkeley A | Columbia A | 10 | 10 | 10 | 30