Question
Indyk and Naor introduced embeddings that preserve this algorithm’s output for sets with a low doubling constant or low aspect ratio. A “condensed” version of this algorithm that uses prototypes to reduce the size of the dataset was developed by Peter Hart. (r1, r2, p1, p2)-sensitive families of functions were originally introduced to perform this algorithm using locality-sensitive hashing. Cover and Hart showed that the simplest version of this algorithm has asymptotic error rate bounded by twice the Bayes error rate. Usage of a (*) k-d tree allows single queries in this algorithm to be computed in O(log n) time. When used for classification, this algorithm’s namesake parameter is often chosen to be odd to avoid ties. For 10 points, distance-based or simple majority voting can be used in what classification algorithm that examines close samples to an input? ■END■
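The answer is k-nearest neighbors. The clues about majority voting, distance weighting, and choosing an odd k can be illustrated with a minimal sketch; the function name, the toy dataset, and the inverse-distance weighting scheme below are illustrative assumptions, not part of the question.

```python
from collections import Counter
import math

def knn_classify(train, query, k=3, weighted=False):
    """Classify `query` by a vote among its k nearest training points.

    `train` is a list of ((features...), label) pairs. With weighted=True,
    votes are weighted by inverse distance ("distance-based" voting);
    otherwise each neighbor gets one vote ("simple majority" voting).
    """
    # Brute-force scan: sort all training points by Euclidean distance.
    # (A k-d tree would replace this with an O(log n)-style query.)
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter()
    for d, label in dists[:k]:
        # Closer points get larger votes when weighting is enabled.
        votes[label] += 1.0 / (d + 1e-9) if weighted else 1.0
    return votes.most_common(1)[0][0]

# Toy 2-D data: class "a" clusters near the origin, "b" near (5, 5).
data = [((0, 0), "a"), ((1, 0), "a"), ((0, 1), "a"),
        ((5, 5), "b"), ((6, 5), "b"), ((5, 6), "b")]
print(knn_classify(data, (0.5, 0.5), k=3))
```

Using an odd k (here 3) for two-class data guarantees an unweighted vote cannot tie, which is the tie-avoidance clue in the question.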
Buzzes
Player | Team | Opponent | Buzz Position | Value
---|---|---|---|---
Sky Li | Simpson Agonistes: The Crisis of Donut | You cannot go to Aarhus to see his peat-brown head / With eyes like ripening fruit | 58 | 15
Kais Jessa | Communism is Soviet power plus the yassification of the whole country | The Only Existing Manuscript from A Clockwork Orange | 136 | 10
Kunaal Chandrashekar | as rational as the square root of two power bottoms | I'd prefer to have the team name be Christensen et al. than anything that Erik cooks up | 137 | 0
Jason Zhang | She Dicer On My Argonaute Till I RNA Interfere | Moderator Can't Neg me While in Alpha | 137 | 10
Kane Nguyen | I'd prefer to have the team name be Christensen et al. than anything that Erik cooks up | as rational as the square root of two power bottoms | 137 | 0