Question
This architecture was notable for requiring less time to train than recurrent neural architectures. For 10 points each:
[10m] Identify this deep learning architecture, first proposed in the paper “Attention Is All You Need” by Ashish Vaswani and coauthors on the Google Brain team. This architecture underlies large language models like ChatGPT.
ANSWER: transformer
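For context, here is a minimal NumPy sketch of the scaled dot-product attention at the heart of the transformer, following the Attention(Q, K, V) = softmax(QKᵀ/√d_k)V formula from the paper; the function name and toy shapes are illustrative, not from any reference implementation.

import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.shape[-1]
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(d_k)      # (seq_q, seq_k)
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)           # row-wise softmax
    return weights @ v                                  # weighted sum of values

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))   # 4 query positions, d_k = 8
k = rng.normal(size=(6, 8))   # 6 key positions
v = rng.normal(size=(6, 8))
print(scaled_dot_product_attention(q, k, v).shape)      # (4, 8)

Because every position attends to every other position in one matrix product, the computation parallelizes across the sequence, which is why the architecture trains faster than recurrent networks that must process tokens one step at a time.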
[10e] GPT was created by OpenAI, which is heavily backed by a multibillion-dollar investment from this big tech company. In 2023, GPT-4 was integrated into Bing, a search engine owned by this company.
ANSWER: Microsoft
[10h] This precursor to attention-based architectures is a type of recurrent neural network whose gated memory cell stores and retrieves information to mitigate the vanishing gradient problem.
ANSWER: Long short-term memory network [or LSTM]
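A minimal sketch of a single LSTM step, assuming the standard input/forget/output gating equations; the weight shapes and the stacked-gate layout are illustrative conventions, not from any particular library.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One time step: the cell state c carries information across steps,
    which is what mitigates the vanishing gradient problem."""
    z = W @ x + U @ h_prev + b                # all four gate pre-activations, stacked
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input, forget, output gates
    c = f * c_prev + i * np.tanh(g)           # store into the cell state
    h = o * np.tanh(c)                        # retrieve from the cell state
    return h, c

d_in, d_h = 3, 5
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * d_h, d_in))
U = rng.normal(size=(4 * d_h, d_h))
b = np.zeros(4 * d_h)
h = c = np.zeros(d_h)
for x in rng.normal(size=(7, d_in)):          # run over a toy sequence of length 7
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape, c.shape)                       # (5,) (5,)

The additive update c = f * c_prev + i * tanh(g) is the key detail: gradients can flow through the cell state without repeated squashing, unlike in a vanilla RNN.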
<Leo Law, Other Science>
Summary
Tournament | Date | Exact Match | Heard | PPB | Easy % | Medium % | Hard %
2023 Penn Bowl @ Waterloo | 10/28/2023 | Y | 4 | 20.00 | 100% | 50% | 50%
2023 Penn Bowl @ FSU | 10/28/2023 | Y | 2 | 15.00 | 100% | 50% | 0%
2023 Penn Bowl (Harvard) | 10/21/2023 | Y | 3 | 16.67 | 100% | 33% | 33%
2023 Penn Bowl (Mainsite) | 10/21/2023 | Y | 7 | 11.43 | 86% | 29% | 0%
2023 Penn Bowl (Norcal) | 10/28/2023 | Y | 2 | 20.00 | 100% | 50% | 50%
2023 Penn Bowl (South Central) | 10/28/2023 | Y | 3 | 10.00 | 100% | 0% | 0%
2023 Penn Bowl (UK) | 10/28/2023 | Y | 5 | 10.00 | 100% | 0% | 0%
2023 Penn Bowl @ UNC | 10/28/2023 | Y | 3 | 16.67 | 100% | 33% | 33%
Data
Team | Opponent | Part 1 | Part 2 | Part 3 | Total
Berkeley A | Stanford | 10 | 10 | 10 | 30
Berkeley C | Berkeley B | 0 | 10 | 0 | 10