Question

This architecture was notable for requiring less time to train than recurrent neural architectures. For 10 points each:
[10m] Identify this deep learning architecture, first proposed in the paper “Attention Is All You Need” by Ashish Vaswani and colleagues on the Google Brain team. This architecture is used to train large language models like ChatGPT.
ANSWER: transformer
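As background for this answer line: the mechanism named in the paper's title is scaled dot-product attention. The following is a minimal numpy sketch of that formula, softmax(QKᵀ/√d_k)·V, added here as an illustration; the variable names and toy shapes are my own, not from the question.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))  # 3 queries of dimension d_k = 4
K = rng.normal(size=(5, 4))  # 5 keys
V = rng.normal(size=(5, 4))  # 5 values
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one weighted combination of the values per query
```

Unlike a recurrent network, every query attends to every key in parallel, which is the source of the training-speed advantage the leadin describes.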
[10e] GPT was created by OpenAI, which has received major investment from this big tech company. In 2023, GPT-4 was integrated as part of Bing, a search engine owned by this company.
ANSWER: Microsoft
[10h] This precursor to attention-based architectures is a type of recurrent neural network that uses a cell that stores and retrieves information to handle the vanishing gradient problem.
ANSWER: Long short-term memory network [or LSTM]
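The hard part describes the LSTM's gated cell. The sketch below is a plain-numpy illustration of one LSTM step, with my own toy sizes and weight layout (forget/input/output/candidate gates stacked in one matrix); the key point is the additive cell-state update `c = f * c_prev + i * g`, which is what lets gradients survive long sequences.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    # One LSTM step: gates decide what the cell state stores,
    # forgets, and exposes at each timestep.
    z = W @ np.concatenate([x, h_prev]) + b
    d = h_prev.size
    f = sigmoid(z[:d])          # forget gate
    i = sigmoid(z[d:2 * d])     # input gate
    o = sigmoid(z[2 * d:3 * d]) # output gate
    g = np.tanh(z[3 * d:])      # candidate cell update
    c = f * c_prev + i * g      # additive cell-state update
    h = o * np.tanh(c)          # hidden state read out from the cell
    return h, c

rng = np.random.default_rng(0)
d, n = 4, 3                     # hidden size, input size (arbitrary)
W = rng.normal(size=(4 * d, n + d)) * 0.1
b = np.zeros(4 * d)
h, c = np.zeros(d), np.zeros(d)
for x in rng.normal(size=(6, n)):  # run over a toy 6-step sequence
    h, c = lstm_step(x, h, c, W, b)
print(h.shape, c.shape)  # (4,) (4,)
```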
<Leo Law, Other Science>


Summary

Tournament | Date | Exact Match? | Heard | PPB | Easy % | Medium % | Hard %
2023 Penn Bowl @ Waterloo | 10/28/2023 | Y | 4 | 20.00 | 100% | 50% | 50%
2023 Penn Bowl @ FSU | 10/28/2023 | Y | 2 | 15.00 | 100% | 50% | 0%
2023 Penn Bowl (Harvard) | 10/21/2023 | Y | 3 | 16.67 | 100% | 33% | 33%
2023 Penn Bowl (Mainsite) | 10/21/2023 | Y | 7 | 11.43 | 86% | 29% | 0%
2023 Penn Bowl (Norcal) | 10/28/2023 | Y | 2 | 20.00 | 100% | 50% | 50%
2023 Penn Bowl (South Central) | 10/28/2023 | Y | 3 | 10.00 | 100% | 0% | 0%
2023 Penn Bowl (UK) | 10/28/2023 | Y | 5 | 10.00 | 100% | 0% | 0%
2023 Penn Bowl @ UNC | 10/28/2023 | Y | 3 | 16.67 | 100% | 33% | 33%

Data

Team | Opponent | Part 1 | Part 2 | Part 3 | Total
Library of Babel School of Continuing Studies | Waterloo Miku | 0 | 10 | 10 | 20
Toronto Weary | Mixed-Affiliated Contingency, Off the Team & Absent: Wong, Adrian | 10 | 10 | 10 | 30
La Clique du Château | Toronto Joy | 10 | 10 | 0 | 20
Toronto Rofl | Waterloo Hatsune | 0 | 10 | 0 | 10