Our first reading group met on Week 2 of Winter Quarter 2018, and we wanted to go over some of the key papers regarding recent advancements in Convolutional Neural Networks. To this end, we read the AlexNet paper, the GoogLeNet (Inception modules) paper, and the ResNet paper. Our discussion focused on the AlexNet paper and ConvNet architectures in general. These are some notes on the important points of the paper, and you can find the corresponding slides here.
Slides for the workshop are available here.
Today, we read a paper that presents a higher-level overview of machine learning: the core of the learning problem, what makes it difficult, and many valuable insights into a model’s ability to generalize and how to prevent overfitting. We thought this paper was valuable because, while it did not discuss specific machine learning algorithms or rigorous optimization and probability theory in detail, it reminded us of the core problems all of us are trying to solve using machine learning.
A link to the slides given at the workshop is available here. This week’s workshop is best viewed as a downloaded Jupyter notebook, so we’d recommend downloading the Pandas Tutorial and the Kaggle Starter notebooks.
A link to the slides given at the workshop is available here.
Last Week’s Review
Welcome to our website! We are the premier artificial intelligence and machine learning group here at UCLA, and we’re a committee under the umbrella organization, UCLA ACM.