Most NAS methods today operate on a fixed search space.
The idea of this project is to expand or shrink individual search dimensions at search time, so that the overall NAS method improves in both search quality and convergence time.
Neural Architecture Search (NAS) methods [1] are algorithmic solutions that automate the manual process of designing neural network architectures (e.g., the placement and routing of different neural network layers). There is now also a growing interest in making NAS methods training-free [2].
In this project, we are interested in utilising one of these existing training-free NAS methods, but on a more dynamic search space. Most NAS methods today operate on a fixed search space, ignoring the fact that the design of the search space itself can heavily affect search quality. For instance, searching over possible activation functions might have a limited impact compared to searching over possible convolution types. This project will focus on extending a training-free NAS method so that it can also discover a customised search space, using the loss landscape along different search dimensions.
[1] https://arxiv.org/abs/1806.09055
[2] https://arxiv.org/abs/2006.04647
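To make the training-free idea concrete, below is a minimal NumPy sketch of a NASWOT-style zero-cost proxy in the spirit of [2]: an untrained ReLU network is scored by the log-determinant of a kernel built from the binary activation patterns it produces on a batch of inputs. The network shape, widths, and scoring details here are simplified assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def naswot_score(weights, x):
    """Score an untrained ReLU net (given as a list of weight matrices)
    by how well it separates inputs into distinct activation patterns."""
    codes = []
    h = x
    for W in weights:
        h = h @ W
        codes.append((h > 0).astype(float))  # binary activation pattern
        h = np.maximum(h, 0)                 # ReLU
    c = np.concatenate(codes, axis=1)        # one binary code per input
    n_units = c.shape[1]
    # Kernel entry = number of units on which two inputs agree
    hamming = (c[:, None, :] != c[None, :, :]).sum(-1)
    K = n_units - hamming
    _, logdet = np.linalg.slogdet(K)
    return logdet

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))             # a small batch of random inputs
widths = [8, 32, 32, 16]                     # toy architecture to score
weights = [rng.standard_normal((a, b)) for a, b in zip(widths[:-1], widths[1:])]
print(naswot_score(weights, x))
```

In a dynamic-search-space setting, a proxy like this could be evaluated per search dimension (e.g., with and without a candidate operation type) to decide which dimensions are worth keeping in the space, without any training.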
Local server (your desktop)
JADE HPC
Slurm (nothing more than a job submission system)
PyTorch
PyTorch Lightning
TensorBoard
conda
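As a hedged illustration of how the pieces above fit together, here is a sketch of a Slurm batch script for a GPU job. The job name, time limit, environment name, and script path are all placeholders, and JADE may require additional directives (partition, account), so check JADE's own documentation before using this.

```shell
#!/bin/bash
#SBATCH --job-name=nas-search      # job name shown in the queue
#SBATCH --gres=gpu:1               # request one GPU
#SBATCH --time=12:00:00            # wall-clock limit
#SBATCH --output=logs/%x-%j.out    # log file (%x = job name, %j = job id)

# Activate the conda environment created beforehand (name is an example)
source activate nas-env

# Entry point and flags are placeholders for your own search script
python search.py --config configs/dynamic_space.yaml
```

Submitted with `sbatch script.sh`; `squeue` shows its status and `scancel <jobid>` kills it.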
Look at one classic gradient-based NAS method:
PC-DARTS: Partial Channel Connections for Memory-Efficient...
Zero-cost NAS (without training)
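The classic gradient-based approach in the reading above (DARTS [1], with PC-DARTS as a memory-efficient variant) relaxes the discrete choice of operation on each edge into a softmax-weighted mixture over candidate operations, so the architecture parameters can be learned by gradient descent. A minimal NumPy sketch, where the operation set and input are toy assumptions:

```python
import numpy as np

def mixed_op(x, alphas, ops):
    """DARTS-style mixed operation: softmax over architecture
    parameters `alphas` weights the candidate operations' outputs."""
    w = np.exp(alphas - alphas.max())   # numerically stable softmax
    w = w / w.sum()
    return sum(wi * op(x) for wi, op in zip(w, ops))

ops = [
    lambda x: x,                  # identity ("skip connection")
    lambda x: np.maximum(x, 0),   # ReLU, standing in for a conv op
    lambda x: np.zeros_like(x),   # "zero" op (edge effectively removed)
]
x = np.array([-1.0, 2.0])
alphas = np.array([0.0, 1.0, -1.0])  # learnable architecture parameters
print(mixed_op(x, alphas, ops))
```

After search, the discrete architecture is recovered by keeping the operation with the largest `alpha` on each edge; PC-DARTS reduces memory by applying the mixture to only a fraction of the channels.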