Combining Label Propagation and Simple Models Out-performs Graph Neural Networks
Horace He*, Qian Huang*, Abhay Singh, Ser-Nam Lim, Austin Benson
TL;DR: We demonstrate that on many popular transductive node classification tasks, state-of-the-art GNN models can be out-performed by a shallow MLP prediction followed by post-processing with two Label Propagation variants. This simple framework uses label information directly, and on some benchmarks it can outperform SOTA GNNs with orders of magnitude fewer parameters and less runtime. Highlight result: We outperform SOTA GNNs on ogbn-products with 137x fewer parameters and >100x less runtime.
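The two post-processing steps can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the hyperparameters, iteration count, and symmetric normalization below are illustrative assumptions.

```python
import numpy as np

def normalize_adj(A):
    # Symmetrically normalized propagation matrix S = D^{-1/2} A D^{-1/2}
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def label_propagate(S, init, alpha=0.8, iters=50):
    # Iterate X <- alpha * S @ X + (1 - alpha) * init until (approx.) fixed point
    X = init.copy()
    for _ in range(iters):
        X = alpha * (S @ X) + (1 - alpha) * init
    return X

def correct_and_smooth(S, base_preds, labels_onehot, train_mask,
                       alpha1=0.8, alpha2=0.8):
    # "Correct": propagate the residual error of the base MLP predictions,
    # which is known exactly on the training nodes.
    E0 = np.zeros_like(base_preds)
    E0[train_mask] = labels_onehot[train_mask] - base_preds[train_mask]
    corrected = base_preds + label_propagate(S, E0, alpha=alpha1)
    # "Smooth": reset training nodes to ground truth, then propagate labels.
    Z0 = corrected.copy()
    Z0[train_mask] = labels_onehot[train_mask]
    return label_propagate(S, Z0, alpha=alpha2)
```

The base predictions here would come from any cheap node-wise model (e.g. an MLP on node features); the graph only enters through the two propagation passes.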
Geometry Types for Graphics Programming
Dietrich Geisler, Irene Yoon, Aditi Kabra, Horace He, Yinnon Sanders, Adrian Sampson
[Paper] [Twitter Thread]
TL;DR: Incorrect usage of coordinate systems (such as adding a vector in model space to a vector in world space) is responsible for a whole class of geometry bugs. Even worse, these bugs tend to be extremely nasty: often it's not obvious whether you even have a bug. Gator marks geometric objects with their coordinate system and reference frame. Not only does this prevent the class of bugs mentioned above, it also allows for automatic generation of transformation code from one space to another.
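Gator is a standalone language for graphics shaders; as a rough illustration of the core idea only, here is a hypothetical Python sketch (the class names and API are my own, not Gator's syntax) of vectors tagged with their space, where cross-space addition is a type error and transforms are explicit space-to-space maps:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SpacedVec:
    data: np.ndarray
    space: str  # e.g. "model", "world", "view"

    def __add__(self, other):
        # Adding vectors from different spaces is the bug class Gator prevents
        if self.space != other.space:
            raise TypeError(
                f"cannot add {self.space}-space and {other.space}-space vectors")
        return SpacedVec(self.data + other.data, self.space)

@dataclass
class Transform:
    matrix: np.ndarray
    src: str
    dst: str

    def apply(self, v: SpacedVec) -> SpacedVec:
        # Transforms carry their source/destination spaces, so applying one
        # to a vector in the wrong space is caught immediately.
        if v.space != self.src:
            raise TypeError(f"transform expects {self.src}-space input")
        return SpacedVec(self.matrix @ v.data, self.dst)
```

Because every transform records its source and destination space, a checker can also chain transforms automatically to get from one space to another, which is the "automatic generation" mentioned above.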
Better Set Representations for Relational Reasoning
Horace He*, Qian Huang*, Abhay Singh, Yan Zhang, Ser-Nam Lim, Austin Benson
ICML 2020: Object-Oriented Learning Workshop
[Paper] [OOL Talk]
TL;DR: Most methods for relational reasoning, like graph neural networks or transformers, need to operate on some kind of unordered set. However, the input (often an image) is not a set, and existing methods ignore this mismatch in structure. We show that by generating sets "properly", we can improve performance and robustness on a wide variety of tasks.
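One concrete way to see what "respecting set structure" demands: any loss on a generated set must be invariant to the order of its elements. This is not the paper's architecture, just a minimal brute-force sketch of a permutation-invariant set loss:

```python
import itertools
import numpy as np

def set_loss(pred, target):
    # Permutation-invariant loss: minimum total squared error over all
    # matchings of predicted elements to target elements.
    # (Brute force over permutations; only viable for small sets.)
    n = len(target)
    return min(
        sum(np.sum((pred[perm[i]] - target[i]) ** 2) for i in range(n))
        for perm in itertools.permutations(range(n))
    )
```

Because the loss minimizes over matchings, shuffling the predicted elements leaves it unchanged, so the model is never penalized for producing a set in a different order.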
Enhancing Adversarial Example Transferability with an Intermediate Level Attack
Horace He*, Qian Huang*, Isay Katsman*, Zeqi Gu*, Serge Belongie, Ser-Nam Lim
[Paper] [Talk at WIML Workshop] [Twitter Thread] [Cornell Chronicle]
TL;DR: By optimizing the orthogonal projection of our perturbation onto an existing perturbation in the feature space, we can significantly improve transferability. The choice of layer at which we optimize the projection also changes transferability significantly. Pretty surprising that this works.
Open Question: Why does this method work? We provide some guesses in the paper, but optimizing for the orthogonal projection obviously isn't fundamentally the right thing to do. After all, recursively applying our method to the perturbations doesn't generate better perturbations.
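The objective above can be sketched at toy scale. The paper works with deep feature maps and backpropagation; here `feat`, the step size, and the central-difference gradient are placeholder assumptions of mine, showing only the shape of the projection objective:

```python
import numpy as np

def ila_step(x, delta, delta_ref, feat, lr=0.1, eps=0.03):
    # One ascent step on a simplified ILA-style objective: maximize the
    # alignment of the feature-space shift caused by our perturbation with
    # the shift caused by an existing reference perturbation.
    ref_dir = feat(x + delta_ref) - feat(x)

    def obj(d):
        return float(np.dot(feat(x + d) - feat(x), ref_dir))

    # Central-difference gradient (toy-scale; a real attack would backprop)
    g = np.zeros_like(delta)
    h = 1e-5
    for i in range(delta.size):
        e = np.zeros_like(delta)
        e[i] = h
        g[i] = (obj(delta + e) - obj(delta - e)) / (2 * h)

    # Gradient ascent step, clipped to an L-infinity ball of radius eps
    return np.clip(delta + lr * g, -eps, eps)
```

Varying which intermediate layer plays the role of `feat` is exactly the layer choice that the TL;DR notes matters so much.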
Adversarial Example Decomposition
Horace He, Aaron Lou*, Qingxuan Jiang*, Isay Katsman*, Serge Belongie, Ser-Nam Lim
ICML Workshop on Security and Privacy of Machine Learning
TL;DR: If you take the vector projection of a transferable perturbation onto a regular perturbation, you get an extremely non-transferable perturbation.
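The decomposition in question is just the standard vector projection; a minimal numpy sketch (function names are mine, not the paper's):

```python
import numpy as np

def project(u, v):
    # Vector projection of u onto the direction of v
    return (np.dot(u, v) / np.dot(v, v)) * v

def decompose(transferable, regular):
    # Split a transferable perturbation into its component along a
    # regular (model-specific) perturbation and the orthogonal remainder.
    along = project(transferable, regular)
    return along, transferable - along
```

The two components always sum back to the original perturbation, and the remainder is orthogonal to the regular perturbation by construction.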