Welcome! I’m Taco Cohen, a machine learning researcher and principal engineer at Qualcomm AI Research, working on interactive learning (RL, skill learning, causality) and geometric deep learning / equivariant networks.

Here is my CV and Google Scholar page.

### Bio

Taco Cohen is a machine learning researcher at Qualcomm AI Research in Amsterdam. He received a BSc in theoretical computer science from Utrecht University, and an MSc in artificial intelligence and a PhD in machine learning (with Prof. Max Welling) from the University of Amsterdam. He was a co-founder of Scyfer, a company focused on deep active learning, acquired by Qualcomm in 2017. His research is focused on equivariant networks and geometric deep learning, causality, and interactive learning. During his studies he interned at Google DeepMind (working with Geoff Hinton) and OpenAI. He received the 2014 University of Amsterdam MSc thesis prize, a Google PhD Fellowship, and the ICLR 2018 best paper award for “Spherical CNNs”, was named one of MIT Tech Review’s 35 Innovators Under 35, and won the 2022 ELLIS PhD Award and the 2022 Kees Schouhamer Immink prize for his PhD research.

### Research

#### Equivariant Networks & Geometric DL

During my PhD I developed group equivariant convolutional networks (G-CNNs) and several generalizations, such as Steerable CNNs, Spherical CNNs, and Gauge Equivariant CNNs on manifolds, as well as a general theory of equivariant convolutions. Equivariant networks leverage knowledge of the symmetries of a learning problem to improve data efficiency, and have been shown to enjoy better scaling behavior than non-equivariant methods on problems with symmetries. Symmetries appear in many medical and scientific applications of machine learning, such as medical imaging / computer-aided diagnosis, drug design, analysis of protein structure, global climate data, materials design, lattice gauge theory, and cryo-electron microscopy, and many researchers are now working to develop equivariant networks for use in these domains. In some cases, such as lattice gauge theory or the analysis of graph-structured data, the group of symmetries is so large that without equivariant networks, learning fails completely for any feasible dataset size and data augmentation protocol.
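As a toy illustration of the basic idea (a sketch for intuition, not code from any of the papers), here is a minimal NumPy implementation of a C4 “lifting” correlation: the input is correlated with all four 90° rotations of a single filter, and rotating the input then rotates each output map spatially while cyclically shifting the rotation channel — the equivariance property that G-CNNs build into every layer.

```python
import numpy as np

def corr2d(x, k):
    """Plain 2D 'valid' cross-correlation."""
    H, W = x.shape
    h, w = k.shape
    return np.array([[(x[i:i + h, j:j + w] * k).sum()
                      for j in range(W - w + 1)]
                     for i in range(H - h + 1)])

def lift_corr(x, psi):
    """C4 lifting correlation: correlate x with all four 90-degree
    rotations of the filter psi, giving one output map per rotation."""
    return np.stack([corr2d(x, np.rot90(psi, r)) for r in range(4)])

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))      # toy input image
psi = rng.normal(size=(3, 3))    # a single (in practice, learnable) filter

y = lift_corr(x, psi)                  # shape (4, 6, 6)
y_rot = lift_corr(np.rot90(x), psi)    # response to the rotated input

# Equivariance: rotating the input rotates each output map spatially
# and cyclically shifts the rotation channel.
expected = np.stack([np.rot90(y[(r - 1) % 4]) for r in range(4)])
assert np.allclose(y_rot, expected)
```

Because the rotated input produces an exactly predictable (permuted and rotated) output, the network never has to re-learn rotated copies of a pattern from data.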

In my work I have tried to be systematic, developing a general class of network architectures, a theory, and an approach to modeling rather than a specific network architecture. The general theory of equivariant convolutional networks, described in Part II of my PhD thesis, covers a large class of neural network layers, which can be categorized by the base space on which the data live (e.g. images on the plane, sphere, a graph, or other spaces), the kind of geometric quantity attached to each point in the base space (e.g. scalars, vectors, tensors), and the group of symmetries (permutation, translation, rotation, scaling, etc.). By varying these, one obtains most of the methods found in the literature on equivariant networks. (Of course there is much more work to do than describing methods mathematically, but having a framework helps us understand the relations between methods, express general knowledge, theorems, and implementation strategies, and makes it easier for newcomers to get an overview of the field.)

Since learnable neural network layers are generally linear, it is natural to ask what the most general equivariant linear map between two representation spaces is. For spaces of fields or signals on a space, the answer is given by the “convolution is all you need” theorem: for such spaces, the most general equivariant linear map is a generalized convolution with a linearly constrained kernel, so by using such convolutions we are not imposing any constraint on the parameter space other than equivariance. (Note that this does not exclude transformers / attention, as these are built from linear maps, fixed nonlinearities, and products, all of which can be made equivariant.) The mathematical theory uses well-known concepts like groups, group representations, and fiber bundles, and the result mentioned above is not truly new, in the sense that it has been known in one (abstract) form or another for decades by mathematicians and physicists in the relevant disciplines. Nevertheless, it has been an interesting journey to gradually discover that these ideas at the core of modern theoretical physics fit so well with deep learning.
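For intuition, one concrete instance of such a linearly constrained kernel is the steerable-kernel condition on the plane (written here in one common notation; conventions differ between papers):

```latex
% Feature fields f_in : R^2 -> R^{c_in} transform under a fiber
% representation rho_in of a group H <= O(2), and similarly f_out.
% The layer is a correlation
[\kappa \star f_{\mathrm{in}}](x)
  = \int_{\mathbb{R}^2} \kappa(y - x)\, f_{\mathrm{in}}(y)\, \mathrm{d}y ,
% and it is equivariant if and only if the matrix-valued kernel
% satisfies the linear constraint
\kappa(h y) = \rho_{\mathrm{out}}(h)\, \kappa(y)\, \rho_{\mathrm{in}}(h)^{-1}
  \qquad \forall\, h \in H,\ y \in \mathbb{R}^2 .
```

The constraint is linear in $\kappa$, so the admissible kernels form a linear subspace: one solves for a basis of this subspace once, and the network then learns arbitrary coefficients in that basis.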

Together with Michael Bronstein, Joan Bruna, and Petar Veličković, I am writing a book on geometric deep learning that aims to make equivariant networks accessible to a general machine learning audience. An early draft and lecture course can be found here.

#### Generative Models & Data Compression

In 2019 I started the generative models & data compression team at Qualcomm, with the goal of taking the spectacular advances in deep generative models and applying them to the problem of image, video, and speech compression. These models hold great potential for compression, not only because a better probability model should lead to more efficient coding, but also because generative models have the unique capacity to “imagine” details that are not coded in the bitstream. So whereas a classical video codec will produce artifacts such as blocking at low bitrates, a generative compression system can generate realistic-looking textures. Together with the semantic understanding of the video afforded by deep networks, this allows for fine-grained control over which areas to encode with high fidelity and which to generate on the receiver side.
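The coding-efficiency point can be made concrete with a toy calculation (illustrative only): under an entropy coder such as an arithmetic coder, a symbol with model probability p costs about −log₂ p bits, so a model that better matches the source directly shortens the bitstream:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy source: binary symbols with true p(1) = 0.9.
data = (rng.random(100_000) < 0.9).astype(int)

def bits_per_symbol(p1):
    """Average ideal code length (bits) under a Bernoulli(p1) model,
    as achieved (up to small overhead) by an arithmetic coder."""
    p = np.where(data == 1, p1, 1.0 - p1)
    return -np.log2(p).mean()

matched = bits_per_symbol(0.9)     # model fits the source: ~0.47 bits/symbol
mismatched = bits_per_symbol(0.5)  # uniform model: exactly 1.0 bit/symbol
assert matched < mismatched
```

The matched model approaches the source entropy H(0.9) ≈ 0.47 bits, while the uniform model wastes more than half a bit per symbol; the same logic, with a deep generative model supplying the probabilities, underlies neural codecs.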

#### Causality & Interactive Learning

Recently I have become interested in causality, which appears to play an important role in human cognition. Our understanding of cause and effect helps us reason about the effect of actions on outcomes, make plans, avoid dangerous outcomes without ever experiencing them, generalize to wildly different environments and tasks, and reason counterfactually about the past and thereby perform credit assignment. Causal relations are (by definition) robust to a large number of changes / interventions that would distort most probabilistic relations between variables, so our ability to discover causal relations may in part explain the remarkable robustness of human intelligence. Causality also plays an important role in explanation, communication, teaching, and even moral judgements of credit and blame.

There is a well-developed mathematical theory of causation and a large number of algorithms for causal discovery and causal inference, which are used by scientists in areas like medicine, economics, and social science. In these settings, there is a domain expert who knows how to choose and measure relevant variables, and who perhaps has some partial knowledge about which variables influence which others. Furthermore, it is usually the domain expert who actually performs interventions in the real world, for instance when running an experiment, and this process is not modeled explicitly. In typical AI applications, however, these high-level causal variables are not given but must be learned (causal representation learning), and similarly an agent would have to learn what I call “intervention skills” that actually implement an intervention in the environment. Together with my team, I am investigating how we can enable autonomous embodied DL-based AI agents to do causal learning and reasoning, and to learn surgical intervention skills, in order to actually realize some of the promised benefits mentioned above.

### Links

Twitter: @TacoCohen

Google Scholar: link

Geometric Deep Learning Book & Video Lectures: link

NeurIPS Tutorial on Equivariant Networks with Risi Kondor: link

awesome-equivariant-network: overview of papers & videos on equivariant networks

Article in Quanta / Wired (2020)

TWIML Podcast on Natural Graph Networks (2020)

### Publications

#### Selected publications

T.S. Cohen, *Towards a Grounded Theory of Causation for Embodied AI*, Causal Representation Learning Workshop @ UAI 2022

[ArXiv]

T.S. Cohen, *Equivariant Convolutional Networks*, PhD Thesis, University of Amsterdam, 2021

[pdf, defence ceremony] (**Note: Part II contains a lot of new material, not published before**)

Michael M. Bronstein, Joan Bruna, Taco Cohen, Petar Veličković, *Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges*, 2021

[ArXiv, website, youtube lectures]

P. de Haan, T. Cohen, M. Welling, *Natural Graph Networks*, NeurIPS 2020

[ArXiv]

T.S. Cohen, M. Weiler, B. Kicanaoglu, M. Welling, *Gauge Equivariant Convolutional Networks and the Icosahedral CNN*, Proceedings of the International Conference on Machine Learning (ICML), 2019

[ArXiv]

T.S. Cohen, M. Geiger, M. Weiler, *A General Theory of Equivariant CNNs on Homogeneous Spaces*, NeurIPS 2019

[ArXiv]

M. Winkels, T.S. Cohen, *3D G-CNNs for Pulmonary Nodule Detection*, International Conference on Medical Imaging with Deep Learning (MIDL), 2018.

[ArXiv]

T.S. Cohen, M. Geiger, J. Koehler, M. Welling, *Spherical CNNs*, ICLR 2018 (**Best paper award**).

[pdf] [code]

T.S. Cohen, M. Welling, *Steerable CNNs*, International Conference on Learning Representations (ICLR), 2017

[pdf]

T.S. Cohen, M. Welling, *Group Equivariant Convolutional Networks*, Proceedings of the International Conference on Machine Learning (ICML), 2016

[pdf] [supp. mat.] [code for experiments] [G-Conv code]

#### All Publications

J. Brehmer*, J. Bose*, P. De Haan, T. Cohen, *EDGI: Equivariant diffusion for planning with embodied agents*, Reincarnating RL workshop @ ICLR 2023

[pdf]

C. K. Joshi, C. Bodnar, S. V. Mathis, T. Cohen, P. Liò, *On the Expressive Power of Geometric Graph Neural Networks*, 2023

[ArXiv]

N. Butt, A. Wiggers, T. Cohen, M. Welling, *Program Synthesis for Integer Sequence Generation*, MATH-AI Workshop @ NeurIPS 2022

[pdf]

R. Vuorio, J. Brehmer, H. Ackermann, D. Dijkman, T. Cohen, P. de Haan, *Deconfounded Imitation Learning*, 2022

[ArXiv]

J. Brehmer, P. De Haan, P. Lippe, T. Cohen, *Weakly supervised causal representation learning*, NeurIPS 2022

Earlier versions published at the OSC Workshop @ ICLR 2022 and Causal Representation Learning Workshop @ UAI 2022

[ArXiv]

G. Cesa, A. Behboodi, T.S. Cohen, M. Welling, *On the symmetries of the synchronization problem in Cryo-EM: Multi-Frequency Vector Diffusion Maps on the Projective Plane*, NeurIPS 2022

[soon]

A. Behboodi, G. Cesa, T.S. Cohen, *A PAC-Bayesian Generalization Bound for Equivariant Networks*, NeurIPS 2022

[ArXiv]

T.S. Cohen, *Towards a Grounded Theory of Causation for Embodied AI*, Causal Representation Learning Workshop @ UAI 2022

[ArXiv]

P. Lippe, S. Magliacane, S. Löwe, Y. M. Asano, T. Cohen, E. Gavves, *Intervention Design for Causal Representation Learning*, Causal Representation Learning Workshop @ UAI 2022

[pdf]

P. Lippe, S. Magliacane, S. Löwe, Y. M. Asano, T. Cohen, E. Gavves, *iCITRIS: Causal Representation Learning for Instantaneous Temporal Effects*, Causal Representation Learning Workshop @ UAI 2022

[ArXiv]

Y. Perugachi-Diaz, G. Sautière, D. Abati, Y. Yang, A. Habibian, T. Cohen, *Region-of-Interest Based Neural Video Compression*, BMVC 2022

[ArXiv]

S. Basu, J. Gallego-Posada, F. Viganò, J. Rowbottom, T. Cohen, *Equivariant Mesh Attention Networks*, TMLR 2022

[ArXiv]

P. Lippe, S. Magliacane, S. Löwe, Y. M. Asano, T. Cohen, E. Gavves, *CITRIS: Causal Identifiability from Temporal Intervened Sequences*, ICML 2022

Earlier version published at OSC Workshop @ ICLR, 2022

[ArXiv]

Y. Zhang, T. van Rozendaal, J. Brehmer, M. Nagel, T. Cohen, *Implicit Neural Video Compression*, DGM4HSD Workshop @ ICLR, 2022

[ArXiv]

P. Lippe, T. Cohen, E. Gavves, *Efficient Neural Causal Discovery without Acyclicity Constraints,* ICLR 2022

[ArXiv]

Y. Zhu, Y. Yang, T. Cohen, *Transformer-based transform coding*, ICLR 2022

[pdf]

T.S. Cohen, *Equivariant Convolutional Networks*, PhD Thesis, University of Amsterdam, 2021

[pdf, defence ceremony] (**Note: Part II contains a lot of new material, not published before**)

Michael M. Bronstein, Joan Bruna, Taco Cohen, Petar Veličković, *Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges*, 2021

[ArXiv, website, youtube lectures]

T. van Rozendaal, J. Brehmer, Y. Zhang, R. Pourreza, T.S. Cohen, *Instance-Adaptive Video Compression: Improving Neural Codecs by Training on the Test Set*, ArXiv 2021

[ArXiv]

A. K. Singh, H. E. Egilmez, R. Pourreza, M. Coban, M. Karczewicz, T. S. Cohen, *A Combined Deep Learning based End-to-End Video Coding Architecture for YUV Color Space*

[ArXiv]

H. E. Egilmez, A. K. Singh, M. Coban, M. Karczewicz, Y. Zhu, Y. Yang, A. Said, T. S. Cohen, *Transform Network Architectures for Deep Learning based End-to-End Image/Video Coding in Subsampled Color Spaces*, IEEE Open Journal of Signal Processing, 2021.

[ArXiv]

R. Pourreza, T. Cohen, *Extending Neural P-frame Codecs for B-frame Coding*, ICCV 2021.

[ArXiv]

A. Habibian, D. Abati, T. Cohen, B. Ehteshami Bejnordi, *Skip-Convolutions for Efficient Video Processing*, CVPR 2021.

[ArXiv]

Y. Lu, Y. Zhu, Y. Yang, A. Said, T. Cohen, *Progressive Neural Image Compression with Nested Quantization and Latent Ordering*, ICIP 2021

[ArXiv]

T. van Rozendaal*, I.A.M. Huijben*, T. Cohen, *Overfitting for Fun and Profit: Instance-Adaptive Data Compression,* ICLR 2021

(*equal contribution)

[ArXiv]

P. de Haan, M. Weiler, T. Cohen, M. Welling, *Gauge Equivariant Mesh CNNs: Anisotropic convolutions on geometric graphs*, ICLR 2021 (spotlight)

[ArXiv]

E. Hoogeboom, T.S. Cohen, J.M. Tomczak, *Learning Discrete Distributions by Dequantization,* 3rd AABI Symposium 2021

[ArXiv]

P. de Haan, T. Cohen, M. Welling, *Natural Graph Networks,* NeurIPS 2020

[ArXiv]

D. Kianfar, A. Wiggers, A. Said, R. Pourreza, T. Cohen, *Parallelized Rate-Distortion Optimized Quantization using Deep Learning,* IEEE MMSP 2020

[ArXiv]

A. Pervez, T. Cohen, E. Gavves, *Low Bias Low Variance Gradient Estimates for Hierarchical Boolean Stochastic Networks,* ICML 2020

[pdf]

A. Golinski*, R. Pourreza*, Y. Yang*, G. Sautiere, T. Cohen, *Feedback Recurrent Autoencoder for Video Compression*, ACCV 2020

(*equal contribution)

[ArXiv]

T. van Rozendaal, G Sautiere, T.S. Cohen, *Lossy Compression with Distortion Constrained Optimization,* Workshop and Challenge on Learned Image Compression (CLIC) at CVPR 2020.

[ArXiv]

V. Veerabadran, R. Pourreza, A. Habibian, T. Cohen, *Adversarial Distortion for Learned Video Compression*, Workshop and Challenge on Learned Image Compression (CLIC) at CVPR 2020.

[ArXiv]

M. Mohamed, G. Cesa, T.S. Cohen, M. Welling, *A Data and Compute Efficient Design for Limited-Resources Deep Learning*, Practical Machine Learning for Developing Countries Workshop (ICLR), 2020

[ArXiv]

Y. Yang, G. Sautière, J. Jon Ryu, T.S. Cohen, *Feedback Recurrent AutoEncoder*, International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020

[ArXiv]

A. Habibian, T. van Rozendaal, J. Tomczak, T.S. Cohen, *Video Compression with Rate-Distortion Autoencoders*, International Conference on Computer Vision (ICCV), 2019

[ArXiv]

Miranda C.N. Cheng, Vassilis Anagiannis, Maurice Weiler, Pim de Haan, Taco S. Cohen, Max Welling, *Covariance in Physics and Convolutional Networks,* Theoretical Physics for Deep Learning Workshop @ ICML, 2019

[ArXiv]

T.S. Cohen, M. Weiler, B. Kicanaoglu, M. Welling, *Gauge Equivariant Convolutional Networks and the Icosahedral CNN*, Proceedings of the International Conference on Machine Learning (ICML), 2019

[ArXiv]

T.S. Cohen, M. Geiger, M. Weiler, *A General Theory of Equivariant CNNs on Homogeneous Spaces*, NeurIPS 2019

[ArXiv]

M. Weiler, M. Geiger, M. Welling, W. Boomsma, T.S. Cohen, *3D Steerable CNNs: Learning Rotationally Equivariant Features in Volumetric Data*, Advances in Neural Information Processing Systems (NeurIPS), 2018

[ArXiv] [code] [video]

L. Falorsi, P. de Haan, T. R. Davidson, N. De Cao, M. Weiler, P. Forré and T. S. Cohen, *Explorations in Homeomorphic Variational Auto-Encoding*, ICML Workshop on Theoretical Foundations and Applications of Generative Models, 2018

[ArXiv] [code]

T.S. Cohen, M. Geiger, M. Weiler, *Intertwiners between Induced Representations (with applications to the theory of equivariant neural networks)*, ArXiv preprint 1803.10743, 2018.

[ArXiv]

M. Winkels, T.S. Cohen, *Pulmonary Nodule Detection in CT Scans with Equivariant CNNs*, Medical Image Analysis, 2018.

[Link]

M. Winkels, T.S. Cohen, *3D G-CNNs for Pulmonary Nodule Detection*, International Conference on Medical Imaging with Deep Learning (MIDL), 2018.

[ArXiv]

M. Winkels, T.S. Cohen, *3D Group-Equivariant Neural Networks for Octahedral and Square Prism Symmetry Groups*, FAIM/ICML Workshop on Towards learning with limited labels: Equivariance, Invariance, and Beyond, 2018.

B.S. Veeling, J. Linmans, J. Winkens, T.S. Cohen, M. Welling, *Rotation Equivariant CNNs for Digital Pathology*. International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), 2018.

[ArXiv] [PCam repo] [Code]

J. Winkens, J. Linmans, B.S. Veeling, T.S. Cohen, M. Welling, *Improved Semantic Segmentation for Histopathology using Rotation Equivariant Convolutional Networks*, International Conference on Medical Imaging with Deep Learning (MIDL workshop), 2018.

[pdf]

J. Linmans, J. Winkens, B.S. Veeling, T.S. Cohen, M. Welling, *Sample Efficient Semantic Segmentation using Rotation Equivariant Convolutional Networks*, FAIM/ICML Workshop on Towards learning with limited labels: Equivariance, Invariance, and Beyond, 2018.

T.S. Cohen, M. Geiger, J. Koehler, M. Welling, *Spherical CNNs*, ICLR 2018 (**Best paper award**).

[pdf] [code]

E. Hoogeboom, J.W.T. Peters, T.S. Cohen, M. Welling, *HexaConv*, ICLR 2018.

[pdf]

T.S. Cohen, M. Geiger, J. Koehler, M. Welling, *Convolutional Networks for Spherical Signals*, Principled Approaches to Deep Learning Workshop @ ICML 2017.

[ArXiv]

A. Eck, L.M. Zintgraf, E.F.J. de Groot, T.G.J. de Meij, T.S. Cohen, P.H.M. Savelkoul, M. Welling, A.E. Budding, *Interpretation of microbiota-based diagnostics by explaining individual classifier decisions,* BMC Bioinformatics, 2017.

[pubmed]

T. Matiisen, A. Oliver, T.S. Cohen, J. Schulman, *Teacher-Student Curriculum Learning*. IEEE Transactions on Neural Networks and Learning Systems, 2019. (An earlier version was presented at the Deep Reinforcement Learning Symposium, NIPS 2017)

[ArXiv]

T.S. Cohen, M. Welling, *Steerable CNNs*. International Conference on Learning Representations (ICLR), 2017

[pdf]

L.M. Zintgraf, T.S. Cohen, T. Adel, M. Welling, *Visualizing Deep Neural Network Decisions: Prediction Difference Analysis.* International Conference on Learning Representations (ICLR), 2017

[pdf]

T. Adel, T.S. Cohen, M. Caan, M. Welling, *3D Scattering Transforms for Disease Classification in Neuroimaging*. Neuroimage: clinical, 2017.

[pubmed]

T.S. Cohen, M. Welling, *Group Equivariant Convolutional Networks.* Proceedings of the International Conference on Machine Learning (ICML), 2016

[pdf] [supp. mat.] [code for experiments] [G-Conv code]

L.M. Zintgraf, T.S. Cohen, M. Welling, *A New Method to Visualize Deep Neural Networks.* ArXiv preprint 1603.02518, 2016

[ArXiv]

T.S. Cohen, M. Welling, *Harmonic Exponential Families on Manifolds.* Proceedings of the International Conference on Machine Learning (ICML), 2015

[pdf] [supp. mat.]

T.S. Cohen, M. Welling, *Transformation Properties of Learned Visual Representations*, International Conference on Learning Representations (ICLR), 2015.

[ArXiv]

T.S. Cohen, M. Welling, *Learning the Irreducible Representations of Commutative Lie Groups*. Proceedings of the International Conference on Machine Learning (ICML), 2014.

[pdf] [supp. mat.]

T.S. Cohen, *Learning Transformation Groups and their Invariants*. Master’s thesis, University of Amsterdam, 2013. (**1st place University of Amsterdam thesis prize 2014**)

[pdf]