Dhairya Gandhi

I am a data scientist at Julia Computing and a core developer in the FluxML ecosystem.

Talks:

Scaling up Training of Any Flux.jl Model Made Easy

07/29/2022, 8:00 PM — 8:30 PM UTC
Green

In this talk, we will discuss state-of-the-art techniques for scaling the training of ML models beyond a single GPU, why they work, and how to apply them to your own ML pipelines. We will demonstrate how we have scaled up training of Flux models through both data parallelism and model parallelism, showcasing ResNetImageNet.jl and DaggerFlux.jl, the scaling they achieve, and how they accelerate training of deep learning and scientific ML models such as PINNs.
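Since the abstract above centres on data parallelism, here is a minimal sketch of the gradient-averaging idea behind it, written with plain Flux/Zygote rather than ResNetImageNet.jl or DaggerFlux.jl. The toy linear model, shard sizes, and learning rate are illustrative assumptions; in a real pipeline each shard's gradient would be computed on its own GPU or worker.

using Flux  # `gradient` here comes from Zygote, Flux's AD backend

# Toy linear model and loss (illustrative sizes).
W, b = randn(2, 4), zeros(2)
loss(W, b, x, y) = Flux.mse(W * x .+ b, y)

# In data parallelism, each worker receives its own shard of the batch ...
xs = [randn(4, 8) for _ in 1:4]
ys = [randn(2, 8) for _ in 1:4]

# ... computes a gradient on its shard ...
grads = [Flux.gradient((W, b) -> loss(W, b, x, y), W, b) for (x, y) in zip(xs, ys)]

# ... and the shard gradients are averaged before a single optimiser step,
# which (for equal-sized shards) matches one step on the full batch.
gW = sum(g[1] for g in grads) / length(grads)
gb = sum(g[2] for g in grads) / length(grads)
η = 1e-3        # illustrative learning rate
W .-= η .* gW
b .-= η .* gb

Model parallelism, by contrast, splits the model itself across devices so that each one holds and updates only part of the parameters, which is the regime DaggerFlux.jl targets.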
