The recent wave of research on machine intelligence (machine learning and artificial intelligence) and its applications has been fuelled both by hardware improvements and by deep learning frameworks that simplify the design and training of neural models. Advances in AI are also accelerating research on Reinforcement Learning (RL), where dynamic control mechanisms are designed to tackle complex tasks. Further, machine-learning-based optimisation, such as Bayesian Optimisation, is gaining traction in the computer systems community, where optimisation needs to scale to complex and large parameter spaces; areas of interest range from hyperparameter tuning to system configuration tuning.
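To make the Bayesian Optimisation use case concrete, the sketch below tunes a single "system configuration knob" against a hypothetical latency objective: a Gaussian-process surrogate is fitted to the configurations tried so far, and the next configuration is chosen by maximising expected improvement. This is a minimal illustration, not code from the workshop; the objective function, kernel length scale, and all constants are invented for the example.

```python
from math import erf, sqrt
import numpy as np

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel matrix between two 1-D point arrays."""
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xq, noise=1e-5):
    """GP posterior mean and std at query points Xq given samples (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Kq = rbf(X, Xq)
    mu = Kq.T @ alpha
    v = np.linalg.solve(L, Kq)
    var = 1.0 - np.sum(v ** 2, axis=0)   # rbf(x, x) == 1
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    """EI for minimisation: expected amount by which a point beats `best`."""
    z = (best - mu) / sigma
    cdf = np.array([0.5 * (1 + erf(v / sqrt(2))) for v in z])
    pdf = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return (best - mu) * cdf + sigma * pdf

def latency(x):
    """Hypothetical objective: system latency as a function of one knob."""
    return (x - 0.3) ** 2 + 0.05 * np.sin(10 * x)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 3)          # a few random initial configurations
y = latency(X)
grid = np.linspace(0.0, 1.0, 201)     # candidate configurations
for _ in range(10):                   # BO loop: fit surrogate, score, sample
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, latency(x_next))

best_x = X[np.argmin(y)]
print(f"best knob setting ≈ {best_x:.2f}, latency {y.min():.3f}")
```

In a real system the "knob" would be a multi-dimensional configuration and the objective an actual benchmark run, but the loop structure (surrogate model plus acquisition function) is the same.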

The EuroMLSys workshop will provide a platform for discussing emerging trends in building frameworks, programming models, optimisation algorithms, and software engineering tools that support AI/ML applications, as well as the use of ML to build such frameworks and optimisation tools. EuroMLSys aims to bridge the gap between AI research and practice through a technical program of fresh ideas on software infrastructure, tools, design principles, and theory/algorithms (including issues of instability, data efficiency, etc.), from a systems perspective. We will also explore potential applications that can take advantage of ML.

News

Key dates

  • Paper submission deadline (extended): March 12, 2023 (23:59 AoE)
  • Acceptance notification: April 10, 2023
  • Final paper due: April 16, 2023
  • Workshop: May 8, 2023 (full-day workshop)

Past Editions

Call for Papers

EuroMLSys is an interdisciplinary workshop that brings together researchers in computer architecture, systems and machine learning, along with practitioners who are active in these emerging areas.

Topics of interest include, but are not limited to, the following:

  • Scheduling algorithms for data processing clusters
  • Custom hardware for machine learning
  • Programming languages for machine learning
  • Benchmarking systems (for machine learning algorithms)
  • Synthetic input data generation for training
  • Systems for training and serving machine learning models at scale
  • Graph neural networks
  • Neural network compression and pruning in systems
  • Systems for incremental learning algorithms
  • Large scale distributed learning algorithms in practice
  • Database systems for large scale learning
  • Model understanding tools (debugging, visualisation, etc.)
  • Systems for model-free and model-based Reinforcement Learning
  • Optimisation in end-to-end deep learning
  • System optimisation using Bayesian Optimisation
  • Acceleration of model building (e.g., imitation learning in RL)
  • Use of probabilistic models in ML/AI applications
  • Learning models for inferring network attacks, device/service fingerprinting, congestion, etc.
  • Techniques to collect and analyse network data in a privacy-preserving manner
  • Learning models to capture network events and control actions
  • Machine learning in networking (e.g., use of Deep RL in networking)
  • Analysis of distributed ML algorithms
  • Semantics for distributed ML languages
  • Probabilistic modelling for distributed ML algorithms
  • Synchronisation and state control of distributed ML algorithms

Accepted papers will be published in the ACM Digital Library (you can opt out of this).

Program

The ACM proceedings will be available in the ACM Digital Library on May 8, 2023.

Please join the Slack workspace for questions and discussion; anyone can join. Join!

The program timezone is CEST (UTC+2).

09:00 Opening
09:15 Session 1: Model, Training and Optimisation (15-minute presentations)
Actionable Data Insights for Machine Learning Nils Braun (Apple)
Dynamic Stashing Quantization for Efficient Transformer Training Guo Yang (University of Cambridge)
Towards A Platform for Model Training on Dynamic Datasets  Maximilian Böther (ETHZ)
Profiling and Monitoring Deep Learning Training Tasks Ehsan Yousefzadeh-Asl-Miandoab (IT University of Copenhagen)
MCTS-GEB: Monte Carlo Tree Search is a Good E-graph Builder  Guoliang He (University of Cambridge)
10:00 Coffee Break
10:30 Session 2: Decentralised Learning, Federated Learning (15-minute presentations)
Decentralized Learning Made Easy with DecentralizePy Rishi Sharma (EPFL)
Towards Practical Few-shot Federated NLP Dongqi Cai (Beiyou Shenzhen Institute)
Towards Robust and Bias-free Federated Learning Ousmane Touat (LIRIS INSA Lyon)
Gradient-less Federated Gradient Boosting Tree with Learnable Learning Rate Chenyang Ma (University of Cambridge)
Distributed Training for Speech Recognition using Local Knowledge Aggregation and Knowledge Distillation in Heterogeneous Systems Valentin Radu (U. Sheffield)
12:15 Poster Elevator Pitch
Best of both, Structured and Unstructured Sparsity in Neural Networks Sven Wagner (Bosch Sicherheitssysteme GmbH)
TSMix: time series data augmentation by mixing sources Artjom Joosen (Huawei)
Toward Pattern-based Model Selection for Cloud Resource Forecasting Georgia Christofidi & Konstantinos Papaioannou (IMDEA Software Institute)
Can Fair Federated Learning Reduce the need for Personalisation? Alex Iacob (University of Cambridge)
A First Look at the Impact of Distillation Hyper-Parameters in Federated Knowledge Distillation Norah Alballa (KAUST)
Causal fault localisation in dataflow systems Andrei Paleyes (University of Cambridge)
Accelerating Model Training: Performance Antipatterns Eliminator Framework Ravi Singh (TCS Research)
TinyMLOps for real-time ultra-low power MCUs applied to frame-based event classification Minh Tri Lê (Inria Grenoble Rhône-Alpes)
Scalable High-Performance Architecture for Evolving Recommender System  Ravi Singh (TCS Research)
13:00 Lunch Break / Poster Session
14:30 Session 3: Service Functions, TinyML, CDN (15-minute presentations)
FoldFormer: sequence folding and seasonal attention for fine-grained long-term FaaS forecasting Luke Darlow (Huawei)
Reconciling High Accuracy, Cost-Efficiency, and Low Latency of Inference Serving Systems Alireza Sanaee (Queen Mary University of London)
Robust and Tiny Binary Neural Networks using Gradient-based Explainability Methods Muhammad Sabih (Friedrich-Alexander)
Illuminating the hidden challenges of data-driven CDNs Theophilus A. Benson (CMU)
15:30 Poster Session
16:00 Coffee Break
16:30 Keynote: Next-Generation Domain-Specific Accelerators: From Hardware to System Sophia Shao (UC Berkeley)
18:00 Wrapup and Closing

Submission

Papers must be submitted electronically as PDF files, formatted for 8.5×11-inch paper. Papers must be no longer than 6 pages in the ACM double-column format (10-pt font); references do not count toward the 6-page limit. Submitted papers must use the official ACM Master article template.

Submissions will be single-blind.

Submit your paper at: https://euromlsys23.hotcrp.com/paper/new

Keynote

  • Sophia Shao

16:30 Sophia Shao (UC Berkeley)

    Next-Generation Domain-Specific Accelerators: From Hardware to System

    Slides

    Decades of exponential growth in computing have transformed the way our society operates. As the benefits of traditional technology scaling fade, the computing industry has started developing vertically integrated systems with specialized accelerators to deliver improved performance and energy efficiency. In fact, domain-specific accelerators have become a key component in today’s systems-on-chip (SoCs) and systems-on-package (SoPs), driving active research and product development to build novel accelerators for emerging applications such as machine learning, robotics, cryptography, and many more, entering a golden age for computer architecture. The natural evolution of this trend will lead to an increasing volume and diversity of accelerators on future computing platforms. In this talk, I will discuss challenges and opportunities for the next generation of domain-specific accelerators, with a special focus on the system-level implications of the design, integration, and scheduling of future heterogeneous platforms.

    Bio: Professor Sophia Shao is an Assistant Professor of Electrical Engineering and Computer Sciences at the University of California, Berkeley. Previously, she was a Senior Research Scientist at NVIDIA and received her Ph.D. degree in 2016 from Harvard University. Her research interests are in the area of computer architecture, with a special focus on domain-specific architecture, deep-learning accelerators, and high-productivity hardware design methodology. Her work has been awarded the Best Paper Award at DAC’2021, the Best Paper Award at JSSC’2020, a Best Paper Award at MICRO’2019, a Research Highlight of Communications of the ACM (2021), Top Picks in Computer Architecture (2014), and two Honorable Mentions (2019). Her Ph.D. dissertation was nominated by Harvard for the ACM Doctoral Dissertation Award. She is a recipient of an NSF CAREER Award, the 2022 IEEE TCCA Young Computer Architect Award, an Intel Rising Star Faculty Award, a Google Faculty Rising Star Award in System Research, a Facebook Research Award, and the inaugural Dr. Sudhakar Yalamanchili Award. Her personal webpage is https://people.eecs.berkeley.edu/~ysshao/.

Sponsors


Committees

Workshop and TPC Chairs

Technical Program Committee

  • Aaron Zhao, Imperial College London
  • Ahmed M. Abdelmoniem, Queen Mary University of London
  • Alexandros Koliousis, Northeastern University London and Institute for Experiential AI
  • Amir Payberah, KTH
  • Amitabha Roy, Kumo.ai
  • Chi Zhang, Brandeis University
  • Daniel Goodman, Oracle
  • Daniel Mendoza, Stanford University
  • Davide Sanvito, NEC Laboratories Europe
  • Dawei Li, Amazon
  • Deepak George Thomas, Iowa State University
  • Dimitris Chatzopoulos, University College Dublin
  • Fiodar Kazhamiaka, Stanford University
  • Guilherme H. Apostolo, Vrije Universiteit Amsterdam
  • Guoliang He, University of Cambridge
  • Hamed Haddadi, Imperial College London
  • Jenny Huang, NVIDIA
  • Jon Crowcroft, University of Cambridge
  • Jose Cano, University of Glasgow
  • Junru Shao, OctoML
  • Keshav Santhanam, Stanford University
  • Liang Zhang, TigerGraph
  • Lianmin Zheng, UC Berkeley
  • Mengying Zhou, Fudan University
  • Nasrullah Sheikh, IBM Research Almaden
  • Nikolas Ioannou, Google
  • Paul Patras, University of Edinburgh
  • Peter Pietzuch, Imperial College London
  • Peter Triantafillou, University of Warwick
  • Pouya Hamadanian, MIT
  • Pratik Fegade, Google
  • Qian Li, Stanford University
  • Sam Ainsworth, University of Edinburgh
  • Sami Alabed, University of Cambridge
  • Shay Vargaftik, VMware Research
  • Stefano Cereda, Politecnico di Milano
  • Taiyi Wang, University of Cambridge
  • Thaleia Dimitra Doudali, IMDEA
  • Valentin Radu, University of Sheffield
  • Veljko Pejovic, University of Ljubljana
  • Xupeng Miao, Peking University
  • Yaniv Ben-Itzhak, VMware Research
  • Zheng Wang, University of Leeds
  • Zhihao Jia, CMU

Web Chair

  • Alexis Duque, Net AI

Contact

For any question(s) related to EuroMLSys 2023, please contact the TPC Chairs Eiko Yoneki and Luigi Nardi.

Follow us on Twitter: @euromlsys
