- Define and lead the lab's R&D strategy in AI infrastructure for large-scale model training and inference, identifying key research challenges and setting long-term innovation objectives.

- Drive cutting-edge research and engineering in areas such as distributed parallel computing, automatic parallelization, graph scheduling, and numerical computing, ensuring research outcomes translate into practical, high-quality technologies.

- Build, mentor, and guide a high-performing team of researchers and engineers while promoting a culture of excellence, collaboration, and innovation.

- Work closely with global research and engineering teams to align technical efforts and deliver integrated AI infrastructure solutions.

- Represent the lab in international academic and industrial communities, contributing to top-tier conferences (e.g., NeurIPS, ICLR, MLSys, OSDI, SC) and participating in standardization or open-source collaborations.

Lab Focus Areas:

- Distributed Parallelism: Automatic, pipeline, and tensor parallelism; communication optimization.

- Graph Optimization: Dynamic/static graph scheduling, compiler-based optimization.

- Numerical Computation: Sparse/mixed-precision training, low-rank decomposition, and dimensionality reduction.

- Heterogeneous Computing: Integration across CPUs, GPUs, NPUs, and custom accelerators.

Requirements

- Ph.D. or equivalent experience in Computer Science, Electrical Engineering, or a related field.

- Strong background in distributed systems, AI infrastructure, compiler optimization, or numerical computing.

- Proven ability to publish in top-tier venues or deliver impactful research results.

- Experience with large-scale AI or HPC platforms preferred.

- Fluent in English; French proficiency is a plus.

microTECHGlobal Ltd | Information Technology | CDI | market rate