Member of Technical Staff, System Modeling (Simulation)

Unconventional AI

IT
Palo Alto, CA, USA · Remote
Posted on Apr 2, 2026

About Unconventional

Since 2022, AI has entered the mainstream, reshaping entire industries, from education and software development to fundamental consumer behaviors. This revolution has created an unprecedented demand for computation, a demand that is now fundamentally limited by energy, not just in the datacenter but at a global scale.

At Unconventional, our mission is to solve this. We are rethinking computing from the ground up to build a new foundation for AI that is 1000x more efficient. We're doing this by exploiting the rich physics of semiconductors, mapping neural networks directly to the device physics rather than relying on layers of inefficient abstraction.

The Role

As a Member of Technical Staff, System Modeling (Dynamic Systems Simulation), you will be part of a hands-on R&D team building simulation frameworks that enable testing and rapid iteration across all layers of unconventional physics-based computing systems for machine learning workloads. “Extreme co-design” is our guiding principle.

System Modeling is a multi-disciplinary effort, and the team we’re building reflects that. The role involves developing physics-based system models, GPU-accelerated ML system simulations, and cross-layer system integration. You don’t need to be an expert in all of these, but you should be very strong in at least one and solid in the rest.

Responsibilities

You will be responsible for developing high-performance PyTorch components that model complex, time-varying dynamic systems. Your work will directly enable next-generation AI architectures, requiring a holistic approach involving everything from high-level neural network design down to the fundamental differential equations that govern system behavior.

Minimum Qualifications

  • Education
    • MS/PhD in a quantitative field (AI/ML, Computer Science, Physics, Electrical Engineering, Applied Math), or BS with substantial, clear evidence of equivalent research/engineering depth.
  • Dynamical systems simulation knowledge
    • Advanced Neural Modeling (PyTorch): Deep proficiency in PyTorch, specifically in building custom autograd functions and integrating numerical solvers (e.g., Neural ODEs) to represent dynamic processes.
    • Dynamics & Differential Equations: A strong theoretical and practical grasp of linear and non-linear dynamics, state-space representations, and solving $dx/dt = f(x, u, t)$ within a machine learning context.
    • Stochastic Processes & Noise: Understanding how to model and mitigate noise in real-world systems, including experience with stochastic differential equations (SDEs) or Bayesian filtering.
    • Modeling & Simulation (M&S): Proven industry experience building high-fidelity simulations that balance computational efficiency with physical accuracy.
    • Systems Engineering (Analog/Digital): Familiarity with hardware-level concepts like circuit dynamics, signal processing, or transfer functions is highly desirable to help ground our digital models in physical reality.
  • ML and systems fluency
    • Solid understanding of modern AI/ML architectures and training/inference workflows.
    • Strong experience implementing and debugging ML models in PyTorch (preferred) or similar, with practical experience profiling, optimizing, and stabilizing non-trivial large-scale ML systems.
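To make the state-space form $dx/dt = f(x, u, t)$ above concrete, here is a minimal sketch of an explicit (forward) Euler integrator for a toy linear system, in plain Python. This is an illustrative example only, not our simulation stack; in practice such a solver would be batched, differentiable, and built on PyTorch custom autograd functions. The function names (`f`, `euler_integrate`) are hypothetical.

```python
import math

def f(x, u, t):
    """Right-hand side of dx/dt = f(x, u, t); here, simple linear decay plus input."""
    return -x + u

def euler_integrate(x0, u, t0, t1, dt):
    """Integrate dx/dt = f(x, u, t) from t0 to t1 with explicit Euler steps."""
    n = int(round((t1 - t0) / dt))  # fixed step count avoids float-accumulation drift
    x = x0
    for k in range(n):
        x = x + dt * f(x, u, t0 + k * dt)
    return x

# With zero input, dx/dt = -x has the analytic solution x(t) = x0 * exp(-t),
# so the integrator's answer should land close to exp(-1) at t = 1.
x_final = euler_integrate(x0=1.0, u=0.0, t0=0.0, t1=1.0, dt=1e-3)
error = abs(x_final - math.exp(-1.0))
```

The same loop structure carries over to a Neural ODE, where `f` becomes a learned network and the solver step must be differentiable end to end.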

Preferred Qualifications

We are looking for well-rounded candidates. While the minimum qualifications cover the core modeling and ML expertise, candidates who also bring the following will have the greatest impact.

  • Software engineering
    • Strong Python engineering skills: modular design, testing, packaging, CI.
    • Experience with PyTorch internals: autograd, custom modules, low-level ops; familiarity with torch.compile or similar graph capture/compile flows.
    • Experience with CUDA, Triton, or other GPU programming approaches (writing custom kernels, understanding memory hierarchy, basic performance tuning).
    • Comfort with at least some of: JAX, NumPy, TensorFlow, Modal, HPC patterns (MPI, NCCL, distributed training), SciPy.
  • Systems thinking
    • Demonstrated ability to reason across multiple layers of the stack: algorithm, software, runtime, hardware.
    • Able to connect model architecture choices to system performance implications: memory bandwidth, communication patterns, latency, energy, and numerical issues.
    • Experience applying at least some efficiency techniques (quantization, sparsity, pruning, distillation, kernel fusion, etc.).
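As one illustration of the efficiency techniques listed above, here is a minimal sketch of symmetric int8 post-training quantization in plain Python. This is a toy example under simplifying assumptions (per-tensor symmetric scaling, no clipping calibration); real pipelines use framework tooling such as `torch.ao.quantization`.

```python
def quantize_int8(values):
    """Symmetric per-tensor quantization: map floats to integer codes in [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0
    codes = [round(v / scale) for v in values]
    return codes, scale

def dequantize(codes, scale):
    """Map integer codes back to approximate float values."""
    return [c * scale for c in codes]

weights = [0.51, -1.27, 0.003, 0.9999]
codes, scale = quantize_int8(weights)
recovered = dequantize(codes, scale)

# Round-to-nearest bounds the per-element error by half a quantization step.
max_err = max(abs(a - b) for a, b in zip(weights, recovered))
```

The interesting trade-off is exactly the one the bullet points describe: the scale factor ties numerical error directly to memory footprint and bandwidth, so choosing it well is a cross-layer decision, not a purely numerical one.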

Why Join Us?

  • The Mission: Redefine computing for the next 50 years by solving the fundamental energy limitation of AI at a global scale.
  • The Impact: Shape the company's future as a foundational team member. Enjoy massive ownership and an outsized opportunity to drive change.
  • The Challenge: Dive into deeply complex, intellectually stimulating, and unsolved problems at the cutting edge of multiple, converging fields.
  • The Perks: A comprehensive package including best-in-class health benefits, 401k matching, truly unlimited PTO, and complimentary meals when working from our Palo Alto office.