
Member of Technical Staff - System Modeling

Unconventional AI

IT
Palo Alto, CA, USA · Remote
Posted on Feb 13, 2026

About Unconventional

Since 2022, AI has entered the mainstream, reshaping entire industries, from education and software development to fundamental consumer behaviors. This revolution has created an unprecedented demand for computation: a demand that is now fundamentally limited by energy, not just in the datacenter but at a global scale.

At Unconventional, our mission is to solve this. We are rethinking computing from the ground up to build a new foundation for AI that is 1000x more efficient. We're doing this by exploiting the rich physics of semiconductors, mapping neural networks directly to the device physics rather than relying on layers of inefficient abstraction.

The Role

As a Member of Technical Staff, System Modeling, you will be part of a hands-on R&D team building simulation frameworks that enable testing and rapid iteration across all layers of unconventional physics-based computing systems for machine learning workloads. “Extreme co-design” is our guiding principle.

System Modeling is a multi-disciplinary effort, and the team we’re building reflects that. The role involves development of physics-based system models, GPU-accelerated ML system simulations, and cross-layer system integration. You don’t need to be an expert in all of these, but you have to be very strong in at least one, and solid in the rest.

What We're Looking For

You will be responsible for one or more of the following:

  • Building a scalable, GPU-accelerated simulator for ML on analog/unconventional hardware that supports multiple architectures, rapid iteration, and rich metrics/visualization within PyTorch.
  • Developing physics-based models of device- and system-level behavior in unconventional compute, integrated with PyTorch, to expose algorithm–hardware tradeoffs and enable cross-layer optimization.
  • Creating a unified end-to-end simulation environment that links theory, algorithms, and device models, enabling easy “what if” experiments and keeping high-level and near-physical simulators aligned.
  • Establishing robust data pipelines, schemas, and experiment tracking so simulation results, configurations, and non-idealities are reproducible, comparable, and auditable.
  • Working with other teams to understand their modeling and simulation needs, from high-level algorithm development down to low-level hardware verification.
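
To give a flavor of the cross-layer modeling described above, here is a minimal PyTorch sketch (illustrative only, not Unconventional's actual framework) of a linear layer with two common analog non-idealities, weight quantization and additive read noise, kept trainable via a straight-through estimator:

```python
import torch
import torch.nn as nn

class NoisyAnalogLinear(nn.Module):
    """Linear layer with simple analog non-idealities:
    weight quantization and additive Gaussian read noise.
    All parameters here are illustrative placeholders."""

    def __init__(self, in_features, out_features, levels=256, noise_std=0.01):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.levels = levels        # resolvable conductance levels
        self.noise_std = noise_std  # read-noise scale, relative to max weight

    def forward(self, x):
        w = self.weight
        w_max = w.abs().max().clamp(min=1e-8)
        # Quantize weights to the device's resolvable levels; the
        # straight-through estimator lets gradients bypass the rounding.
        step = 2 * w_max / (self.levels - 1)
        w_q = w + (torch.round(w / step) * step - w).detach()
        # Additive Gaussian noise models device/readout randomness.
        noise = torch.randn_like(w_q) * self.noise_std * w_max
        return x @ (w_q + noise).t()

layer = NoisyAnalogLinear(16, 4)
out = layer(torch.randn(8, 16))
out.sum().backward()  # gradients flow despite the quantization step
```

Exposing non-idealities as differentiable modules like this is one standard way to let algorithm-level training "see" hardware constraints, which is the essence of cross-layer co-design.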

Minimum Qualifications

  • Education
    • MS/PhD in a quantitative field (AI/ML, Computer Science, Physics, Electrical Engineering, Applied Math), or BS with substantial, clear evidence of equivalent research/engineering depth.
  • ML and systems fluency
    • Solid understanding of modern AI/ML architectures and training/inference workflows.
    • Strong experience implementing and debugging ML models in PyTorch (preferred) or similar, with practical experience profiling, optimizing, and stabilizing non-trivial large-scale ML systems.
  • Dynamic systems knowledge
    • Basic familiarity with analog dynamic systems, including transient responses and non-idealities such as nonlinearity, quantization, random noise, and feedback/stability.
  • Software engineering
    • Strong Python engineering skills: modular design, testing, packaging, CI.
    • Experience with PyTorch internals: autograd, custom modules, low-level ops; familiarity with torch.compile or similar graph capture/compile flows.
    • Experience with CUDA or another GPU programming model (writing custom kernels, understanding the memory hierarchy, basic performance tuning).
    • Comfort with at least some of: JAX, NumPy, TensorFlow, Modal, HPC patterns (MPI, NCCL, distributed training), SciPy.
  • Systems thinking
    • Demonstrated ability to reason across multiple layers of the stack: algorithm, software, runtime, hardware.
    • Able to connect model architecture choices to system performance implications: memory bandwidth, communication patterns, latency, energy, and numerical issues.
    • Experience applying at least some efficiency techniques (quantization, sparsity, pruning, distillation, kernel fusion, etc.).
  • Modeling / simulation mindset
    • Prior experience building or extending a serious simulation or modeling framework (could be ML systems, physics, circuits, or other technical domains).
    • Comfort with approximations and tradeoffs: you know when to use a simple model and when you need something closer to the physics.

Bonus Points (Nice to Have)

You do not need all of these, but you should have depth in several areas.

  • Advanced theoretical and algorithm–hardware co-design
    • Experience co-designing algorithms with constraints of unconventional computing paradigms (analog computing, oscillation-based networks, RRAM/crossbar arrays, neuromorphic systems, etc.).
    • Familiarity with dynamical systems, non-linear oscillators, or related physics is a plus.
    • Experience with advanced approximation/compression beyond “standard” quantization and pruning (low-rank methods, structured sparsity, learned approximations, etc.).
  • Physics, device, and circuit modeling
    • Experience modeling physical systems: SPICE or SPICE-like tools, Verilog-AMS, compact device models, or custom physics simulations.
    • Familiarity with noise, process variation, mismatch, drift, and aging, and how these affect computation.
    • Able to translate physical parameters and behaviors into abstractions consumable by ML engineers.
  • Compilers, IRs, and analysis tools
    • Experience with IR design and compiler toolchains (e.g., MLIR, TVM, XLA, custom IRs) and static analysis.
    • Background in building topology extractors, graph analyzers, or static cost models for ML workloads.
    • Experience implementing passes that estimate or transform workloads for energy/latency/accuracy tradeoffs.
  • Large-scale systems & infra
    • Experience building GPU-accelerated simulators, large-scale training environments, or high-throughput experimentation frameworks.
    • Experience designing experiment tracking, metrics pipelines, and data schemas for research environments.
    • Familiarity with distributed systems, cluster schedulers, or cloud-native ML infrastructure.
  • Firmware / hardware–software co-design
    • Experience with embedded systems, firmware, or hardware bring-up.
    • Prior work on SW/HW co-simulation, hardware abstraction layers, or low-level runtime systems.

Why Join Us?

  • The Mission: Tackle a fundamental problem that could redefine computing for the next 50 years.
  • The Impact: Be a foundational member of a world-class team with an outsized opportunity for ownership and impact.
  • The Challenge: Work on deeply challenging, intellectually stimulating problems that sit at the cutting edge of multiple fields.
  • The Perks: A comprehensive package including best-in-class health benefits, 401k matching, truly unlimited PTO, and complimentary meals in our Palo Alto office.