
Member of Technical Staff, System Modeling (Performance Models)

Unconventional AI

IT
Palo Alto, CA, USA · Remote
Posted on Apr 2, 2026

About Unconventional

Since 2022, AI has entered the mainstream, reshaping entire industries from education and software development to fundamental consumer behaviors. This revolution has created an unprecedented demand for computation, a demand that is now fundamentally limited by energy, not just in the datacenter but at a global scale.

At Unconventional, our mission is to solve this. We are rethinking computing from the ground up to build a new foundation for AI that is 1000x more efficient. We're doing this by exploiting the rich physics of semiconductors, mapping neural networks directly to the device physics rather than relying on layers of inefficient abstraction.

The Role

As a Member of Technical Staff, System Modeling (Performance Models), you will be part of a hands-on R&D team building simulation frameworks that enable evaluation and rapid iteration across all layers of unconventional physics-based computing systems for machine learning workloads. “Extreme co-design” is our guiding principle.

System Modeling is a multi-disciplinary effort, and the team we’re building reflects that. The role involves development of physics-based system models, GPU-accelerated ML system simulations, and cross-layer system integration. You don’t need to be an expert in all of these, but you have to be very strong in at least one, and solid in the rest.

Responsibilities

You will be responsible for one or more of the following tasks:

  • Building extensible and composable high-fidelity power, performance and area estimation tools for novel AI acceleration system architectures to enable rapid design space exploration.
  • Defining and creating comparative analyses across candidate architectures and existing state-of-the-art implementations.
  • Working with other teams to understand their modeling and simulation needs, supporting both high-level system design and lower-level hardware verification.

Minimum Qualifications

  • Education
    • MS/PhD in a quantitative field (AI/ML, Computer Science, Physics, Electrical Engineering, Applied Math), or BS with substantial, clear evidence of equivalent research/engineering depth.
  • Performance Modeling Knowledge
    • Experience building and using tools for power profiling, modeling, and simulation of AI workloads.
    • Deep understanding of spatial architectures and data orchestration mechanisms.
    • Deep understanding of different dataflow strategies and their tradeoffs, e.g. Weight-Stationary (WS), Output-Stationary (OS), Input-Stationary (IS), and Row-Stationary (RS).
    • Familiarity with open-source (OSS) tools for hardware accelerator design: Timeloop, Accelergy, NeuroSim, CiMLoop, CACTI, etc.
    • Familiarity with existing systolic array accelerator architectures for AI/ML workloads.
  • ML and systems fluency
    • Solid understanding of modern AI/ML architectures and training/inference workflows.
    • Strong experience implementing and debugging ML models in PyTorch (preferred) or similar, with practical experience profiling, optimizing, and stabilizing non-trivial large-scale ML systems.
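As a rough illustration of the dataflow tradeoffs named above, the difference between strategies comes down to which operand stays resident in a processing element while the others stream past. A minimal Python sketch of two loop orderings for a matmul, purely illustrative (real accelerators tile these loops over a PE array; the function names here are hypothetical):

```python
# Two dataflow loop orderings for C = A @ B. Both compute the same result;
# they differ in which value is kept "stationary" during the inner loop.

def matmul_weight_stationary(A, B):
    # Weight-stationary: pin each weight B[k][n] in place and stream
    # inputs past it, accumulating partial sums into C.
    M, K, N = len(A), len(B), len(B[0])
    C = [[0.0] * N for _ in range(M)]
    for k in range(K):
        for n in range(N):
            w = B[k][n]  # weight stays resident while inputs stream by
            for m in range(M):
                C[m][n] += A[m][k] * w
    return C

def matmul_output_stationary(A, B):
    # Output-stationary: each output C[m][n] stays in place while all of
    # its partial products are accumulated locally.
    M, K, N = len(A), len(B), len(B[0])
    C = [[0.0] * N for _ in range(M)]
    for m in range(M):
        for n in range(N):
            acc = 0.0  # accumulator stays resident across the k loop
            for k in range(K):
                acc += A[m][k] * B[k][n]
            C[m][n] = acc
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
assert matmul_weight_stationary(A, B) == matmul_output_stationary(A, B)
```

The tradeoff, roughly: weight-stationary amortizes weight fetches but keeps partial sums in flight, while output-stationary minimizes partial-sum movement at the cost of re-streaming weights and inputs.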

Preferred Qualifications

We are looking for well-rounded candidates. While the minimum qualifications focus on the core modeling and ML expertise, candidates who bring the following will have the greatest impact.

  • Dynamic systems knowledge
    • Basic familiarity with analog dynamic systems, including transient responses; nonidealities such as nonlinearity, quantization, and random noise; and feedback/stability.
  • Software engineering
    • Strong Python engineering skills: modular design, testing, packaging, CI.
    • Experience with PyTorch internals: autograd, custom modules, low-level ops; familiarity with torch.compile or similar graph capture/compile flows.
    • Experience with CUDA, Triton, or other GPU programming approaches (writing custom kernels, understanding memory hierarchy, basic performance tuning).
    • Comfort with at least some of: JAX, NumPy, TensorFlow, Modal, HPC patterns (MPI, NCCL, distributed training), SciPy.
  • Systems thinking
    • Demonstrated ability to reason across multiple layers of the stack: algorithm, software, runtime, hardware.
    • Able to connect model architecture choices to system performance implications: memory bandwidth, communication patterns, latency, energy, and numerical issues.
    • Experience applying at least some efficiency techniques (quantization, sparsity, pruning, distillation, kernel fusion, etc.).
  • Modeling / simulation mindset
    • Prior experience building or extending a serious simulation or modeling framework (could be ML systems, physics, circuits, or other technical domains).
    • Comfort with approximations and tradeoffs: you know when to use a simple model and when you need something closer to the physics.

Why Join Us?

  • The Mission: Redefine computing for the next 50 years by solving the fundamental energy limitation of AI at a global scale.
  • The Impact: Shape the company's future as a foundational team member. Enjoy massive ownership and an outsized opportunity to drive change.
  • The Challenge: Dive into deeply complex, intellectually stimulating, and unsolved problems at the cutting edge of multiple, converging fields.
  • The Perks: A comprehensive package including best-in-class health benefits, 401k matching, truly unlimited PTO, and complimentary meals when working from our Palo Alto office.