
Junior Member of Technical Staff, System Modeling

Unconventional AI

IT
Palo Alto, CA, USA · Remote
Posted on Apr 2, 2026

About Unconventional AI

We are rethinking the foundations of the computer to optimize energy efficiency for AI. Founded by pioneers in the field, including Naveen Rao (Nervana, MosaicML) and Michael Carbin (MIT, MosaicML), we are building a new computational substrate that interfaces directly with the physics of silicon to achieve biology-scale efficiency. We recently raised $475M in seed funding to turn this vision into reality.

As a Junior Member of Technical Staff, System Modeling, you will work closely with senior engineers to contribute to the development of our multi-disciplinary simulation frameworks. You will assist the hands-on R&D team in building simulation environments that enable rapid iteration and testing across all layers of our unconventional physics-based computing systems for machine learning workloads. Your work will focus on integrating physics-based models, developing GPU-accelerated simulations, and supporting the cross-layer system integration necessary for "Extreme co-design".

Key Responsibilities

  • Contribute to the implementation and optimization of GPU-accelerated simulators for ML on analog/unconventional hardware, focusing on specific modules and features within PyTorch.
  • Assist in integrating physics-based device and system models into the PyTorch simulation environment to help expose early algorithm–hardware tradeoffs and enable cross-layer optimization.
  • Support the maintenance and extension of the unified end-to-end simulation environment, helping to link theory, algorithms, and device models, and ensuring alignment between high-level and near-physical simulators.
  • Help implement and adhere to robust experiment tracking protocols to ensure simulation results, configurations, and non-idealities are reproducible and auditable.
  • Collaborate with Algorithms and Hardware teams to gather requirements and ensure the modeling environment meets their needs for high-level algorithm development and lower-level hardware verification.
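
To give a flavor of the work: the "physics-based non-idealities" mentioned above might include effects like limited weight precision and read noise in analog compute. A minimal, purely illustrative sketch (in NumPy, not the company's actual stack; all names and noise parameters here are invented for the example) of injecting such effects into a matrix-vector product:

```python
import numpy as np

def quantize(w, bits=4):
    """Uniform symmetric quantization: a stand-in for limited analog weight precision."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def analog_matvec(w, x, rng, noise_std=0.01, bits=4):
    """Compute y = W @ x with quantized weights plus additive read noise."""
    w_q = quantize(w, bits)
    y = w_q @ x
    # Read noise scaled to the output magnitude (hypothetical noise model).
    return y + rng.normal(0.0, noise_std * np.abs(y).max(), size=y.shape)

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 16))
x = rng.standard_normal(16)

y_ideal = w @ x
y_analog = analog_matvec(w, x, rng)

# Comparing ideal and degraded outputs exposes algorithm-hardware tradeoffs early.
rel_err = np.linalg.norm(y_analog - y_ideal) / np.linalg.norm(y_ideal)
print(f"relative error from non-idealities: {rel_err:.3f}")
```

In practice such a model would be wrapped as a differentiable layer inside the simulation framework, so algorithm teams can train against hardware effects rather than discovering them after tape-out.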

What We’re Looking For

  • Strong Systems Foundation: A BS, MS, or PhD in Computer Science, Electrical Engineering, or a related technical field. You should have a deep understanding of computer architecture and operating systems.
  • Coding Proficiency: Strong skills in C++ and Python. You should be comfortable writing performance-critical code.
  • AI/ML Exposure: Basic familiarity with the internals of deep learning frameworks (e.g., how a PyTorch graph is executed) and common model architectures.
  • Mathematical Intuition: A solid grasp of linear algebra and calculus, which are essential for understanding both neural dynamics and hardware optimizations.
  • First Principles Mindset: You enjoy digging into "why" things work (or don't) and aren't afraid to challenge conventional software "best practices" to find a more efficient path.

Bonus Points

  • Experience with compilers (LLVM, MLIR) or domain-specific languages like Triton.
  • Exposure to GPU programming (CUDA) or other hardware accelerators.
  • Prior research or internship experience in high-performance computing (HPC) or neuromorphic systems.
  • Contributions to open-source AI or systems software projects.

Why Join Us?

  • Mentorship: Learn directly from the architects who built the modern AI stack at companies like Intel, Databricks, and NVIDIA.
  • Impact: You won't be a small cog in a giant machine. You will be helping build the machine itself.
  • Unconventional Problems: Work on challenges that don't have a StackOverflow answer—you’ll be defining the future of AI compute.
  • Competitive Package: Significant equity and competitive salary at a well-funded, high-growth startup.