Member of Technical Staff - Language & Reasoning Models

Unconventional AI


Palo Alto, CA, USA · Remote

Posted on Apr 17, 2026

About Unconventional

Since 2022, AI has entered the mainstream, reshaping entire industries, from education and software development to fundamental consumer behaviors. This revolution has created an unprecedented demand for computation, a demand that is now fundamentally limited by energy, not just in the datacenter but at a global scale.

At Unconventional, our mission is to solve this. We are rethinking computing from the ground up to build a new foundation for AI that is 1000x more efficient. We're doing this by exploiting the rich physics of semiconductors, mapping neural networks directly to the device physics rather than relying on layers of inefficient abstraction.

The Role

As a Member of Technical Staff, Language & Reasoning Models, you will drive the development of foundational language and reasoning models that fundamentally leverage the dynamics of our novel silicon. Your goal is to map the behaviors of modern language models directly onto the physics of our hardware.

You will sit at the intersection of NLP/reasoning research and hardware codesign, proving that high-fidelity, large-scale language understanding and generation can be achieved natively on an unconventional computing substrate.

What You'll Do

  • Model Development: Design, train, and scale next-generation language and reasoning architectures (such as transformers, state space models, diffusion/flow models, and deep equilibrium models) specifically tailored for unconventional compute.
  • Physics-Informed Architecture: Rethink standard sequence modeling to exploit the continuous-time dynamics of silicon, moving away from layers of inefficient digital abstraction.
  • Evaluation & Scaling: Establish the training recipes, loss functions, and evaluation metrics needed to reach the frontier of language comprehension, logical reasoning, and generation speed while maintaining the massive energy efficiency of our platform.
  • Extreme Codesign: Collaborate with hardware designers, theorists, and system builders to co-design the model architecture alongside the underlying physical compute primitives.

Minimum Qualifications

  • Education: An MS/PhD or equivalent research/project experience in a quantitative field such as AI/Machine Learning, Computer Science, Physics, Electrical Engineering, or Applied Math.
  • Experience: Deep, hands-on expertise in the theory, architecture, and training of modern foundation models (transformers, SSMs, text diffusion/flow, etc.).
  • Systems Fluency: Battle-tested experience with model scaling. You have successfully designed and executed full-scale distributed training runs for large language or reasoning models, managing the complexities of massive compute clusters.
  • Software Development: You are fluent in modern deep learning frameworks (PyTorch or JAX) and have a proven track record of writing clean, scalable training code for large language models.

Preferred Qualifications (Nice to Have)

  • Unconventional Experience: As a bonus, you may have experience working with hardware-in-the-loop training, mixed-signal hardware, quantization, or physics-informed neural networks.

Why Join Us?

  • The Mission: Redefine computing for the next 50 years by solving the fundamental energy limitation of AI at a global scale.
  • The Impact: Shape the company's future as a foundational team member. Enjoy massive ownership and an outsized opportunity to drive change.
  • The Perks: A comprehensive package including best-in-class health benefits, 401k matching, truly unlimited PTO, and complimentary meals in our Palo Alto office.