Engineering Intern
Orby
Uniphore is one of the largest B2B AI-native companies—decades-proven, built for scale and designed for the enterprise. The company drives business outcomes across multiple industry verticals and enables the largest global deployments.
Uniphore infuses AI into every part of the enterprise that impacts the customer. We deliver the only multimodal architecture centered on customers that combines Generative AI, Knowledge AI, Emotion AI, workflow automation and a co-pilot to guide you. We understand better than anyone how to capture voice, video and text and how to analyze all types of data.
As AI becomes more powerful, every part of the enterprise that impacts the customer will be disrupted. We believe the future will run on the connective tissue between people, machines and data: all in the service of creating the most human processes and experiences for customers and employees.
Job Description:
About the Role:
We're looking for a curious, driven engineering intern to join our Infrastructure & Platform Engineering team. This is a hands-on role where you'll work on real problems alongside senior engineers and contribute to production systems that engineering teams depend on every day. You'll be embedded in the team that owns cloud-native infrastructure at scale, spanning multi-cloud environments, hundreds of microservices, and engineering teams across the globe. Your work will focus on using AI and automation to eliminate toil, optimize infrastructure, and make engineers more productive, helping the business increase velocity. If you're excited about Go, Kubernetes, and building tools with real impact, this is the role for you.
What You'll Work On:
Depending on team needs and your interests, you'll contribute to one or more of the following projects:
GPU Resource Optimization: Explore and implement bin-packing algorithms or AI-assisted scheduling strategies to maximize utilization across GPU node pools in a multi-cloud environment. A great fit if you're interested in optimization problems at infrastructure scale.
AI Ops Automation: Build LLM-powered tooling that automates operational tasks such as incident triage, runbook generation, or deployment analysis. Help engineers spend less time on toil and more time on impact.
Internal Developer Tooling: Design and build Go-based tools and automation that streamline engineering workflows. Think deployment operations, infrastructure audits and cost controls, status reporting, or integrations with platforms like Kubernetes or PagerDuty.
Workflow Automation Pipelines: Build or extend automated deployment and orchestration workflows using tools like Temporal or ArgoCD, reducing manual steps in how we ship and operate software.
Requirements:
Currently pursuing a degree in Computer Science, Software Engineering, or a related technical field
Comfortable learning Go (prior experience a plus, but not required)
Foundational understanding of software development concepts: APIs, data structures, version control (Git)
Curiosity about infrastructure topics such as Kubernetes, databases, cloud platforms, CI/CD, and the infrastructure that powers AI
Interest in how AI and LLMs can be applied to real engineering workflows (not just theory)
Strong problem-solving instincts and ability to work independently on a defined scope
Good written communication: you'll document what you build
Nice to Haves:
Exposure to container technologies (Docker, Kubernetes)
Familiarity with infrastructure-as-code (Terraform, Helm)
Prior internship or personal projects involving backend services or CLI tooling
Experience calling LLM APIs (OpenAI, Anthropic, etc.) or building AI-assisted tools
Interest in optimization, scheduling, or resource allocation problems
Why You'll Love This Role:
Work on real systems: You won't be building demo apps; you'll be contributing to infrastructure that powers enterprise AI deployments at global scale
Own your project: You'll have a defined scope, a real deliverable, and engineers who are counting on what you ship
Learn from practitioners: Our team has deep expertise in SRE, cloud infrastructure, and platform engineering across AWS, GCP, and beyond
Build with AI, not just read about it: You'll apply LLMs and automation to actual engineering problems, not toy examples
See the full stack: From Kubernetes internals and cloud cost optimization to workflow orchestration and developer experience, you'll get breadth that most internships don't offer
Make an outsized impact: Infrastructure work is high-leverage; the tools and automation you build will multiply across dozens of services and teams
Hiring Range:
The specific rate will depend on the successful candidate's qualifications and prior experience.
In addition to competitive base pay, this position includes the option to enroll in our medical High Deductible Health Plan (HDHP) and in ModernHealth, our mental health support provider, and is eligible for our sick time-off plan.
Location preference:
Uniphore is an equal opportunity employer committed to diversity in the workplace. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, disability, veteran status, and other protected characteristics.
For more information on how Uniphore uses AI to unify—and humanize—every enterprise experience, please visit www.uniphore.com.

