Find Your Partnership Track

Technology Partners
Hardware vendors, cloud providers, and ISVs who want to validate and co-market joint solutions built on Xinference. Build joint reference architectures and co-sell motions together.
Solutions Integrators
Consulting firms and system integrators who deploy AI infrastructure for enterprise clients. Get early access, technical enablement, and priority support to accelerate delivery.
Resellers & Distributors
Channel partners who want to include Xinference Enterprise in their AI portfolio. Benefit from deal registration, margin programs, and joint go-to-market resources.

How It Works

The Partnership Process

Getting started is straightforward. Reach out and we'll guide you through every step.

1. Reach Out
Send us a brief message describing your company, the partnership track you're interested in, and what you're hoping to achieve together. We review every inquiry and endeavour to respond as soon as we can.
2. Discovery Call
We schedule a 30-minute call to learn more about your business, your customers, and the technical or commercial integration you have in mind. This lets us tailor the right partnership structure for you.
3. Partnership Agreement
We align on the scope, commercial terms, and mutual commitments in a straightforward partnership agreement.
4. Launch Together
Once onboarded, your team gets access to partner resources, technical enablement, and a dedicated partner manager. We co-create content, build joint pipeline, and grow the relationship over time.
Contact Partnerships

Get in Touch

Become a Partner

We review every inquiry and endeavour to respond as soon as we can.

Frequently Asked Questions

Everything you need to know about partnering with Xinference.

What is Xinference and how does it work?

Xinference is an open-source platform that lets you deploy and serve large language models, embedding models, image models, and more — all through a unified API. It abstracts away the complexity of model loading, hardware management, and scaling so your team can focus on building applications.
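As a rough sketch of what that looks like in practice, launching and querying a model from Python might look like the following. The model name is a placeholder, and exact arguments (engine, size, quantisation) vary between Xinference versions, so treat this as illustrative rather than a definitive recipe:

from xinference.client import Client

# Connect to a running Xinference server (9997 is the default local port).
client = Client("http://localhost:9997")

# Ask Xinference to launch a model by name; it handles downloading,
# loading, and hardware placement. Depending on your version, extra
# arguments such as the inference engine may also be required.
model_uid = client.launch_model(model_name="qwen2.5-instruct")

# The returned handle exposes inference methods (chat, generate,
# embeddings, ...) appropriate to the model type.
model = client.get_model(model_uid)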

How does Xinference compare to running models via cloud providers?

Cloud providers charge you for every token processed through their managed AI services, and your data passes through their infrastructure. With Xinference, you deploy models on your own infrastructure — cloud, on-prem, or hybrid.

Xinference is a unified, production-ready inference platform that gives you full control over which models to run, which GPUs to use, and where to deploy, all while delivering best-in-class performance and cost optimisation.

How does pricing work?

Pricing is based on the number of nodes in each cluster. Xinference Enterprise costs US$15k per node per annum, billed per cluster.

For example, a small deployment of 2 nodes (usually ~16 GPUs) would cost US$30k / annum, and a larger deployment of 250 nodes (usually ~2,000 GPUs) would cost US$3.75m / annum. Running multiple clusters means each cluster is billed separately.
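In other words, the annual cost is simply nodes × US$15k, summed independently over clusters. A back-of-the-envelope calculation (the helper below is illustrative, not an official pricing tool):

PRICE_PER_NODE_USD = 15_000  # Xinference Enterprise list price, per node per annum

def annual_cost_usd(cluster_node_counts):
    # Each cluster is billed separately; the total is the sum over clusters.
    return sum(nodes * PRICE_PER_NODE_USD for nodes in cluster_node_counts)

print(annual_cost_usd([2]))       # 30000   -> US$30k for one 2-node cluster
print(annual_cost_usd([250]))     # 3750000 -> US$3.75m for one 250-node cluster
print(annual_cost_usd([2, 250]))  # 3780000 -> two clusters, billed per cluster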

What is the difference between the open-source and Enterprise solutions?

Xinference Enterprise delivers better performance and enterprise-grade reliability. Customers choose the Enterprise solution for its comprehensive hardware compatibility, its ability to run multiple models on a single GPU, and supercharged performance with up to 2x greater throughput.

Most importantly, Xinference Enterprise comes with critical enterprise management features like RBAC, audit logs, a unified management console and SLA guarantees.

How does Xinference handle data privacy?

With Xinference, you can choose to run your models on your own infrastructure — cloud or on-premises — so your prompts and data never leave your environment. This makes Xinference purpose-built for industries with strict data requirements like finance and healthcare.

Can Xinference integrate with our existing MLOps stack?

Xinference provides a RESTful API compatible with OpenAI's protocol, meaning any tool already built around OpenAI's API works with Xinference by changing a single line of code. Xinference integrates with popular third-party libraries including LangChain, LlamaIndex, Dify, and Chatbox. Kubernetes deployment via Helm is also supported for teams running containerised infrastructure.
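For example, an application already built on the official openai Python package can be repointed at Xinference by changing the client's base URL. The endpoint and model UID below are placeholders for your own deployment:

from openai import OpenAI

# The only change from a stock OpenAI integration: point base_url at your
# Xinference server's OpenAI-compatible endpoint. 9997 is Xinference's
# default port; the API key is unused for a local deployment.
client = OpenAI(base_url="http://localhost:9997/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="my-model-uid",  # the UID of a model launched in Xinference
    messages=[{"role": "user", "content": "Hello from Xinference!"}],
)
print(response.choices[0].message.content)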