In a significant move to broaden access to advanced artificial intelligence capabilities, enterprise AI company Cohere has unveiled a new family of multilingual models, aptly named Tiny Aya. The launch, strategically timed with the ongoing India AI Summit, signals Cohere’s commitment to fostering AI innovation across diverse linguistic and geographic landscapes. What sets Tiny Aya apart is its open-weight nature: the trained model weights are publicly available, empowering developers, researchers, and businesses worldwide to freely use, adapt, and build upon these tools. This open approach stands in contrast to many proprietary AI models, fostering a collaborative ecosystem for AI development.
The Tiny Aya family boasts impressive multilingual support, encompassing over 70 languages. Crucially, these models are engineered for efficiency, capable of running on everyday devices such as laptops without necessitating a constant internet connection. This offline functionality is a game-changer, particularly for regions with less robust internet infrastructure or for applications where data privacy and low latency are paramount. The models are designed to perform complex language tasks directly on the user’s device, reducing reliance on cloud computing and enabling a new generation of responsive, privacy-preserving AI applications.
The initiative stems from Cohere Labs, the company’s dedicated research arm, underscoring a strong focus on pushing the boundaries of AI research and development. The initial release prominently features support for a rich array of South Asian languages, including Bengali, Hindi, Punjabi, Urdu, Gujarati, Tamil, Telugu, and Marathi. This targeted emphasis highlights Cohere’s understanding of the immense linguistic diversity within India and the surrounding regions, and its commitment to providing AI solutions that resonate with local user bases. By prioritizing these languages, Cohere aims to empower local developers and businesses to build AI-powered tools that are culturally relevant and linguistically nuanced.

At the core of the Tiny Aya family is a base model featuring 3.35 billion parameters. Parameter count is a key measure of an AI model’s size and complexity; more parameters generally indicate a greater capacity for learning and nuanced understanding. For applications requiring extensive language coverage and precise instruction following, Cohere has introduced TinyAya-Global. This variant has been specifically fine-tuned to enhance its ability to interpret and execute user instructions, making it ideal for more general-purpose AI assistants and applications.
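To see why a 3.35-billion-parameter model can plausibly run on a laptop, it helps to translate parameter count into memory. The sketch below is a back-of-the-envelope estimate, not a Cohere specification: the byte-per-parameter figures are standard numeric precisions, and real deployments add overhead for activations and the runtime.

```python
# Rough weight-memory footprint for a 3.35B-parameter model
# at common precisions. Illustrative only -- actual on-device
# requirements also include activations and runtime overhead.
PARAMS = 3.35e9

BYTES_PER_PARAM = {
    "fp32": 4,    # full precision (training-style)
    "fp16": 2,    # half precision, common for inference
    "int4": 0.5,  # 4-bit quantization, typical for laptops/phones
}

for precision, nbytes in BYTES_PER_PARAM.items():
    gib = PARAMS * nbytes / 2**30  # bytes -> GiB
    print(f"{precision}: ~{gib:.1f} GiB of weights")
```

At half precision the weights alone come to roughly 6 GiB, and a 4-bit quantized copy to under 2 GiB, which is why a model of this size sits comfortably within the RAM of an ordinary laptop.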
To further cater to specific regional needs, Cohere has also developed specialized regional variants. TinyAya-Earth is tailored for African languages, aiming to bridge linguistic divides on the continent. TinyAya-Fire focuses on South Asian languages, building upon the foundational multilingual capabilities with regional expertise. Lastly, TinyAya-Water is designed for the Asia Pacific, West Asia, and Europe regions, ensuring broad coverage and adaptability across these diverse linguistic areas. This tiered approach allows for both broad applicability and deep linguistic specialization, offering flexibility for a wide range of use cases.
According to Cohere’s statement, this differentiated approach allows each model to "develop stronger linguistic grounding and cultural nuance, creating systems that feel more natural and reliable for the communities they are meant to serve." This philosophy recognizes that effective AI must go beyond mere translation and understand the subtle cultural contexts that shape language. By building models with specific linguistic and cultural grounding, Cohere aims to create AI that is not only functional but also respectful and intuitive for its users. Simultaneously, the company emphasizes that all Tiny Aya models maintain broad multilingual coverage, serving as robust starting points for further customization and research.
A significant aspect of the Tiny Aya development is its efficient training process. Cohere revealed that these models were trained on a single cluster of 64 Nvidia H100 GPUs, a relatively modest amount of compute by the standards of modern model training. This efficiency is crucial for making advanced AI accessible and sustainable, particularly for researchers and developers with limited access to massive computational power. The ability to achieve high performance with optimized hardware usage demonstrates Cohere’s commitment to resourcefulness and innovation in AI development. These models are thus well suited to developers building applications for audiences who work in their native languages, particularly where offline capability and reduced computational demands are highly valued.

The on-device processing capability of Tiny Aya models opens up a multitude of new application possibilities, especially in linguistically rich countries like India. Imagine real-time translation apps that function seamlessly without a data connection, empowering travelers and local communities alike. Consider educational tools that can adapt to regional dialects and learning styles, or customer service applications that can provide support in local languages even in remote areas. The potential for offline-friendly AI to bridge digital divides and enhance accessibility is immense. Cohere’s software architecture is specifically designed to support this on-device usage, requiring significantly less computational power than many comparable models currently available.
The accessibility of Tiny Aya is further bolstered by its availability on prominent AI platforms. Developers can readily access and download these models from Hugging Face, a leading hub for sharing and collaborating on AI models, as well as Kaggle and Ollama for local deployment. This widespread availability on developer-friendly platforms democratizes access to state-of-the-art AI, enabling a global community of innovators to experiment and build. Beyond the models themselves, Cohere is also releasing the training and evaluation datasets used for Tiny Aya on Hugging Face, providing transparency and facilitating further research into multilingual model development. The company also plans to publish a technical report detailing its innovative training methodology, offering valuable insights to the AI research community.
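For local experimentation, the usual pull-and-run workflow applies. The commands below are a sketch of that workflow with Ollama, which can pull GGUF-format models directly from Hugging Face; the model tag shown is hypothetical, so the actual repository name should be confirmed on Hugging Face or in Ollama’s model library before use.

```shell
# Hypothetical tag -- verify the real repository name on Hugging Face / Ollama.
ollama pull hf.co/CohereLabs/tiny-aya-global

# Once the weights are downloaded, inference runs entirely on-device,
# with no internet connection required.
ollama run hf.co/CohereLabs/tiny-aya-global "Translate into Hindi: Good morning"
```

The same weights can alternatively be downloaded from Hugging Face or Kaggle and loaded with any runtime that supports the model’s format.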
The broader context of Cohere’s growth and ambition adds another layer to this announcement. The company’s CEO, Aidan Gomez, has previously expressed plans for an initial public offering (IPO) in the near future. Recent reports indicate that Cohere concluded 2025 with robust financial performance, achieving $240 million in annual recurring revenue. This impressive growth trajectory, marked by consistent 50% quarter-over-quarter expansion throughout the year, positions Cohere as a significant player in the enterprise AI market and strengthens its case for a successful public debut. The launch of Tiny Aya, with its focus on accessibility and diverse applications, can be seen as a strategic move to solidify its market presence and demonstrate its commitment to innovation beyond large-scale enterprise solutions.
The implications of Tiny Aya extend beyond technical specifications and financial projections. By championing open-weight models and prioritizing multilingual support, Cohere is contributing to a more inclusive and equitable AI landscape. The ability for anyone to access, modify, and deploy these models can accelerate innovation in areas previously underserved by AI, fostering local entrepreneurship and addressing specific societal needs. As AI continues to permeate every aspect of our lives, the development of models that are both powerful and accessible, that respect linguistic diversity and cultural nuances, will be crucial for ensuring that its benefits are shared broadly. The Tiny Aya family represents a significant step in that direction, promising to empower a new wave of AI-driven solutions across the globe.

