While many existing AI language models are deployed for business or research, Google’s 1,000 languages initiative aims to improve AI language models as a whole across diverse use cases.
What is the plan?
- Google is developing a model that can support the 1,000 most spoken languages of the world.
- The company claims the model will have the “largest language coverage” of any existing speech model.
- Google plans to build a single, massive model spanning all 1,000 languages so that widely used and rarer languages can coexist, interact, and grow together.
Purpose of the new language model –
- Google does not have a specific use case for the language model.
- However, the end goal is for Google users to experience better search results, more accurate auto-generated captions, more natural online translation, and faster calculations.
- The project is under development and researchers are now collecting linguistic data to train the model.
AI language models –
Through AI language models, companies aim to automate manual processes, generate new insights from existing data, and reduce reliance on human labour in fields such as translation, customer service, and computation; website chatbots are a familiar example.
Other language models –
- AI research firm OpenAI built GPT-3 (Generative Pre-trained Transformer 3), a set of models named Davinci, Curie, Babbage, and Ada that can generate “natural” text responses and perform tasks such as classification, simple summaries, address correction, question answering, and more.
- Meta is also working on AI-based language translation.
- Facebook AI claims that M2M-100 is the first multilingual translation model to translate directly between any pair of 100 languages without using English as the default intermediate language. The model is also open source.
- Meta is also extending AI-based translation beyond text to primarily oral languages such as Hokkien.
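A quick calculation shows why translating directly between 100 languages, rather than pivoting through English, is notable (a sketch; the figure of 100 languages comes from the model's name, and the pair counts are simple arithmetic, not figures from Meta):

```python
# M2M-100 claims direct translation between any pair of its 100 languages.
# A pivot model instead routes every translation through English.
languages = 100

# Direct many-to-many: every ordered (source, target) pair, source != target.
direct_pairs = languages * (languages - 1)  # 9,900 translation directions

# English-pivot: each of the 99 non-English languages pairs only with
# English, in both directions.
pivot_pairs = 2 * (languages - 1)           # 198 translation directions

print(direct_pairs, pivot_pairs)  # 9900 198
```

Covering all 9,900 directions in one model avoids the quality loss of chaining two translations (source to English, then English to target) through a pivot.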