
Google launches AI tools for practicing languages through personalized lessons | TechCrunch


Google on Tuesday is releasing three new AI experiments aimed at helping people learn to speak a new language in a more personalized way. While the experiments are still in the early stages, it’s possible that the company is looking to take on Duolingo with the help of Gemini, Google’s multimodal large language model.

The first experiment helps you quickly learn specific phrases you need in the moment, while the second experiment helps you sound less formal and more like a local.

The third experiment allows you to use your camera to learn new words based on your surroundings.


Google notes that one of the most frustrating parts of learning a new language is when you find yourself in a situation where you need a specific phrase that you haven’t learned yet.

With the new “Tiny Lesson” experiment, you can describe a situation, such as “finding a lost passport,” to receive vocabulary and grammar tips tailored to the context. You can also get suggestions for responses like “I don’t know where I lost it” or “I want to report it to the police.”

The next experiment, “Slang Hang,” wants to help people sound less like a textbook when speaking a new language. Google says that when you learn a new language, you often learn to speak formally, which is why it’s experimenting with a way to teach people to speak more colloquially, and with local slang.


With this feature, you can generate a realistic conversation between native speakers and see how the dialogue unfolds one message at a time. For example, you can learn through a conversation where a street vendor is chatting with a customer, or one where two long-lost friends reunite on the subway. You can hover over terms you’re not familiar with to learn what they mean and how they’re used.

Google says that the experiment occasionally misuses certain slang and sometimes makes up words, so users need to cross-reference them with reliable sources.


The third experiment, “Word Cam,” lets you snap a photo of your surroundings, after which Gemini will detect objects and label them in the language you’re learning. The feature also gives you additional words that you can use to describe the objects.

Google says that sometimes you just need the words for the things right in front of you, and the feature can show you how much you don’t know yet. For instance, you may know the word for “window,” but you might not know the word for “blinds.”

The company notes that the idea behind these experiments is to see how AI can be used to make independent learning more dynamic and personalized.

The new experiments support the following languages: Arabic, Chinese (China, Hong Kong, Taiwan), English (Australia, U.K., U.S.), French (Canada, France), German, Greek, Hebrew, Hindi, Italian, Japanese, Korean, Portuguese (Brazil, Portugal), Russian, Spanish (Latin America, Spain), and Turkish. The tools can be accessed via Google Labs.

Aisha is a consumer news reporter at TechCrunch. Prior to joining the publication in 2021, she was a telecom reporter at MobileSyrup. Aisha holds an honours bachelor’s degree from the University of Toronto and a master’s degree in journalism from Western University.
