Google published three new AI experiments on Tuesday to help people learn to speak a new language in a more personalized way. Although the experiments are still in the early stages, it's possible the company is looking to take on Duolingo with the help of Gemini, Google's multimodal large language model.
The first experiment helps you quickly learn the specific phrases you need in the moment, while the second helps you sound less formal and more like a local.
The third experiment lets you use your camera to learn new words based on your surroundings.
Google notes that one of the most frustrating parts of learning a new language is when you find yourself in a situation where you need a specific phrase you haven't learned yet.
With the new "Tiny Lesson" experiment, you can describe a situation, such as "finding a lost passport," to receive vocabulary and grammar tips tailored to the context. You can also get suggested responses like "I don't know where I lost it" or "I want to report it to the police."
The next experiment, "Slang Hang," aims to help people sound less like a textbook when speaking a new language. Google says that when you learn a new language, you often learn to speak formally, which is why it's experimenting with a way to teach people to speak more colloquially and with local slang.
With this feature, you can generate a realistic conversation between native speakers and see how the dialogue unfolds one message at a time. For example, you can learn through a conversation where a street vendor chats with a customer, or a scenario where two long-lost friends run into each other on the subway. You can hover over terms you don't know to learn what they mean and how they're used.
Google says the experiment occasionally misuses certain slang and sometimes makes up words, so users should cross-reference them with reliable sources.
The third experiment, "Word Cam," lets you take a photo of your surroundings, after which Gemini detects objects and labels them in the language you're learning. The feature also gives you additional words you can use to describe those objects.
Google says that sometimes you just need words for the things in front of you, because it can show you how much you don't yet know. For example, you may know the word for "window" but not the word for "blinds."
The company notes that the idea behind these experiments is to see how AI can be used to make independent language learning more personalized.
The new experiments support the following languages: Arabic, Chinese (China, Hong Kong, Taiwan), English (Australia, United Kingdom, United States), French (Canada, France), German, Greek, Hindi, Italian, Japanese, Korean, Portuguese (Brazil, Portugal), Russian, and Spanish (Latin America, Spain). The tools are accessible via Google Labs.