Post by account_disabled on Mar 11, 2024 8:35:32 GMT
Google recently introduced Multisearch: a new search mode that combines a visual component (through Lens) with a textual or vocal component. What scenarios will open up? What will this mean for SEO?

Alessio Pomaro · 22 Apr 2022 · 4 min read
Google's Multisearch: multimodality is the future of human-machine interfaces

Google, through Lens, has already shown us the potential of visual search. For example, I once didn't know the model of a pair of running shoes I was interested in: it was enough to frame them with Lens to obtain, in a few seconds, not only information on the model, but also promotions, reviews, the shops where they can be bought, and much more.
Recently, however, a completely new search mode called "multisearch" has been introduced which, starting from the image framed with Lens, combines visual search with textual or vocal search. Through multisearch you can go beyond the search field and ask questions about what you see. The following image shows a practical example of a search carried out with this new method, and a few simple steps make it easy to understand how it works: using Google Lens, a user frames a dress; the interface then allows a textual question (search query) referring to that context to be added; the user adds a color; and the results are dresses identical or similar to the one framed, in the specified color.
An example of using Google's multisearch approach (image + text). Would it have been possible to obtain the same result through a normal search or a visual search alone? I would say no. Other examples of the usefulness of combining the two modes are the following: we can photograph a room, for example a living room, and add "coffee table" to find a table in the same style that matches it perfectly; we can photograph a plant we don't know and add "how to grow it" to obtain details on the type of plant and detailed instructions on its cultivation.
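To make the idea of combining an image with a text refinement more concrete, here is a minimal, purely illustrative sketch of multimodal retrieval: an image embedding and a text embedding are fused into a single query vector and used to rank a catalogue by similarity. This is not Google's implementation; the encoders, the catalogue, and the fusion by simple vector addition are hypothetical placeholders standing in for a real multimodal model.

import numpy as np

# Hypothetical placeholders: in a real system these embeddings would come from
# a multimodal (CLIP-style) encoder; here they are random stubs of fixed size.
DIM = 512
rng = np.random.default_rng(0)

def embed_image(image_path: str) -> np.ndarray:
    """Stand-in for an image encoder: returns a unit-length embedding."""
    v = rng.normal(size=DIM)
    return v / np.linalg.norm(v)

def embed_text(query: str) -> np.ndarray:
    """Stand-in for a text encoder: returns a unit-length embedding."""
    v = rng.normal(size=DIM)
    return v / np.linalg.norm(v)

def multisearch(image_path: str, refinement: str, catalog: dict, top_k: int = 3):
    """Fuse the image and text embeddings, then rank catalogue items by cosine similarity."""
    query_vec = embed_image(image_path) + embed_text(refinement)
    query_vec /= np.linalg.norm(query_vec)
    scores = {name: float(vec @ query_vec) for name, vec in catalog.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# Hypothetical catalogue of pre-computed, normalised product embeddings.
catalog = {}
for i in range(10):
    v = rng.normal(size=DIM)
    catalog[f"dress_{i}"] = v / np.linalg.norm(v)

print(multisearch("framed_dress.jpg", "green", catalog))

The ranking quality would of course depend entirely on the quality of the multimodal encoder; the point of the sketch is simply that the image fixes the visual context while the text refines it, which is exactly what a query like "framed dress + green" does in multisearch.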