Automated tagging and AI interviews in any language

Today, we've shipped a heap of updates that make it easier to analyse your responses quickly and let Wondering ask questions in whatever language and format you want. Let's jump in:

Quantify your results with automated tagging 📊

From today, Wondering automatically applies tags to each theme it pulls out of your AI-studies, so you can see the data that supports each theme and quickly uncover and quantify the different topics within it.

Tags are created automatically and show you how often a topic has been mentioned across the interviews in your study. Each tag is also linked to quotes and responses from those interviews, so you can easily dig deeper into each participant who spoke about a topic.
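If you like to think in data shapes, here's a rough sketch of the idea (the type and field names below are our illustration, not Wondering's actual data model): each tag carries a mention count and links back to the quotes behind it.

```typescript
// Illustrative sketch only; these types and field names are assumptions,
// not Wondering's real data model.
interface Quote {
  participantId: string;
  responseText: string;
}

interface Tag {
  topic: string;          // the topic this tag captures, e.g. "pricing confusion"
  mentionCount: number;   // how often the topic came up across interviews
  quotes: Quote[];        // linked quotes, so each mention is traceable
}

interface Theme {
  title: string;
  tags: Tag[];            // tags that quantify the topics within this theme
}

// Quantifying a topic is then just a lookup:
function mentionsOf(theme: Theme, topic: string): number {
  return theme.tags.find((t) => t.topic === topic)?.mentionCount ?? 0;
}
```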

We believe this will help you get actionable insights from your research without having to analyse each message yourself.

Interview your participants in any language 🌍

Language barriers can be especially challenging for researchers working across different markets, or interviewing participants whose first language isn't English.

Good research is accessible and representative, so we're really excited to have added support for creating and conducting AI-studies in any language. To create a study in another language, simply write each study block in the language you want to conduct the study in, and the AI will automatically ask any follow-up questions in that language. Most major languages are supported.
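As an illustration of what this means in practice (the structure below is purely hypothetical, not Wondering's real study format), a study whose blocks are written in Spanish will be conducted, follow-ups included, in Spanish:

```typescript
// Hypothetical sketch; the structure and field names are assumptions.
// The point: write each block in the language you want the interview
// conducted in, and the AI follows suit.
const study = {
  title: "Estudio de usabilidad",
  blocks: [
    { type: "explore", question: "¿Qué te pareció el proceso de registro?" },
    { type: "explore", question: "¿Qué mejorarías de la aplicación?" },
  ],
};
// Because the blocks are written in Spanish, the AI's follow-up
// questions will be asked in Spanish as well.
```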

Recruit up to 400 participants per study 🔍

AI-powered studies can now collect up to 400 responses per study, up from the previous cap of 50. We'll continue raising this cap over the coming months so you can automatically analyse even more conversations, at any scale.

More control of Explore blocks 🎮

Thank you to everyone who's been sharing feedback on how we can improve Explore blocks. A consistent theme has been that you want more control over how the AI-generated questions are written.

First, you can now fix the opening question asked in any Explore block. This lets you consistently start each conversation with a specific question, and gives you control over how certain domain-specific terminology is used.

Second, you can now choose whether voice or text answers are presented as the default option for your Explore blocks. This gives you more granular control over how each question in your study is presented, and lets you nudge participants to either record their answers aloud or type them.
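Putting the two controls together, here's a hypothetical sketch of an Explore block configuration (the option names below are ours, not Wondering's actual settings):

```typescript
// Hypothetical configuration sketch; option names are illustrative.
// It shows the two new controls: a fixed opening question and a
// default answer mode.
const exploreBlock = {
  type: "explore",
  // Fixed opening question: every conversation starts exactly here,
  // using your own domain-specific terminology.
  openingQuestion: "How do you currently manage your sprint backlog?",
  // Default answer mode: nudge participants towards voice or text.
  defaultAnswerMode: "voice" as "voice" | "text",
};
```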

That's all for now - we're excited to see how you use these new features to scale your research. Happy researching!

Try Out the New Features Now