Qualtrics Panels

Panels Project

Project Stages

  • 1. Pre-Launch

  • Pre-Launch is the stage before fielding begins. It gives us an opportunity to confirm (one last time) the details for the project, program the needed logic, and set up the project on our end to get it ready for fielding.

      The Pre-Launch stage consists of four main steps:

    1. Your Account Executive will loop you in with your Project Manager via email (see FAQs to understand why you’re talking with a few people from our team).

    2. Your Project Manager will email you to confirm project details – we know that you have likely already confirmed these details with your panels rep, but it ensures that the transition from the rep to the Project Manager is seamless and that no details are lost in the process. We appreciate your patience and attention to detail during this step.

    3. Once you confirm the project details, your Project Manager will set up your survey. View FAQs to see what your project manager will do/change during this step.

    4. Once the survey is set up, the Project Manager will test the survey and then open for the Soft Launch. We want to make sure that everything on our end works and that participants will be correctly routed, so we’ll run a few test responses before launching.

  • 2. Soft Launch

  • Soft Launch is the stage where we collect about 10% of the total sample size, though sometimes we will collect fewer (around 50 or 100) if you have a large sample size. Once this is complete, your Project Manager will pause sampling and will send you the data to review. This gives you an opportunity to identify any potential discrepancies or issues before we go live for the Full Launch.

      We use a Soft Launch for a few key reasons. It helps you do the following:

    1. Catch Potential Errors: pausing after collecting a subset of your responses gives you a chance to review the data and catch any errors on your end related to question setup, display logic, etc. See this checklist to get some ideas of what to look for.

    2. Finalize the Survey: this step allows you to make small changes before collecting the rest of the results. Perhaps you want to add some display logic or reword a few things. One useful check: if an open-ended question is receiving a handful of similar responses, consider turning it into a multiple-choice question (this helps improve overall data quality).

    3. Verify Data Quality: the Soft Launch is also a great time to ensure the data quality is what you’re looking for. If the data quality is questionable in any way, let us know! We can then look into options to improve data quality moving forward.

      We also use the Soft Launch for ourselves. At the most basic level, we want to make sure we’re doing our job correctly. We will check to ensure that we’ve programmed the survey without any errors: that we’re screening people out who should be screened out and allowing in the correct people. We also want to make sure that the quotas (if you have any) are incrementing correctly. The Soft Launch also gives us a chance to review the incidence rate and length of the survey so that we can take any necessary steps in terms of timeline, feasibility, costs, etc.

      Here are the main steps of the Soft Launch:

    1. Launch: Upon completing the Pre-Launch stage, your Project Manager will begin the Soft Launch and will let you know the target number or percentage of responses that we’re aiming for.

    2. Launch Completion: Within about 24 business hours, your Project Manager will typically email you the Soft Launch data with instructions on what to look for. Occasional delays may cause the Soft Launch data to arrive later.

    3. Review: Using the data from your Project Manager or the data in Qualtrics, you can review the results and discuss any changes or concerns with your Project Manager. If you have any questions, refer to the FAQs for common issues after the Soft Launch.

    4. Approval: when you have reviewed the data and are ready to move forward, let your Project Manager know, and he or she will proceed with the Full Launch with your approval.

  • 3. Full Launch

  • Full Launch is the stage where we collect the rest of the sample. Soft Launch responses count toward your target sample size, so at this stage we resume collection for the remaining completes needed to hit that target. This stage should require minimal work from you – we keep you updated and proactively keep the study moving to make things on your end as smooth and easy as possible.

      Here are the steps of the Full Launch:

    1. Your Project Manager will confirm that the survey has moved to Full Launch.

    2. Your Project Manager will send you a status update email each business day (unless he/she notes otherwise).

    3. Once data collection is complete, your Project Manager will let you know that the study is complete, including sending you the data.

  • 4. Review & Approval

  • The Review & Approval stage is after the data has been collected and sent to you. We allow for a 7-day period following collection for you to review the data. In the unlikely case that you find a problem with the data, please let us know within this 7-day period so we can quickly replace any necessary data. After 7 days, the data is considered approved and the participants receive compensation.

      Here are the steps involved in the Review and Approval stage:

    1. Your Project Manager will discontinue sending sample for the survey.

    2. You have 7 days to review the data.

    3. After 7 days, the data is considered approved and participants are compensated.

    4. What should you look for during the 7-day period? The best thing to check is that you have enough valid participants for your analysis needs. Because online data collection is self-completed, participants always have the opportunity to disengage or provide dishonest responses. We try to weed these out by using industry standards for data quality and by enforcing high quality standards across surveys, but if you do find any invalid respondents, let us know! If discarding the low-quality respondents leaves you short of your target, we'll be happy to discuss options for removal and replacement.

FAQs

People Involved & Coverage Hours

  • Why have so many different people reached out to me? I prefer to work with one point of contact.

  • What will the Account Executive or Operations Specialist help with?

  • What will the Project Manager or Support Project Manager help with?

  • What coverage can I expect while my study is in field?

  • What do I do if I need urgent help outside of normal business hours and/or coverage hours?

Fielding Process

  • My Account Executive looped me in with a Project Manager. Is my study live?

  • What process does Qualtrics use? Now that a project manager has been assigned, what can I expect?

  • Is soft launch data test data or does it count toward my total?

  • What am I supposed to look for in the data you send me after the soft launch?

  • How long should I expect my study to be in field?

  • I want to sample incrementally – 100 at a time for the 1000 completes. Can you do this?

  • Why did you collect more responses than I requested?

Survey Setup

  • What is the project manager going to change?

  • What is your QC/QA process?

  • Can I unlock my survey to make changes?

  • My study (copied from a past project, live in field, or completed) won’t let me test. It just kicks me out! Why is this happening?

Sampling

  • What are your sampling methods?

  • Do you deploy invitations that are nationally representative?

  • Why does the soft launch seem to not have the quotas implemented that I requested?

  • My study is complete and I need some extra information – response rate, number of invitations, size of the panel, a generic example invite, respondent incentives, etc. Can I get that?

Data & Analysis

  • What are partials? Why would I record them? Why not? What’s an annual license?

  • I recorded partial responses and am not sure how to easily identify the valid completes versus the partial data. Can you help?

  • Why do the values in the Excel file not make sense? How do I read them & why are some not intuitive?

  • Will you be scrubbing my data for me?

  • Can I delete the invalid responses that I found?

  • My study is complete – can you help me with reporting and analysis?

  • Can you download the data for me? How can I do that myself? What formats are available?

Best Practices

We discuss each of these in more detail above, but here are some recommended best practices for your study:

    Adding approved quality checks

  • These can help improve data quality by automatically screening out those who aren't paying attention or aren't taking your survey seriously. Recent research shows that previously recommended red-herring attention checks (e.g., "Please select Strongly Disagree for this item") increase social desirability bias among respondents and are not a good measure of attention. We instead recommend using a commitment question at the start of the study, requiring participants to commit to providing their best and most honest answers before letting them proceed.
    Selecting Forced Response

  • Forcing responses ensures you get a response from each person on each question. We recommend adding a “Prefer not to answer” or “Don’t know” option on questions where respondents may not have an answer or may not feel comfortable answering. This way, you can still ensure each participant answers all questions that are displayed.
    Carefully reviewing soft launch data

  • The purpose of the soft launch is to ensure everything’s working properly and without issues! Take this time to carefully review your setup, data quality, responses, etc. so that we can make changes before the full data collection is complete.
    Sharing pertinent details with your Account Executive

  • If your survey contains specific items, we may need to change pricing/feasibility/timeline, so your project will run much more smoothly if we know about these items as early as possible. If you collect any PII (personally identifiable information, such as name, phone number, email address, location, etc.), we will need to get extra approval prior to launch. Similarly, if your study contains sensitive questions or topics, such as drug use, abuse, etc., we will need to discuss them prior to launch, as these topics may limit our sample pool. If you have anything unusual or sensitive in your survey, just let us know! And if you are unsure whether you need to let us know what's in your survey prior to launch, err on the side of letting us know, and we'll work with you to see if it affects the study in any way on our end.

Top Pitfalls

Studies that run into issues tend to share a few common traits, and we want to help you avoid them so you get the best data possible! Here are a few things to avoid:

    Extensive open-ended questions

  • Most of our sample sources are focused on quantitative research and provide great results for that. However, when a survey contains a large number of open-ended questions, data quality usually falls. We do have sample sources that specialize in qualitative research, but we need to charge more for those – talk with your Account Executive if you hope to focus on qualitative research.
    Too many or overly long matrix tables

  • Matrix tables are a great way to quickly collect opinions. However, they can be repetitive, especially when they get long. At these times, we see respondent fatigue kick in and notice that even valid respondents will get lazy or bored and will start straightlining, marking patterns, etc. This can be prevented by using varying question types, splitting up matrix tables into smaller ones, inserting images between matrix tables, etc.
    Nested quotas

  • An example of nested quotas would be age quotas broken down by gender (i.e., males 18-24, females 18-24, etc.). Nesting quota groups makes the respondent groups we need very specific and slows down data collection. Nested quotas often decrease feasibility and nearly always extend field time. We recommend against them for these reasons but can add them with some expectations in place. Reach out to your Account Executive or Project Manager for more information.
    Not carefully confirming the specs sent by your Project Manager

  • While we try to get all the correct details from you on the sales end before assigning a Project Manager, we do sometimes have inaccurate information. The Project Managers are responsible for ensuring your study works as needed in terms of screeners and quotas and then sending your survey out to the correct participants, so please carefully review what they send over and mention if anything is missing or incorrect. Not mentioning items that should be included at this time may result in added costs on your end or participants counting toward your total who should not have been included in the sample.
    Not reviewing the soft launch data

  • The soft launch is the best time to catch any potential issues before it's too late – please carefully review the soft launch data your Project Manager sends over, checking for programming issues, data quality issues, questions without any responses, etc. Feel free to review the Soft Launch section at the top of this page for more details on what to look for. It's much easier to solve potential issues (and hopefully at little or no cost to you!) after the soft launch than at the end of the full study.