Validity is the degree to which the research truly measures what it was intended to measure, or how truthful the research results are. In other words, does the research instrument allow you to hit the "bull's-eye" of your research object? Researchers generally determine validity by asking a series of questions, and they often look for the answers in the research of others.
Starting with the research question itself, you need to ask yourself whether you can actually answer the question you have posed with the research instrument selected. For instance, if you want to determine the profile of Canadian ecotourists, but the database that you are using only asked questions about certain activities, you may have a problem with the face or content validity of the database for your purpose.
Similarly, if you have developed a questionnaire, it is a good idea to pre-test your instrument. You might first ask a number of people who know little about the subject matter whether the questions are clearly worded and easily understood (whether they know the answers or not). You may also look to other research to see what it has found with respect to question wording, or which elements need to be included in order to address the specific aspect of your research. This is particularly important when measuring more subjective concepts such as attitudes and motivations. Sometimes you may want to ask the same question in different ways, or repeat it at a later stage in the questionnaire, to test for consistency in the response. This is done to confirm criterion validity. All of these approaches will increase the validity of your research instrument.
Probing for attitudes usually requires a series of questions that are similar, but not the same. This battery of questions should be answered consistently by the respondent. If it is, the scale items are said to have high internal consistency.
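A common statistic for the internal consistency of such a battery is Cronbach's alpha, which compares the variance of the individual items with the variance of the summed scale. A minimal sketch with hypothetical 5-point responses from six respondents to three similar attitude items (data are invented for illustration):

```python
from statistics import pvariance  # population variance

def cronbach_alpha(items):
    """Cronbach's alpha for a battery of scale items.

    items: list of k lists, each holding one item's scores
    across the same n respondents.
    """
    k = len(items)
    item_vars = sum(pvariance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5-point Likert responses from six respondents
# to three similar attitude items:
q1 = [5, 4, 4, 2, 3, 5]
q2 = [4, 4, 5, 2, 3, 4]
q3 = [5, 3, 4, 1, 3, 5]
print(round(cronbach_alpha([q1, q2, q3]), 2))  # → 0.93
```

Values of alpha above roughly 0.7 are conventionally taken to indicate that the items hang together well enough to be summed into a single scale.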
What about the sample itself? Is it truly representative of the population chosen? If a certain type of respondent was not captured, even though they may have been contacted, then your research instrument lacks the necessary validity. In a door-to-door interview, for instance, perhaps the working population is severely underrepresented because of the times at which people were contacted. Or perhaps those in upper income categories, who are more likely to live in condominiums with security, could not be reached. This may lead to poor external validity, since the study results are likely to be biased and not applicable in a wider sense.
Most field research has relatively poor external validity, since the researcher can rarely be sure that no extraneous factors were at play that influenced the study's outcomes. Only in experimental settings can variables be isolated sufficiently to test their impact on a single dependent variable.
Although an instrument's validity presupposes its reliability, the reverse is not always true. Indeed, a research instrument can be extremely consistent in the answers it provides, yet those answers may be wrong for the objective the study sets out to attain.