Developing A Research Application
To Federal Sources of Support
Gontran Lamberty, Dr.P.H.
Director, MCH Research Program, Maternal and Child Health Bureau
To some, developing a research application is an art that few can master. For others, it is like baking a cake: If you know the recipe, then you can do it as well as anyone else. There are elements of truth in both of these assertions. Developing a research application is, above all, a demanding, sometimes daunting task. It requires a willingness to spend time doing homework prior to writing, and a willingness to acquire new knowledge and skills on one’s own or through cooperative ventures with colleagues of other disciplines. It also requires the motivation to compete and, not least, the ability to withstand criticism and cope with rejection. Finally, it requires practice, practice, practice.
For the uninitiated, this article offers a primer on what is involved in developing a winning research application. For experienced applicants, this information may be old hat. For both experienced and new applicants, however, this article provides specific information about the MCH Research Program of the Maternal and Child Health Bureau. We hope this article will motivate prospective applicants, whether experienced or not, to both plunge into the task of research application writing and apply to the MCH Research Program.
Sources of Support
One of the first steps in preparing to apply for research support is identifying the possible sources of that support. Generally speaking, there are two major sources: federal agencies and private foundations. There is overlap in subject matter and priorities within as well as between these two funding sources. A good funding strategy is to capitalize on this overlap by submitting an application to more than one funding organization and preparing the ground for sharing support between funding sources if the application is recommended for approval by more than one review group. Sharing support is a necessity if the amount of funding required is in excess of what a single funding agency can ordinarily afford. The degree of success in securing shared support will depend on how important and/or topically current the proposed research is at the time the application is reviewed. Success also depends on how good a salesperson the grant writer is. Both are important components of what is called "grantsmanship."
There are five main sources of federal support for MCH research. These are (1) MCHB; (2) the National Institute of Child Health and Human Development; (3) the National Center for Nursing Research; (4) the Agency for Health Care Policy and Research; and (5) NIMH. The Division of Research Grants (DRG) of the National Institutes of Health (NIH) is the central intake unit for all research applications addressed to the federal units noted above, except for MCHB. Because MCHB has a review process independent from that of NIH, applicants may submit the same research proposal to both MCHB and any one of the federal agencies covered by the NIH central intake unit. However, the same application cannot be submitted to more than one agency served by the NIH intake unit. Applicants may, however, specify to the DRG which institute, and which review group within that institute, they want their application to be assigned to. Otherwise, DRG will assign the application to the institute and review group it deems most appropriate. Any application recommended for approval by both MCHB and an NIH review group can only be funded in full by one or the other; however, shared funding is possible through an interagency transfer of funds.
Private foundations vary in how they handle the application process. As a rule, the process is less formal than in the federal agencies, and thus approval is more likely to be influenced by discussions or negotiations between the prospective applicant and foundation officials. Often, private foundations have narrow bands of interest that restrict the nature of the research they fund.
What Constitutes a Winning Proposal
A research application has an increased chance of approval if it contains the following: (1) an important and/or original research question or topic; (2) a well-written, reasonably detailed, and technically appropriate plan for conducting the research; and (3) a realistic budget.
What is meant by "important" or "original"? "Important" means that the research question is a topic of current interest (e.g., pediatric AIDS), that it promises to expand the scientific knowledge base in some significant way, or that the expected findings could be readily applied toward amelioration of an existing problem. "Original" means that the research represents a considerable departure from the norm or that it gives a different twist to something already part of the existing knowledge base. An exception to the rule of originality is the purposive validation of prior research.
How well a research application reads depends not only on organization but also on the choice of words, the amount of detail provided, and the degree to which the different ideas and sections in the written protocol flow into a coherent whole. Well-written applications do not evolve effortlessly, or overnight. They require the careful nurturing of a multidisciplinary group of professionals over several months. Additional months of rewriting and careful review by critical coworkers are essential.
Technical appropriateness refers to a match or fit between the nature of the research problem as stated, the circumstances under which the research will be done, and the most efficient study design, measurement, and data collection approaches that are possible. Components of what is meant by technical appropriateness include knowing when to sacrifice reliability and validity for the sake of human subject considerations or for the sake of staying within the amount of support that can be obtained. Such compromises, if communicated and argued logically and forcefully in the application document, are well received by reviewers even if the resulting technical quality of the research might be less than optimal.
The term "efficiency" refers to a project’s ability to answer the research questions proposed at an acceptable level of scientific rigor and at the least possible cost. Efficiency is entering more and more into the review process as a criterion for approval or disapproval. Applicants increasingly must justify the efficiency of their study design and research approaches. This required that applicants state in the application the designs and approaches that have been considered and discarded as a means of justifying their selection.
A realistic budget is one that requests slightly more funding than may be needed to do the task at hand and one that stays within the bounds of affordability of the support source selected. Requesting a slightly higher budget is warranted since it is difficult to estimate the real costs of a research operation with precision. This is not a license to inflate costs in order to obtain fancy equipment or pay for departmental training costs. Inflated budgets are quickly recognized as such by study section reviewers and have a negative effect on the entire review process.
Having provided a general sketch of what constitutes a good application, let us turn now to the specifics of developing an application that has a reasonable chance of being both recommended for approval and funded. We will approach this indirectly by describing the reasons why most applications get rejected by the Maternal and Child Health Research Review Committee.
Reasons for Disapproval
The percentage of all new applications reviewed and rejected in federal research supporting agencies such as NIH and MCHB ranges from 55 to 85 percent, depending on the program and type of research. As a rule, new applications reviewed by the NIH study sections are recommended for approval at a much higher rate than those reviewed by MCHB. (For the past three years, MCHB’s approval rate has been about 15 percent.) Does this indicate that MCHB’s review process is more demanding than NIH’s? Not necessarily.
This disparity is most likely the result of differing volumes of applications and other factors influencing the review process. The practical result is that although the two agencies differ in the percentage of new applications recommended for approval, the percentage of all new applications actually funded is approximately the same.
Many reasons are conjectured by applicants for the relatively low rate of success in receiving research support. Among these are: (1) unrealistic standards of excellence on the part of review panels; (2) favoritism toward established investigators and/or acquaintances; and (3) lack of research experience and review know-how on the part of reviewers. While in some limited instances any of these reasons can enter into the disapproval equation, the fact is that most research applications are rejected because of faulty conceptualization and/or methodological flaws.
What is conceptualization? Essentially, it is a process of explication and generalization that takes seemingly unconnected theoretical and empirical facts and transforms them into a coherent whole and a total rationale for justifying the proposed research. Conceptualization is said to have occurred when the following three conditions have been met: (1) The cause-and-effect assumptions underlying the purpose of the investigation have been stated; (2) the major concepts to be used have been explicated; and (3) the hypotheses relevant to the research questions have been specified.
Conceptualization is an important activity in a research undertaking, if not the most important one. Inadequate conceptualization is a common flaw in disapproved applications submitted to the MCH Research Program. The typical approach found in these disapproved applications is to state the research problem in general terms and then launch into the specification of variables, study design, and plans for data analysis, with no effort to place the proposed research in a wider theoretical and/or empirical framework. Consequently, reviewers are at a loss to determine not only the significance of the intended research but also the appropriateness of much of what is proposed. Faulty conceptualization is at the root of such methodological flaws as data collection overkill, disregard for validity and reliability considerations, and inappropriate use of statistical procedures.
In research applications, the conceptualization component is woven through such sections as statement of the problem, review of the literature, hypothesis and specifications of variables, and explanation of concepts. Note that covering these subjects according to instructions does not assure adequate conceptualization. The investigator must weave all of these elements into a coherent whole with economy and simplicity of assumptions. This is what members of study sections call tight conceptualization. Nothing sells a research application better than a tight conceptualization. Methodological deficiencies are apt to be given less weight when a tight and lucid conceptualization of the research problem has preceded the plans for its execution.
How does one develop a tight or parsimonious conceptualization of a research problem? Simply put, through total immersion in the nuances of the research problem. The first step is to reflect on the research problem, then conduct an exhaustive review of the empirical and theoretical literatures. This is also the time to begin informal consultation by diplomatically eliciting information from learned colleagues or more formally soliciting expert opinions. Total immersion over a sufficient length of time leads to clarity of thought and the logical exposition of ideas and assumptions about the research problem. A fast-approaching deadline or the need to generate funds to cover salary expenses may then provide the impetus to commit to paper the conceptualization that has been developed.
The research plan component of an application is merely a declaration of how one will execute the research in the field or laboratory situation. The technical component calls for a knowledge of study design, measurement approaches, and sampling and statistical techniques. Most applications get rejected for one or more technical reasons. The two most common are methodological weaknesses and lack of detail about essential aspects of the research operation. Since methodological weaknesses frequently overlap with lack of detail, it is justifiable to say that research methodology constitutes one of the most important barriers to the successful navigation of the review process.
What methodological concerns are most frequently raised by reviewers of research applications? Often, the methodological flaws seem to be simple acts of omission or failure to explicate on the part of the investigator. Other times, they appear to reflect lack of knowledge about the technical nuances of doing research. In the first instance, the common failing seems to be an expectation on the part of the investigators that the reviewers will assume that what needs to be done will be done, even if not stated. It is important to note that errors of omission and failures to explicate frequently occur despite the investigator having been given very detailed instruction on what to include in the research plan and at what level of specificity.
How can one develop an appropriate and methodologically sophisticated research plan that does justice to the complexity of the research problem at hand and becomes a selling point in the application process? One essential prerequisite is achieving the tight conceptualization of the research problem discussed earlier. The technical requirements of a research operation largely flow from the way in which the research problem was conceptualized. Efficient use of research experts such as biostatisticians, psychometricians, and epidemiologists early in the formulation of the research problem helps considerably. An alternative approach is to make the design of a research project an interdisciplinary team effort from the start. This alternative requires a lot of give and take, particularly at the formulation stage, and one professional, usually the principal investigator of record, must assume a leadership role in putting the pieces together. If the process is not carefully orchestrated and executed, the result is a disjointed product quickly recognized by reviewers as a project developed by committee that is likely to flounder in the execution stage.
The number of variables to be included in an investigation and the amount of data to be collected are, as a rule, a function of how well the research problem has been conceptualized. If the overall conceptualization is tight, the number of variables will be relatively small and redundancy in measurement will be minimal; if it is loose, data collection will be extensive, redundant, and without an apparent focus.
Most applications received by the MCH Research Program suffer from some degree of data collection overkill. In some cases, this reflects purposive research agendas rather than faulty conceptualization. The rationale in some cases seems to be to increase the amount of data for later use in exploratory analyses or to create a fail-safe situation in which a standby set of variables will be available to fall back on if the main variables do not prove to be significant. In other cases, the nature of the research itself may dictate data collection overkill. Multiple measures may be necessary to tap the same variables for the purposes of convergent validity or to develop more parsimonious measures through data reduction techniques such as factor analysis.
However, most cases of data collection overkill seen in the applications submitted to the MCH Research Program are largely unintentional, and appear to derive from loose conceptualization. While lack of research experience appears to play a significant role in these cases, the problem can frequently be found in the applications of experienced researchers as well. In general, the problem with all types of data collection overkill is that the surplus data seldom get analyzed, which ultimately translates into wasted resources and higher costs of doing research.
Few of the applications received at the MCH Research Program even partially meet the textbook requirement of fully explaining the procedures for implementing the study design. This is particularly true for applications proposing randomized controlled clinical trials or field experiments. In this kind of application, the tendency is to state that a trial is being proposed without bothering to describe the many procedural details required to ensure that the chosen design will be executed faithfully.
Research applications grossly underestimate the significance of failing to describe how the study design is to be operationalized in the actual research situation. For example, randomization in clinical trials is known to offer the following benefits: (1) It protects the study from selection bias; (2) it ensures that, on the average, the groups will be equivalent or balanced; and (3) it provides the basis for statistical inference. These advantages can be easily compromised by conscious or unconscious biases introduced when study personnel apply the criteria for entry into the study and/or when they assign subjects to treatments. Similarly, failure to deliver treatments exactly as called for in the protocol weakens the power of the statistical analyses and may lead to rejection of a beneficial treatment or the acceptance of an ineffective one.
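As an illustration of the level of procedural detail reviewers look for, the assignment mechanism itself can be specified concretely in the protocol. The sketch below is a hypothetical illustration, not a procedure from this article: it shows permuted-block randomization, a standard technique that keeps the treatment arms balanced as subjects accrue while keeping the order of upcoming assignments unpredictable to the staff who apply the entry criteria.

```python
import random

def permuted_block_schedule(n_subjects, block_size=4, seed=None):
    """Build a 1:1 randomization schedule in permuted blocks so that
    treatment (T) and control (C) stay balanced as subjects accrue."""
    assert block_size % 2 == 0, "block size must be even for 1:1 allocation"
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < n_subjects:
        # Each block contains equal numbers of T and C in a shuffled,
        # hence unpredictable, order.
        block = ["T"] * (block_size // 2) + ["C"] * (block_size // 2)
        rng.shuffle(block)
        schedule.extend(block)
    return schedule[:n_subjects]

# After every complete block of 4, the two arms are exactly balanced.
schedule = permuted_block_schedule(20, block_size=4, seed=1)
```

A protocol describing such a mechanism, together with who holds the schedule and how assignments are concealed, answers exactly the concerns about selection bias raised above.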
A large number of the applications disapproved by the MCH Research Program propose small samples of convenience. Moreover, few of the applications state what the clinically or scientifically important differences are, or what the probability of detecting these differences will be. In other words, the applications fail to justify sample size in terms of statistical power. Studies using inappropriately small samples are doomed to miss clinically or scientifically relevant differences, and thus are unethical in their use of subjects and resources.
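To make the kind of justification reviewers expect concrete, here is a minimal sample-size sketch (an illustration added here, not a formula from this article). It uses the standard normal approximation for a two-sided comparison of two group means, n per group = 2((z_{1-a/2} + z_{1-b})s/d)^2, where d is the smallest difference considered important and s is the common standard deviation.

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided, two-sample
    comparison of means, using the normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value
    z_beta = NormalDist().inv_cdf(power)           # power requirement
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Detecting a half-standard-deviation difference with 80% power
# at a two-sided 5% significance level:
print(n_per_group(delta=0.5, sigma=1.0))  # 63 per group
```

Stating the assumed difference, the power, and the resulting sample size in the application is precisely what distinguishes a justified sample from a sample of convenience.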
Withdrawal of subjects from studies can be due to subjects choosing to drop out or to exclusions made by the investigators, usually in the analytical phase. Regardless of the reason, attrition plays havoc with data analysis and interpretation. The most typical approach to attrition in applications rejected by MCH Research Program reviewers is not to mention the subject at all or, if it is mentioned, to dismiss it optimistically without supportive evidence. Underestimating attrition is also common, particularly in situations where samples of convenience are to be used and where the pool of subjects is inherently small, as with conditions of low incidence and low prevalence. Failure to plan for monitoring subject attrition, and for doing something about it if it occurs to a significant degree, is another common problem in applications.
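One simple piece of attrition planning that reviewers can check is whether the recruitment target allows for expected losses. The arithmetic, sketched below as a hypothetical illustration, is to inflate the analyzable sample size by the anticipated dropout rate.

```python
import math

def recruit_target(n_needed, expected_attrition):
    """Inflate the analyzable sample size to allow for expected dropout."""
    assert 0 <= expected_attrition < 1, "attrition must be a fraction below 1"
    return math.ceil(n_needed / (1 - expected_attrition))

# If the power analysis calls for 63 analyzable subjects per group and
# 20% attrition is expected, recruit:
print(recruit_target(63, 0.20))  # 79 per group
```

Stating the expected attrition rate, the evidence for it, and the inflated recruitment target shows reviewers that attrition has been planned for rather than dismissed.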
What If Rejected?
Since 55-85 percent of all new applications are rejected the first time they go through the review process, not being recommended for approval should not be taken as a personal affront or failure. One should withhold judgment until receiving the "pink sheets," or summary statement of the review. Rejection in many cases will turn out to be a prelude to a better written, technically stronger, revised application, one with a two- or threefold greater chance of being approved. Rejection should not be viewed as reflecting some deficiency or weakness inherent in the review process (e.g., a bias against young and new investigators). Rather, view it for what in most cases it is: an imperfect but honest and well-meant evaluation and critique by highly trained and experienced reviewers.
Summary statements, or pink sheets (white sheets in the case of MCHB), are lengthy, detailed communications providing a consolidated statement of the evaluation done for each application. Pink sheets are the key to developing improved, more competitive applications. Summary statements should be read carefully and more than once to ensure that all descriptions of weaknesses, errors of omission and commission, and the like are identified and understood. A cover letter addressing the previously noted weaknesses or problems should be attached to any revised application, and changes relating to them should be identified in the body of the text using boldface and/or underlining. If applicants strongly disagree with a particular criticism, they should develop a considered, logical argument refuting the reviewers' comments or recommendations. The cover letter is a good place to do that. Most agencies will accept a maximum of two revisions of a previously submitted application. Some agencies instruct the applicant in the summary statement of the first submission not to revise and/or reapply if the study section has made such a recommendation.
In general, developing a fundable research application calls for hard work, painstaking attention to detail, and total commitment to the task at hand. Allowing ample time to flesh out the complexities and details is of primary importance. With a first submission, rejection or disapproval is the norm, so the applicant should always be prepared to revise and resubmit. Revised applications have a much greater chance of being approved, but success depends upon careful scrutiny of what the reviewers had to say and a willingness to revise the application in accordance with their comments and recommendations.