Although the impact of missing outcome data and missing covariates on study results can be reduced through the use of multiple imputation techniques, no method of analysis can be expected to overcome them completely [17].
Longer and more demanding tasks might be expected to have fewer volunteers than shorter, easier tasks. The evidence from randomised trials of questionnaire length in a range of settings seems to support the notion that when it comes to questionnaire design 'shorter is better' [ 18 ]. Recent evidence that a longer questionnaire achieved the same high response proportion as that of a shorter alternative might cast doubt on the importance of the number of questions included in a questionnaire [ 19 ].
However, the results of this study warrant closer scrutiny. There is a trade-off between increased measurement error from using a simplified outcome scale and increased power from achieving measurement on a larger sample of participants through fewer losses to follow-up. If a shorter version of an outcome scale provides measures of an outcome that are highly correlated with the longer version, then it will be more efficient for the trial to use the shorter version [1].
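As a rough illustration of this trade-off, the sketch below (my own simplification, not a method from the article) discounts the number of completed responses by the squared correlation between the short and long versions of the scale, so a short scale "wins" whenever its higher follow-up rate outweighs the attenuation from imperfect correlation. All figures are hypothetical.

```python
# Back-of-envelope sketch: treat the short scale as an attenuated
# measure of the long one, so the statistical information it yields is
# scaled by the squared correlation with the full-length scale, while
# its lower respondent burden buys a higher follow-up rate.

def effective_information(n_randomised, follow_up_rate, correlation=1.0):
    """Completed responses, discounted by squared correlation with the
    full-length outcome scale (hypothetical model, illustrative only)."""
    return n_randomised * follow_up_rate * correlation ** 2

n = 1000
long_form = effective_information(n, follow_up_rate=0.70)  # full scale, more dropout
short_form = effective_information(n, follow_up_rate=0.85, correlation=0.95)
print(f"long form:  {long_form:.0f}")   # 700
print(f"short form: {short_form:.0f}")  # 767: the shorter scale wins here
```

Under these assumed numbers the shorter scale is more efficient despite measuring the outcome less precisely, which is the intuition behind reference [1].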
A moderate reduction to the length of a shorter questionnaire will be more effective in reducing losses to follow-up than a moderate change to the length of a longer questionnaire [ 18 ]. In studies that seek to collect information on many outcomes, questionnaire length will necessarily be determined by the number of items required from each participant.
In very compliant populations there may be little lost by using a longer questionnaire. However, using a longer questionnaire to measure more outcomes may also increase the risk of false positive findings that result from multiple testing (for example, measuring 100 outcomes may produce 5 that are significantly associated with treatment by chance alone) [4, 20].
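The multiple-testing arithmetic can be checked with a short simulation: under the null hypothesis p-values are uniform on (0, 1), so about 5% of outcomes appear 'significant' at the 5% level by chance alone. The trial sizes and counts below are illustrative only.

```python
import random

random.seed(1)

# Under the null hypothesis every p-value is uniform on (0, 1), so with
# 100 independent outcomes tested at the 5% level we expect roughly 5
# false positives per trial purely by chance.
ALPHA = 0.05
N_OUTCOMES = 100
N_TRIALS = 2000

false_positives = [
    sum(random.random() < ALPHA for _ in range(N_OUTCOMES))
    for _ in range(N_TRIALS)
]
mean_fp = sum(false_positives) / N_TRIALS
print(f"mean false positives per trial of {N_OUTCOMES} outcomes: {mean_fp:.2f}")
```

The simulated mean is close to 5, matching the expected value N_OUTCOMES × ALPHA.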
A recently updated Cochrane systematic review presents evidence from RCTs of methods to increase response to postal and electronic questionnaires in a range of health and non-health settings [ 3 ]. The review includes trials that evaluated different methods for increasing response to postal questionnaires and 32 trials that evaluated 27 methods for increasing response to electronic questionnaires.
The trials evaluate aspects of questionnaire design, the introductory letter, packaging, and methods of delivery that might influence the tendency for participants to open the envelope or email and to engage with its contents. A summary of the results follows. The evidence favours offering monetary incentives and suggests that money is more effective than other types of incentive (for example, tokens, lottery tickets, pens, and so on).
The relationship between the amount of monetary incentive offered and questionnaire response is non-linear with diminishing marginal returns for each additional amount offered [ 21 ]. Unconditional incentives appear to be more effective, as are incentives offered with the first rather than a subsequent mailing. There is less evidence for the effects of offering the results of the study when complete or offering larger non-monetary incentives.
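A hypothetical illustration of diminishing marginal returns follows; the logarithmic model and its parameters are my own assumption for illustration, not values fitted to the review's data.

```python
import math

# Hypothetical model: response proportion rises with the log of the
# incentive amount, so each extra unit of incentive buys a smaller gain
# in response than the last (diminishing marginal returns).
def response_rate(amount, base=0.50, k=0.05):
    return base + k * math.log1p(amount)

# Marginal gain in response from each additional unit of incentive.
gains = [response_rate(a + 1) - response_rate(a) for a in range(5)]
print([round(g, 4) for g in gains])  # strictly decreasing gains
```

Because the log function is concave, every successive increment to the incentive yields a smaller increase in response, which is the qualitative pattern reported in [21].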
The evidence favours non-monetary incentives (for example, Amazon vouchers).
Less evidence exists for the effect of offering monetary rather than non-monetary incentives. The evidence favours using personalised materials, a handwritten address, and printing single-sided rather than double-sided. There is also evidence that inclusion of a participant's name in the salutation at the start of the cover letter increases response, and that the addition of a handwritten signature on letters will further increase response [22]. There is less evidence for positive effects of using coloured or higher quality paper, identifying features (for example, identity numbers), study logos, brown envelopes, coloured ink, coloured letterheads, booklets, larger paper, larger fonts, pictures in the questionnaire, matrix-style questions, or questions that require recall in order of time period.
The evidence favours using a personalised approach, a picture in emails, a white background for emails, a simple header, and a textual rather than visual presentation of response categories. Response may be reduced when 'survey' is mentioned in the subject line. Less evidence exists for sending emails in text format or HTML, including a topic in email subject lines, or including a header in emails. The evidence favours sending questionnaires by first class or recorded delivery, using stamped return envelopes, and using several stamps.
There is less evidence for effects of mailing soon after discharge from hospital, mailing or delivering on a Monday, sending to work addresses, using stamped outgoing envelopes (rather than franked), using commemorative or first class stamps on return envelopes, including a prepaid return envelope, using window or larger envelopes, or offering the option of response by internet.
The evidence favours contacting participants before sending questionnaires, follow-up contact with non-responders, providing another copy of the questionnaire at follow-up and sending text message reminders rather than postcards.
There is less evidence for effects of precontact by telephone rather than by mail, telephone follow-up rather than by mail, and follow-up within a month rather than later. The evidence favours placing more relevant questions and easier questions first, user friendly and more interesting or salient questionnaires, horizontal orientation of response options rather than vertical, factual questions only, and including a 'teaser'. Response may be reduced when sensitive questions are included or when a questionnaire for carers or relatives is included. There is less evidence for asking general questions or asking for demographic information first, using open-ended rather than closed questions, using open-ended questions first, including 'don't know' boxes, asking participants to 'circle answer' rather than 'tick box', presenting response options in increasing order, using a response scale with 5 levels rather than 10 levels, or including a supplemental questionnaire or a consent form.
The evidence favours using a more interesting or salient e-questionnaire. The evidence favours questionnaires that originate from a university rather than a government department or commercial organisation. Less evidence exists for the effects of precontact by a medical researcher (compared to a non-medical one), letters signed by more senior or well-known people, sending questionnaires in university-printed envelopes, questionnaires that originate from a doctor rather than a research group, names that are ethnically identifiable, or questionnaires that originate from male rather than female investigators.
The evidence suggests that response is reduced when e-questionnaires are signed by male rather than female investigators. There is less evidence for the effectiveness of e-questionnaires originating from a university or when sent by more senior or well known people.
The evidence favours assuring confidentiality and mentioning an obligation to respond in follow-up letters. Response may be reduced when a questionnaire is endorsed by an 'eminent professional' or when participants are requested not to remove ID codes. Less evidence exists for the effects of stating that others have responded, a choice to opt out of the study, providing instructions, giving a deadline, providing an estimate of completion time, requesting a telephone number, stating that participants will be contacted if they do not respond, requesting an explanation for non-participation, an appeal or plea, requesting a signature, stressing benefits to the sponsor, participants, or society, or assuring anonymity rather than participants being identifiable.
The evidence favours stating that others have responded and giving a deadline. There is less evidence for the effect of an appeal (for example, 'request for help') in the subject line of an email. So although uncertainty remains about whether some strategies increase data completeness, there is sufficient evidence to produce some guidelines.
Where there is a choice, a shorter questionnaire can reduce the size of the task and burden on respondents.
Begin a questionnaire with the easier and most relevant questions, and make it user friendly and interesting for participants. A monetary incentive can be included as a small, unexpected 'thank you' for participants' time. Participants are more likely to respond if warned by letter, email, or phone call in advance of being sent a questionnaire. This is a simple courtesy that tells participants they are soon to be given a task to do, and that they may need to set aside some time to complete it. The relevance and importance of participation in the trial can be emphasised by addressing participants by name, signing letters by hand, and using first class postage or recorded delivery.
University sponsorship may add credibility, as might the assurance of confidentiality. Follow-up contact and reminders to non-responders are likely to be beneficial; include another copy of the questionnaire, to save participants having to remember where they put the original, or in case they have thrown it away. The effects of some strategies to increase questionnaire response may differ when used in a clinical trial compared with a non-health setting.
Around half of the trials included in the Cochrane review were health related, conducted among patient groups, in population health surveys, and in surveys of healthcare professionals. The other included trials were conducted among business professionals, consumers, and the general population. To assess whether the size of the effects of each strategy on questionnaire response differs in health settings will require a sufficiently sophisticated analysis that controls for covariates (for example, the number of pages in the questionnaire, use of incentives, and so on).
Unfortunately, these details are seldom included by investigators in published reports [3]. However, a review of 15 RCTs of methods to increase response among healthcare professionals and patients found evidence for using some strategies (for example, shorter questionnaires and sending reminders) in the health-related setting [23].
There is also evidence that incentives do improve questionnaire response in clinical trials [ 24 , 25 ]. The offer of monetary incentives to participants for completion of a questionnaire may, however, be unacceptable to some ethics committees if they are deemed likely to exert pressure on individuals to participate [ 26 ]. Until further studies establish whether other strategies are also effective in the clinical trial setting, the results of the Cochrane review may be used as guidelines for improving data completeness.
More discussion on the design and administration of questionnaires is available elsewhere [27]. Irrespective of questionnaire design, it is possible that some participants will not respond because: (a) they never received the questionnaire, or (b) they no longer wish to participate in the study. An analysis of the information collected at randomisation can be used to identify any factors (for example, gender, severity of condition) that are predictive of loss to follow-up [28]. Follow-up strategies can then be tailored for those participants most at risk of becoming lost (for example, additional incentives for 'at risk' participants).
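A minimal sketch of this tailoring step follows, using hypothetical data and field names: tabulate response rates by a characteristic recorded at randomisation and flag subgroups with low response for additional follow-up effort.

```python
from collections import defaultdict

# Hypothetical randomisation records: one baseline characteristic plus
# whether the participant returned the follow-up questionnaire.
participants = [
    {"gender": "F", "responded": True},
    {"gender": "F", "responded": True},
    {"gender": "F", "responded": False},
    {"gender": "M", "responded": False},
    {"gender": "M", "responded": True},
    {"gender": "M", "responded": False},
]

counts = defaultdict(lambda: [0, 0])  # level -> [responders, total]
for p in participants:
    counts[p["gender"]][1] += 1
    counts[p["gender"]][0] += int(p["responded"])

# Flag subgroups whose response proportion falls below an arbitrary
# 50% threshold as 'at risk' of loss to follow-up.
at_risk = {
    level: responders / total
    for level, (responders, total) in counts.items()
    if responders / total < 0.5
}
print(at_risk)
```

In practice a regression model over several baseline factors would be more appropriate than a single-factor tabulation; this sketch only shows the shape of the idea.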
Interviews with a sample of responders and non-responders may also identify potential improvements to the questionnaire design, or to participant information. The need for improved questionnaire saliency, explanations of trial procedures, and stressing the importance of responding have all been identified using this method [ 29 ].
Few clinical trials appear to have nested trials of methods that might increase the quality and quantity of the data collected by questionnaire, and of participation in trials more generally. Trials of alternative strategies that may increase the quality and quantity of data collected by questionnaire in clinical trials are needed. Reports of these trials must include details of the alternative instruments used (for example, number of items, number of pages, opportunity to save data electronically and resume completion at another time), mean or median time to completion of electronic questionnaires, material costs, and the amount of staff time required.
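The reporting items listed above could be captured in a simple structure; the field names below are my own invention for illustration, not an established reporting standard.

```python
from dataclasses import dataclass, asdict

# A sketch of the instrument details the text says trial reports should
# include; values are hypothetical.
@dataclass
class QuestionnaireReport:
    n_items: int
    n_pages: int
    can_save_and_resume: bool          # electronic questionnaires only
    median_completion_minutes: float
    material_cost_per_participant: float
    staff_hours_total: float

short_form = QuestionnaireReport(
    n_items=10,
    n_pages=2,
    can_save_and_resume=True,
    median_completion_minutes=4.5,
    material_cost_per_participant=0.40,
    staff_hours_total=12.0,
)
print(asdict(short_form))
```

Recording the same fields for every instrument arm would let future reviews compare strategies while controlling for questionnaire length and cost.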
Data collection in clinical trials is costly, so care is needed to design data collection instruments that will provide sufficiently reliable measures of outcomes whilst ensuring high levels of follow-up. Whether shorter 'quick and dirty' outcome measures (for example, a few simple questions) are better than more sophisticated questionnaires will require assessment of the costs in terms of their impact on bias, precision, trial completion time, and overall costs. Questionnaire design remains as much an art as a science, but the evidence base for improving the quality and completeness of data collection in clinical trials is growing.
I would like to thank Lambert Felix for his help with updating the Cochrane review summarised in this article, and Graham Try for his comments on earlier drafts of the manuscript. Published in Trials (online Jan). Phil Edwards.