Generally, the school curriculum is organized to expose students to subjects that provide opportunity for them to acquire the knowledge and skills that should help them practice. Sometimes students who have passed written examinations and been certified fit to practice fail to do so. Considering the legal and financial implications of employee performance and safe practice in a rapidly changing environment, a major concern of the administrator of an educational institution should be to produce manpower that is competent. In assessing students for certification to practice, in this case in a health care institution, it is therefore important to generate appropriate data that will help in deciding whether they are able to perform the tasks that the knowledge they have acquired should help them accomplish. This can be done if an appropriate assessment tool is in place.

Stressing the importance of assessing what nursing care providers can do, not what they know, Del Bueno (1990) cited situations in which people who had performed excellently in examinations had difficulty performing a procedure or recognizing warning signs in patients experiencing difficulty. This kind of situation is unacceptable and informed the reforms in nursing education which led to calls for assessment of clinical performance to contribute to academic qualifications that incorporate professional awards. In response to this call, training institutions have developed clinical assessment tools. However, Redfern, Norman, Calman, Watson and Murrels (2002) expressed concern about the psychometric quality of the available tools and their ability to distinguish between different levels of practice. They analyzed some tools for assessing competence to practice in nursing, while Norman, Watson, Murrels, Calman and Redfern (2002) tested selected nursing and midwifery competence assessment tools for reliability and validity. Both teams of researchers concluded that a multi-method approach, which enhances validity and ensures comprehensive assessment, is needed for clinical competence assessment in nursing and midwifery.

In order to ensure such a tool, Lenburg (2006) created a constellation of ten basic concepts and suggested that they should be adapted for developing and implementing objective performance examinations. They include:
• Concept of examination
• Dimensions of practice
• Required critical elements
• Objectivity of the assessment process
• Sampling critical skills for the testing period
• Level of acceptability
• Comparability in extent, difficulty and requirements
• Consistency in implementation
• Flexibility in actual clinical environment
• Systematized conditions

These concepts are very useful for the development of accurate assessment instruments. Thus far, no such tool exists in the nursing context in Nigeria. The administrators of nursing schools face the problem of subjectivity in the practical examination of student nurses. This is evident in situations where students are given different tasks to perform during clinical examination and awarded grades based on the tasks they perform. By this, some students are exposed to more difficult tasks than others, depending on the inclination of the examiner, and yet all are judged on the same maximum score. This is unfair. It is therefore necessary to develop an assessment tool that will examine students on the same tasks for a particular examination episode.

In order to accomplish this, consideration should be given to the concepts proposed by Lenburg (2006) which were mentioned earlier. To achieve objectivity in an assessment process, two components must be considered. First, the content (skills and critical elements) for the particular assessment should be specified in writing; second, there should be consensual agreement among everyone directly involved in any aspect of the examination process. When individual examiners begin to digress from the established standards and protocols, objectivity erodes back into subjectivity and inconsistency. This regression destroys both the process and the purpose.

To prevent this from occurring, the educational administrator should ensure that the content of the examination is specified by the list of the dimensions of practice, that is, the skills and competencies and their required critical elements that determine the extent and conditions of competence.

The use of a conceptual framework to systematically guide the assessment process increases the likelihood that concepts and variables universally salient to nursing and health care practice will be identified and explicated (Waltz, Strickland & Lenz, 2005).
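The written specification described above, in which each dimension of practice is listed with its required critical elements, can be sketched as a simple data structure that all examiners agree on before the examination. The skill name and elements below are hypothetical illustrations, not items from any actual tool:

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """One dimension of practice with its required critical elements."""
    name: str
    critical_elements: list = field(default_factory=list)

# Hypothetical written specification agreed by all examiners in advance.
hand_washing = Skill(
    name="Hand washing",
    critical_elements=[
        "Removes jewellery before washing",
        "Lathers all surfaces of hands and wrists",
        "Rinses with hands held lower than elbows",
        "Dries hands with a clean towel",
    ],
)

print(len(hand_washing.critical_elements))  # prints 4: elements every examiner must observe
```

Writing the elements down once, in a shared structure, is what keeps individual examiners from drifting toward their own private standards.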
Concepts of interest to nurses and other health professionals are usually difficult to operationalise, that is, to render measurable. This is partly because nurses and other health professionals deal with a multiplicity of complex variables in diverse settings, employing a myriad of roles as they collaborate with a variety of others to attain their own and others' goals. Hence, the dilemma they are apt to encounter in measuring concepts is twofold: first, the significant variables to be measured must somehow be isolated; second, very ambiguous and abstract notions must be reduced to a set of concrete behavioural indicators. It is therefore the responsibility of the educational administrator, who knows the intended goals and selected the content that should help achieve them, to select the variables that must be measured and to reduce them to concrete behavioural indicators of competence. These should be incorporated into a protocol that will guide the assessor.

Protocols ensure that each test episode for a given group is comparable in extent, difficulty and requirements. A protocol also ensures that the process is implemented consistently, regardless of who administers the examination or when it is conducted. When performance examinations are administered in an actual clinical environment, not a simulation, the concept of flexibility is essential, as each client is different. The responsible educational administrator who prepares students for professional practice is therefore challenged to develop appropriate competency-based assessment tools for use in assessing students' clinical competence.

A competency-based assessment tool focuses on measuring the actual performance of what a person can do rather than what the person knows. It is based on criterion-referenced assessment methods, in which the learner's performance is assessed against a set of criteria provided so that both the learner and the assessor are clear on what performance is required. The competency-based assessment technique addresses the psychomotor, cognitive and affective domains of learning, and its goal is to assess performance for the effective application of knowledge and skill in the practice setting. The competencies can be generic to clinical practice in any setting, specific to a clinical specialty, basic or advanced (Benner, 1982; Gurvis & Grey, 1995).
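The criterion-referenced idea can be illustrated with a short sketch: the candidate's observed performance is compared against a predetermined list of criteria rather than against other candidates. The criteria and the pass rule used here (all criteria must be met) are illustrative assumptions, not the scheme of any particular tool:

```python
# Criterion-referenced scoring sketch. Each criterion is a predetermined
# target behaviour; the candidate is judged only against this list,
# never against the performance of other candidates.

def score_performance(criteria, observed):
    """Return (score, passed): score is the fraction of criteria met;
    under this illustrative rule, the candidate passes only when
    every criterion is observed."""
    met = sum(1 for c in criteria if c in observed)
    score = met / len(criteria)
    return score, met == len(criteria)

criteria = ["explains procedure to patient", "washes hands", "documents care given"]
observed = {"explains procedure to patient", "washes hands"}

score, passed = score_performance(criteria, observed)
print(round(score, 2), passed)  # prints: 0.67 False
```

Because the criteria are fixed in advance, both the learner and the assessor know exactly which behaviours count, which is the core of the criterion-referenced method described above.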
Criterion-referenced measures are particularly useful in the clinical area when the concern is the measurement of process and outcome variables, as applies in nursing. A criterion-referenced measure of process, according to Waltz, Strickland and Lenz (2005), requires that one identify standards for the client care intervention and compare the subject's clinical performance with the standard of performance, which is the predetermined target behaviour. When all these are taken into consideration in developing a clinical assessment tool, the tool is bound to be authentic.

Statement of the Problem

In Nigeria, assessment of clinical performance contributes to the academic qualification for a professional award. The Nursing and Midwifery Council of Nigeria (NMCN) has adopted the Objective Structured Clinical Examination (OSCE) for midwifery but has not done the same for the general nursing examination. The tool currently in use for clinical assessment in the general nursing examination leaves much to be desired. It lacks the comparability and consistency required to make an assessment tool objective and fair, hence the need for a structured clinical assessment tool. Some of the pitfalls of the tool include:

• The tool allows the selection of the procedure to be performed by the candidate to be made by the assessor, and this selection varies from one candidate to another. The implication is that the candidates do not all perform the same tasks, the tasks they perform are not comparable, and since task difficulty is not the same for all tasks, the candidates are neither examined nor judged on the same premise. This is unfair.

• Another problem, closely linked with not specifying tasks that all candidates must perform, is that the mark allotted to the item "procedure" is the same for all procedures, whether simple or complex. Since some candidates are assigned simpler tasks than others and are judged on the same optimal score for less work, the tool is unfair. Again, because the activities expected to be carried out for each procedure are not specified, the scoring of the candidates' performance is based on what the scorer thinks is right, and this may vary from
one scorer to another. The implication is that, most times, the scoring is subjective.

• Sometimes the length of time required to accomplish a task the assessor has assigned may not allow the assessor the opportunity to assess the candidate in all the areas listed on the clinical performance assessment guide. Since all the items sum to the maximum score, this creates the difficulty of determining how to score those items, particularly as it was not the fault of the candidate that he was not examined in those areas by the particular assessor.

• Again, some of the criteria on which the candidates are judged are not stated in specific terms. For example, statements such as "handles patients gently and skillfully" and "adapts the environment for the patient's comfort" are not specific enough about what the candidate is expected to do and therefore leave room for the assessor's subjective conclusions.

The implication of all these is that some of the results of assessments using this kind of tool are not valid and may have a negative impact on the candidate who failed when he or she should actually have passed, and on the consumers of nursing care where a candidate who had not acquired the skills necessary for competent and safe practice passed when he or she should have failed.

In view of this problem, there is the need to develop a clinical assessment tool that is objective and fair. This is the intent of this study.

Purpose of the Study

The main purpose of the study is to develop and validate a structured Clinical Assessment Tool which will provide opportunity for all the students to be examined on the same tasks for a particular examination period and be scored on predetermined performance criteria. This will ensure a fair, objective and valid assessment of student nurses' clinical performance. Specifically, the objectives are to:

1. develop appropriate tasks for assessing student nurses' clinical competence.
2. develop appropriate activities for determining competency in the tasks.
3. determine the content validity of the Structured Clinical Assessment Tool (SCAT) that was developed.
4. determine the construct validity of the Structured Clinical Assessment Tool (SCAT).
5. determine the inter-rater reliability of the SCAT.

Significance of the Study

The study will result in the availability of an instrument for a more comprehensive and objective clinical assessment of student nurses. Because the instrument will cover the core practice competency areas in nursing, it will be useful in determining whether or not student nurses have acquired the complex repertoire of knowledge, skills and attitudes required for competent practice before they enter the profession. The instrument will be useful to nurse educators and clinical supervisors/managers of health care institutions who are preparing students for practice because it will show them the core elements of competence in nursing and thus help them guide the students appropriately to acquire the skills necessary to become competent and safe. It will also be useful to the students because they will know from the start what is expected of them and, being focused, will work toward success.

The instrument will eliminate the problem of leaving candidates to the whims and caprices of their assessors, which results in some candidates carrying out more complex tasks than others yet being judged on an equal score. Instead, the candidates will perform the same specified tasks. This way, the candidates will be examined on the same premise, and any judgment made on the results generated by the instrument will be worthy and valid.

Again, because the instrument breaks down the elements of competence into performance criteria on which performance can be judged acceptable, scoring of students' performance during assessment will be easier and devoid of subjectivity, and this will make the result more authentic. The tool will serve as an impetus for the Nursing and Midwifery Council of Nigeria (NMCN) to revise the tool currently in use for the final qualifying examination to become
more objective and fair. If this is done, only those who have acquired the necessary knowledge and skill will be certified competent and licensed to practice, and the consumers of nursing care will be sure to receive quality and safe care. The tool will also be a reference for other researchers who may want to develop tools addressing procedures not accommodated in the present study.

Scope of the Study

The study is delimited to developing a structured clinical assessment tool, developing a scoring scheme for the tool, establishing the content and construct validity of the tool, and determining the inter-rater reliability of the tool. Only the average congruency percentage for determining content validity, the mean and standard deviation of contrasted groups for determining construct validity, and the index of inter-rater agreement for determining inter-rater reliability were determined.

The clinical events assessed were limited to procedures that can be completed within 5 minutes. This was to ensure that the students are assessed on a good variety of events within the one hour they are normally assessed during practical examinations. Exposing them to procedures that take longer would limit the number of events on which they can be assessed. The tool, however, presupposes that the students would have been assessed (using a structured assessment tool) on those procedures that take longer to accomplish prior to this final assessment.

Though the tool is developed for assessing the clinical competence of student nurses in Nigeria, the validation of the instrument was conducted in the South East zone of Nigeria using three randomly selected Schools of Nursing.

Research Questions

The study is guided by the following research questions:

1. How appropriate are the tasks of SCAT for assessing student nurses' clinical competence?
2. How appropriate are the activities for determining competency in the selected items?
3. How valid is the content of SCAT?
4. What is the inter-rater reliability coefficient of SCAT?
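One simple form of the index of inter-rater agreement is the proportion of checklist items on which two assessors record the same judgement. The sketch below uses hypothetical ratings (1 = criterion met, 0 = not met); it is an illustration of the general idea, not the exact index computed in this study:

```python
# Proportion-of-agreement sketch for two raters scoring the same
# candidate on the same checklist. A value near 1.0 indicates that
# the tool yields similar judgements regardless of who assesses.

def agreement_index(rater_a, rater_b):
    """Fraction of items on which the two raters agree."""
    matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return matches / len(rater_a)

# Hypothetical judgements on a 10-item checklist (1 = met, 0 = not met).
rater_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
rater_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]

print(agreement_index(rater_a, rater_b))  # prints 0.9 (agree on 9 of 10 items)
```

High agreement is what a structured tool with specific performance criteria is expected to produce, since each rater is checking the same predetermined behaviours.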
Hypotheses

The following hypotheses were tested at an alpha level of 0.05:

Ho1: There is no significant difference in the mean scores on SCAT of high and low achievers.

Ho2: There is no significant difference in the scores of the students on any of the procedure stations of SCAT as determined by the three assessors.
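Ho1 embodies the contrasted-groups approach to construct validity: if SCAT measures clinical competence, high achievers should obtain significantly higher mean scores than low achievers. A minimal sketch of the pooled two-sample t test this involves, using entirely hypothetical scores, is:

```python
from math import sqrt

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    """Sample variance (n - 1 in the denominator)."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def t_statistic(group1, group2):
    """Pooled two-sample t statistic for contrasting two independent groups."""
    n1, n2 = len(group1), len(group2)
    pooled = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    return (mean(group1) - mean(group2)) / sqrt(pooled * (1 / n1 + 1 / n2))

# Hypothetical SCAT scores for contrasted groups.
high_achievers = [82, 78, 85, 90, 76]
low_achievers = [60, 55, 65, 58, 62]

t = t_statistic(high_achievers, low_achievers)
# 2.306 is the two-tailed critical value at alpha = 0.05 with df = 8.
print(t > 2.306)  # prints True: Ho1 would be rejected for these illustrative data
```

A significant difference in the expected direction supports construct validity; failure to distinguish the two groups would suggest the tool does not discriminate between levels of competence.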