Abstract
The assessment of ethics education events requires high-level awareness and application of educational principles and methods. Essentially, assessment focuses on evidence-based endeavors to determine the extent to which students have internalized a unique, individualized set of skills, specifically skills to address ethical dilemmas in their personal and professional lives. This research paper focuses on the following aspects: definition of assessment, assessment approaches (summative assessment, formative assessment), quality principles of assessment (transparency, validity, reliability, sufficiency, fairness), assessment instruments and methods (text-based methods, oratory-oriented methods, arts-based methods), rating scales (quantitative scales, qualitative feedback), and assessors (educator-assessors, peer-assessors, self-assessors).
Introduction
Ethics education is the widely accepted process that strives to raise students’ ethical self-awareness, ethical issue recognition, understanding of and insight into different ethical perspectives, ethical decision-making, and application of ethical principles (Carlin et al. 2011). However, the assessment of ethics education is one of the more difficult aspects of the educational process.
Assessment challenges arise from the diverse course content goals and learning outcomes, as well as the wide variety of assessment approaches, assessment methods, rating scales, and assessor entities across the various education contexts (Canary and Herkert 2012). According to DuBois and Burkemper (2002), the significance of assessing students in bioethics courses is twofold, namely, to assess their development toward learning outcomes and, more importantly, to emphasize that ethics is as rigorous and deserving of attention as other program areas in the curriculum.
The overall aim of assessment should be to contribute positively to the subject matter of bioethics, not only evaluating the learning activities of students but also valuing and recognizing the roles of educators. The assessment tasks and learning outcomes can be assessed through various methods and approaches, which are discussed in more detail below. However, the process should always be transparent, so that students know what is expected of them and how they can apply this to further their knowledge and skills in the subject matter. This includes receiving detailed feedback regarding their competency levels so that the necessary adjustments can be made to remedy any shortcomings. It is also important that all assessment instruments and methods be reliable and valid to ensure a truly informed perspective on, and insight into, each student’s understanding of the field.
Historically, most educators of theory-based subjects tended to rely on awarding quantitative marks to pen-and-paper tests/examinations as an indication of students’ knowledge of the demarcated subject matter at a specific point in time. One outcome would be that different students’ performances would be compared against each other to ascertain whether they had sufficiently mastered the prescribed course content. Such a quantitative focus on understanding and comprehension clearly ignores important qualitative aspects related to problem-solving and the integration of specific skills – all elements that are of great importance in the subject matter of bioethics. However, there has been a movement away from the exclusive use of traditional standardized (usually norm-based) assessment methods toward the inclusion of a whole range of alternative continuous assessment (usually criteria-based) methods. Alternative continuous assessment methods, also known as formative assessment methods, focus more on monitoring each student’s progress in accordance with specific assessment criteria and learning outcomes rather than comparing their performance against that of fellow students – making this a more realistic and holistic assessment approach in bioethics education contexts. In other words, the paradigm shift is away from merely producing factually correct answers to theoretical questions toward a process that requires understanding and applying the course content in real-life scenarios.
Ethics educators must be mindful that ethics education, in general, does not aim to train bioethicists but rather to yield students from various professional fields who can effectively recognize ethical issues and think critically about them in their specific contexts (Carlin et al. 2011). As such, ethics education assessment must align as closely as possible with the central aim of developing the student’s ability, first, to recognize and analyze ethical issues and, second, to make decisions regarding the most appropriate ethical behavior (Føllesdal 2003). The following specific sub-aims of ethics education assessment further elucidate the focus of constructive alignment (Føllesdal 2003; Davis 2012; Garrett 2014):
- Ethical sensitivity – the ability and awareness to identify ethical issues in context
- Ethics knowledge – a deeper understanding of propositional knowledge about law, regulations, and professional codes
- Value clarification – the ability to identify and describe the applicable normative dimensions, ethical principles, facts, and concepts
- Ethics skills
- Ethical judgment – the skills and ability to conduct ethical analysis and argumentation, as well as to decide on a course of action regarding the identified ethical issue
Definition Of Assessment
Assessment refers to the process of using assessment tasks and instruments to determine, estimate, or measure the amount, extent, and/or level of learning against a set of criteria (Hargreaves 2005); it involves some form of judgment according to specific standards, goals, and criteria (Taras 2009). Assessment can also be described as a process of decision-making regarding education-related evidence. These decisions are about the following: the relevant evidence for a particular purpose (learning outcome), the appropriate assessment tasks and assessment instruments to collect the evidence, how to interpret and score the evidence, and how to provide feedback to students (Harlen 2005).
Assessment serves a dual purpose, namely, to facilitate quality learning and teaching (i.e., formative assessment) and to summarize, monitor, and report on achievements at a certain time (i.e., summative assessment) (Biggs 2003; Harlen 2005). The formative aspect of assessment focuses on efforts by educators and students to reflect, review, diagnose, explore, and understand learning (Hargreaves 2005). In this regard, assessment focuses on encouraging and facilitating students to take responsibility for their learning activities and learning outcomes rather than merely focusing on performance goals (Harlen 2005).
A more refined definition of assessment includes the following aspects (Hargreaves 2005):
- To monitor, often at frequent intervals, students’ performance against learning outcomes and assessment criteria
- To provide feedback and information to educators and students alike in order to improve teaching and learning strategies for individuals or groups and to work toward deeper learning and understanding (i.e., assessment for learning)
- To view assessment as a learning event that facilitates knowledge construction and empowerment (i.e., assessment as learning)
Assessment Approaches
Various assessment tasks, assessment methods, and rating scales are utilized when determining a student’s understanding and application of bioethical theory and principles. However, two broad assessment approaches are generally recognized in all educational contexts, namely, summative and formative assessment.
Summative assessment.
The word “summative” indicates that this assessment approach provides a summary of the total learning experience; it is usually conducted at the end of the learning experience for a purpose external to the actual learning experience. It often takes the form of a single main test or formal examination administered at the end of the course period. Its evaluative function is to determine and/or make a judgment, often quantitatively, on how much of the course content the students have mastered, without providing the students with any kind of feedback or significant information about their actual progress and development. The conclusions drawn from summative assessment are mainly for the benefit of third parties and are used to differentiate between students who have passed or failed the course, together with some form of validated record of their performance (Taras 2009).
Formative assessment.
Formative assessment is an integral part of the learning process, specifically intended to influence and inform learning in order to achieve effective application of knowledge in real-life settings. The focus is to provide students with feedback and support on their individual progress toward meeting learning outcomes. In this approach the various course content components are not regarded as stand-alone silos of knowledge or information but rather as an integrated and useful whole. Feedback has a developmental focus that informs the educator’s decisions regarding the selection of appropriate follow-up learning activities and the determination of the students’ strengths and developmental needs in relation to specific learning outcomes and criteria. In short, formative assessment aims to monitor and support progression in the learning process through a variety of feedback activities (Taras 2009).
Formative assessment is sometimes referred to as continuous assessment, which points to the ongoing process of assessing students’ understanding, insight, and viewpoints. One central benefit of continuous assessment is that students receive regular feedback throughout the course on how they are progressing, without the threats and anxieties associated with settings in which one test/examination determines the final evaluation of the learning process. Another important benefit is that students are offered opportunities to correct their shortcomings during the learning process, which in turn strengthens the educator-student relationship as it requires individual monitoring. Lastly, it also enables educators to continuously modify and/or adapt teaching and learning activities to meet the identified student needs (Taras 2009).
Quality Principles Of Assessment
A number of quality-related principles guide the credibility of the assessment process, assessment tasks, assessment methods, and assessment criteria. The following principles are the most important: transparency, validity, reliability, sufficiency, and fairness.
Transparency.
Transparency refers to the clarity of the assessment procedures for students. Educators can increase and enhance assessment clarity by developing and providing assessment guides to students at the onset of a course, as well as by stating explicit assessment parameters (Ngidi 2006; Boud and Falchikov 2006; Taras 2009).
Validity.
Validity generally refers to the extent to which assessment methods and instruments measure what they intend to measure. In an education context, it specifically refers to the alignment, sometimes also called constructive alignment, between the course content, learning activities, assessment tasks, and intended learning outcomes (Biggs 2003; Boud and Falchikov 2006; Canary and Herkert 2012). In schematic form it can be expressed as follows: course content and learning outcomes → learning activities → assessment tasks.
The specific ways in which assessors can address assessment validity include the following (Harlen 2005):
- Identify learning outcomes clearly and unambiguously.
- Develop clear, appropriate, and authentic assessment tasks, assessment instruments, and detailed assessment criteria.
- Clearly communicate the assessment criteria to the students.
Reliability.
Assessment reliability is generally defined as the consistency of an assessment instrument in the same or similar contexts each time it is administered, even when used by multiple educators (Lee et al. 2007). An assessment instrument with good/high reliability facilitates trust and confidence in the assessment process (Ngidi 2006). Reliability does not only apply to the assessment instrument but also to the ability and expertise of educators to consistently administer assessment tasks and to score or rate the students’ performance. The use of internal and external moderation procedures might further strengthen the process. However, it is important to keep in mind that scoring or rating always involves explicit and implicit judgment on the part of educators, meaning that it will almost invariably be subject, to some degree, to educator error and bias (e.g., regarding gender, ethnicity, sexual orientation, religion, physical appearance, or previous performance, whether good or bad) (Harlen 2005). Inter-rater reliability is vitally important when multiple educators are used to teach the same course, develop assessment tasks, and administer and score or rate the assessment tasks. Consensus discussions can be employed to ensure high inter-rater reliability. A discussion between the educators should interrogate potential differences in rating standards and each person’s rationale for scoring in a particular way (Harlen 2005; Carlin et al. 2011).
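For illustration only, and not drawn from the cited sources, the agreement between two educator-assessors who rate the same set of assessment tasks can be quantified with a standard statistic such as Cohen’s kappa. The sketch below assumes two hypothetical educators rating six tasks on an invented categorical scale; the labels and data are purely illustrative.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items on a categorical scale."""
    assert len(rater_a) == len(rater_b), "Both raters must score the same items."
    n = len(rater_a)
    # Observed agreement: proportion of items on which the two raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, based on each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical ratings ("fail", "pass", "distinction") by two educators.
educator_1 = ["pass", "pass", "fail", "distinction", "pass", "fail"]
educator_2 = ["pass", "fail", "fail", "distinction", "pass", "pass"]
print(f"Cohen's kappa: {cohens_kappa(educator_1, educator_2):.2f}")
```

A kappa value close to 1 would indicate strong agreement, whereas a low value would signal the need for the consensus discussions described above.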
Sufficiency.
Sufficiency refers to the ability of an assessment task or instrument to allow students rigorous opportunities to present their knowledge, skills, and judgments (Ngidi 2006). The focus of assessment, depending on its level (i.e., undergraduate, postgraduate, professional), should not only be students’ value clarification ability but also their ability to engage in fundamental critique and complex integration. As such, fundamental critique and complex integration can be regarded as higher-level assessment criteria. Fundamental critique specifically refers to students’ ability to challenge the basic assumptions of bioethical arguments, while complex integration refers to students’ ability to resolve ethical dilemmas and conflicts. Sufficiency requires educators to plan and develop assessment tasks and assessment criteria that include and balance, depending on the course level, value clarification, fundamental critique, and complex integration aspects (Garrett 2014).
Fairness.
Assessment fairness refers to a variety of aspects that strive to ensure that no student is unduly disadvantaged or hindered in any way during assessment activities (Ngidi 2006; Lee et al. 2007):
- All assessment activities must be fair and sensitive in a context of diversity, specifically with regard to language, gender, and culture.
- The educator must use a variety of appropriate teaching and learning approaches.
- The educator must formulate and adhere to clear assessment criteria.
- The educator must use a variety of assessment methods, instruments, and tasks.
- Students must have clarity with regard to the course content to be assessed and the assessment criteria.
- Clear assessment system procedures must be employed for moderation, reassessment, and appeals.
- Equal support, feedback, and mentoring opportunities must be available for all students.
Assessment Instruments And Methods
The main function of an assessment instrument or method is to collect evidence regarding the progress of students toward the learning outcomes for specific learning activities. Different types of assessment instruments and methods should be used to collect a wide range of evidence linked to the various learning outcomes. In some cases, the same type of assessment instrument (e.g., multiple-choice questions) can be used for different purposes (e.g., establishing baseline knowledge and final evaluation), while in others different assessment instruments and methods enable educators and students to evaluate whether a variety of learning outcomes have been met (Harlen 2005).
An important educational justification for the use of different assessment instruments and methods is that it provides a more credible indication of students’ knowledge, insight, skills, and actions (Carlin et al. 2011). Another reason is that there are advantages and disadvantages to any particular assessment instrument and method; no single instrument or method can provide an all-encompassing assessment of learning outcomes. For example, qualitative open-ended questions and portfolios focus on different aspects of course content, learning activities, application contexts, and personal skills development (Canary and Herkert 2012).
A large variety of assessment instruments and methods can be employed in the context of ethics education. The most prominent approaches include text-based methods, oratory methods, and arts-based methods.
Text-Based Methods.
The text-based methods are characterized by the requirement for students to provide written responses (i.e., paper-based, printed, and/or electronic versions) to meet specific learning outcomes. The “text” may involve written tests, multiple-choice questions, essays/assignments, reflective journals, or portfolios.
Written tests, whether open- or closed-book, are widely used to assess factual knowledge at the onset and/or end of a course; pre- and post-testing enables the assessment of cognitive growth relative to the baseline at the onset of a course. Pre- and post-testing can also be used to assess independent learning (e.g., journal article reading assignments, library/Internet-based audiovisual material) (Lee et al. 2007; Canary and Herkert 2012). Apart from their clear application in summative assessment, educators can also use written tests throughout the course to provide feedback without marks as part of formative assessment. Deeper learning can also be achieved in written tests by including hypothetical moral dilemmas with probing questions (Carlin et al. 2011). Such an approach assists students to focus on learning areas in need of attention and improvement instead of on performance for marks (Taras 2009).
Multiple-choice questions are often used to assess students’ basic knowledge of ethics and law (Carlin et al. 2011). They are objective instruments that essentially require recognition of the “correct” answer from true/false (dichotomous) or multiple-choice response options (Biggs 2003; Canary and Herkert 2012). Online versions based on learning management system platforms (e.g., Blackboard) are often used in tertiary education contexts, the advantage being immediate feedback to students regarding individual grade performance and item-specific feedback on the reason(s) for incorrect answers.
Essays/assignments can be used effectively in the analysis and discussion of vignettes and case studies. Students may be required to provide reasoned responses to one or more questions related to the vignettes and/or case studies. This assessment method often requires students to read widely and to demonstrate the interrelatedness of various ethics concepts. The scoring of assignments and essays is often done by means of rubrics. However, a weakness of this assessment method is the possibility of students copying work from each other or from Internet resources without any significant engagement with and/or application of the ethics principles (Biggs 2003; Carlin et al. 2011).
Reflective journals, also known as self-reflection diaries, are characterized by written (i.e., pen and paper, electronic, or online) descriptive narratives of personal opinions, experiences, and insights emanating in and from the learning context and/or real-life contexts (Ngidi 2006). A distinct advantage of the reflective process is that students and educators alike gain deeper understanding of students as learners; it provides person-specific evidence of the learning journey (Biggs 2003; Hargreaves 2005; Lee et al. 2007). Another advantage is that reflection facilitates the interconnection between various ideas and concepts from different course components, as well as the skills to use one or more problem-solving techniques (Ngidi 2006).
Portfolios are regarded by some authors and educators as one of the primary assessment tools currently in use for both formative and summative assessment; paper-based, electronic, and online formats are widely used (Lee et al. 2007; Buckley et al. 2010). Portfolios are purposeful collections and repositories of students’ work that provide evidence of their efforts and abilities to complete assessment tasks, as well as an indication of progress toward meeting specific learning outcomes (Ngidi 2006; Tochel et al. 2009; Buckley et al. 2010). The collections may contain a variety of material that educators and/or students regard as significant and appropriate for assessment purposes, for example, written texts, concept maps, case reports, video recordings, reflective journal entries, attendance records, and remediation plans. The strength of portfolios lies in their being student-driven and student-maintained, thereby facilitating self-reflection, self-study, self-awareness, and lifelong learning (Ngidi 2006; Lee et al. 2007; Buckley et al. 2010). Another strength of portfolios is that educator feedback can assist students to identify and analyze their learning needs (“gaps”) in order to take appropriate action toward improving their knowledge, skills, conduct, and levels of understanding (Tochel et al. 2009; Buckley et al. 2010). On the negative side, completing and assessing portfolios is often time-consuming and burdensome for both students and educators, which may actually inhibit students’ authentic reflection activities. As a result, some students view the summative assessment aspect as the most important motivation to engage in the significant effort of compiling a meaningful portfolio (Tochel et al. 2009; Buckley et al. 2010). The complex nature of portfolios renders them susceptible to low inter-rater reliability, which can be countered by offering proper training to assessors and developing clear assessment criteria (Tochel et al. 2009). In addition, educators should avoid relying solely on quantitative rating scales to assess portfolio content, as these may not adequately represent students’ learning activities. Some authors even suggest that portfolios should not be used for summative assessment at all but should rather be used in formative contexts to facilitate qualitative development in ethical judgments (Tochel et al. 2009).
Oratory-Oriented Methods.
The oratory-oriented methods are characterized by the requirement for students to provide some form of verbal response and/or input to address specific learning outcomes. The most important oratory-oriented methods include individual interviews, seminars, Socratic debate, and questioning.
Individual interviews, including oral defenses, are often used in summative assessment when posing hypothetical moral dilemmas or simulated cases to students, combined with probing questions. However, this method is time-intensive and potentially difficult to score (Carlin et al. 2011). An advantage of the method is that each student’s ethical judgment and critical thinking skills can be assessed in a context that allows them to duly demonstrate their ability to identify and integrate ethical principles.
Seminars involve verbal presentations dealing with specific cases, themes, or topics; they are presented by individual students or groups to an audience of peers and/or educators. The presentations are frequently supported and enhanced by one or more audiovisual aids. An indirect advantage is that seminars facilitate the development of verbal communication skills (Biggs 2003). Seminars are often used in formative assessment, with scoring predominantly being done by means of rubrics.
A Socratic debate involves a structured exchange of ideas or arguments between two individuals or groups who present opposing views, usually in the form of an affirmative position and a negative position, about a specific topic or moral dilemma. It combines reasoning, critical thinking, and listening skills. As such, it is well suited to be used in formative assessment when the debate is followed by a group discussion and/or reflective journaling.
Questioning refers predominantly to educators asking probing questions in a learning environment to elicit responses and comments from a group of students about ethical issues. The primary aim of questioning is to allow students to explore, express, and discuss their ideas, opinions, and insight regarding the course content. An additional formative component involves the new information and/or insight that students may gain from the responses of others during the discussion in an iterative cycle of learning (Taras 2009).
Arts-Based Methods.
The arts-based methods are characterized by their application of artistic elements to express, represent, and/or develop assessment tasks linked to specific learning outcomes, especially when accompanied by verbal explanations to peers and/or reflective journaling on their personal meaning (Hargreaves 2005). As such, they are well suited for formative assessment. Image-based methods include collages, posters, paintings, drawings, cartoons, concept maps, and video productions (Biggs 2003). Performance-based methods include drama, role-plays, song, and poetry (Ngidi 2006; Lee et al. 2007).
Other Methods.
Lecture attendance records are sometimes used as the only “assessment” method, primarily in the case of short courses or single-lecture courses that merely require attendance for students to earn the required course credits (DuBois and Burkemper 2002; Lee et al. 2007). The primary aim of these learning activities is presumably to raise ethics awareness among the students, while the most important shortcoming is that it “assesses” mere passive attendance without any attention to the assessment of other learning outcomes that may have been formulated for the course.
Direct observations of students’ conduct and interactions in real-life or simulated scenarios allow them to demonstrate and practice specific skills and behaviors against the set assessment criteria. Educators or peers can score the observations by means of observation rubrics or checklists, even though inter-rater reliability may be low due to differences in the interpretation of students’ behavior during the assessment task. However, observation rubrics can be effectively used to provide feedback to students regarding their skills development (Ngidi 2006; Lee et al. 2007).
Rating Scales
Rating scales can be broadly categorized into numerical (quantitative) scales and descriptive (qualitative) scales (Ngidi 2006). Ideally, a combination of quantitative scoring and qualitative rating, i.e., scoring triangulation, should be employed for the various assessment tasks in a course (Davis 2012). However, from a practical point of view, as the number of students in a course increases, educators tend to focus more on quantitative assessment, especially since it is often considerably less time-consuming than qualitative assessment (Davis 2012). It is also important to note that the more educators use assessment to grade student performance quantitatively, and by implication provide less qualitative feedback, the more students tend to focus on scoring the minimum numerical marks required to pass the course rather than on the course’s learning and development aspects (Boud and Falchikov 2006).
Quantitative Rating Scales may sometimes merely require educators to allocate a pass/fail score (i.e., ordinal data) according to whether the evidence presented by students in the assessment tasks meets or does not meet the set assessment criteria; in some cases pass/fail may even depend only on the students’ attendance record (DuBois and Burkemper 2002). Arguably, the most widely used scoring system requires educators to allocate numerical marks (i.e., ratio data) to specific question-and-answer items according to their level of factual correctness when compared to a preformulated memorandum; ultimately, the students’ final score must equal or exceed a preset numerical value to pass the assessment.
Rubrics are scoring instruments that consist of one or more sets of criteria, each of which focuses on specific aspects and/or components of the assessment task. An advantage of rubrics is that they clearly and unambiguously indicate and describe to educators and students the course content, themes, activities, and assessment criteria linked to the core learning outcomes. In a formative context, rubrics facilitate focused feedback to students regarding areas of competence/growth and not-yet-achieved learning outcomes (Ngidi 2006). Essentially, rubrics are usually scored along a continuum of four or five scoring categories (ordinal data), from “no evidence of meeting the assessment criterion (i.e., total non-compliance)” to “full evidence of meeting the assessment criterion (i.e., full proficiency)” (Carlin et al. 2011). However, the use of only four to five scoring categories is sometimes criticized for being too simplistic in complex cases that require more detailed response discrimination (Carlin et al. 2011). As such, good rubrics are characterized by their ability to evaluate assessment tasks objectively, by being reliable and valid, and by being adaptable to assess a variety of student products and activities (e.g., open-ended responses in written tests, online discussions, group discussions, and individual oral presentations) (Lee et al. 2007; Carlin et al. 2011). Lastly, the versatility of rubrics is evident in that they can be used for formative purposes (e.g., as part of assignment instructions) or for summative assessment purposes (e.g., final scoring of assignments) (Carlin et al. 2011).
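As a minimal sketch, assuming a hypothetical three-criterion rubric with four ordinal scoring categories (the criteria, level labels, and pass threshold below are invented for illustration and are not taken from the cited sources), per-criterion ratings can be converted into a total score and a provisional pass/fail decision as follows:

```python
# Hypothetical rubric: each criterion is scored on a four-level ordinal scale.
LEVELS = {"no evidence": 0, "partial evidence": 1, "substantial evidence": 2, "full evidence": 3}

RUBRIC_CRITERIA = [
    "Identifies the ethical issue in context",
    "Applies relevant ethical principles",
    "Justifies a reasoned course of action",
]

def score_rubric(ratings, pass_fraction=0.5):
    """Convert per-criterion level labels into a total score and a pass/fail decision."""
    points = {criterion: LEVELS[ratings[criterion]] for criterion in RUBRIC_CRITERIA}
    total = sum(points.values())
    maximum = (len(LEVELS) - 1) * len(RUBRIC_CRITERIA)
    return total, maximum, total >= pass_fraction * maximum

# One student's (hypothetical) ratings.
ratings = {
    "Identifies the ethical issue in context": "full evidence",
    "Applies relevant ethical principles": "substantial evidence",
    "Justifies a reasoned course of action": "partial evidence",
}
total, maximum, passed = score_rubric(ratings)
print(f"{total}/{maximum} - {'pass' if passed else 'not yet competent'}")
```

In practice such a numerical conversion would supplement, rather than replace, the qualitative feedback attached to each criterion.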
Qualitative Feedback is a core element of formative assessment. It involves the evaluation of students’ critical thinking and decision-making abilities as expressed in case study discussions, open-ended responses in written tests, essays, reflective journals, portfolios, interviews, seminars, questioning, and arts-based activities (Canary and Herkert 2012).
Assessors
Traditionally, assessment was the sole responsibility of educators; it was up to them to decide and rate whether students had indeed mastered the subject matter. Nowadays, a more holistic, valid, and reliable approach is used in educational settings. Three assessor groups can be identified, each of which contributes significantly to the assessment of learning outcome attainment. It is of paramount importance that both the assessors and the assessment methods focus on the development and progress of students so that no barriers to further learning are inadvertently created. In addition, assessor objectivity is essential to effective assessment, as is transparency. Assessor subjectivity can be reduced by using specific strategies: masking the identities of students from the assessor during assessment task rating, using moderators to verify assessor objectivity in the choice/range of answers identified as correct, using valid grading rubrics, and using multiple graders (Davis 2012). In any assessment activity, it is always important to define the different roles of the assessor and the assessee before an assessment instrument is administered.
Educator-Assessors.
Educators, as the subject experts in the educational context, have the overall responsibility to assess the development and growth of students in attaining the expected learning outcomes. Educator-assessors are specifically responsible for designing, understanding, applying, and managing all kinds of summative and formative assessment tools, keeping detailed records, and aligning assessment tools with learning outcomes (Hargreaves 2005; Harlen 2005; Taras 2009).
Peer-Assessors.
The aim of peer assessment is to determine whether students can apply their knowledge, set against the understanding of a group with the same exposure. As such, peer assessment is a process of involving students in determining each other’s attainment, performance, and achievements against clearly defined learning outcomes. It may provide unique and valuable insights into student behavior and interactions that might not be readily observable by educators (Lee et al. 2007). The fundamental principle of peer assessment is that students can only fully achieve formative assessment goals once they have developed the ability to apply the assessment process within their own learning environment, thereby internalizing the subject matter into a cognitive structure of understanding it themselves. According to Taras (2009), peer assessment is a prerequisite for self-assessment, as it allows students to learn by performing and experiencing the educator role. Students who act as peer-assessors may be encouraged to shift their focus from summative assessment marks alone toward a deeper understanding of the reasons they achieved, or fell short of, specific learning outcomes.
Peer assessment may involve formal and informal components. Informal verbal comments and ideas from other students in the learning context can significantly help peer-assessors to rethink and reassess their knowledge, insight, and skills. Formal assessment refers to situations where students accept the role of a peer-assessor who uses a rubric to evaluate others’ contributions to a task and their level of cooperation in a group work context (Ngidi 2006; Taras 2009). The value of peer assessment is linked to students’ understanding of group dynamics and of group work toward reaching or achieving a common goal. In the context of group work, peer assessment focuses specifically on assessing social skills, time management, resource management, group dynamics, and group work outputs. The assessment criteria are linked to evidence of group cooperation, mutual assistance, work division, and individual contribution to group success; in other words, it includes group processes and group products (Biggs 2003).
Self-Assessors.
Self-assessment refers to the process in which students assess their own learning against the desired learning outcomes and assessment criteria; it assists students to decide what they need to do to improve and grow toward full competence. For example, self-assessment occurs when educators ask students to identify their best assignment or argument concerning a specific topic and to state the reason(s) for the selection. It is therefore very useful in evaluating individual values and attitudes, which in turn empowers students to accept greater responsibility for their learning endeavors (Biggs 2003; Hargreaves 2005; Lee et al. 2007). According to Taras (2009), self-assessment is technically not formative but rather summative in nature, as students are not always in a position to identify their own weaknesses and shortcomings.
In addition to the use of questioning to extract assessment information from students and to keep them engaged in the learning environment (Taras 2009), educators can assist students to take more responsibility for their own learning by becoming effective questioners themselves and by engaging in Socratic debates about various ethical issues. Although the Socratic debate usually involves active interaction with others, a similar approach can be used in self-assessment by asking and reflecting on questions such as the following: What were my opinions and viewpoints at the onset of this course? How have my opinions and insights changed? How can I apply these changed opinions and insights in the future?
Conclusion
Ethics education strives to raise students’ ethical self-awareness, ethical sensitivity, ethical issue recognition, insight into different ethical perspectives, ethical decision-making, and application of ethical principles (Carlin et al. 2011). However, it remains difficult to assess whether the various learning activities actually result in long-term changes in students’ thinking, opinions, skills, and behavior. Multidimensional and long-term changes cannot be instilled by virtue of a single course; the focus should rather be to strive for changes in perspective and moral commitment. This does not imply that meaningful ethics education assessment is not possible, as current assessment principles and practices can indeed establish, to a greater or lesser degree, whether students have internalized the intellectual skills and tools to take a reasoned and independent stand on ethical issues within specific settings (Føllesdal 2003).
Bibliography
- Biggs, J. (2003). Aligning teaching and assessing to course objectives. In Teaching and learning in higher education: New trends and innovations, University of Aveiro, Portugal, 13–17 Apr 2003.
- Boud, D., & Falchikov, N. (2006). Aligning assessment with long-term learning. Assessment and Evaluation in Higher Education, 31(4), 399–413.
- Buckley, S., Coleman, J., & Khan, K. (2010). Best evidence on the educational effects of undergraduate portfolios. The Clinical Teacher, 7, 187–191.
- Canary, H. E., & Herkert, J. R. (2012). Assessing ethics education in programs and centres – Challenges and strategies. In F. F. Benya, C. H. Fletcher, & R. D. Hollander (Eds.), Practical guidance on science and engineering ethics education for instructors and administrators, workshop (pp. 38–43). Washington, DC: National Academy of Engineering, National Academies Press.
- Carlin, N., Rozmus, C., Spike, J., Willcockson, I., Seifert, W., Chappell, C., Hsieh, P., Cole, T., Flaitz, C., Engebretson, J., Lunstroth, R., Amos, C., & Boutwell, B. (2011). The health professional ethics rubric: Practical assessment in ethics education for health professional schools. Journal of Academic Ethics, 9, 277–290.
- Davis, M. (2012). Instructional assessment in the classroom – Objectives, methods, and outcomes. In F. F. Benya, C. H. Fletcher, & R. D. Hollander (Eds.), Practical guidance on science and engineering ethics education for instructors and administrators, workshop (pp. 29–37). Washington, DC: National Academy of Engineering, National Academies Press.
- DuBois, J. M., & Burkemper, J. (2002). Ethics education in US medical schools: A study of syllabi. Academic Medicine, 77(5), 432–437.
- Føllesdal, D. (2003). The teaching of ethics. In UNESCO World Commission on the Ethics of Scientific Knowledge and Technology (COMEST), proceedings of the 3rd session, Rio de Janeiro, 1–4 December 2003.
- Garrett, J. R. (2014). Two agendas for bioethics: Critique and integration. Bioethics. doi:10.1111/bioe.12116.
- Hargreaves, E. (2005). Assessment for learning? Thinking outside the (black) box. Cambridge Journal of Education, 35(2), 213–224.
- Harlen, W. (2005). Teachers’ summative practices and assessment for learning – Tensions and synergies. The Curriculum Journal, 16(2), 207–223.
- Lee, A. G., Beaver, H. A., Boldt, H. C., Olson, R., Oetting, T. A., Abramoff, M., & Carter, K. (2007). Teaching and assessing professionalism in ophthalmology residency training programs. Survey of Ophthalmology, 52(3), 300–314.
- Ngidi, T.Z.N. (2006). Educators’ implementation of assessment in outcomes-based education. Doctor of education thesis, Department of Curriculum and Instructional Studies, University of Zululand.
- Taras, M. (2009). Summative assessment: The missing link for formative assessment. Journal of Further and Higher Education, 33(1), 57–69.
- Tochel, C., Haig, A., Hesketh, A., Cadzow, A., Beggs, K., Colthart, I., & Peacock, H. (2009). The effectiveness of portfolios for post-graduate assessment and education: BEME guide No 12. Medical Teacher, 31, 299–318.
- Beets, P. A. D. (2012). Strengthening morality and ethics in educational assessment through ubuntu in South Africa. Educational Philosophy and Theory, 44(S2), 68–83.
- Miles, S. H., Lane, L. W., Bickel, J., Walker, R. M., & Cassel, C. K. (1989). Medical ethics education: Coming of age. Academic Medicine, 64(12), 705–714.
- Prinsloo, F., & Van Rooyen, M. (2003). Outcomes-based assessment facilitated. Cambridge: Cambridge University Press.