NAACLS News










National Accrediting
Agency for Clinical
Laboratory Sciences
5600 N River Rd
Suite 720
Rosemont, IL 60018

773.714.8880
773.714.8886 (FAX)

info@naacls.org
http://www.naacls.org











Volume 94 - Winter 2006



President's Report
by Shauna Anderson, PhD, MT(ASCP)C, CLS(NCA)
President, Board of Directors
One of the main purposes and benefits of the accreditation process is to encourage continued improvement in programs. The professions have defined entry-level competencies, and programs have defined student learning outcomes, usually course by course. The only way an instructor or program director can know whether those student learning outcomes have been attained is through assessment.

Program directors and faculty members are responsible for determining both what students should learn and how that learning should be demonstrated. It is the program directors who must provide the leadership, not only communicating how those learning outcomes should be defined but also initiating the development of the assessment tools that will ultimately measure their achievement. When faculty become involved, they are more likely to buy into the process. Joint ownership fosters a feeling of accomplishment and pride rather than a sense that assessment is being imposed as a punitive measure.

As we begin to evaluate student learning, we need to collect evidence that learning has occurred. This assessment can be formative or summative. Formative assessment allows students to gauge where they are in their progress toward mastery and make course corrections if necessary. This type of monitoring promotes self-improvement. Its hallmark is feedback, from instructors, colleagues, and even the students themselves. It is an ongoing effort to enhance the student's performance or a program's effectiveness.

Summative assessment is often comparative: a measurement of progress that justifies a grade or ranking, or of success in meeting a requirement. This type of assessment is not designed to provide feedback during the course of instruction, but it can still play a positive role in shaping courses and programs over time.

Another term applied to assessment is benchmarking, the establishment of a standard of excellence as a goal to be attained. Benchmarking techniques may be applied when two similar programs compare outcomes data. Benchmarking may also occur when a program sets goals (benchmarks) to be achieved within a specified time.

As assessment tools are created, the evaluation of student learning can be accomplished through direct or indirect methods. If the evidence of student learning outcomes takes the form of products or performance, the method is direct; examples include pass rates on certification exams and supervisor ratings. Indirect methods, such as course evaluations, job placement rates, or employer surveys, imply that learning has occurred. These methods provide either qualitative or quantitative data, and both forms of data provide important evidence of student learning outcomes.

Laboratorians are familiar with quantitative data, which consist of numerical evidence. Because numerical evidence is subject to statistical analysis, conclusions about course or program outcomes can be tested for reliability and validity. Quantitative data therefore make comparisons easier to evaluate.

Qualitative data can be just as reliable, objective, and valid as quantitative data. Moreover, they may provide insight into issues and concerns that are not easily assessed by other means. Consider, for example, the personal views expressed in an alumni survey.

As program directors begin to involve faculty in the assessment of student learning outcomes, the use of multiple approaches should be encouraged. The value of stating student learning outcomes for our professions, programs, and courses has long been recognized: students improve their achievement of learning outcomes when they know what is expected. Designing the appropriate assessment tools is our next challenge. It is only through evaluation of the evidence collected that programs will be able to demonstrate accountability and strive for improvement.
 








In This Issue

Change in Timing of the Initial Application Fee
NAACLS Approves Standards for the Clinical Doctorate, by David D. Gale, PhD, Chair, NAACLS Graduate Task Force
CEO's Corner: NAACLS Considers Providing Another Service to Program Directors, by Dianne M. Cearlock, PhD, Chief Executive Officer
President's Report, by Shauna Anderson, PhD, MT(ASCP)C, CLS(NCA), President, Board of Directors
An Invitation to Nominate
Dr. NAACLS: Advice for Accredited and Approved Programs
Dr. Olive Kimball Honored by NAACLS
NAACLS Board of Directors Update, September 2006
NAACLS Approves AAPA Representative Position on Board
NAACLS to Conduct Meetings in April 2007
NAACLS to Present Workshop at CLEC 07
Newly Accredited and Approved Programs, September 2006













