Institutional Assessment: Artifact Collection Process



One way of telling the student learning and student success narrative is to collect artifacts (course assignments) that directly reflect the General Education competencies. Collecting artifacts across courses, sections, and programs offers an inclusive curriculum narrative grounded in evidence, describing student learning and demonstrating competency attainment.

A learning artifact (or educational artifact) is an object created by a student during the course of instruction. Artifacts are intentional and are often paired with course student learning outcomes, which in turn connect directly to competency attainment. This alignment is essential when using artifacts as evidence of general competencies across a student's academic experience. The Department of Education describes artifacts as a form of evidence that educators can use to tell the story of their classrooms and highlight their instructional practices. An artifact demonstrates what students know, or are able to do, as a result of the spaces created for learning. Since our goal is to build narratives demonstrating student learning and competency attainment, we ask faculty to submit artifacts as evidence. An artifact can be a written assignment, a presentation, a demonstration, a visual board, a portfolio, etc.


Artifacts That Can Be Scored vs. Artifacts That Are Difficult to Score

Artifacts that can be scored:

  • The artifact aligns with at least 80% of the General Education competency Student Learning Outcomes (SLOs).
  • The artifact aligns with the GenEd competency rubric used to score artifacts.
  • The artifact's content demonstrates skills specific to the rubric.
  • The submission meets the criteria listed in the dynamic form.
  • Personal information and assignment information are easy to redact.

Artifacts that are difficult to score:

  • The artifact does not align well with the competency SLOs.
  • The artifact does not align well with the GenEd competency rubric designed to measure skills.
  • The artifact is too short, or contains too little student work, to apply the full rubric.
  • The writing is illegible.
  • The assignment is split across too many separate files.
  • The files could not be opened.
  • Faculty comments are visible in the artifact.
  • The assignment instructions are embedded in the student work.
  • Identifiers cannot be redacted.

The importance of artifact collection, and the results behind the effort, must be well known across the institution. The submission process must be clear and readily accessible to faculty as they prepare their course curriculum. The effort includes all members committed to student learning and the effectiveness of the academic curriculum. Bristol Community College assesses the process yearly and, through the Lash Center for Teaching and Learning, makes recommendations for building a robust and inclusive collection, with the aim of building a Culture of Evidence through Assessment.

Making Assessable Artifacts: A Quick Guide, by Will Duffy, Special Programs Coordinator, Center for Teaching and Learning


The sampling process adopted by each institution should demonstrate efforts to create a representative sample of students from whom student work products will be collected. Such a sample should reflect the general characteristics of the eligible student population with respect to gender, race/ethnicity, major or program of study, Pell eligibility, and age. These were the student characteristics identified and endorsed by the Multi-State Collaborative (MSC) members (MSC to Advance Quality Student Learning, Refinement Year, Steering Committee & State Point People, 2017).

The ability to score a sample of 50-75 student work products depends on the participation and willingness of faculty members to engage in this work. Submissions should be well above this number, given the scorability of any particular artifact and the sampling criteria indicated below.


  • Students enrolled in an associate’s degree program (excluding those who already hold a degree)
  • Students may be either full-time or part-time.
  • Credits completed may have been earned at the participating institution or may have been transferred into the institution from any other regionally accredited two-year or four-year public or private institution within or outside of the state. (We do look at both total earned credits and Bristol earned credits)
  • Courses at any level are eligible.

Sampling (for Lash CTL & SA&ES)

  • Student artifacts should be drawn from students majoring across a variety of disciplinary areas.
  • Limit: 10 artifacts collected in total, not per outcome, from any one faculty member or any one course. This number will likely decrease when participation across a variety of courses increases.
  • Limit: one artifact per student.
  • Limit: One outcome per artifact (Artifacts listed under two or more competencies for that year will need to be categorized under one competency for scoring purposes).
  • Limit: submitted student artifacts per faculty member across multiple courses. When cleaning the data, if one faculty member submitted artifacts from three sections of one course, a random sample from each section (up to 10 artifacts in total) can be used.
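As a minimal sketch, the limits above could be applied programmatically during data cleaning. The record keys used here (`student_id`, `faculty_id`, `course`) are hypothetical, not fields from the actual submission form:

```python
import random
from collections import defaultdict

def sample_artifacts(submissions, per_limit=10, seed=0):
    """Apply the sampling limits to a list of submission records.

    Each record is a dict with hypothetical keys 'student_id',
    'faculty_id', and 'course'. Enforced limits: one artifact per
    student, and at most per_limit artifacts from any one faculty
    member or any one course.
    """
    pool = list(submissions)
    random.Random(seed).shuffle(pool)  # random order -> random sample

    seen_students = set()
    per_faculty = defaultdict(int)
    per_course = defaultdict(int)
    sample = []
    for rec in pool:
        if rec['student_id'] in seen_students:
            continue  # limit: one artifact per student
        if per_faculty[rec['faculty_id']] >= per_limit:
            continue  # limit: 10 artifacts per faculty member
        if per_course[rec['course']] >= per_limit:
            continue  # limit: 10 artifacts per course
        sample.append(rec)
        seen_students.add(rec['student_id'])
        per_faculty[rec['faculty_id']] += 1
        per_course[rec['course']] += 1
    return sample
```

The fixed seed keeps the random sample reproducible, so the same cleaning run can be repeated and audited.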

In General

While artifacts are reviewed and screened before scoring, an artifact may still prove too difficult to score. Often this occurs when the content is handwritten, the PDF is too blurry, the text describing the assignment cannot be distinguished from the student's response, or the artifact and rubric are misaligned. There may also simply not be enough information to score the artifact. In any of these cases, the artifact should not be scored.

Additionally, reviewers may find that an artifact's content does not align with the rubric. In that case, reviewers are instructed not to score it but to note it as "not aligned with the rubric." Reviewers should not assign a score of zero simply because the artifact does not align with the rubric.

We gather artifacts from students at different points along their pathway; they may have completed 12 credits or 48 credits. If scores are low, that may be appropriate given where the student currently is in their education.


Rooted in campus collaboration and in faculty curriculum development, teaching activity, and assessment of authentic student work, the rubric is based on the use of Essential Learning Outcomes and associated VALUE rubrics developed by faculty members under the auspices of AAC&U’s LEAP initiative.  Each rubric underwent multiple revisions based upon feedback provided through campus testing of rubrics against samples of student work. We are entering an evaluation period for each rubric given our first launch date in 2016.

There are four dimensions (achievements) represented in each rubric. Each achievement is aligned with the Learning Outcomes associated with the General Education competency, and a row under each achievement outlines what students should know or be able to demonstrate. Scores are based on using the rubric to measure the student artifact.

  • The student work will be evaluated both holistically (one overall score) and analytically (one score for each dimension) against each learning outcome and corresponding rubric.  
  • At times, reviewers believe that other competencies are being demonstrated. They may share those in the appropriate note section for that particular artifact. 
  • Often a group will score artifacts individually and then come together to compare scores, discussing any significant differences in the group members' evaluations of the artifact.
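The individual-then-group scoring step above could be supported by a small check that flags dimensions where reviewers disagree enough to warrant discussion. The data structure and the one-point threshold here are illustrative assumptions, not part of the documented process:

```python
def flag_discrepancies(scores_by_reviewer, threshold=1):
    """Return the rubric dimensions whose reviewer scores differ by
    more than `threshold` points.

    scores_by_reviewer maps a reviewer name to a dict of
    dimension -> analytic score (a hypothetical structure).
    """
    dimensions = next(iter(scores_by_reviewer.values())).keys()
    flagged = []
    for dim in dimensions:
        values = [scores[dim] for scores in scores_by_reviewer.values()]
        if max(values) - min(values) > threshold:
            flagged.append(dim)  # significant disagreement -> discuss
    return flagged
```

For example, if one reviewer scores a dimension 3 and another scores it 1, that dimension would be flagged for the group discussion; dimensions where everyone is within a point of each other would not.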

The Exclusion of Assignment Instructions from Artifact Scoring:

The following bullets explain why scorers will not have the assignment instructions; the practice emerged from a pilot test conducted by the Multi-State Collaborative (from which the artifact collection process itself emerged).

  • In general, scoring with the assignment instructions tends to move faculty toward grading the paper based on whether the student followed the instructions and met the assignment requirements, rather than assessing whether the student demonstrated the learning outcome and at what level. Faculty reviewers also often begin to evaluate the assignment instructions themselves (informally), which tends to bias their scoring. In addition, reviewers may try to ascertain whether the faculty member asked students to demonstrate a certain dimension in the instructions, which also influences scoring.
  • Faculty Reviewers will assess the student work against all rubric dimensions/criteria. Students may demonstrate dimensions/criteria of the learning outcome that the instructor was not necessarily looking for or calling for in this specific assignment. The student may demonstrate this dimension because of learning in previous courses or experiences.


General Education Artifact Collection Timeline

Past General Education Artifact Scores (Bristol Community College Restriction)

Artifact Submission Form

General Education Course Designation Review Cycle

Course evolution occurs with the integration of new pedagogy and practice, which may create the need to review course designations. As a result, a review of the following year's GenEd competency artifact collection will take place each year.

Academic Year 22-23: Multicultural and Social Perspectives; Ethical Dimensions
Academic Year 23-24: Critical Thinking; Human Expression
Academic Year 24-25: Written Communication; Oral Communication
Academic Year 25-26: Scientific Reasoning and Discovery; Information Literacy
Academic Year 26-27: Global and Historic Awareness; Quantitative and Symbolic Reasoning