Computer Science Resources

Teaching and Leading Beyond Boundaries

Student Learning Objectives Project




SLO Project
APA-formatted Paper
(PDF in new window)

Welcome to my Student Learning Objectives (SLO) project!

This project was conducted during my student teaching internship, and the student population for the study was the AP Computer Science Principles class I was teaching. This worked out well because the school where I was teaching had a School Improvement Plan (SIP) focused on improving the performance of students taking AP exams: the school wanted to ensure that any student who took an AP course would be prepared to take the corresponding AP exam at the end of the year.

AP Computer Science Principles differs from the other AP courses offered by the College Board mostly in that it is very new, having only been introduced last year. Since then, the exam and scoring rubrics have already changed, meaning teachers could be teaching material that is no longer on the exam. To mitigate this issue, my SLO project focused on traditional learning strategies that can be used in Computer Science courses to guide instruction so that any gaps can be filled.

Please click on the links to the left to navigate through this presentation.

I hope you find this information useful!

Sincerely,

Anthony Pieper

Description of Study

The baseline data came from the Unit 1 assessment, which was administered in a format similar to the AP exam: a multiple-choice exam followed by a performance task that required critical thinking and analysis. Eleven students scored 68% or lower on the multiple-choice exam (with 52% being the lowest score), and seven students scored 60% or lower on the performance task (with 50% being the lowest score). Looking at student performance on both parts as a whole:

  • 7 students scored 70% or better on both parts of the exam
  • 12 students scored 68% or less on one part of the exam
  • 3 students scored 68% or less on both parts of the exam

Throughout the AP Computer Science Principles course, students take similar assessments at the end of every unit as preparation for the AP exam. Based on the results of the Unit 1 assessment, two growth targets were set and then evaluated on the Unit 3 assessment: a Growth to Mastery target of 90% on the performance task, and a Common Growth target of 20% on the multiple-choice exam.
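As a rough illustration, the two growth targets can be expressed as simple checks against a student's scores. This is only a sketch with hypothetical numbers, and it assumes the Common Growth target means a 20-percentage-point increase over the Unit 1 baseline:

```python
def met_common_growth(unit1_mc, unit3_mc, target=20):
    """Common Growth: multiple-choice score rose by at least `target` points
    from the Unit 1 baseline. (Assumes the 20% target is percentage points.)"""
    return unit3_mc - unit1_mc >= target

def met_growth_to_mastery(unit3_pt, mastery=90):
    """Growth to Mastery: Unit 3 performance task score reached the 90% bar."""
    return unit3_pt >= mastery

# Hypothetical student: 60% on the Unit 1 multiple-choice exam,
# 84% on the Unit 3 multiple-choice exam, 92% on the Unit 3 performance task.
print(met_common_growth(60, 84))    # True: improved by 24 points
print(met_growth_to_mastery(92))    # True: 92 >= 90
```
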

The period of the SLO project ran throughout Units 2 and 3 of the AP Computer Science Principles course, which took approximately 6 weeks to complete. The topics students learned during this period aligned to the following curriculum standards:

  • CSTA K-12 Computer Science Standards - Networks & the Internet
  • CSTA K-12 Computer Science Standards - Data & Analysis

Procedures

The baseline data for this SLO project came from the Unit 1 assessments at the end of the first month of the AP Computer Science Principles class; the SLO project itself took place during Units 2 and 3, covering a period of approximately six weeks. A multiple-choice unit exam and performance task were given at the end of week 3, and another pair at the end of week 6. Formative assessments called Quick Quizzes were administered as lesson warm-ups in weeks 2 and 5. These gave students a preview of the types of questions they would see on the multiple-choice exams, and gave the teacher feedback on what gaps were present in student knowledge. Many questions on the multiple-choice exams administered in weeks 3 and 6 were rewritten to ensure they aligned with the lesson content that was taught.

  • Formative Assessments - Weeks 2 & 5
  • Unit Assessments - Weeks 3 & 6

To help students prepare for the performance tasks in weeks 3 and 6, clear objectives for those assessments were given to students in weeks 1 and 4. Each lesson leading up to the performance tasks had higher-level learning objectives written on the whiteboard, and students were reminded of how those learning objectives related to the performance task objectives. To reinforce the learning objectives, students were directed to use a specific Computer Aided Instruction (CAI) system at home to complete homework assignments. This CAI (https://studio.code.org) maps to the curriculum of the AP Computer Science Principles course and is continuously updated to reflect changes in the AP exam. Any questions the students had about the CAI or the homework results were addressed by the teacher in class the following lesson.

  • Early Presentation of Performance Task Objectives - Weeks 1 & 4
  • Clear Lesson Objectives Associated to Performance Tasks - Every Lesson
  • Computer Aided Instruction as Homework - Every Lesson

Data and Results

Compared to the baseline data from the Unit 1 multiple-choice exam and performance task, the goal was a 20% improvement per student on the Unit 3 multiple-choice exam, and a score of 90% or higher for all students on the Unit 3 performance task. These goals were not met: only 8 students achieved the Common Growth target of a 20% increase on the multiple-choice exam, and only 12 students achieved the Growth to Mastery target of 90% or higher on the performance task. Even though the goals were not met, there was still evidence of significant learning:

  • 15 students improved their multiple-choice exam scores, and 15 students improved their performance task scores, with 12 students improving both
  • Students who improved their multiple-choice exam scores averaged a 19% improvement, and students who improved their performance task scores averaged a 25% improvement
  • Only 2 students scored less than 70% on the multiple-choice exam, with none scoring less than 60%

See the table below for specific details regarding assessment scores. Please note that certain fields are marked in orange to identify them as outliers: either the student had 100% and showed no improvement, or the student failed to participate in one of the assessments (resulting in either 0% or 100% improvement).
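The outlier rule described above can be sketched as a small check. Note that encoding a missed assessment as a 0% score is my assumption for illustration; the actual data coding may differ:

```python
def is_outlier(baseline, final):
    """Flag a score pair as an outlier per the rule above: the student either
    started at 100% (no improvement possible) or missed one assessment,
    which would produce a meaningless 0% or 100% "improvement".
    Assumes a missed assessment is recorded as a score of 0."""
    missed_one = baseline == 0 or final == 0
    perfect_baseline = baseline == 100
    return missed_one or perfect_baseline

print(is_outlier(100, 100))  # True: no room to improve
print(is_outlier(0, 85))     # True: missed the baseline assessment
print(is_outlier(68, 88))    # False: a genuine improvement
```
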

SLO Data

Implications

Using formative assessments to identify gaps was a crucial part of this SLO project. In one instance in Unit 2, students performed poorly on the Quick Quiz, which revealed that many students were failing to make conceptual connections between what was taught in lesson activities and what would be assessed on the Unit 2 exam. Handouts were created to fill in some of the conceptual gaps, and part of a lesson prior to the exam was used to ensure that students truly understood the concepts that would be covered on the exam.

  • Formative Assessments Identified Gaps

Clear lesson objectives, and how they relate to assessments, were also critical to this SLO project, and are probably the main reason there were no failures on the Unit 2 or Unit 3 performance tasks compared to the six failures on the Unit 1 performance task. The main objectives of the Unit 2 and Unit 3 performance tasks were reiterated throughout the lessons in those units, and every time a new example was used, students were reminded of the performance task objectives and how they related to the current lesson's objectives. This exposed students to several different examples of each concept before they were assessed, giving them multiple ways of demonstrating mastery of the objective.

  • Clear Lesson Objectives Associated to Performance Tasks Reduced Failures

Although this study failed to achieve its goals, one change that could improve results in the future is spreading the unit exams out further, to at least 4 weeks apart rather than the 3-week spacing used in this study. The formative assessments in the preceding week provided valuable insights, but there was no time to adjust lessons between those assessments and the unit exams. This may explain why most, but not all, students showed improvement. An additional week between the formative assessments and the unit exams would allow more time to differentiate instruction so that all students could increase their performance.

  • More Time Between Assessments To Differentiate Instruction

References

Below are the references that were used in drafting my SLO Project, which included a Literature Review of studies covering instructional strategies relating to Computer Science. Please note that some of these were retrieved from online libraries that may not be publicly accessible!

Code.org (n.d.). Computer Science Principles. Retrieved from https://code.org/educate/csp

Cox, K., & Clark, D. (1998). The use of formative quizzes for deep learning. Computers & Education, 30(3), 157-167. Retrieved from https://doi-org.ezproxy.umuc.edu/10.1016/S0360-1315(97)00054-7

Goel, S., & Sharda, N. (2004, September 27). What do engineers want? Examining engineering education through Bloom's taxonomy. Presentation from AAEE, The 15th Annual Conference for the Australasian Association for Engineering Education. Retrieved from http://files.eric.ed.gov/fulltext/ED524509.pdf

Jagger, S. (2013). Affective learning and the classroom debate. Innovations in Education and Teaching International, 50(1), 38-50. Retrieved from http://eds.b.ebscohost.com.ezproxy.umuc.edu/eds/pdfviewer/pdfviewer?vid=0&sid=fb84a720-b5c7-4485-bd2b-01714f7cf8d6%40sessionmgr104

Kausar, T., Choudhry, B. N., & Gujjar, A. A. (2008). A comparative study to evaluate the effectiveness of computer assisted instruction (CAI) versus class room lecture (CRL) for computer science at ICS level. Turkish Online Journal of Educational Technology, 7(4), 19-28. Retrieved from http://files.eric.ed.gov/fulltext/EJ1102933.pdf

Loria-Saenz, C. (2009). On requirements for programming exercises from an e-learning perspective. Cornell University Library: ArXiv E-Prints. Retrieved from https://arxiv.org/pdf/0903.0786

Machanick, P. (2007). Teaching Java backwards. Computers & Education, 48(3), 396-408. Retrieved from https://doi-org.ezproxy.umuc.edu/10.1016/j.compedu.2005.01.009

Mathumbu, D., Rauscher, W., & Braun, M. (2014). Knowledge and cognitive process dimensions of technology teachers’ lesson objectives. South African Journal of Education, 34(3), 1-9. http://dx.doi.org/10.15700/201409161053

The College Board (n.d.). AP computer science principles: The exam. AP Central. Retrieved from https://apcentral.collegeboard.org/courses/ap-computer-science-principles/exam?course=ap-computer-science-principles

Wankhede, H. S., Gandhi, S. S., & Kiwelekar, A. W. (2016). On which skills do Indian universities evaluate software engineering students? Cornell University Library: ArXiv E-Prints. Retrieved from https://arxiv.org/pdf/1601.01796

Yen, J., Lee, C., & Chen, I. (2012). The effects of image-based concept mapping on the learning outcomes and cognitive processes of mobile learners. British Journal of Educational Technology, 43(2), 307-320. Retrieved from http://eds.b.ebscohost.com.ezproxy.umuc.edu/eds/pdfviewer/pdfviewer?vid=0&sid=7ef9cf19-d88d-4583-8ef0-b45678900afd%40sessionmgr102

Copyleft 2017 - All HTML and CSS hand-coded by Anthony Pieper.