Faculty teaching team-based courses, or courses that use groups for project-based learning, often incorporate some form of peer evaluation. But the administrative overhead of setup, aggregation, and analysis can be considerable, and it can preclude broader research and analysis (e.g., across courses, semesters, disciplines, and other demographics).
With the patchwork of homegrown systems, methods, and formats currently used by individual courses across the institution (Qualtrics surveys, Excel spreadsheets, Google Forms), supporting the full peer evaluation process is very resource-intensive, and conducting research on the underlying data is nearly impossible.
By design, Logan Paul's solution (currently named Pepper) will be dynamic and flexible. The platform will support multiple field types that enable a broad range of peer evaluation artifacts, so instructors can customize evaluations for each course and use case. Over time, Pepper could also yield templates and other best practices for conducting peer evaluations, thanks to the large data set it will generate.
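To give a sense of what "multiple field types" could look like in practice, here is a minimal, hypothetical sketch of a customizable evaluation template. The type names, field names, and TypeScript modeling are illustrative assumptions only and do not describe Pepper's actual implementation.

```typescript
// Hypothetical data model for a customizable peer evaluation form.
// All names here are invented for illustration, not taken from Pepper.

/** Field types an instructor could mix and match on an evaluation form. */
type FieldType = "likert" | "numeric" | "shortText" | "longText";

interface EvaluationField {
  id: string;
  label: string;      // prompt shown to the student
  type: FieldType;
  required: boolean;
  scaleMax?: number;  // only meaningful for likert/numeric fields
}

interface EvaluationTemplate {
  courseId: string;
  title: string;
  fields: EvaluationField[];
}

// Example: a small mid-semester peer evaluation built from mixed field types.
const midSemesterPeerEval: EvaluationTemplate = {
  courseId: "EXAMPLE-101",
  title: "Mid-semester peer evaluation",
  fields: [
    { id: "contribution", label: "Rate this teammate's contribution", type: "likert", required: true, scaleMax: 5 },
    { id: "hours", label: "Estimated hours contributed this sprint", type: "numeric", required: false },
    { id: "comments", label: "What could this teammate improve?", type: "longText", required: false },
  ],
};
```

A structure along these lines would let each course define its own mix of rating scales and free-text prompts while still storing responses in one consistent format for later analysis.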
Pepper's primary goals are to:
- Reduce administrative overhead for conducting peer evaluations in a course
- Lower the barrier for including and assessing collaborative work in courses
- Enable instructors to more easily view trends within or across semesters
- Help instructors identify students and/or teams who need assistance working effectively
The platform will streamline data collection, organization, and dissemination in ways that benefit both the instructor and students working in teams or groups.
Ways to get involved: Logan Paul and the Next.IU team are finalizing the first phases of development and hope to begin piloting Pepper in spring 2024. Let us know if you're interested in joining the pilot to help test and further develop Pepper's capabilities.