This paper presents an educational environment for learning computer programming that integrates an automatic grading tool. The environment offers both summative and formative feedback by evaluating the functionality of computer programs. Given a programming problem, students must write a solution and validate it through the environment's automatic grading tool. Summative feedback is offered through program verdicts, which indicate whether the syntax, semantics and efficiency of the program are correct or present some type of error. Based on these verdicts, a grade is assigned to the solutions proposed by the students. Moreover, the environment also offers formative feedback through different mechanisms that support students in improving their programming solutions before they are submitted for assessment. These mechanisms include: a code editor with syntax highlighting and autocompletion; automatic verification of good programming practices; interactive visualization of program execution from the source code; a partial scoring system, which assigns partial credit to incomplete or partially correct solutions; a test module with custom inputs; support for submitting programs with multiple source code files and projects; support for several programming languages, including Java, Python, JavaScript and C/C++; a manual assessment module; and interactive statistical reports for students and teachers.
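As a rough illustration of the verdict-based grading with partial credit described above (not the actual UNCode implementation; the verdict names and equal-weight scoring rule are assumptions for illustration only), a minimal Python sketch could look like this:

# Illustrative sketch only: maps hypothetical per-test-case verdicts to a grade.
# The verdict names and the equal-weight partial-credit rule are assumptions,
# not the actual UNCode scoring policy.
from typing import List

PASSING_VERDICT = "ACCEPTED"

def partial_grade(verdicts: List[str], max_grade: float = 5.0) -> float:
    """Assign partial credit: fraction of passed test cases times the maximum grade."""
    if not verdicts:
        return 0.0
    passed = sum(1 for v in verdicts if v == PASSING_VERDICT)
    return max_grade * passed / len(verdicts)

# Example: 7 of 10 test cases pass -> 3.5 out of 5.0
print(partial_grade(["ACCEPTED"] * 7 + ["WRONG_ANSWER"] * 3))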
The state of the art of tools that support the learning of computer programming comprises learning environments and automatic grading systems. The first group includes basic programming environments, interactive example-based environments, visualizations/animations, and intelligent tutoring systems. However, these systems generally lack grading tools, which could be useful for carrying out formative evaluation processes while students practice. On the other hand, in most cases the automatic grading systems focus only on the assessment of programs (online judges) and are generally adaptations of the evaluation systems used in programming competitions. Although these systems have been shown to be effective in improving students' practical skills, and the competitive aspect has proven to be an excellent motivator, they fall short as formative assessment tools. This is due to the limited feedback given to the student and their lack of integration with content and tools typical of learning environments.
In this context, the educational environment presented in this paper extends an automatic grading tool for computer programs with new functionalities that turn it into a true support system for learning computer programming.
@InProceedings{Restrepo-Calle2018,
author = {Restrepo-Calle, F. and Ram{\'{i}}rez-Echeverry, J.J. and Gonz{\'{a}}lez, F.A.},
title = {{UNCode: Interactive System for Learning and Automatic Evaluation of
Computer Programming Skills}},
series = {10th International Conference on Education and New Learning Technologies},
booktitle = {EDULEARN18 Proceedings},
isbn = {978-84-09-02709-5},
issn = {2340-1117},
doi = {10.21125/edulearn.2018.1632},
url = {http://dx.doi.org/10.21125/edulearn.2018.1632},
publisher = {IATED},
location = {Palma, Spain},
month = {2-4 July, 2018},
year = {2018},
pages = {6888-6898}
}
This article presents a continuous assessment methodology for a computer programming course supported by an automatic assessment tool, applied to the practical programming exercises performed by the students. The interaction between the students and the assessment tool was studied through quantitative analyses. In particular, the solutions proposed by the students (computer programs) were analyzed using the verdicts given by the automatic assessment tool: correct or incorrect solutions. In the case of incorrect solutions, the types of programming errors were studied. Additionally, interaction was also studied by analyzing the students' success rate, that is, the percentage of correct solutions among the total number of attempts (correct and incorrect). Moreover, the relationship between success rate and academic performance was analyzed. Furthermore, this research examines the students' perceptions of the assessment tool through interviews. The results of this study help in understanding the benefits students obtain from, and their perceptions of, the use of an automatic assessment tool in a computer programming course.
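Restated as a formula, the success rate defined in this abstract is:

\[ \text{success rate} = \frac{\text{number of correct submissions}}{\text{number of correct submissions} + \text{number of incorrect submissions}} \times 100\% \]

For example, a student with 12 correct solutions out of 40 total attempts has a success rate of 12/40 × 100% = 30%.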
@article{Restrepo-Calle2019,
author = {Restrepo-Calle, Felipe and {Ram{\'{i}}rez Echeverry}, Jhon J. and
Gonz{\'{a}}lez, Fabio A.},
doi = {10.1002/cae.22058},
issn = {10613773},
journal = {Computer Applications in Engineering Education},
keywords = {automatic assessment,computer programming,computer science
education,continuous assessment},
month = {jan},
number = {1},
pages = {80--89},
title = {{Continuous assessment in a computer programming course supported by a
software tool}},
url = {http://doi.wiley.com/10.1002/cae.22058},
volume = {27},
year = {2019}
}
This article describes the use of an automatic tool that supports both formative and summative evaluation in an introductory computer programming course. The aim was to test the tool in a real scenario and to assess its potential benefits. The participants were 56 students in an engineering school. The students solved different programming tasks with the support of the tool. The tool provides summative evaluation through automatic grading, and formative evaluation through real-time checking of good coding practices, code execution visualisation, testing code with custom inputs, informative feedback on failed test cases, and student performance statistics for self-monitoring of the learning process. Results of the experience show that the students used the tool extensively, indicating high engagement in achieving the learning goals of the tasks. The students perceived the tool as highly useful for debugging programs and practising programming frequently, thanks to the automatic feedback on and testing of solutions to programming assignments. The results offer evidence of the importance of providing comprehensive feedback to help students find mistakes and build knowledge about how to proceed in case of errors. Moreover, the tool shows some potential to help students develop learning strategies, e.g. metacognitive awareness, and to improve their problem-solving skills.
@article{Restrepo-Calle2020,
author = {Restrepo-Calle, Felipe and Ram{\'{i}}rez-Echeverry, Jhon Jairo and
Gonz{\'{a}}lez, Fabio A},
journal = {Global Journal of Engineering Education},
keywords = {automatic assessment,computer programming,computer science
education,formative evaluation,summative evaluation},
number = {3},
pages = {174--185},
title = {{Using an Interactive Software Tool for the Formative and Summative
Evaluation in a Computer Programming Course: an Experience Report}},
volume = {22},
year = {2020}
}
Jupyter notebooks provide an interactive programming environment that allows writing code, text, equations, and multimedia resources. They are widely used as a teaching support tool in computer science and engineering courses. However, manually grading programming assignments in Jupyter notebooks is a challenging task, so using an automatic grader becomes a must. This paper presents the UNCode notebook auto-grader, which offers summative and formative feedback instantaneously. It provides instructors with an easy-to-use grader generator within the platform, without having to deploy a new server. Additionally, we report the experience of employing this tool in two artificial intelligence courses: Introduction to Intelligent Systems and Machine Learning. Several programming activities were carried out using the proposed tool. An analysis of students' interactions with the tool and of the students' perceptions is presented. The results show that the tool was widely used by students to evaluate their tasks, as a large number of submissions were performed. Students mostly expressed positive opinions in their feedback about the auto-grader, highlighting the usefulness of the immediate feedback and the grading code, among other aspects that helped them solve the activities. The results underline the importance of providing clear grading code and formative feedback to help students identify errors and correct them.
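As a hypothetical illustration of the kind of grading code a notebook auto-grader can run against a student submission (not the actual UNCode notebook grader API; the function name and test cases are invented), consider this Python sketch:

# Hypothetical grading cell: checks a student-defined function against known
# cases and reports the fraction of cases passed. Names and cases are illustrative only.
def grade_factorial(student_fn) -> float:
    cases = {0: 1, 1: 1, 5: 120, 10: 3628800}
    passed = 0
    for n, expected in cases.items():
        try:
            if student_fn(n) == expected:
                passed += 1
        except Exception:
            pass  # any exception counts as a failed case
    return passed / len(cases)

# Example: grading a correct submission yields full credit.
import math
print(grade_factorial(math.factorial))  # 1.0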
@article{Gonzalez2021AutomaticCourses,
author = {Gonz{\'{a}}lez-Carrillo, Cristian D. and Restrepo-Calle, Felipe and Ram{\'{i}}rez-Echeverry, Jhon J. and Gonz{\'{a}}lez, Fabio A.},
title = {{Automatic Grading Tool for Jupyter Notebooks in Artificial Intelligence Courses}},
journal = {Sustainability},
volume = {13},
number = {21},
article-number = {12050},
url = {https://www.mdpi.com/2071-1050/13/21/12050},
issn = {2071-1050},
doi = {10.3390/su132112050},
year = {2021}
}
In recent years, learning analytics has emerged as one of the most important fields for the future of education. In the context of programming courses, applications of learning analytics have shown high efficacy in guiding interventions that help to promote better learning methods. However, few investigations have considered complex datasets with heterogeneous groups of students, leaving out differential factors that can influence the learning process. The main objective of this work is to determine the relationships between measurements and metrics of the learning process and the academic performance of computer programming students at the National University of Colombia. We apply a quantitative, non-experimental methodological design, using as the source of information two years of records of students' interactions with an educational platform for automatic grading and feedback used in the course. In total, 38 variables are considered in this work, including the number of submissions, the result of each submission, software metrics, and the usage rates of the tools available in the platform. The results show that the number of submissions, three types of results/verdicts, two verdict rates, and one software metric have a positive correlation with academic performance. Moreover, the runtime error rate and the use of a good-practices verification tool (i.e., a linter) have a negative correlation with the students' final performance.
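A minimal sketch of the kind of correlation analysis described above, assuming tabular per-student data; the column names, example values, and the choice of Spearman correlation are illustrative assumptions, not the study's actual dataset or method:

# Illustrative only: correlating a per-student interaction metric (e.g. number
# of submissions) with the final course grade. Column names and data are invented.
import pandas as pd
from scipy.stats import spearmanr

df = pd.DataFrame({
    "submissions": [12, 40, 25, 60, 8, 33],
    "final_grade": [2.8, 4.1, 3.5, 4.6, 2.5, 3.9],
})

rho, p_value = spearmanr(df["submissions"], df["final_grade"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")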
@InProceedings{Chaparro2021,
author = {Chaparro, Edna and Restrepo-Calle, Felipe and Ram{\'{i}}rez-Echeverry, Jhon Jairo},
title = {{Learning analytics in computer programming courses}},
series = {CEUR Workshop Proceedings},
booktitle = {Proceedings of the LALA'21: IV Latin American Conference on Learning Analytics},
issn = {1613-0073},
url = {https://ceur-ws.org/Vol-3059/paper8.pdf},
publisher = {CEUR-WS},
location = {Arequipa, Perú},
month = {October 19–21},
year = {2021},
pages = {78-87}
}
In this article, the authors present a case study of an intervention in an introductory programming course aimed at improving students' learning experience through the use of three different tools: an interactive document format (Jupyter notebooks), an automatic assessment tool (UNCode), and an education-oriented forum platform (Piazza). The study included 100 students from three sections of the Introduction to Computer Programming course at an engineering school, and analysed both the use of the tools and the students' perception of them. Tool usage was evaluated quantitatively from the statistics reported by the forum and the automatic assessment tool. Student perception was assessed using a survey with closed-ended and open-ended questions. The results of the study show that all three tools had a positive impact on the students' learning experience. In general, the participants found Jupyter notebooks, UNCode and the Piazza forums easy to use, and useful for better understanding the programming problems to be solved.
@article{Ramirez-Echeverry2022,
author = {Ram{\'{i}}rez-Echeverry, Jhon Jairo and Restrepo-Calle, Felipe and
Gonz{\'{a}}lez, Fabio A.},
journal = {Global Journal of Engineering Education},
keywords = {Computer programming, technology-enhanced learning, computer science education},
number = {1},
pages = {65--71},
title = {{A case study in technology-enhanced learning in an introductory computer
programming course}},
volume = {24},
url = {http://www.wiete.com.au/journals/GJEE/Publish/vol24no1/10-Restrepo-Calle-F.pdf},
year = {2022}
}
This article proposes a framework, based on a sequential explanatory mixed-methods design, for the learning analytics domain, intended to enhance the models used to support the success of the learning process and the learner. The framework consists of three main phases: (1) quantitative data analysis; (2) qualitative data analysis; and (3) integration and discussion of results. Furthermore, we illustrate the application of this framework by examining the relationships between learning process metrics and academic performance in the subject of Computer Programming, coupled with a content analysis of students' responses to a questionnaire about their perceptions of their learning experiences in this subject.
@article{Chaparro2023,
author = {Chaparro Amaya, Edna Johanna and Restrepo-Calle, Felipe and Ram{\'{i}}rez-Echeverry, Jhon Jairo},
journal = {Journal of Information Technology Education: Research},
keywords = {learning analytics, mixed methods, computer programming, correlation analysis, content analysis},
pages = {339--372},
title = {{Discovering Insights in Learning Analytics Through a Mixed-Methods Framework: Application to Computer Programming Education}},
volume = {22},
doi = {10.28945/5182},
month={Aug.},
year = {2023}
}
Hardware Description Languages (HDL) have gained popularity in the field of digital electronics design, driven by the increasing complexity of modern electronic circuits. Consequently, supporting students in their learning of these languages is crucial. This work aims to address this need by developing an automated assessment software tool with a feedback process to support the learning of HDL, and by conducting an educational intervention to support students' learning process. The tool's features were selected based on similar developments, and a prototype was designed and implemented. Additionally, an educational intervention was conducted over a five-week period in a Digital Electronics course at the National University of Colombia. By analyzing students' interactions with the tool and their perceptions of its usage, the study examined their learning experiences. Among the features highlighted by students as most beneficial for their HDL learning process were the online availability of the tool, the feedback system that helped them identify and correct errors in their code, the provision of immediate feedback, the online editor with syntax highlighting, and the graphical user interface. This work makes two significant contributions to the field of HDL teaching in engineering. First, a publicly accessible HDL grading tool has been developed, offering students immediate formative and summative feedback through an automated grader. Second, empirical evidence has been provided regarding the benefits of using such a tool to enhance students' learning process.
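As a generic, hypothetical illustration of functional HDL grading (not necessarily how the tool described here works), the following Python sketch compares a simulated output vector against a reference vector and reports which stimuli fail; the signal names and values are invented:

# Hypothetical illustration: compare simulated HDL outputs to expected outputs
# for a set of input stimuli. Stimuli and values are invented examples.
def check_outputs(stimuli, simulated, expected):
    """Return the stimuli whose simulated output differs from the expected one."""
    return [s for s, got, want in zip(stimuli, simulated, expected) if got != want]

# Example: a 2-input AND gate graded over all four input combinations.
stimuli   = [(0, 0), (0, 1), (1, 0), (1, 1)]
expected  = [0, 0, 0, 1]
simulated = [0, 0, 1, 1]   # a faulty design that behaves like OR on input (1, 0)
failed = check_outputs(stimuli, simulated, expected)
print(f"{len(stimuli) - len(failed)}/{len(stimuli)} cases passed; failing stimuli: {failed}")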
@article{Corso2024,
author = {Corso Pinz{\'{o}}n, Andr{\'{e}}s Francisco and Ram{\'{i}}rez-Echeverry, Jhon J. and Restrepo-Calle, Felipe},
journal = {Research and Practice in Technology Enhanced Learning},
pages = {015},
title = {{Automated grading software tool with feedback process to support learning of hardware description languages}},
volume = {19},
url={https://rptel.apsce.net/index.php/RPTEL/article/view/2024-19015},
doi = {10.58459/rptel.2024.19015},
month={Jan.},
year = {2024}
}