Introducing a labelling system for tutors to mark certain submissions
Description / Overview
Some submissions should be marked automatically by tests that preprocess the data. These labels should be visibly distinguishable from user-created/assigned labels. Some may be auto-assigned in the process, e.g. got full/zero score, seen by only one tutor, etc. The frontend menu has to be intuitive and enable both assigning and creating labels.
Optional enhancements for the future (to be pushed into other issues):
- Make it possible to label/mark a specific line in the code
- Allow filtering by labels
- Reviewers want to filter labels with logical queries (e.g. give me all solutions with a specific label that do not have more than five points)
- A reviewer can set the scores of multiple submissions at once based on a selection, or edit their status
- In the reviewer view the status is updated live (via websocket)
Note: the time estimate does not consider optional enhancements.
Use cases
This is for the tutors to conveniently mark certain solutions and for the reviewers to get an immediate and standardised overview of what went wrong/right with a submission.
Links / references
None.
Feature checklist
- Existing labels for feedback, like origin and the results of test cases, should be added as static labels.
- Static labels are automatically assigned and cannot be removed.
- All other labels should be allowed to be created on the fly during an exam. Tutors as well as reviewers can create labels.
- Additionally, reviewers can edit or even delete labels globally. In case of duplicated labels it should be possible to merge them into a new one (a rough sketch follows this list). This needs a separate component.
- Multiple labels per submission/feedback are allowed.
- The reviewer is able to get a representation of all submissions belonging to a label.
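Merging duplicated labels could amount to reattaching the many-to-many relations described in the specification below. A rough sketch only, assuming a Django `Label` model with `of_feedback` and `of_feedback_comment` relations (names taken from the Database Model section; the import path is made up):

```python
from django.db import transaction

from core.models import Label  # import path is an assumption


@transaction.atomic
def merge_labels(duplicates, new_name, new_description=''):
    """Create one new label and move every assignment of the duplicates over."""
    merged = Label.objects.create(name=new_name, description=new_description)
    for label in duplicates:
        # Re-point all feedback and feedback-comment assignments to the new label.
        merged.of_feedback.add(*label.of_feedback.all())
        merged.of_feedback_comment.add(*label.of_feedback_comment.all())
        label.delete()
    return merged
```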
Specification
Grady Labeling System
Features
- Labels consist of
- Label name
- Label description
- Points (later versions)
- Dummy? (dummy labels are one-time use and can only be assigned by reviewers)
- Changing of a label’s information is restricted to the reviewer
- Tutors and reviewers can create new labels via a dedicated interface and in the correction interface (for convenience)
- Tutors can assign labels to lines of code while correcting the code
- Tutors can change the default description to better suit the context (?)
- Total score is calculated based on the assigned labels (later versions)
- Merging labels should be possible e.g. when multiple labels for similar problems were created
- Reviewers should be able to adjust the total score, e.g. by creating a “dummy” label on the go
- Students can see the labels and their description / points
- Reviewers are able to query for specific labels
- Reviewers can get statistics for labels e.g. how often specific labels are assigned
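For the last two items, a possible Django ORM sketch, assuming the `Label` model proposed in the Database Model section below (the import path and the example label name are made up):

```python
from django.db.models import Count

from core.models import Label  # import path is an assumption

# All feedback objects that carry a given label.
flagged = Label.objects.get(name='off-by-one').of_feedback.all()

# How often each label has been assigned, most frequent first.
usage = (Label.objects
         .annotate(times_assigned=Count('of_feedback'))
         .order_by('-times_assigned')
         .values('name', 'times_assigned'))
```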
Implementation Details
The submission db model has to be adjusted so that labels, rather than comments, are assigned to lines of code, and the total score is calculated by the backend, e.g. via a signal (a sketch follows below).
OR
The current commenting functionality stays intact and labels are only used for grouping/statistics/giving points.
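For the first variant, the recalculation could hook into Django's `m2m_changed` signal. A minimal sketch only, assuming models named `Label` and `Feedback` with the relations from the Database Model section and a hypothetical `Feedback.score` field:

```python
from django.db.models import Sum
from django.db.models.signals import m2m_changed
from django.dispatch import receiver

from core.models import Feedback, Label  # import path is an assumption


@receiver(m2m_changed, sender=Label.of_feedback.through)
def recalculate_score(sender, instance, action, reverse, pk_set, **kwargs):
    """Recompute a feedback's score whenever its set of labels changes."""
    if action not in ('post_add', 'post_remove'):
        return
    if reverse:
        # Assignment changed via feedback.labels.add(...) / .remove(...).
        feedbacks = [instance]
    else:
        # Assignment changed from the Label side; pk_set holds the affected feedback pks.
        feedbacks = Feedback.objects.filter(pk__in=pk_set)
    for feedback in feedbacks:
        feedback.score = feedback.labels.aggregate(s=Sum('points'))['s'] or 0
        feedback.save(update_fields=['score'])
```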
API
Database Model
- name: char field
- description: char field
- of_feedback: many-to-many to feedback
- of_feedback_comment: many-to-many to feedback comment
- points: decimal, can be positive or negative (unclear whether this works) (later versions)
- is_dummy: bool (later versions)
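A minimal Django sketch of such a model; field options like `max_length` are placeholders, and the related models `Feedback` and `FeedbackComment` are assumed to exist. Note that `DecimalField` accepts negative values, so positive and negative points both work:

```python
from django.db import models


class Label(models.Model):
    name = models.CharField(max_length=50, unique=True)
    description = models.CharField(max_length=500, blank=True)
    of_feedback = models.ManyToManyField(
        'Feedback', related_name='labels', blank=True)
    of_feedback_comment = models.ManyToManyField(
        'FeedbackComment', related_name='labels', blank=True)
    # DecimalField handles negative values out of the box.
    points = models.DecimalField(max_digits=5, decimal_places=2, default=0)  # later versions
    is_dummy = models.BooleanField(default=False)  # later versions

    def __str__(self):
        return self.name
```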
Endpoints
- POST /labels/create
- PUT /labels/update
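One way these two endpoints could look with Django REST framework; the serializer, the queryset, permission handling, and the URL layout (including the pk in the update URL) are assumptions, not the final API:

```python
from django.urls import path
from rest_framework import generics, serializers

from core.models import Label  # import path is an assumption


class LabelSerializer(serializers.ModelSerializer):
    class Meta:
        model = Label
        fields = ('pk', 'name', 'description', 'of_feedback', 'of_feedback_comment')


urlpatterns = [
    # POST /labels/create – tutors and reviewers create labels on the fly.
    path('labels/create',
         generics.CreateAPIView.as_view(serializer_class=LabelSerializer)),
    # PUT /labels/update/<pk> – restricted to reviewers (permission class omitted).
    path('labels/update/<int:pk>',
         generics.UpdateAPIView.as_view(serializer_class=LabelSerializer,
                                        queryset=Label.objects.all())),
]
```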
Frontend
A box for creating labels could be located below the tasks component so that users can always create a label regardless of their current task.
/label Feature proposal