Commit f5011eb1 authored by robinwilliam.hundt's avatar robinwilliam.hundt

Updated Readme and made small changes to Importer script

parent c2c45ecc
1 merge request !122: Updated Readme and made small changes to Importer script
Pipeline #81530 passed
# Grady - will correct you!

The intention of this tool is to simplify the exam correcting process at the
University of Goettingen. It is deployed as a web application consisting
of a Django-Rest backend and a Vue.js frontend.

[![pipeline status](https://gitlab.gwdg.de/j.michal/grady/badges/master/pipeline.svg)](https://gitlab.gwdg.de/j.michal/grady/commits/master) [![coverage report](https://gitlab.gwdg.de/j.michal/grady/badges/master/coverage.svg)](https://gitlab.gwdg.de/j.michal/grady/commits/master)
## Overview
Grady has three basic functions for the three types of users:

* Reviewers can
  * edit feedback that has been provided by tutors
  * mark feedback as final if it should not be modified (only final feedback is
    shown to students)
  * delete feedback (the submission will be reassigned)
* Tutors can
  * request a submission that they have to correct and submit feedback for it
  * delete their own feedback
  * review feedback of other tutors
  * they do not see which student submitted the solution
* Students can
  * review their final feedback and score in the post exam review

An overview of the database can be found in the docs folder.
## Contributing
Feature proposals are welcome! If you experience any bugs or otherwise
unexpected behavior, please submit an issue using the issue templates.

It is of course possible to contribute, but currently there is no standardized
way to do so, since the project is in a very early stage and fairly small. If you
feel the need to help us out anyway, please contact us via our university email
addresses.
## Development

### Dependencies
Make sure the following packages and tools are installed:
These are required to set up the project. All other application dependencies are
listed in the `requirements.txt` and the `package.json` files. These will be
installed automatically during the installation process.
### Installing
To set up a new development instance perform the following steps:
1. Create a virtual environment with a Python3.6 interpreter and
activate it. It works like this:
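Assuming `python3.6` is available on your `PATH`, a typical sequence is the
following (the directory name `env` is an arbitrary choice, not mandated by
the project):

```shell
# Create a virtual environment in ./env and activate it.
python3.6 -m venv env
source env/bin/activate
```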
8. Congratulations! Your backend should now be up and running. To set up the frontend
see the README in the `frontend` folder.
### Testing
> "Code without tests is broken by design." -- (Jacob Kaplan-Moss, Django core developer)
or if you want a coverage report as well you can run:

    make coverage
## Production
In order to run the app in production, a server with
[Docker](https://www.docker.com/) is needed. To make routing to the
respective instances easier, we recommend running [traefik](https://traefik.io/)
as a reverse proxy on the server. For easier configuration of the containers
we recommend using `docker-compose`. The following guide will assume both these
dependencies are available.
### Setting up a new instance
Simply copy the following `docker-compose.yml` onto your production server:
```yaml
version: "3"

services:
  postgres:
    image: postgres:9.6
    labels:
      traefik.enable: "false"
    networks:
      - internal
    volumes:
      - ./database:/var/lib/postgresql/data

  grady:
    image: docker.gitlab.gwdg.de/j.michal/grady:master
    restart: always
    entrypoint:
      - ./deploy.sh
    volumes:
      - ./secret:/code/secret
    environment:
      GRADY_INSTANCE: ${INSTANCE}
      SCRIPT_NAME: ${URLPATH}
    networks:
      - internal
      - proxy
    labels:
      traefik.backend: ${INSTANCE}
      traefik.enable: "true"
      traefik.frontend.rule: Host:${GRADY_HOST};PathPrefix:${URLPATH}
      traefik.docker.network: proxy
      traefik.port: "8000"
    depends_on:
      - postgres

networks:
  proxy:
    external: true
  internal:
    external: false
```
and set the `INSTANCE`, `URLPATH` and `GRADY_HOST` variables either directly in the
compose file or in a `.env` file in the same directory as the `docker-compose.yml`
(it will be loaded automatically by `docker-compose`).
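For example, a minimal `.env` file might look like this (all values are
placeholders that you must adapt to your deployment):

```
INSTANCE=grady-demo
URLPATH=/grady-demo
GRADY_HOST=grady.example.com
```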
Log in to the GWDG GitLab Docker registry by entering:
```commandline
docker login docker.gitlab.gwdg.de
```
Running
```commandline
docker-compose pull
docker-compose up -d
```
will download the latest postgres and grady images and run them in the background.
### Importing exam data
#### Exam data structure
In order to import the exam data, it must be in a specific format.
You need the following:

1. A .json file containing the output of the converted ILIAS export, which is
   generated by [hektor](https://gitlab.gwdg.de/j.michal/hektor)
2. A .csv file where the columns are: id, name, score, (file suffix). A missing
   suffix defaults to `.c`.
   Supported suffixes: `.c`, `.java`, `.hs`, `.s` (for MIPS)

   Important: The name values must be the same as the ones contained in
   the export file from 1.
Example:
```commandline
$ cat submission_types.csv
a01, Alpha Team, 10, .c
a02, Beta Distribution, 10, .java
a03, Gamma Ray, 20
```
3. A path to a directory with sample solutions named
   `<id>-lsg.c` (same id as in 2.)
4. A path to a directory containing HTML files with an accurate
   description of the tasks. The file name pattern has to be: `<id>.html` (same id as in 2.)
```commandline
$ tree -L 2
.
├── code-lsg
│ ├── a01-lsg.c
│ ├── a02-lsg.c
│ └── a03-lsg.c
└── html
├── a01.html
├── a02.html
└── a03.html
```
5. (Optional) A .csv file containing module information. This step is purely
   optional -- Grady works just fine without this information. If you want to
   distinguish students within one instance or give information about the
   grading type, you should provide it.

   CSV file format: module_reference, total_score, pass_score, pass_only
Example:
```commandline
$ cat modules.csv
B.Inf.1801, 90, 45, yes
B.Mat.31415, 50, 10, no
```
6. (Optional) A plain text file containing one username per line. A new tutor
   user account will be created for each username, with a randomly
   generated password. The passwords are written to a `.importer_passwords` file.

   Note: Instead of being created during the import, tutors can also register their
   own accounts on the web login page. A reviewer can then activate their accounts
   via the tutor overview.
7. A plain text file containing one username per line. A new **reviewer** account
   will be created for each username, with a randomly
   generated password. The passwords are written to a `.importer_passwords` file.
   This step should not be skipped, because a reviewer account is necessary in order
   to activate the tutor accounts.
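The account files from steps 6 and 7 are plain username lists, and since this
commit the tutor import also tolerates blank lines. A minimal sketch of that
parsing (the helper name is ours for illustration, not Grady's actual code):

```python
# Hypothetical helper mirroring how the importer reads account files:
# one username per line, blank or whitespace-only lines are skipped.
def read_usernames(path):
    with open(path) as account_file:
        return [line.strip() for line in account_file if line.strip()]
```

For a file containing `alice`, a blank line, and `bob`, this returns
`['alice', 'bob']`.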
#### Importing exam data
In order to import the exam data it has to be copied into the container
and the importer script has to be started. This process is still quite manual
and will likely change in the future.
Copy the prepared exam data as outlined above to the production server
(e.g. via scp). Then copy the data into the running grady container:
```commandline
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce0d61416f83 docker.gitlab.gwdg.de/j.michal/grady:master "./deploy.sh" 6 weeks ago Up 6 weeks grady_1
$ docker cp exam-data/ ce0d61416f83:/
```
This will copy the folder `exam-data` into the container with the id `ce0d61416f83`,
under its root directory.
Open an interactive shell session in the running container:
```commandline
$ docker exec -it ce0d61416f83 /bin/sh
```
Change to the `/exam-data/` folder and run the importer script:
```commandline
$ python /code/manage.py importer
```
The importer script will now interactively guide you through the import process.
Note: The step `[2] do_preprocess_submissions` is in part specific to exam data
from the c programming course. The EmptyTest can be used for every kind of
submission; the other tests cannot. Submissions that are empty will be labeled as
such and receive a score of 0 during step `[3] do_load_submissions`.
Generated user account passwords will be saved under `.importer_passwords`.
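The check behind the EmptyTest label is conceptually simple; the following is
an illustration of the idea only, not Grady's actual `EmptyTest`
implementation:

```python
# Sketch: a submission counts as "empty" when it contains no
# non-whitespace characters; such submissions are labeled and
# scored 0 by the importer.
def is_empty_submission(code: str) -> bool:
    return not code.strip()
```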
# Generated by Django 2.1 on 2018-10-01 12:59

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('core', '0010_auto_20180805_1139'),
    ]

    operations = [
        migrations.AlterField(
            model_name='submissiontype',
            name='programming_language',
            field=models.CharField(choices=[('c', 'C syntax highlighting'), ('java', 'Java syntax highlighting'), ('mipsasm', 'Mips syntax highlighting'), ('haskell', 'Haskell syntax highlighting')], default='c', max_length=25),
        ),
    ]
def add_feedback_if_test_recommends_it(test_obj):
    available_tests = util.processing.Test.available_tests()

    if test_obj.label == available_tests[test_obj.name].label_failure \
            and not hasattr(test_obj.submission, 'feedback') \
            and (test_obj.name == util.processing.EmptyTest.__name__ or
                 test_obj.name == util.processing.CompileTest.__name__):
        return Feedback.objects.update_or_create(
            of_submission=test_obj.submission,
            defaults={
def do_load_submission_types():
    ...
    desc_dir = i('descriptions dir', 'html')

    with open(submission_types_csv, encoding='utf-8') as tfile:
        csv_rows = [row for row in csv.reader(tfile) if len(row) > 0]

    for row in csv_rows:
        tid, name, score, *suffix = (col.strip() for col in row)
def do_load_module_descriptions():
    ...
    module_description_csv = i(
        'Where is the file?', 'modules.csv', is_file=True)

    with open(module_description_csv, encoding='utf-8') as tfile:
        csv_rows = [row for row in csv.reader(tfile) if len(row) > 0]

    for row in csv_rows:
        data = {
    info(f'{modification} ExamType {data["module_reference"]}')


def _do_check_empty_submissions():
    submissions = i(
        'Please provide the student submissions', 'binf1601-anon.json',
        is_file=True)

    return (
        util.processing.process('', '', '', submissions, '',
                                util.processing.EmptyTest.__name__),
        submissions)


def _do_preprocess_c_submissions(test_to_run):
    location = i('Where do you keep the specifications for the tests?',
                 'anon-export', is_path=True)

    with chdir_context(location):
        descfile = i(
            'Please provide usage for sample solution', 'descfile.txt',
            is_file=True)
        binaries = i(
            'Please provide executable binaries of solution', 'bin',
            is_path=True)
        objects = i(
            'Please provide object files of solution', 'objects',
            is_path=True)
        submissions = i(
            'Please provide the student submissions', 'binf1601-anon.json',
            is_file=True)
        headers = i(
            'Please provide header files if any', 'code-testing',
            is_path=True)

        info('Looks good. The tests might take some time.')
        return util.processing.process(descfile,
                                       binaries,
                                       objects,
                                       submissions,
                                       headers,
                                       test_to_run), submissions
def do_preprocess_submissions():
    print('''
    ...
    can specify what test you want to run.

    Tests do depend on each other. Therefore specifying a test will also
    result in running all its dependencies.

    The EmptyTest can be run on all submission types. The other tests are very
    specific to the c programming course.\n''')
    test_enum = dict(enumerate(util.processing.Test.available_tests()))
    ...
        return

    test_to_run = test_enum[int(test_index)]
    if test_to_run == util.processing.EmptyTest.__name__:
        processed_submissions, submissions = _do_check_empty_submissions()
    else:
        processed_submissions, submissions = _do_preprocess_c_submissions(test_to_run)

    output_f = i('And everything is done. Where should I put the results?',
                 f'{submissions.rsplit(".")[0]}.processed.json')
def do_load_tutors():
    ...
    with open(tutors) as tutors_f:
        for tutor in tutors_f:
            if len(tutor.strip()) > 0:
                user_factory.make_tutor(tutor.strip(), store_pw=True)
def do_load_reviewer():
    ...
class UnitTestTest(Test):
    ...


def process(descfile, binaries, objects, submissions, header, highest_test):
    if isinstance(highest_test, str):
        highest_test_class = Test.available_tests()[highest_test]

    if highest_test != EmptyTest.__name__:  # not needed for EmptyTest
        global testcases_dict
        testcases_dict = testcases.evaluated_testcases(descfile, binaries)
    ...
                             submission_file.read())

    # Get something disposable
    if highest_test != EmptyTest.__name__:
        path = tempfile.mkdtemp()
        run_cmd(f'cp -r {objects} {path}')
        run_cmd(f'cp -r {binaries} {path}')
        run_cmd(f'cp -r {header} {path}')
        os.chdir(path)
        os.makedirs('bin')
    def iterate_submissions():
        yield from (obj
                    ...
                    for obj in student['submissions'])

    for submission_obj in tqdm(iterate_submissions()):
        highest_test_class(submission_obj)
        if highest_test != EmptyTest.__name__:
            run_cmd('rm code*')

    print()  # line after progress bar

    if highest_test != EmptyTest.__name__:
        shutil.rmtree(path)
    return submissions_json