Scored Surveys in Online Classroom Use (i.e. DotLRN sites)

In the online classroom, the Admin (i.e. teacher) would administer tests to Participants (i.e. students). If the test were a written essay, or a number of short answers to questions, there is currently no feasible way to automatically analyze responses (i.e. give a score), and the teacher would be forced to read all the responses. Although each response must be read by the teacher, there are "tools" the survey package could provide to make that job easier. One idea is a scoring_points_default column (integer, default null): the default number of points given for a question, which for multiple choice questions would be overridden by the scoring_points field values found in Survey_question_choices. In an essay response test, for example, the teacher may want to give 10 points per question and assume the student gets the full point value unless he/she specifies otherwise, or something like that. There should be a way for the teacher (i.e. Admin) to provide feedback after having read the responses. This feedback could take two forms: either a simple "score" (i.e. a point value, which could hopefully be added up automatically over all short answers to give the complete score for that assignment), or comments about the Participant's answers. For example, in response to a student's short essay on how all leaders with facial hair are horrible people (Stalin, Hussein, etc.), the teacher might want to comment and tell the student to consider leaders such as Gandhi or Martin Luther King. In talking to some professors at the school I work for, they also expressed interest in having "predefined" answers for questions, so that they can have a template with 4 or 5 different standard responses which can be customized.
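A minimal data-model sketch of what the default-plus-override idea could look like; aside from scoring_points_default and scoring_points, which come from the description above, the table and column names here are assumptions and may not match the actual survey package schema:

    -- Sketch only: survey_questions / survey_question_choices and their
    -- key columns are assumed, not confirmed against the real package.
    alter table survey_questions
        add column scoring_points_default integer default null;
        -- default points awarded for the question (e.g. 10 per essay question)

    alter table survey_question_choices
        add column scoring_points integer default null;
        -- per-choice points; when set, overrides scoring_points_default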

If the test administered by the teacher is a multiple choice test, we should provide a way to automatically score it by cross-referencing the Participants' responses with a key. This could be as simple as: the key says the answer to question 1 is multiple choice value 5, so if the Participant entered the "correct" value return a 1, if not return a 0, and then add all the scores up to give a total. Although this would provide a "score", it would be nice to also provide an explanation for incorrect answers. Thus, if the test is automatically scored, it should not only say you missed 7 of 50 questions, but also tell you which questions you missed and give an explanation of the correct answer. One thing that has little to do with scoring but more to do with the design of a test, and which would be useful for professors, is the ability to write out a short essay for display and then follow it with a section of multiple choice questions; although the scoring wouldn't change, this has an impact on the way information is displayed. Likewise, it would be nice to assign subcategories for scores (i.e. questions 1-15 are part of your "section 1 of the text book" score, whereas questions 16-30 are part of your "section 2 of the text book" score). Thus a single test (i.e. survey) could provide independent scores for various sections of a test.
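As a rough illustration of the automatic scoring and per-section totals, assuming hypothetical tables survey_question_responses (one row per Participant answer) and a correct_p flag on survey_question_choices marking the key, the totals might come from a query along these lines:

    -- Sketch only: table names, correct_p, and section_id are assumptions.
    select r.party_id,
           q.section_id,                           -- optional subcategory, e.g. "section 1"
           count(*)                                as questions_answered,
           sum(case when c.correct_p = 't'
                    then coalesce(c.scoring_points, q.scoring_points_default, 1)
                    else 0 end)                    as points_earned
    from   survey_question_responses r
           join survey_questions q        on q.question_id = r.question_id
           join survey_question_choices c on c.choice_id   = r.choice_id
    where  q.survey_id = :survey_id
    group  by r.party_id, q.section_id;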

Another service teachers would need is the ability to translate point values into grades. For example, in the United States it is common practice to give an "A" to people with 92% of the total possible points or higher, an "A-" for 90-92%, a "B+" for 88-90%, a "B" for 82-88%, etc. In Germany, on the other hand, scores are given in the form of numbers 1-6 (with 6 being the lowest score). We should have a way of taking the compiled score (a total number of points), translating it into a percentage, and then into whatever grade is defined for that percentage or point total.
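One way to make the scale configurable per survey (US letter grades, German 1-6, or anything else) is a small lookup table of percentage ranges. This is only a sketch; survey_grade_scales and its columns are invented here for illustration:

    -- Sketch only: survey_grade_scales is a hypothetical table, not part
    -- of the existing survey package.
    create table survey_grade_scales (
        survey_id    integer not null,
        grade_label  varchar(10) not null,   -- e.g. 'A-', 'B+', or '2' on a German scale
        min_percent  numeric not null,       -- inclusive lower bound
        max_percent  numeric not null        -- exclusive upper bound
    );

    -- Translate a computed percentage into a grade for one survey.
    select grade_label
    from   survey_grade_scales
    where  survey_id = :survey_id
    and    :percent >= min_percent
    and    :percent <  max_percent;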

Then, there are tools for the teacher. A teacher may want to have an overview of the entire test. What he/she may care about is "what are the point breakdowns for all students" or "which questions were most frequently missed by students" (this way the teacher could discern whether a certain question was possibly written poorly, if, for example, almost everybody missed a particular question). The best/most visually appealing way to present this information is through bar graphs (which are relatively easy to program); one may want to use a pie chart to show score breakdowns, but I do not know how we could auto-generate this in code. If you have a suggestion please let me know.
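For the "most frequently missed" report, the raw numbers behind the bar graph could come from an aggregate query like the following, again only a sketch against the same assumed tables and correct_p flag as above:

    -- Sketch only: assumed tables and columns, not the real schema.
    select q.question_id,
           q.question_text,
           count(*)                                           as responses,
           sum(case when c.correct_p = 'f' then 1 else 0 end) as times_missed
    from   survey_question_responses r
           join survey_questions q        on q.question_id = r.question_id
           join survey_question_choices c on c.choice_id   = r.choice_id
    where  q.survey_id = :survey_id
    group  by q.question_id, q.question_text
    order  by times_missed desc;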

Finally, if a teacher were administering an online class, he/she and/or his/her teaching assistants may want to receive notification when a test has been completed. So we should add notification for a survey having been completed. There already appears to be functionality to notify yourself, but this is not working in PostgreSQL, and it would be good to be able to specify other people as well.
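To support notifying people other than the survey owner, one option is a simple mapping table of extra recipients per survey, consulted when a response is submitted; the table name and columns below are invented for illustration and are not part of the existing notification code:

    -- Sketch only: survey_notification_recipients is a hypothetical table.
    create table survey_notification_recipients (
        survey_id  integer not null,
        party_id   integer not null,    -- teacher or TA to notify
        primary key (survey_id, party_id)
    );

    -- Recipients to notify when a Participant finishes survey :survey_id.
    select party_id
    from   survey_notification_recipients
    where  survey_id = :survey_id;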