[[英語学研究2008]]

Review the papers you have presented and fill in the following grids. Specify what aspects of learner language are automatized and how the automatization is achieved. Also, what are the specific aims of the papers?

If a paper did not actually deal with automatization, please comment on that in the Method & Data Analysis column.

----

|Paper|Topic|Method & Data Analysis|Aims|
|Author(s) & Links|Automatize what?|How is it done?|What for?|
|[[Tony Berber Sardinha and Tania Shepherd:https://calico.org/p-376-Abstracts%20of%20accepted%20papersposters.html#5]]|error detection|A pre-processor extracts the probability that each word, together with its surrounding three-word bundles, collocational frameworks, and parts of speech, is used erroneously within the corpus. Possible errors are thus identified and correct alternatives are provided.|developing an online system to identify errors --> ICALL system, writing|
|[[Nick Pendar & Anna P. Kosterina:http://purl.org/net/icall/calico08/pendar-kosterina.pdf]]|annotating learner language manually|In LLC error tagging, not only errors but also sentence structures are annotated. This presentation is basically an introduction to the annotation scheme and the challenges associated with error tagging. Using an error-tagged part of the LLC, it also reports differences in the error rates of several linguistic features between Czech and Chinese learners of English.|for the development of a general, objective error annotation scheme|
|[[Joel Tetreault, Martin Chodorow, and Yoko Futagi:http://purl.org/net/icall/calico08/tetreault-et-al.pdf]]|error detection|First, this study tests the inter-rater reliability of error annotation by two human raters. It then examines whether precision and recall rates, judged against a human rater as the gold standard, are strongly influenced by using a sampled learner corpus instead of the whole corpus.|for better efficiency in testing automatic error detection systems|
|[[Su-Youn Yoon et al.:http://purl.org/net/icall/calico08/pierce-et-al.pdf]]|assessment of learners' speech data|This research has not actually reached its goal of automatically assessing the speech data. In this presentation the authors show the methods used to code the learner data, the inter-rater reliability of the human raters, and the places where the raters' judgements differ.|to develop a computer-aided pronunciation training system and to automatically assess learners' speech data|
|[[Marzena Watorek & Aurelia Marcus:http://www.ling.ohio-state.edu/icall/calico08/marcus-watorek.pdf]]|formalizing a second language learner corpus by means of automatic analysis|Using the ESF database as a training corpus, along with its theoretical framework as support for their research, they ran TreeTagger over the data and later corrected the output manually. The feasibility of formalizing learner corpora is thus demonstrated.|formalization of learner language by means of automatic analysis|
|[[Bailey & Meurers:http://purl.org/net/calico-workshop-abstracts.html#4]]|meaning error detection|EFL learners' answers to loosely restricted reading comprehension questions are analyzed both manually and automatically by comparison with the target answers, and the semantic distance between them is estimated. The results of the manual and automated analyses are then compared.|for building an ICALL content assessment module to diagnose meaning errors and provide feedback on them|
|[[Michael Gamon, Chris Brockett, William B. Dolan, Jianfeng Gao, Dmitriy Belenko, Alexandre Klementiev, Claudia Leacock:http://purl.org/net/calico-workshop-abstracts.html#12]]|ESL error detection & correction|The authors first trained the system on native-speaker discourse. The system then processes native-speaker text, which is assumed to be error-free, and the results are compared with the judgements of human native speakers. The results for articles are fairly consistent with human judgement, but those for prepositions are not. Overall, however, it appears possible to correct ESL-specific errors using statistical techniques.|to develop an automatic system for error detection and corrective suggestions|
|[[Deryle Lonsdale, C. Ray Graham, Casey Kennington, Aaron Johnson, Jeremiah McGhee:http://purl.org/net/calico-workshop-abstracts.html#20]]|oral language test assessment|Using elicited imitation, the authors try to develop an automatic oral language test. First, syllable-based human assessment is carried out to select the best-working items. Then, using an automatic speech recognition system named "Sphinx", they compare the results of human judgement and automatic processing. High agreement was observed between the two, so it may be feasible to develop an automatic system to assess oral language in the future.|for developing a system to assess oral language tests|
|[[Anne Rimrott & Trude Heift:http://www.ling.ohio-state.edu/icall/calico08/rimrott-heift.pdf]]|error (misspelling) detection|The study classifies misspellings with four taxonomies (edit distance, linguistic competence, linguistic subsystem, language influence), in the context of a finding that spell checker improvement for L2 German should address intralingual morphological competence misspellings with an edit distance of more than one.|improving spell checkers & automating error analysis|
|[[Hilton, H.:http://www.ling.ohio-state.edu/icall/calico08/hilton.pdf]]|detection of hesitation phenomena in learner language|This study does not deal with automatizing the detection, but tries to identify characteristics of fluency/disfluency at various proficiency levels. Using the PAROLE corpus, Hilton coded hesitations (i.e. silent pauses, filled pauses, retracings, drawls, fragments) in CHAT format, and all silent and filled pauses over 200 ms were transcribed and timed with the Sonic Mode of the CLAN program. The analyses show that the frequency, length, and location of hesitations are useful indicators of fluency level.|to identify and compare characteristics of fluency/disfluency at different proficiency levels in various L2s|
|[[Rachele De Felice and Stephen G. Pulman:http://purl.org/net/icall/calico08/defelice-pulman.pdf]]|preposition detection|This study investigates the extent to which prepositions can be assigned automatically in L1 data. Machine learning was used to train the algorithm, which combines the following features to determine the correct preposition: POS, complement, the grammatical relations the preposition occurs in, WordNet information, and verb subcategorisation information. Over nine common prepositions, 70.12% accuracy was achieved, surpassing other existing models. Spelling errors, grammatical mistakes, and disagreement among annotators may confuse the parser.|the ultimate goal is to detect learners' preposition errors|
|||||
|||||
|||||
|||||
|||||
|||||
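Several of the studies above (Tetreault et al., Yoon et al., Lonsdale et al.) rest on inter-rater reliability between human annotators. A minimal sketch of one common chance-corrected agreement measure, Cohen's kappa, over invented error annotations (the papers do not specify which statistic they used):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each rater's marginal label probabilities.
    expected = sum((freq_a[label] / n) * (freq_b[label] / n)
                   for label in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

# Hypothetical error tags assigned to the same ten sentences by two raters.
a = ["err", "ok", "ok", "err", "ok", "ok", "err", "ok", "ok", "ok"]
b = ["err", "ok", "err", "err", "ok", "ok", "ok", "ok", "ok", "ok"]
print(round(cohens_kappa(a, b), 3))  # 0.524
```

Raw agreement here is 0.8, but kappa discounts the agreement expected by chance from the label distributions.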
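Tetreault et al. and Gamon et al. evaluate automatic detection against human judgement via precision and recall. A minimal sketch, with invented token positions standing in for flagged errors:

```python
def precision_recall(system, gold):
    """Precision and recall of flagged error positions against a gold standard."""
    system, gold = set(system), set(gold)
    true_pos = len(system & gold)
    precision = true_pos / len(system) if system else 0.0
    recall = true_pos / len(gold) if gold else 0.0
    return precision, recall

# Invented example: token indices marked as erroneous.
gold_errors = {2, 5, 9, 14}
system_flags = {2, 5, 7, 14, 20}
print(precision_recall(system_flags, gold_errors))  # (0.6, 0.75)
```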
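Rimrott & Heift's taxonomy partitions misspellings by edit distance from the target word. A minimal sketch of the standard Levenshtein distance that such a classification presupposes (the example word pair is invented):

```python
def edit_distance(s, t):
    """Levenshtein distance: minimum insertions, deletions, and
    substitutions needed to turn s into t."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        curr = [i]
        for j, ct in enumerate(t, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (cs != ct)))   # substitution
        prev = curr
    return prev[-1]

# A distance-1 misspelling (invented L2 German example).
print(edit_distance("Hauss", "Haus"))  # 1
```

Under the taxonomy, distance-1 misspellings are the easy cases for spell checkers; the paper's point is that the harder, distance>1 competence errors need attention.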
