Review the papers you have presented and fill in the following grid. Specify which aspects of learner language are automatized and how the automatization is achieved. Also state the specific aim of each paper.

If a paper does not actually deal with automatization, please note this in the "Method & Data Analysis" field.

Each entry below gives:
- Paper: Author(s) & Links
- Topic: Automatize what?
- Method & Data Analysis: How is it done?
- Aims: What for?
Paper: Tony Berber Sardinha and Tania Shepherd
Topic: Error detection
Method & Data Analysis: A pre-processor extracted, for each word in the corpus, the probability that it was used erroneously, given the surrounding three-word bundles, collocational frameworks, and parts of speech. Possible errors are thus identified and correct alternatives provided.
Aims: Developing an online system to identify errors --> an ICALL system for writing
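A minimal sketch of the bundle-based idea (the data, threshold, and flagging rule here are hypothetical illustrations, not the authors' actual pre-processor): a word becomes a candidate error when the three-word bundle around it is unattested in a reference corpus.

```python
from collections import Counter

def bundle_counts(reference_tokens):
    """Count all three-word bundles (trigrams) in a reference corpus."""
    return Counter(zip(reference_tokens, reference_tokens[1:], reference_tokens[2:]))

def flag_possible_errors(tokens, counts, min_count=1):
    """Flag the middle word of any trigram that is unattested (or rare) in the reference."""
    flags = []
    for i in range(1, len(tokens) - 1):
        trigram = (tokens[i - 1], tokens[i], tokens[i + 1])
        if counts[trigram] < min_count:
            flags.append((i, tokens[i]))
    return flags

# Toy reference corpus (hypothetical)
reference = "i am interested in music i am interested in sports".split()
counts = bundle_counts(reference)

# The erroneous preposition disturbs every bundle it takes part in
learner = "i am interested on music".split()
print(flag_possible_errors(learner, counts))
```

Note that an error also disrupts the bundles of its neighbors, so surrounding words get flagged too; a real system would rank candidates and propose alternatives.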
Paper: Nick Pendar & Anna P. Kosterina
Topic: Annotating learner language manually
Method & Data Analysis: In LLC error tagging, not only errors but also sentence structures are annotated. The presentation is essentially an introduction to the annotation scheme and the challenges associated with error tagging. Using an error-tagged part of the LLC, it also reports differences in the error rates of several linguistic features between Czech and Chinese learners of English.
Aims: Developing a general, objective error annotation scheme
Paper: Joel Tetreault, Martin Chodorow, and Yoko Futagi
Topic: Error detection
Method & Data Analysis: First, the study tests the inter-rater reliability of error annotation by two human raters. It then examines whether precision and recall rates, computed against a human rater as the gold standard, are strongly influenced by using a sampled learner corpus instead of the whole corpus.
Aims: Better efficiency in testing automatic error detection systems
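For reference, precision and recall against a human gold standard can be computed as below (the flagged positions are invented for illustration):

```python
def precision_recall(system_flags, gold_flags):
    """Precision/recall of system-flagged error positions against a human gold standard."""
    system, gold = set(system_flags), set(gold_flags)
    tp = len(system & gold)  # true positives: flagged by the system and marked by the human
    precision = tp / len(system) if system else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical token positions flagged as errors
system = [3, 7, 12, 20]   # positions the detector flagged
gold = [3, 12, 15]        # positions the human annotator marked
p, r = precision_recall(system, gold)
print(f"precision={p:.2f} recall={r:.2f}")
```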
Paper: Su-Youn Yoon et al.
Topic: Assessment of learners' speech data
Method & Data Analysis: This research did not actually reach its goal of automatically assessing the speech data. The presentation shows methods for coding the learner data, the inter-rater reliability of the human raters, and the points where the raters' judgments differ.
Aims: To develop a computer-aided pronunciation training system and to automatically assess learners' speech data
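Inter-rater reliability of this kind is commonly reported as Cohen's kappa, which corrects raw agreement for chance. A small sketch with invented ratings (the paper's actual labels and statistic may differ):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    # Chance agreement: probability both raters pick the same label independently
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical pronunciation ratings by two human raters
a = ["good", "good", "poor", "good", "poor", "good"]
b = ["good", "poor", "poor", "good", "poor", "good"]
print(round(cohens_kappa(a, b), 3))
```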
Paper: Marzena Watorek & Aurelia Marcus
Topic: Formalizing a second language learner corpus by means of automatic analysis
Method & Data Analysis: Using the ESF database as a training corpus, with its theoretical framework supporting the research, they ran TreeTagger and then corrected the output manually. The feasibility of formalizing learner corpora is thus demonstrated.
Aims: Formalization of learner language by means of automatic analysis
Paper: Stacey Bailey & Detmar Meurers
Topic: Meaning error detection
Method & Data Analysis: EFL learners' answers to loosely restricted reading comprehension questions are analyzed both manually and automatically by comparing them with the target answers and estimating the semantic distance between them. The results of the manual and the computed analysis are then compared.
Aims: Building an ICALL content assessment module that diagnoses meaning errors and provides feedback on them
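The authors' system uses richer linguistic analysis, but the core idea of a semantic distance between learner answer and target answer can be illustrated with a crude bag-of-words baseline (all examples hypothetical):

```python
def jaccard_distance(answer, target):
    """Bag-of-words distance between a learner answer and the target answer:
    1.0 = no shared words, 0.0 = identical word sets."""
    a, t = set(answer.lower().split()), set(target.lower().split())
    return 1 - len(a & t) / len(a | t)

target = "the train leaves at noon"
close = "the train leaves at twelve"   # near-match, small distance
far = "he likes apples"                # off-topic, maximal distance
print(jaccard_distance(close, target) < jaccard_distance(far, target))
```

A content-assessment module would compare this distance (or a more sophisticated measure) to a threshold to decide whether the answer conveys the target meaning.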
Paper: Michael Gamon, Chris Brockett, William B. Dolan, Jianfeng Gao, Dmitriy Belenko, Alexandre Klementiev, Claudia Leacock
Topic: ESL error detection & correction
Method & Data Analysis: The authors first trained the system on native-speaker discourse. The system then processes native-speaker text, which is assumed to be error-free, and the results are compared with the judgments of human native speakers. The results for articles are fairly consistent with human judgment, but those for prepositions are not. Overall, however, it is possible to correct ESL-specific errors using statistical techniques.
Aims: To develop an automatic system for error detection and corrective suggestions
Paper: Deryle Lonsdale, C. Ray Graham, Casey Kennington, Aaron Johnson, Jeremiah McGhee
Topic: Assessment of oral language tests
Method & Data Analysis: Using elicited imitation, the authors try to develop an automatic oral language test. First, syllable-based human assessment is carried out to select the best-working items. Then, using an automatic speech recognition system named "Sphinx", they compare the results of human judgment and automatic processing. High agreement was observed between the two, so it may be feasible to develop an automatic system for assessing oral language in the future.
Aims: Developing a system to assess oral language tests
Paper: Anne Rimrott & Trude Heift
Topic: Error (misspelling) detection
Method & Data Analysis: The study classified misspellings along four taxonomies (edit distance, linguistic competence, linguistic subsystem, language influence) and found that spell checker improvement for L2 German should address intralingual morphological competence misspellings with an edit distance of more than one.
Aims: Improving the spell checker & automating error analysis
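The edit distance taxonomy refers to the Levenshtein distance: the minimum number of single-character insertions, deletions, and substitutions separating a misspelling from its target. A minimal implementation (the German example word pair is a hypothetical illustration):

```python
def edit_distance(s, t):
    """Levenshtein distance: minimum insertions, deletions, and substitutions."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        curr = [i]
        for j, ct in enumerate(t, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (cs != ct)))   # substitution (free on match)
        prev = curr
    return prev[-1]

# A misspelling more than one edit away from its target,
# the kind a naive spell checker tends to miss
print(edit_distance("gesprecht", "gesprochen"))
```

Conventional spell checkers suggest candidates within one edit, which is why misspellings at a distance greater than one were identified as the weak spot for L2 writers.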
Paper: Hilton, H.
Topic: Detection of hesitation phenomena in learner language
Method & Data Analysis: This study does not deal with automatizing the detection, but tries to identify characteristics of fluency/disfluency at various proficiency levels. Using the PAROLE corpus, Hilton coded hesitations (i.e. silent pauses, filled pauses, retracings, drawls, fragments) in the CHAT format, and all silent and filled pauses over 200 ms were transcribed and timed with the Sonic Mode of the CLAN program. The analyses show that the frequency, length, and location of hesitations are useful indicators of fluency level.
Aims: To identify and compare characteristics of fluency/disfluency at different proficiency levels in various L2s
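Once pauses are timed, the frequency and length indicators mentioned above are simple aggregates. A sketch with invented pause data (the 200 ms threshold follows the study; the measures are illustrative, not Hilton's exact variables):

```python
def hesitation_profile(pauses_ms, speech_duration_ms):
    """Frequency and mean length of pauses over 200 ms, normalized per minute of speech."""
    long_pauses = [p for p in pauses_ms if p > 200]
    minutes = speech_duration_ms / 60000
    return {
        "pauses_per_minute": len(long_pauses) / minutes,
        "mean_pause_ms": sum(long_pauses) / len(long_pauses) if long_pauses else 0.0,
    }

# Hypothetical timed pauses from a one-minute speech sample
profile = hesitation_profile([150, 320, 800, 90, 450], 60000)
print(profile)
```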
Paper: Rachele De Felice and Stephen G. Pulman
Topic: Preposition detection
Method & Data Analysis: This study investigates the extent to which prepositions can be assigned automatically in L1 data. Machine learning was used to train the algorithm, which combines the following parameters to determine the correct preposition: POS, complement, the grammatical relations the preposition occurs in, WordNet information, and verb subcategorization information. For nine common prepositions, an accuracy of 70.12% was achieved, surpassing other existing models. Spelling errors, grammatical mistakes, and disagreement among annotators may confuse the parser.
Aims: The ultimate goal is to detect learners' preposition errors
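The paper's classifier uses a rich feature set; as a much cruder stand-in, one can illustrate context-based preposition prediction by memorizing, for each (governing word, complement) pair in L1 data, the most frequent preposition. All training triples below are invented:

```python
from collections import Counter, defaultdict

def train(examples):
    """Learn, for each (governor, complement) context, the preposition counts.
    A toy stand-in for the paper's feature-based machine learning."""
    table = defaultdict(Counter)
    for governor, complement, prep in examples:
        table[(governor, complement)][prep] += 1
    return table

def predict(table, governor, complement, default="of"):
    """Predict the most common preposition for the context; fall back to a default."""
    counts = table.get((governor, complement))
    return counts.most_common(1)[0][0] if counts else default

# Hypothetical L1 training triples: (governing word, complement, preposition)
data = [("interested", "music", "in"), ("interested", "sports", "in"),
        ("depend", "weather", "on"), ("arrive", "station", "at")]
model = train(data)
print(predict(model, "interested", "music"))  # -> "in"
```

Error detection then follows: if a learner writes a preposition that differs from the model's confident prediction, it becomes a correction candidate.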
Paper: Hiromi Oyama, Yuji Matsumoto, Masayuki Asahara and Kosuke Sakata
Topic: Error detection
Method & Data Analysis: This study deals with essay data from learners of Japanese as a second or foreign language and attempts to detect the learners' errors automatically using a machine-learning method, Support Vector Machines (SVMs). The SVMs learn the correct usage of Japanese particles from half a year's worth of newspaper data and, based on this knowledge, distinguish correct sentences from erroneous ones. At the present stage, the recall value is over 80%. Future work includes: 1. enlarging the corpus; 2. refining the error tagset along with the error taxonomy; 3. devising a way of automatic error detection and categorization; 4. analyzing trends in Japanese learners' errors.
Aims: Analyzing the characteristics of learners' errors statistically for SLA research
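The underlying idea of learning correct particle usage from assumed-correct text can be sketched without SVMs: count which particle follows each noun in "newspaper" data, then flag learner particles that diverge from the dominant choice. The tokenized sentences and the noun-plus-particle heuristic are hypothetical simplifications, not the authors' features:

```python
from collections import Counter, defaultdict

PARTICLES = {"が", "を", "に", "で", "は"}

def learn_particle_usage(sentences):
    """From assumed-correct tokenized text, count which particle follows each noun."""
    table = defaultdict(Counter)
    for tokens in sentences:
        for noun, particle in zip(tokens, tokens[1:]):
            if particle in PARTICLES:
                table[noun][particle] += 1
    return table

def flag_particles(tokens, table):
    """Flag particles that differ from the most common particle after that noun."""
    flags = []
    for i, (noun, particle) in enumerate(zip(tokens, tokens[1:]), start=1):
        counts = table.get(noun)
        if counts and particle in PARTICLES:
            best = counts.most_common(1)[0][0]
            if particle != best:
                flags.append((i, particle, best))
    return flags

# Toy pre-tokenized "newspaper" sentences (hypothetical)
correct = [["学校", "に", "行く"], ["学校", "に", "通う"], ["本", "を", "読む"]]
table = learn_particle_usage(correct)
print(flag_particles(["学校", "を", "行く"], table))
```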
Paper: Luiz Amaral, Detmar Meurers
Topic: Identification and interpretation of learner tokens
Method & Data Analysis: The researchers provide a web-based workbook for learners of Portuguese, TAGARELA, which gives learners feedback on orthographic, semantic, and syntactic errors. TAGARELA contains a system for identifying learners' input, consisting of linguistic analysis modules, including a tokenizer and a parser, with which it analyzes the input and finally provides feedback through the web interface. However, some linguistic features, such as accented characters and contractions, can cause mismatches of perception between learner and computer. An annotation-based NLP processing architecture would be useful for solving this problem.
Aims: Developing a better ICALL system
Paper: Lu, Xiaofei
Topic: Automatic analysis of syntactic complexity
Method & Data Analysis: The author aims at an automatic analysis of syntactic complexity based on a comparison between L1 syntactic metrics and automatically detectable factors. Taking irregular aspects of learner language into consideration, i.e. interlanguage errors, the system should in the future be equipped with adequacy information for each variable. The results will be released in the summer of 2008.
Aims: Developing an automatic system to measure the syntactic complexity of learner language
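Syntactic complexity measures of this kind are typically ratios over production units. Lu's system works from parse trees; as a hedged illustration only, two shallow approximations (mean sentence length, and subordinate clauses per sentence estimated by counting subordinators) over invented data:

```python
def complexity_measures(sentences):
    """Two shallow syntactic complexity measures over pre-tokenized sentences:
    mean sentence length in words, and subordinators per sentence as a rough
    proxy for subordination (real systems count clauses in parse trees)."""
    subordinators = {"because", "although", "that", "which", "who", "when", "if"}
    n = len(sentences)
    words = sum(len(s) for s in sentences)
    sub = sum(1 for s in sentences for w in s if w.lower() in subordinators)
    return {"mean_length": words / n, "subordination_ratio": sub / n}

# Hypothetical pre-tokenized learner sentences
sents = [["i", "think", "that", "he", "left"],
         ["she", "stayed", "because", "it", "rained"],
         ["we", "ate"]]
print(complexity_measures(sents))
```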
Paper: Seok Bae Jang, Sun-Hee Lee & Sang-kyu Seo
Topic: Annotation of particle errors
Method & Data Analysis: This study provides an annotation scheme for marking up Korean particle errors, which are thought to be among the most difficult aspects of Korean for learners. Although the study does not show any automatized process for annotating particle errors, the researchers aim to build an automatic error tagging system on top of the elaborated annotation scheme in the future. They analyze written Korean data collected from learners with different backgrounds (heritage vs. non-heritage) and find some differences in errors between heritage and non-heritage learners.
Aims: Developing a computational error tagging tool

Last-modified: 2010-03-02 (Tue) 12:08:14