Lecture 10 - Learning Style PPT
Lecture 12: Learning Style

• Perceptual learning styles
• visual, aural/auditory, and haptic (kinesthetic & tactile)
• Visual Learners
They tend to prefer sitting at the front of the classroom to avoid visual obstructions (e.g. people's heads). They learn best from visual displays including: diagrams, illustrated text books, overhead transparencies, videos, and hand-outs. During a lecture or classroom discussion, visual learners often prefer to take detailed notes to absorb the information.
• ORIGINS OF OUR LEARNING STYLES
• For many of us, our learning style preference results from the kind of processing our brain "specializes" in.
• Left Brain Processing – concentrates more on tasks requiring verbal competence, such as speaking, reading, thinking, and reasoning.
• A person's learning style has to do with the way he or she processes information in order to learn it and then apply it.
• No one approach or style is more or less effective than any other. What matters is whether it is suited to a particular everyday task or academic situation.
• By understanding different learning styles, teachers may gain insights into ways of making academic information more accessible to our diverse groups of learners.
• Most students learn best when the style of presentation is in agreement with their preferred learning style.
• Learning style is the application of a particular cognitive style to a learning activity. It is seen as relatively fixed.
Learning – Section II Teaching Courseware (PPT)

2.Prediction—Look at the pictures and the title and predict what the text is probably about. The text is mainly about active learning and how to take an active role in learning.
(4) What can we learn from the text?
A. The outer voice expresses your personal opinions.
B. Active learners focus on what their brain is saying in the background.
C. Active learners accept everything they learn.
D. Active learners don't judge people based on first impressions or personal feelings.
Answer: D
Active learners focus on what the speaker/writer is saying, not on what their brain is saying in the background.
Argue with your inner voice: when your inner voice tells you a speaker/writer is wrong, think about why the speaker/writer may be right. Be flexible in your opinions and you might ...
III. Sentence appreciation — 1. while joining coordinate clauses; what introducing an object clause
PEP Senior High English Elective 10: Learning Efficiently – Courseware 2

A: I know that. I’m just using the tapes to help me PRACTISE my reading. You see, I find it easier to learn when I’m listening. But once I can read easily, I’ll stop using the tapes.
A: We have to read the text in Unit 5.
X: I hate reading. It always takes me so long and then I can never remember what I’ve read.
A: You should do what I do.
X: What's that?
A: Well, you know that all the reading texts in the course book are recorded?
X: Yeah.
A: I listen to the tape while I'm reading.
Y: But isn't that cheating?
Y: It’s worth a go.
A: You know what else you might be able to do?
X: What?
A: Photocopy the text and cut it into paragraphs and mix them all up. Then try to fit them back together in the right order. That way you would be doing something.
learning-style

What is a learning style?
A learning style is one's consistent way of responding to and using stimuli in the environment.
• They like to solve problems and find practical solutions to practical issues.
• The problem or the task is more interesting to them than interactions with others.
• They like to listen and then act upon what knowledge they have.
• The advantage of knowing what kind of learner you are is that you can then study in the way that suits you best.
Learning Styles
• Visual: Involves the use of seen or observed things
• Auditory: Involves the transfer of information through listening
• Kinesthetic: Involves physical experience; touching, feeling, movement and practical hands-on experiences.
Learning Style PPT (Linguistics)

Field dependence: the tendency to be dependent on the total field.
– Advantages: sees the whole picture, the larger view, the general configuration.
– Disadvantages: "You may miss your lover in a crowd." "It is hard to read in a noisy environment."
Implications of learning style
There is no particular teaching or learning method that can suit the needs of all learners. Learning styles exist on wide continuums, although they are often described as opposites. No one style is better than the others. Very little research has examined the interaction between different learning styles and success in L2 learning; however, students should be encouraged to "stretch" their learning styles so that they will be more empowered in a variety of learning situations.
Left-brain: - logical, analytical thought, with mathematical and linear processing of information.
Learning – Section V PPT Courseware

II. Phrase fill-in: complete each sentence according to the Chinese prompt
1. John made an effort (努力) to finish his work today.
2. She tried to fit in (融入) with the others, but it was difficult.
3. We should make the best use of (充分利用) our time to study.
4. We all insisted on (坚持) his coming with us.
5. Tom is quite used to (习惯于) doing this kind of job.
6. You know how many people on average (平均) read one copy?
[Self-discovery]
① be lost in reflection: lost in deep thought
② on reflection: after further thought
[Consolidation] Fill in the blanks:
① He admired his reflection (reflect) in the mirror.
② She was lost in reflection, and did not seem to notice that everyone was looking at her.
③ She decided, on reflection, to accept the offer.
PPT Presentations in the College English Classroom
Fluency Development
By repeatedly practicing with PowerPoint presentations, students can improve their fluency and their ability to express ideas clearly and coherently in PowerPoint presentations in college English classrooms.
Contents
• Introduction • English language knowledge • English Skills Training • Introduction to Cultural Background • Learning Strategies and Skills • Classroom interaction and activities • Summary and Outlook
Key Point 2
Context Understanding
By playing audio along with the slides, students can better understand the context and gain a deep understanding of the information being presented
Sentence analysis
PowerPoint presentations can clearly display the structure of sentences, helping students understand complex grammatical structures.
PEP Senior High English Elective 10 Courseware – Unit 4 Learning Efficiently: Using Language
An auditory learner:
1. Listen to the tape and sound the words out.
2. Listen to the text containing the new words.
3. Work in pairs: one reads the new words and the other spells them.
A tactile learner:
1. Write the new words down again and again until you know them.
2. Do the crossword.
3. Do a dictation while listening to the tape.
Auditory learners learn best when there is an oral component to the material they are learning. They prefer to listen to explanations or instructions rather than read them.
4. What else is Xiaozhou having problems with? He is also having problems with the new words in the text.
Listening Text
WHY DON’T YOU GIVE IT A TRY?
Y: What English homework do we have?
X: Mmm. I will. But I still won’t be able to remember the text later. Y: I’ve thought of something you could do about that. X: Oh?
Lecture 10
Speech Processing (Module CS5241)
Lecture 10 – Robustness in Speech Recognition: Adaptation and Normalisation Techniques
SIM Khe Chai, School of Computing, National University of Singapore, January 2010

Coping with Variabilities
There are many factors that cause variabilities in speech recognition:
• Intra-speaker variabilities:
– Due to the same speaker speaking at different sessions (also known as session variabilities)
– Causing factors: state of emotion, sore throat, aging
• Inter-speaker variabilities:
– Due to different speakers
– Males and females have different fundamental frequencies
– Native vs. non-native speakers
– Different speaking styles
• Channel/environment variabilities:
– Due to different acoustic environments (indoor vs. outdoor)
– Due to different recording channels (close-talk mic., far-field mic., mobile phone)
– Different signal-to-noise ratios (SNR): noise level, types of noise
How do we handle these uncertainties?

Cepstral Mean & Variance Normalisation (CMN & CVN)
CMN and CVN are simple ways of performing feature normalisation. Given a segment of acoustic feature vectors (MFCCs or PLPs), $O_1^T = \{o_1, o_2, \ldots, o_T\}$, the mean and variance vectors are computed as
$$\mu = \frac{1}{T}\sum_{t=1}^{T} o_t \quad\text{and}\quad \sigma^2 = \frac{1}{T}\sum_{t=1}^{T} \mathrm{diag}\!\left(o_t o_t^{\mathsf T}\right)$$
Then, the normalisation is performed as
$$\tilde{o}_t(d) = \frac{o_t(d) - \mu(d)}{\sigma(d)} \tag{1}$$
Longer segments yield better mean and variance estimates, but also a longer delay: normalisation cannot be done until the end of the segment. Choice of segments:
• Homogeneous segments as detected by a Voice Activity Detector (VAD)
• One side of a 2-way conversation
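As a quick, hedged illustration of equation (1) — not part of the original slides — the following Python/numpy sketch normalises one segment of features per dimension. It uses the usual mean-subtracted standard deviation rather than the raw second moment written above, and all names are ours.

```python
# Minimal sketch of per-segment cepstral mean & variance normalisation (CMN & CVN).
# Assumes `features` is a (T, D) array: T frames of D-dimensional MFCCs or PLPs.
import numpy as np

def cmn_cvn(features: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    mu = features.mean(axis=0)              # per-dimension mean over the segment
    sigma = features.std(axis=0)            # per-dimension standard deviation
    return (features - mu) / (sigma + eps)  # eq. (1): (o_t(d) - mu(d)) / sigma(d)

# Example: normalise a hypothetical segment of 300 frames of 13-dim features
segment = np.random.randn(300, 13) * 2.0 + 5.0
normalised = cmn_cvn(segment)
```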
Computing CMN & CVN Using HCompV
We have used HCompV to perform a flat start (compute the global mean and variance). HCompV can also be used to compute class-specific means and covariances (e.g. speaker dependent). This is achieved by using a mask on the base name of the data file to group multiple files into classes.

HCompV -q <TYPE> -c <OUT_DIR> -k <MASK> -S <TRAIN_DATA_SCRIPT>

• '-q' – specifies the type of normalisation (m = CMN, v = CVN)
• '-c' – specifies the output directory
• '-k' – specifies the mask
For example, using the mask '%%%%%*' yields:

Group name | Example data files
spkr1 | spkr1data1.mfcc, spkr1data2.mfcc, spkr1data3.mfcc
spkr2 | spkr2data1.mfcc, spkr2data2.mfcc, spkr2data3.mfcc

Applying CMN & CVN Through Config Settings
The CMN & CVN files take the following format:

<CEPSNORM> <MFCC_E>
<MEAN> 13
...
<CEPSNORM> <MFCC_E>
<VARIANCE> 13
...

To apply CMN & CVN, use the following configuration settings:

CMEANMASK = <MASK_TO_EXTRACT_GROUP_NAME>
CMEANDIR = <DIRECTORY_CONTAINING_CMN_FILES>
VARSCALEMASK = <MASK_TO_EXTRACT_GROUP_NAME>
VARSCALEDIR = <DIRECTORY_CONTAINING_CVN_FILES>

Vocal Tract Length Normalisation (VTLN)
Different speakers have different vocal tract attributes. Vocal tract length can be varied by performing a non-linear warping of the frequency scale, which changes the centre frequencies of the filter banks. Typically, a warping factor between 0.8 and 1.2 is used to stretch or compress the frequency scale so that the features from different speakers fall on a canonical frequency scale. The piecewise-linear warp is
$$f_{\text{warp}} = \begin{cases} \dfrac{f_{\max} - s\,c_u}{f_{\max} - c_u}\,(f - c_u) + s\,c_u & f > c_u \\[4pt] \dfrac{s\,c_l - f_{\min}}{c_l - f_{\min}}\,(f - f_{\min}) + f_{\min} & f < c_l \\[4pt] s\,f & \text{otherwise} \end{cases}$$
where $c_u = 2 f_u/(1+s)$, $c_l = 2 f_l/(1+s)$ and $s = 1/\alpha$. Typically, VTLN is performed on the same segments as those used for CMN and CVN.
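To make the piecewise-linear warp concrete, here is a small Python sketch (not from the slides); the band edges f_l and f_u and the 8 kHz upper limit are illustrative assumptions, not values given in the lecture.

```python
# Sketch of VTLN piecewise-linear frequency warping with warp factor alpha.
# s = 1/alpha; c_l and c_u are the break points where the central s*f segment
# hands over to the end segments that keep f_min and f_max fixed.
def vtln_warp(f: float, alpha: float, f_min: float = 0.0, f_max: float = 8000.0,
              f_l: float = 300.0, f_u: float = 3400.0) -> float:
    s = 1.0 / alpha
    c_u = 2.0 * f_u / (1.0 + s)   # upper break point
    c_l = 2.0 * f_l / (1.0 + s)   # lower break point
    if f > c_u:                   # upper segment, pinned at f_max
        return (f_max - s * c_u) * (f - c_u) / (f_max - c_u) + s * c_u
    if f < c_l:                   # lower segment, pinned at f_min
        return (s * c_l - f_min) * (f - f_min) / (c_l - f_min) + f_min
    return s * f                  # linear warping in the central band
```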
Gaussianisation
Gaussianisation is a normalisation technique which transforms the features non-linearly such that the cumulative density function (cdf) of the features matches a predefined target function. Typically, the target cdf corresponds to that of a standard normal distribution N(0, I). The original cdf of the feature vectors can be represented in the following forms:
• Non-parametric: cdf derived from a histogram with discrete bins
• Parametric: cdf derived from GMMs
Note that cdfs are defined for 1-dimensional data; hence, Gaussianisation is applied to each dimension independently.

Speaker Independent versus Speaker Dependent Systems
• Speaker Independent (SI):
– Training data collected from many speakers (>100 speakers for better generalisation)
– The system is designed to cope with both intra- and inter-speaker variabilities
– Suitable for applications where the users of the system are unknown (e.g. call centres)
• Speaker Dependent (SD):
– Training data collected from a specific speaker
– The system is designed to cope with intra-speaker variabilities
– Suitable for applications where the users are known (e.g. personal dictation software)
• If the speaker is known, an SD system outperforms an SI system because the task does not involve inter-speaker variabilities, provided there is sufficient training data from that speaker to yield a robust acoustic model estimate.
• Usually, hundreds of hours of training data are required to train a reliable acoustic model. This is often difficult to obtain from a single speaker, whereas it is easier to collect the same amount of speech from multiple speakers. How can we achieve robust estimation of SD systems with a limited amount of data?

Speaker Adaptation
The aim of speaker adaptation is to update the parameters of an SI system using a small amount of training data such that the performance of the system progressively improves towards that of an SD system. For practical applications, the amount of adaptation data available may be only a few seconds long. Speaker adaptation can be performed in two ways:
• Supervised adaptation: the transcriptions of the adaptation data are available
• Unsupervised adaptation: the transcriptions of the adaptation data are not available; they are usually obtained by recognising the speech with an SI system
Two modes of adaptation:
• Online adaptation: parameters are updated as adaptation data becomes available
• Offline adaptation: parameters are not updated until all the adaptation data is available (also known as batch adaptation)

Coping with Limited Adaptation Data
It is not possible to update all the HMM parameters reliably using only a small amount of training data; doing so leads to overfitting (overtraining). Therefore, the main challenge in adaptation is how to obtain a robust estimate of the model parameters from only a small amount of data. Two commonly used approaches for speaker adaptation are:
• Parameter interpolation: interpolate the HMM parameters between the SI and SD estimates. The weight on the SD estimate increases as more adaptation data becomes available. This can be realised using Maximum a Posteriori (MAP) estimation.
• Linear regression: use a linear regression technique to update the parameters for a target speaker. This effectively reduces the degrees of freedom of the optimisation and so avoids overfitting.

MAP Adaptation
The Maximum a Posteriori (MAP) estimate maximises the posterior:
$$\theta_{\text{MAP}} = \arg\max_{\theta} p(\theta \mid O_1^T) = \arg\max_{\theta} p(O_1^T \mid \theta)\, p(\theta)$$
For the mean, with $p(o_t \mid \mu, \sigma^2) = \mathcal{N}(o_t \mid \mu, \sigma^2)$ and prior $p(\mu) = \mathcal{N}(\mu \mid \mu_0, \sigma_0^2)$,
$$\log p(O_1^T, \mu \mid \sigma^2) = \sum_{t=1}^{T} \log p(o_t, \mu \mid \sigma^2) = K - \frac{1}{2}\sum_{t=1}^{T}\left(\frac{(o_t - \mu)^2}{\sigma^2} + \frac{(\mu - \mu_0)^2}{\sigma_0^2}\right)$$
Differentiating with respect to $\mu$ and equating to zero yields
$$\mu_{\text{MAP}} = \frac{1}{T}\sum_{t=1}^{T}\frac{\sigma_0^2\, o_t + \sigma^2 \mu_0}{\sigma_0^2 + \sigma^2} = \frac{\sigma_0^2}{\sigma_0^2 + \sigma^2}\,\mu_{\text{ML}} + \frac{\sigma^2}{\sigma_0^2 + \sigma^2}\,\mu_0$$
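As an illustration (not from the slides), here is a tiny Python sketch of the MAP mean update in the simplified per-frame form derived above, where sigma2 is the observation variance and sigma2_0 the prior variance; the names and the toy example are ours.

```python
# Sketch of the MAP update of a single Gaussian mean: interpolate the ML estimate
# computed from the adaptation data with the prior (speaker-independent) mean.
import numpy as np

def map_mean(obs: np.ndarray, mu_0: float, sigma2: float, sigma2_0: float) -> float:
    mu_ml = float(obs.mean())             # ML estimate from the adaptation data
    w = sigma2_0 / (sigma2_0 + sigma2)    # weight on the ML estimate
    return w * mu_ml + (1.0 - w) * mu_0   # shrinks towards the prior mean mu_0

# Example: a few adaptation frames pull the SI mean 0.0 towards the data
adapted = map_mean(np.array([1.2, 0.9, 1.1]), mu_0=0.0, sigma2=1.0, sigma2_0=0.5)
```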
Linear Regression
Linear regression: $y = Ax + b$. Given $[x_1, x_2, \ldots, x_N]^{\mathsf T}$ and $[y_1, y_2, \ldots, y_N]^{\mathsf T}$, find the optimum $A$ and $b$ which map $x_i$ to $y_i$; any other value $x^*$ can then be mapped to $y^*$. $A$ and $b$ can be found by minimising the squared error (least squared error). Alternatively, Maximum Likelihood (ML) estimation can be used.

Maximum Likelihood Linear Regression (MLLR)
MLLR finds the optimum linear regression parameters by maximising the likelihood (using the EM algorithm):
$$\mathcal{L}(\theta_{\text{mllr}}) = \log p(O_1^T \mid \theta_{\text{mllr}}) = \sum_{t=1}^{T} \log p(o_t \mid \theta_{\text{mllr}})$$
In the following, three types of MLLR adaptation are described:
• MLLR for the mean: $\mu_{\text{mllr}} = A\mu + b$
• MLLR for the variance: $\Sigma_{\text{mllr}} = A \Sigma A^{\mathsf T}$
• Constrained MLLR (CMLLR): $\mu_{\text{mllr}} = A\mu + b$ and $\Sigma_{\text{mllr}} = A \Sigma A^{\mathsf T}$, i.e. the mean and covariance matrix share the same MLLR transform $A$

Regression Tree
• Group Gaussian components into classes
• Linear regression is applied to each class
• Find regression classes using a regression tree
• Data-driven (top-down): split nodes while there are sufficient statistics
• Usually a top-level speech–silence split is incorporated

MLLR for Mean & Variance
MLLR mean adaptation:
$$\mu^{\text{mllr}}_m = A_c \mu_m + b_c = \begin{bmatrix} A_c & b_c \end{bmatrix}\begin{bmatrix} \mu_m \\ 1 \end{bmatrix} = W_c\,\xi_m$$
We need to maximise
$$Q(W_c) = K - \frac{1}{2}\sum_{t=1}^{T}\sum_{m \in R_c} \gamma_m(t)\,(o_t - W_c \xi_m)^{\mathsf T}\, \Sigma_m^{-1}\, (o_t - W_c \xi_m)$$
If $\Sigma_m$ is a diagonal matrix, $W_c$ can be updated efficiently and independently for each row. MLLR variance adaptation: $\Sigma^{\text{mllr}}_m = A_c \Sigma_m A_c^{\mathsf T}$. We need to maximise
$$Q(A_c) = K - \frac{1}{2}\sum_{t=1}^{T}\sum_{m \in R_c} \gamma_m(t)\,(o_t - \mu_m)^{\mathsf T} A_c^{-\mathsf T} \Sigma_m^{-1} A_c^{-1} (o_t - \mu_m)$$
If $\Sigma_m$ is diagonal, $A_c$ can be updated efficiently in an iterative row-by-row fashion.

Constrained MLLR (CMLLR)
Constrained MLLR (CMLLR) combines MLLR mean and variance adaptation by constraining both to use the same linear transform $A_c$:
$$\mu^{\text{mllr}}_m = A_c \mu_m + b_c \quad\text{and}\quad \Sigma^{\text{mllr}}_m = A_c \Sigma_m A_c^{\mathsf T}$$
We need to maximise
$$Q(A_c, b_c) = K - \frac{1}{2}\sum_{t=1}^{T}\sum_{m \in R_c} \gamma_m(t)\,(o_t - A_c\mu_m - b_c)^{\mathsf T} A_c^{-\mathsf T} \Sigma_m^{-1} A_c^{-1} (o_t - A_c\mu_m - b_c)$$
$$= K - \frac{1}{2}\sum_{t=1}^{T}\sum_{m \in R_c} \gamma_m(t)\,(A_c^{-1} o_t - \mu_m - b_c)^{\mathsf T}\, \Sigma_m^{-1}\, (A_c^{-1} o_t - \mu_m - b_c)$$
This is equivalent to applying a feature transformation for each regression class,
$$o^{c}_t = A_c^{-1} o_t - b_c$$
The transformation matrix can be obtained efficiently in an iterative row-by-row fashion.
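For concreteness — and not part of the original slides — this numpy sketch applies CMLLR as the feature-space transform written above, using b_c directly as the bias exactly as on the slide; the shapes and names are our assumptions.

```python
# Sketch of applying a CMLLR transform in feature space: o_t' = A_c^{-1} o_t - b_c
# for every frame belonging to one regression class.
import numpy as np

def apply_cmllr(features: np.ndarray, A: np.ndarray, b: np.ndarray) -> np.ndarray:
    """features: (T, D) frames; A: (D, D) class transform; b: (D,) class bias."""
    A_inv = np.linalg.inv(A)
    return features @ A_inv.T - b   # each row o_t becomes A^{-1} o_t - b
```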
Lattice-based MLLR Adaptation
In many applications, unsupervised adaptation is preferred. An SI system is therefore typically used to recognise the adaptation data to obtain the supervision transcriptions. Inevitably, these transcriptions contain errors; fewer errors lead to better adaptation performance. Instead of providing a 1-best transcription, multiple hypotheses can be used as supervision, represented as an N-best list or a lattice. This yields improved adaptation performance. Instead of the standard forward–backward algorithm, a lattice-based forward–backward algorithm can be used to obtain the posterior probabilities, $\gamma_m(t)$, during the E-step of the EM algorithm. The M-steps for 1-best and lattice-based MLLR are the same. The drawback of lattice-based MLLR is the higher computational cost of the lattice-based forward–backward algorithm; proper lattice pruning strategies may help improve both time and space efficiency.

Speaker Adaptive Training (SAT)
So far, we have discussed using MLLR speaker adaptation techniques during the recognition stage to improve speech recognition performance with only a limited amount of adaptation data. MLLR can also be used to normalise speaker effects when training SI systems. Recall that when training SI systems, data from different speakers are pooled together, and the features from different speakers lie in different acoustic spaces. MLLR normalisation can be applied to transform the features from different speakers into a canonical space, so that a canonical acoustic model can be estimated. This technique is known as Speaker Adaptive Training (SAT). In SAT, MLLR parameters are estimated for each speaker in the training data (supervised adaptation); typically CMLLR adaptation is used. With the new regression parameters, the model parameters (Gaussian means, variances, weights and transition probabilities) are updated. This is performed iteratively until the model parameters converge to a canonical space. The same normalisation process needs to be carried out during recognition (unsupervised adaptation).

Speaker Adaptation Using HTK
Types of adaptation supported:
• MLLR (mean and diagonal covariance)
• Constrained MLLR (CMLLR)
Types of transformation matrix supported:
• Full affine transformation
• Block affine transformation
• Bias (translation)
Types of regression supported:
• Regression tree
• Regression base classes

Defining a Global Regression Base Class
To perform adaptation using HTK, regression classes need to be defined. For example, the definition of a global regression base class is given below:

~b "global"
<MMFIDMASK> *
<PARAMETERS> MIXBASE
<NUMCLASSES> 1
<CLASS> 1 {*.state[2-4].mix[1-128]}

This file specifies one regression class (global) which contains the first 128 Gaussian components of states 2 to 4 of all the HMMs in the system. HTK also supports multiple regression classes, specified as base classes or regression trees. HHEd can be used to generate regression classes, either in a data-driven manner or using expert knowledge (e.g. speech and silence classes).

Identifying Speakers Using Masks in HTK
A speaker mask is used to identify speakers from the base name of the data file. Special characters used for pattern matching:
• '%' – matches any character, to be used for the speaker name
• '?' – matches any character, not to be used for the speaker name
• '*' – matches zero or more characters, not to be used for the speaker name
• other characters – match the specified character in the base name (not used for the speaker name)
Example:

Data file name | Mask | Speaker name
spkrAdata1.mfcc | %%%%%* | spkrA
spkrAdata1.mfcc | spkr%* | A
spkrAdata1.mfcc | ?pkr%* | A
spkrAdata1.mfcc | ?%%%%* | pkrA
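As an informal illustration of the mask semantics above — this is our own simplified sketch, not HTK's implementation, and it only handles a single trailing '*' — the following Python function extracts a speaker name from a file base name:

```python
# Simplified sketch of HTK-style speaker-mask matching:
#   '%'  matches any character and keeps it for the speaker name
#   '?'  matches any character without keeping it
#   '*'  (trailing only, in this sketch) matches the remainder of the base name
#   anything else must match the base name literally
def speaker_from_mask(base_name: str, mask: str) -> str:
    name = []
    i = 0
    for m in mask:
        if m == '*':
            break                              # ignore the rest of the base name
        if i >= len(base_name):
            raise ValueError("mask is longer than the base name")
        if m == '%':
            name.append(base_name[i])          # contributes to the speaker name
        elif m != '?' and m != base_name[i]:
            raise ValueError("literal character mismatch")
        i += 1
    return ''.join(name)

# e.g. speaker_from_mask("spkrAdata1.mfcc", "%%%%%*") -> "spkrA"
#      speaker_from_mask("spkrAdata1.mfcc", "?pkr%*") -> "A"
```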
Adaptation Transform Estimation Using HERest
Specifying the adaptation configuration:

HADAPT:TRANSKIND = CMLLR
HADAPT:USEBIAS = TRUE
HADAPT:BASECLASS = global
HADAPT:ADAPTKIND = BASE
HADAPT:KEEPXFORMDISTINCT = TRUE
HADAPT:TRACE = 61
HMODEL:TRACE = 512

HERest -u a -h <MASK> -J <BASE CLASS DIRECTORY> -K <XFORM OUTPUT DIR> <XFORM OUTPUT EXT> -C <CONFIG> -H <MMF> -I <TRAIN MLF> -S <TRAIN SCRIPT> ...

• '-u a' – update adaptation transforms (type 'a' for adaptation transform)
• '-h' – specifies the speaker mask
• '-J' – specifies the directory in which to find the 'global' file
• '-K' – specifies the output transform directory and extension
• '-I' – loads the model-level transcription (supervised vs. unsupervised)

Applying Adaptation Transforms
To apply an adaptation transform using HVite:

HVite -k -h <MASK> -J <BASE CLASS DIRECTORY> -J <XFORM INPUT DIR> <XFORM INPUT EXT> -C <CONFIG> -H <MMF> -w <NETWORK> -S <TEST SCRIPT> ...

• '-k' – specifies that adaptation input transforms will be used
• '-h' – specifies the speaker mask
• '-J' – specifies the directory in which to find the 'global' file, as well as the directory and extension for the adaptation transform files

To apply an adaptation transform using HERest:

HERest -u a -h <MASK> -J <BASE CLASS DIRECTORY> -J <XFORM INPUT DIR> <XFORM INPUT EXT> -K <XFORM OUTPUT DIR> <XFORM OUTPUT EXT> ...

• '-J' – specifies the directory in which to find the 'global' file, as well as the directory and extension for the adaptation transform files
• other options are the same as before

Recap
• Feature normalisation:
– Cepstral mean & variance normalisation
– Vocal tract length normalisation
– Gaussianisation
• Speaker adaptation:
– Supervised vs. unsupervised adaptation
– Online vs. offline adaptation
– MAP & MLLR adaptation
– Speaker adaptive training
• Speaker adaptation using HTK
Learning – Section I PPT Courseware
I. Language knowledge: common spoken expressions for talking about study
1. Are you busy with your study this term? 你这学期忙吗?
2. Do you have a busy schedule this term? 你这学期课程安排紧吗?
3. How many courses do you have this term? 你这学期上几门课?
4. I'm afraid the course load is a little too heavy. 恐怕课程负担太重了。
5. I think you'd better carry a lighter load in the first term.
③ learn about new ideas. I'm always ④ curious about new things. I'm working on a science ⑤ project with my friends. It's great. I like to ⑥ work with a partner or do ⑦ group work after class. My learning ⑧ goal is to get into college. And I'm going to enter the country science ⑨ competition next month. I'm sure I'll do ⑩ well.
... the Parent and Teacher Association (PTA), as well as general private educational institutions of a more professional nature.
• An individual's natural, habitual and preferred way(s) of absorbing, processing, and retaining new information and skills (Dornyei, 2006)
• Perceptually based (sensory preferences; Oxford, 2001):
– Visual: "Seeing is believing!"
– Aural/auditory: "I'm all ears!"
– Kinesthetic/tactile
• Are Koreans visual, Hispanics auditory, and Americans kinesthetic?
SILL
• If you score low overall or on any of the categories, you should try increasing your use of strategies or certain type of strategies. • But do you really need to?
Categories of L2 learning strategies
5. Affective strategies: managing emotions, e.g. rewarding oneself for good performance; using deep breathing or positive self-talk
6. Social strategies: working with others to understand the target culture and language, e.g. asking for clarification of a confusing point; exploring cultural and social norms
ID: Learning Styles
• Cognitive styles (how learners process information): field-independent vs. field-dependent
• Mini discussion:
1) What's your style? (visual/aural/kinesthetic; field-dependent/field-independent)
2) Implications for teaching:
– Don't judge; be open-minded
– Styles are subject to cultural/situational factors
– No teaching method suits everyone
– Create a variety of activities
ID: Personality/Affective Factors
• Extroverted vs. introverted
• Extroversion is the extent to which a person needs to receive ego enhancement, self-esteem, and a sense of wholeness from other people (Brown, 2007).
• Research has failed to show favourable results for extroverts.
• Culture-specific
Oxford’s Strategy Groups
A. Memory: grouping, making associations, using imagery
B. Mental processes (cognitive): repeating, practicing sounds and writing, using formulas and patterns
C. Compensating for missing knowledge (compensatory): guessing, trying to understand through context, using gestures
D. Organizing and evaluating learning (metacognitive): overviewing and linking with material you already know, deciding what to pay attention to
E. Managing emotions (affective): lowering anxiety, encouraging yourself through positive statements, talking about feelings and attitudes
F. Learning with others (social): asking questions for clarification or verification, asking for correction, cooperating with peers
6 main categories of L2 learning strategies
1. Cognitive strategies: manipulating the language materials in direct ways, e.g. analysis, note-taking, summarizing, reasoning, synthesizing
2. Metacognitive strategies: executive skills that may entail planning for, monitoring, or evaluating the success of a learning activity, e.g. setting goals, finding native speakers to talk with, arranging a study place, self-evaluating
• Interacts with situation-specific variables
Other personality factors
• Inhibition
• Anxiety:
– Debilitative vs. facilitative
– A static trait or context-specific?
– Is anxiety a bad thing?
• Personality traits seem to be related to oral skills but not to literacy
Mnemonic Strategies
Calling – Cognitive
My – Mnemonic
Mum – Metacognitive
Creates – Affective
Stress – Social
Mnemonic Strategies
Other examples come from learning in Teaching Chinese to Speakers of Other Languages (TCSOL):
•Memorizing characters through mnemonics is a recognized strategy for beginners.
Exploring strategy use: SILL (Oxford, 1989)
• Strategy Inventory for Language Learning
1) Write one number from 1 to 5 for each question: 1 = never … 5 = always
2) Add up your scores for each section
3) Calculate averages in the middle of p.298
4) p.299: fill out the score for the different parts
5) Label them: cognitive, metacognitive, memory, compensatory, affective, social
6) Compare your answers with a partner
Categories of L2 learning strategies
3. Memory-related strategies: help learners link one L2 concept with another, but do not necessarily involve deep understanding, e.g. acronyms, rhyming, forming a mental picture of a new word, flashcards
4. Compensatory strategies: help learners make up for missing knowledge, e.g. guessing from context in listening and reading, using gestures, using synonyms to aid speaking and writing
Learning strategies
• Learning strategies refer to "specific actions, behaviors, steps, or techniques that students use to improve their own progress in developing skills in a second or foreign language."
• These strategies can facilitate the internalization, storage, retrieval, or use of the new language.