

(2000) Neural network modeling of categorical perception


Running Head: Neural Models of CP

Neural Network Models of Categorical Perception

R. I. Damper and S. R. Harnad
University of Southampton

Abstract

Studies of the categorical perception (CP) of sensory continua have a long and rich history in psychophysics. In 1977, Macmillan et al. introduced the use of signal detection theory to CP studies. Anderson et al. simultaneously proposed the first neural model for CP, yet this line of research has been less well explored. In this paper, we assess the ability of neural-network models of CP to predict the psychophysical performance of real observers with speech sounds and artificial/novel stimuli. We show that a variety of neural mechanisms is capable of generating the characteristics of categorical perception. Hence, CP may not be a special mode of perception but an emergent property of any sufficiently powerful general learning system.

Neural Network Models of Categorical Perception

Studies of the categorical perception (CP) of sensory continua have a long and rich history.
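Since signal detection measures such as d′ recur throughout the discussion below, a minimal sketch of the two basic SDT quantities may help fix ideas. These are the standard textbook formulas, not anything specific to this paper, and the example rates are invented for illustration (stdlib only):

```python
from statistics import NormalDist

def dprime(hit_rate, fa_rate):
    """Sensitivity d' = z(H) - z(F): the separation, in standard-deviation
    units, between the two stimulus distributions assumed by signal
    detection theory. Rates of exactly 0 or 1 must be nudged inward first,
    since the inverse normal CDF diverges there (a standard practical fix)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate, fa_rate):
    """Response bias c = -(z(H) + z(F)) / 2; zero indicates no bias."""
    z = NormalDist().inv_cdf
    return -(z(hit_rate) + z(fa_rate)) / 2

# Symmetric hit/false-alarm rates give an unbiased observer with d' near 2:
sensitivity = dprime(0.84, 0.16)
bias = criterion(0.84, 0.16)
```

The point of separating the two quantities is exactly the one made by Macmillan et al.: a steep labeling curve can reflect decision criteria as well as sensitivity.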
For a comprehensive review up until a decade ago, see the volume edited by Harnad (1987). An important question concerns the precise definition of CP. According to the seminal contribution of Macmillan, Kaplan and Creelman (1977), “The relation between an observer’s ability to identify stimuli along a continuum and his ability to discriminate between pairs of stimuli drawn from that continuum is a classic problem in psychophysics” (p. 452). The extent to which discrimination is predictable from identification performance has long been considered the acid test of categorical – as opposed to continuous – perception. Continuous perception is often characterized by (approximate) adherence to Weber’s law, according to which the difference limen is a constant fraction of stimulus magnitude. Also, discrimination is much better than identification:1 Observers can make only a small number of identifications along a single dimension, but they can make relative (e.g. pairwise) discriminations between a much larger number of stimuli (Miller, 1956). By contrast, CP was originally defined (e.g., Liberman, Harris, Hoffman and Griffith, 1957; Liberman, Cooper, Shankweiler and Studdert-Kennedy, 1967) as occurring when discrimination is possible only between identifiable categories, not within them. There are, however, degrees of CP. The strongest case – identification grain equal to discrimination grain – would mean that observers could only discriminate what they can identify: one just noticeable difference would be the size of the whole category. This (strongest) criterion has rarely, if ever, been met; a weaker criterion takes the coincidence of the point of maximum ambiguity in the identification function with the peak in discrimination as “the essential defining characteristic of categorical perception” (p. 253). Macmillan et al. (1977) generalize this, defining CP as “the equivalence of discrimination and identification measures”, which can take place “in the absence of a boundary effect since there need be no local maximum in discriminability” (p. 453). In the same issue of the same journal, Anderson, Silverstein, Ritz and Jones (1977) proposed the first neural model of CP (the brain-state-in-a-box), based jointly on neurophysiological considerations and linear systems theory. Simulated
labeling (identification) and ABX discrimination curves were shown to be very much like those published in the experimental psychology literature. In the present paper, we will refer to such network models as exhibiting synthetic CP. Since then, there have been a number of other neural-net models for synthetic CP. These have mostly considered artificial or novel continua, whereas experimental work with human subjects has usually considered speech (or, more accurately, synthetic speech) continua, where perception may be innate, as in the case of color vision (e.g., Bornstein, 1987), or learned.

Early Characterizations from Speech CP

The phenomenon of CP was first observed and characterized in the seminal studies of the perception of synthetic speech at Haskins Laboratories, initiated by Liberman et al. (1957); see Liberman, 1996, for a comprehensive historical review. The impact of these studies on the field has been tremendous. Massaro (1987a) writes, “The study of speech perception has been almost synonymous with the study of categorical perception” (p. 90). Liberman et al. (1957) investigated the perception of syllable-initial stop consonants (/b/, /d/ and /g/) varying in place of articulation, cued by second-formant transition. Liberman, Delattre and Cooper (1958) went on to study the voiced/voiceless contrast cued by first-formant cutback. In both cases, perception was found to be categorical, in that a steep labeling function and a peaked discrimination function (in an ABX task) were observed, with the peak at the phoneme boundary corresponding to the 50% point of the labeling curve. Fry, Abramson, Eimas and Liberman (1962) found the perception of long, steady-state vowels to be much “less categorical” than stop consonants. An important finding of Liberman et al. (1958) was that the voicing boundary depended systematically on place of articulation. In subsequent work, Lisker and Abramson (1970) found that as the place of articulation moves back in the vocal tract from bilabial (for a /ba–pa/ voice onset time (VOT) continuum) through alveolar (/da–ta/) to velar (/ga–ka/), so the boundary moves from about 25 ms VOT through about 35 ms to approximately 42 ms. Why this should happen is uncertain. For
instance, Kuhl (1987) writes: “We simply do not know why the boundary ‘moves’.” (p. 365). One important implication, however, is that CP is more than merely bisecting a continuum, otherwise the boundary would be at mid-range in all three cases. At the time of the early Haskins work, and for some years thereafter, CP was thought to reflect a mode of perception special to speech (e.g., Liberman et al., 1957, 1967; Studdert-Kennedy, Liberman, Harris and Cooper, 1970) in which the listener somehow made reference to production. It was supposed that an early and irreversible conversion took place of the continuous sensory representation into a discrete, symbolic code subserving both perception and production (the motor theory of speech perception). On this view, the perception of stop consonants is more categorical than that of steady-state vowels because the articulatory gestures that produce the former are more discrete than the relatively continuous gestures producing the latter. Although there is little or no explicit mention of Weber’s law in early discussions of motor theory, its violation is one of the aspects in which CP was implicitly supposed to be special. Also, at that time, CP had not been observed for stimuli other than speech, a situation which was soon to change. According to Macmillan, Braida and Goldberg (1987), however, “all...discrimination data require psychoacoustic explanation, whether they resemble Weber’s Law, display a peak, or are monotonic” (p. 32). In spite of attempts to modify it suitably (e.g., Liberman and Mattingly, 1985, 1989; Liberman, 1996), the hypothesis that CP is special to speech has been unable to bear the weight of accumulating contrary evidence. One strong line of contrary evidence comes from psychoacoustic investigations, notably that of Kuhl and Miller (1978), using non-human animal listeners who “by definition, [have] no phonetic resources” (p. 906). These workers trained four chinchillas to respond differentially to the 0 ms and 80 ms endpoints of a synthetic VOT continuum as developed by Abramson and Lisker (1970). They then tested their animals on stimuli drawn from the
full 0 to 80 ms range. Four human listeners also labeled the stimuli for comparison. Kuhl and Miller found “no significant differences between species on the absolute values of the phonetic boundaries...obtained, but chinchillas produced identification functions that were slightly, but significantly, less steep” (p. 905). Figure 1 shows the mean identification functions obtained for bilabial, alveolar and velar synthetic VOT series (Kuhl and Miller’s Figure 10). In this figure, smooth curves have been fitted to the raw data points (at 0, 10, 20, ... 80 ms). Subsequently, working with macaques, Kuhl and Padden (1982, 1983) confirmed that these animals showed increased discriminability at the phoneme boundaries. Although animal experiments of this sort are methodologically challenging, and there have been difficulties in replication (e.g., Howell, Rosen, Laing and Sackin, 1992, working with chinchillas), the convergence of human and animal data in this study has generally been taken as support for the notion that general auditory processing and/or learning principles underlie this version of CP. The emerging classical characterization of CP has been neatly summarized by Treisman, Faulkner, Naish and Rosner (1995) as encompassing four features: “a sharp category boundary, a corresponding discrimination peak, the predictability of the discrimination function from identification, and resistance to contextual effects” (p. 335). These authors go on to critically assess this characterization, referring to “the unresolved difficulty that identification data usually predict a lower level of discrimination than is actually found” (pp. 336–7) as, for example, in the work of Liberman et al. (1957), Studdert-Kennedy et al. (1970), Macmillan et al. (1977) and Pastore (1987a). They also remark on the necessity to qualify “Studdert-Kennedy et al.’s claim that context effects [and other sequential dependencies] are weak or absent in categorical perception” (see also Healy and Repp, 1982). We will take the classical characterization of CP to encompass only
the first three aspects identified above, given the now rather extensive evidence for context effects and sequential dependencies (e.g., Brady and Darwin, 1978; Diehl, Elman and McCusker, 1978; Rosen, 1979; Repp and Liberman, 1987; Diehl and Kluender, 1987) which can shift the category boundary.4 The signal-detection view of CP differs from this classical characterization in two main ways. First, it separates measures of sensitivity (d′) from measures of response bias (β). Second, the two views differ in details of how discrimination is predicted from identification, with labeling playing a central role in both aspects of performance in the classical view. We deal with the latter point in some detail below; brief remarks on the first point follow immediately. Figure 2(a) (after Massaro, 1978b) shows the transformation of stimulus to response in an identification or discrimination task as a two-stage process: a sensory operation followed by a decision operation. This is obviously consistent with SDT’s separation of sensitivity factors (having to do with sensory perception) and response bias (having to do with decision processes). The importance of this in the current context is that classical notions of CP are ambiguous about which of the representations are categorical: The information passed between sensory and decision processes need not itself be categorical even when the decision process imposes a categorical partition. Emphasizing this distinction, Massaro (1987a) writes, “I cannot understand why categorization behavior was (and continues to be) interpreted as evidence for categorical perception. It is only natural that continuous perception should lead to sharp category boundaries along a stimulus continuum” (p. 115). (See also Hary and Massaro, 1982, and the reply of Pastore, Szczesiul, Wielgus, Nowikas and Logan, 1984.) According to SDT, the observer’s evidence takes the form of a continuous decision variate. In the simplest case, there are two kinds of presentation (historically called noise and signal-plus-noise), whose internal distributions are separated by a distance d′. Stimuli are then judged to be from one class or the other according to whether they give rise to a decision variate above or below a criterion. […] the Haskins model (p. 345), which also predicts (from labeling) a peak as described below. Moreover, the absolute value of the observed discrimination performance is
close to that predicted by CST. This is not the case with the Haskins model, which predicts a lower performance than is actually observed, as discussed immediately below. The better fit achieved by CST relative to the Haskins model is attributed to the former’s additional criterion-setting assumptions. In the Haskins model, ABX discrimination is predicted from identification via covert labeling. First A is labeled covertly (in the sense that the subject is not required to report this judgment to the investigator, as in overt identification), then B, then X: If the A and B labels are different, the subject responds that X is A or B according to X’s label; otherwise the subject guesses. On this basis, ABX discrimination is predictable from identification. Indeed, one of the criticisms of this paradigm (e.g., Pisoni and Lazarus, 1974; Massaro and Oden, 1980) is that it […]. The prediction can be written (Equation 1) as P(c) = D + (1 − D)/2, where D = p_A(1 − p_B) + p_B(1 − p_A) is the probability that A and B receive different covert labels; when the labels match, the subject guesses correctly with probability 1/2. Observed discrimination almost always exceeds this prediction – a discrepancy which is “neither a strong confirmation for advocates of categorical perception nor a central result for any alternative view” (p. 91). However, the dual-process model of Fujisaki and Kawashima (1969, 1970, 1971) does indeed effectively take this discrepancy as the basis of an alternative view, in which a continuous (auditory) and a categorical (phonetic) mode of processing co-exist (Figure 2(b)). If the subject fails to label A and B differently via the categorical route then, instead of guessing, the continuous (but decaying) representations of A and B are consulted. According to Macmillan et al. (1977, p. 454), the extent to which Equation 1 underestimates discrimination determines the weight to be given to each process so as to fit the data best. They criticize dual-process theory for “its embarrassing lack of parsimony” (p. 467), however, in that everything that can be done via the discrete route (and more) can also be achieved via the continuous route. The theory does, however, have other strengths. It can explain, for instance, the effect that the memory requirements of the experimental procedure have on CP, on the basis that the two processes have different memory-decay properties. Macmillan et al. point out that the Haskins model is tacitly based on low threshold assumptions5, arguing that mere
correlation between observed discrimination and that predicted from identification is inadequate support for the notion of CP. By contrast, they characterize CP on the basis of signal detection theory, in terms of the equivalence of discrimination and identification. The essential defining characteristic of CP is then considered to be the equivalence of identification d′ and discrimination d′. The Braida and Durlach model assumes a distribution corresponding to each point on the continuum, and then finds a d′ for each pair of adjacent stimuli from the identification data. If a d′ corresponding to the same pair of distributions is obtained in ABX discrimination, then these two sensitivity measures should be equal if discrimination is indeed predictable from identification. To avoid the low threshold assumptions of a discrete set of internal states, Macmillan et al. extended Green and Swets’ earlier (1966) derivation of d′ for two-interval forced choice (2IFC), analyzing ABX (pp. 458–9) as a 2IFC subtask (to determine if the standards are in the order AB or BA6), followed by a yes-no subtask. This is described as “a continuous (SDT) model for categorical perception” (p. 462). This view of the importance of continuous information to CP is gaining ground over the classical characterization of CP. For instance, Treisman et al. (1995) state that “CP resembles standard psychophysical judgements” (p. 334), while Takagi (1995) writes, “In fact, the signal detection model is compatible with both categorical and continuous patterns of identification/discrimination data” (p. 569).

Neural Models of CP: A Review

In this section, we present a historical review of synthetic CP. We concentrate initially on the Anderson et al. model because of its greater simplicity and perspicacity, and its more direct and obvious use in modeling human psychophysical data. Anderson, Silverstein, Ritz and Jones (1977) consider networks of neurons7 which “are simple analog integrators of their inputs” (p. 416). They extend the earlier work mentioned above (e.g., Anderson, 1968) in two main ways. It had previously been assumed (p. 413) that: 1. nervous system activity could be represented by the pattern of activation across a group of cells; 2. different memory
traces make use of the same synapses; 3. synapses associate two patterns by incrementing synaptic weights in proportion to the product of pre- and post-synaptic activities. The form of learning implied in 3 is in effect correlational, and has been called Hebbian. Suppose N-dimensional input pattern vectors f_k are to be associated with M-dimensional output pattern vectors g_k by a net with N input units and M output units. The matrix associating pattern pair k is A_k = g_k f_k^T, where T denotes the vector transpose. In this way, the overall connectivity matrix is determined as A = Σ_k A_k = Σ_k g_k f_k^T, summed over all pattern pairs k (Equation 2). Provided the input vectors are orthonormal, presenting f_j recovers the associated output, since A f_j = Σ_k g_k (f_k^T f_j) = g_j. Anderson et al.’s two extensions are to dispense with the M distinct output units, introducing instead positive feedback from the set of N units onto itself, and to limit activations between bounds ±C. That is, the activation function of each neuron is linear-with-saturation. Thus, in use, all units are eventually driven into saturation (either positive or negative in sense) and the net has stable states corresponding to some (possibly all) of the corners of a hypercube in N-dimensional state space. (Of course, not all corners are necessarily stable.) For this reason, the model was called brain-state-in-a-box (BSB). Viewed as vectors in the state space, these corners are the eigenvectors of A and can be identified, in psychological terms, with perceptual categories. In the simulations, input patterns f_i are associated with themselves during training, that is, during computation of the (N × N) matrix A. This is an essentially unsupervised operation. However, if the training patterns are labeled with their pattern class, the corners of the box can be similarly labeled according to the input patterns that they attract. (There is, of course, no guarantee that all corners will be so labeled. Corners which remain unlabeled correspond to “rubbish” states.) In Anderson et al.’s simulation, test stimuli were formed by adding zero-mean Gaussian noise to the training patterns; the standard deviation (SD) of the noise was either 0.0, 0.1, 0.2 or 0.4. We have replicated Anderson et al.’s simulation. Weights were calculated from the outer product of each of the two training patterns with itself, added to produce the feedback matrix in accordance with Equation 2. Testing used 1,000 presentations under three different noise conditions. During testing, the activation of each neuron was computed from α·extinput_i + β·intinput_i, where extinput_i is the external input to unit i, intinput_i is the internal (feedback) input to the same unit, and α and β are scale factors. A decay
term multiplying the previous activation act_i was also present in the general model; here the decay was set to 1, so that the activation is given simply by Equation 3. For the replication of Anderson et al.’s simulation, the external scale factor and the internal scale factor were both set at 0.1. The saturation limit for the neurons was set at […]; the noise conditions were SD = 0.0, 0.4 and 0.7. In all noise-free cases, the system converged to one of its two stable, saturating states for all 1,000 inputs. For the added-noise conditions, there was only a very small likelihood of convergence to an unlabeled corner (rubbish state). This occurred for approximately 1% of the inputs when SD = 0.7. Figure 3(b) shows the identification results obtained by noting the proportion of inputs which converge to the saturating state corresponding to endpoint 0. For the no-noise condition, categorization is perfect, with the class boundary at the midpoint between the prototypes. For the noise conditions (e.g., SD = 0.7), the labeling curves are very reasonable approximations to those seen in the classical CP literature. Overall, this replicates the essential findings of Anderson et al. Consider next the ABX discrimination task. Anderson et al. considered two inputs to the net to be discriminable if they converged to different stable states. (Note that as Anderson et al. are considering a simple two-class problem with convergence to one or other of the two labeled states, and no rubbish states, they are never in the situation of having A, B and X all covertly labeled differently, as can conceivably happen in reality.) If they converged to the same stable state, a guess was made with probability 0.5, in accordance with Equation 1. This means that discrimination by the net is effectively a direct implementation of the Haskins model. Indeed, Anderson et al. observed a distinct peak at midrange for their intermediate-noise condition, just as in classical CP. Finally, they obtained simulated reaction times from the number of iterations to convergence. […] The criticism continues: “No physical process is defined to justify the discontinuous change in the slope of each variable when it reaches an extreme of activity...The model thus invokes a
homunculus to explain...categorical perception” (pp. 192–194). In our view, however, a homunculus is an unjustified, implicit mechanism which is, in the worst case, comparable in sophistication and complexity to the phenomenon to be explained. By contrast, Anderson et al. postulate an explicit mechanism (firing-rate saturation) which is both simple and plausible, in that something like it is a ubiquitous feature of neural systems. In the words of Lloyd (1989), “Homunculi are tolerable provided they can ultimately be discharged by analysis into progressively simpler subunculi, until finally each micrunculus is so stupid that we can readily see how a mere bit of biological mechanism could take over its duties” (p. 205). Anderson et al. go so far as to tell us what this “mere bit of biological mechanism” is – namely, rate saturation in neurons. (See Grossberg, 1978, and the reply thereto of Anderson and Silverstein, 1978, for additional discussion of the status of the non-linearity in the Anderson et al. BSB model; see also Bégin and Proulx, 1996, for more recent commentary.) To be sure, the discontinuity of the linear-with-saturation activation function is biologically and mathematically unsatisfactory, but essentially similar behavior is observed in neural models with activation functions having a more gradual transition into saturation (as detailed below).

We consider next the TRACE model (McClelland and Elman, 1986), an interactive-activation network in which connections between units within a level are inhibitory, making processing competitive. Strictly speaking, TRACE is as much a model of lexical accessing as of speech perception per se, as the existence of the word units makes plain. McClelland and Elman assumed an input in terms of something like ‘distinctive features’, which sidesteps important perceptual questions about how the distinctive features are derived from the speech signal and, indeed, about whether this is an appropriate representation or not. In their 1986 paper, McClelland and Elman simulate the categorical perception of a /g/–/k/ continuum cued by […] F1.

[Insert Figure 4 about here]

The endpoints of the continuum (stimuli 1 and 11) were more extreme than prototypical /g/ and /k/, which occurred at points 3 and 9, respectively. The word units were removed, thus excluding any top-down
lexical influence, and all phoneme units other than /g/ and /k/ were also removed. Figure 4(a) shows the initial activations (at an early time step) of the /g/ and /k/ units for each stimulus on the continuum. […] However, the variation is essentially continuous rather than categorical, as shown by the relative shallowness of the curves. By contrast, after 60 time steps, the two representations are as shown in Figure 4(b). As a result of mutual inhibition between the /g/ and /k/ units, and possibly of top-down influence of phoneme units on featural units also, a much steeper (more categorical) response is seen. This appears to be a natural consequence of the competition between excitatory and inhibitory processing. Many researchers have commented on this ubiquitous finding. For instance, Grossberg (1986) states, “Categorical perception can...be anticipated whenever adaptive filtering interacts with sharply competitive tuning, not just in speech recognition experiments” (p. 239). McClelland and Elman go on to model overt labeling of the phonemes, basing identification on a variant of Luce’s (1959) choice rule. The result is shown in Figure 4(c), which also depicts the ABX discrimination function. The choice rule involves setting a constant k […]: “k determined the shape of the identification functions...A rather uncharitable conclusion...is that the model has been fixed up to demonstrate categorical perception...Categorical perception does not follow from any of the a priori functional characteristics of the net” (p. 151). It is also apparent that the obtained ABX discrimination curve is not very convincing, having a rather low peak relative to those found in psychophysical experiments. We consider finally the relation between discrimination and identification in TRACE.
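The labeling step just described can be made concrete. Below is a minimal sketch of a Luce-style choice rule with exponential response strengths; this is an illustrative variant, not necessarily McClelland and Elman's exact formulation, and the activation values are invented. The constant k plays exactly the curve-sharpening role criticized above:

```python
import math

def luce_choice(activations, k):
    """Choice rule with exponential response strengths:
    P(i) = exp(k * a_i) / sum_j exp(k * a_j).
    Larger k turns a shallow activation difference into a steep,
    CP-looking identification function."""
    strengths = [math.exp(k * a) for a in activations]
    total = sum(strengths)
    return [s / total for s in strengths]

# Hypothetical /g/ and /k/ unit activations for a near-boundary stimulus:
p_g_shallow = luce_choice([0.6, 0.4], k=1.0)[0]   # barely above chance
p_g_steep = luce_choice([0.6, 0.4], k=10.0)[0]    # near-categorical
```

Because k is a free parameter, the steepness of the resulting labeling curve is partly a modeling choice rather than an emergent property, which is the substance of the "fixed up" criticism quoted above.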
McClelland and Elman point out that discrimination in TRACE is influenced by the interactions among its units. (The time course of these interactions gives a way of predicting the increase in reaction time for stimuli close to the category boundary.) The authors remark on the practical difficulty of distinguishing between this feedback explanation and a dual-process explanation.

A different line of work on synthetic CP, initiated by Harnad, Hanson and Lubin (1991), uses back-propagation nets. Nets were first trained by auto-association, and the inter-stimulus distances between hidden-unit representations defined a precategorization discrimination function. This was then re-examined after categorization training to see if it had warped. That is, a CP effect was defined as a decrease in within-category inter-stimulus distances and/or an increase in between-category inter-stimulus distances relative to the baseline of auto-association alone. The stimuli studied were artificial – namely, different representations of the length of (virtual) lines – and the net’s task was to categorize these as (for instance) short or long. A back-propagation net with eight input units, a single hidden layer of 2 to 12 units, and 8 or 9 output units was used. The 8 different input lines were represented in 6 different ways, to study the effect of the input coding. After auto-association training, training recommenced with the net given a double task: auto-association (again) and categorization. For the latter, the net had to label lines 1 to 4 (for instance) as long. This required an additional output unit, making nine in this case. Strong CP effects (with warping of similarity space in the form of increased separation across the category boundary and compression within a category) were observed for all input representations: The strongest effect was obtained with the least iconic, most arbitrary (place) code. The categorization task was very difficult to learn with only two hidden units. With more hidden units, however, the pattern of behavior did not change with increasing numbers of hidden units (3 to 12).
This is taken to indicate that CP is not merely a byproduct of information compression by the hidden layer. Nor was CP a result of overlearning to extreme values, because the effect was present (albeit smaller) for larger values of the epsilon (ε) error criterion in the back-propagation algorithm. A test was also made to determine if CP was an artifact of re-using the weights from the precategorization discrimination (auto-association) stage for auto-association-plus-categorization. Performance was averaged over several precategorization nets, and compared to performance averaged over several different auto-association-plus-categorization nets. Again, although weaker and not always present, there was still evidence of synthetic CP. A final test concerned iconicity and interpolation: Was the CP restricted to trained stimuli, or would it generalize to untrained ones? Nets were trained on auto-association in the usual way, and then, during categorization training, some of the lines were left untrained (say, line 3 and line 6) to see whether they would nevertheless warp in the ‘right’ direction. Interpolation of the CP effects to untrained lines was found, but only for the coarse-coded representations. A “provisional conclusion” of Harnad et al. (1991) was that “whatever was responsible for it, CP had to be something very basic to how these nets learned”. In this and subsequent work (Harnad, Hanson and Lubin, 1995), the time-course of the training was examined and three important factors in generating synthetic CP were identified: 1. maximal interstimulus separation induced during auto-association learning, with the hidden-unit representations of each (initially random) stimulus moving as far apart from one another as possible; 2. stimulus movement to achieve linear separability during categorization learning, which undoes some of the separation achieved in 1 above, in a way which promotes within-category compression and between-category separation; 3. an inverse-distance “repulsive force” at the category boundary, pushing the
hidden-unit representation away from the boundary, which results from the form of the (inverse exponential) error metric minimized during learning. One further factor – the iconicity of the input codings – was also found to modulate CP. The general rule is that the further the initial representation is from satisfying the partition implied in 1 to 3 above (i.e. the less iconic it is), the stronger is the CP effect. Subsequently, Tijsseling and Harnad (1997) carried out a more detailed analysis focusing particularly on the iconicity. Contrary to the report of Harnad et al. (1995), they found no overshoot as in 2 above. They conclude: “CP effects usually occur with similarity-based categorization, but their magnitude and direction vary with the set of stimuli used, how [these are] carved up into categories, and the distance between those categories”. This work indicates that a feedforward net trained by back-propagation is able (despite obvious dissimilarities) to replicate the essential features of classical CP in much the way the BSB model of Anderson et al. (1977) does. There are, however, noteworthy differences. The most important is that Harnad et al.’s back-propagation nets are trained on intermediate (rather than solely on endpoint) stimuli. Thus, generalization testing is a more restricted form of interpolation. Also (because the feedforward net has no dynamic behavior resulting from feedback), reaction times cannot be quite so easily predicted as Anderson et al. do (but see below).
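To make the BSB mechanics concrete, here is a minimal sketch of the dynamics described above: the feedback matrix is a sum of outer products of the training patterns with themselves (Equation 2), activations are linear-with-saturation with decay 1 (Equation 3), and ABX discrimination follows the covert-labeling-plus-guessing rule of Equation 1. The two orthogonal prototype vectors, the scale factors, and the saturation limit are illustrative choices, not the exact values used by Anderson et al. or in the replication:

```python
import numpy as np

# Two illustrative orthogonal bipolar prototypes (endpoints of a continuum).
f0 = np.array([1, 1, 1, 1, -1, -1, -1, -1], dtype=float)
f1 = np.array([1, -1, 1, -1, 1, -1, 1, -1], dtype=float)

# Equation 2, auto-associative case: feedback matrix as a sum of outer
# products of each training pattern with itself.
A = np.outer(f0, f0) + np.outer(f1, f1)

def bsb(x_ext, alpha=0.1, beta=0.1, limit=1.0, max_steps=200):
    """Iterate Equation 3 with decay 1: new activation = old activation
    + scaled external input + scaled feedback, clipped at +/-limit
    (linear-with-saturation). Returns the final state and the number of
    steps to reach a corner (a proxy for simulated reaction time)."""
    x = np.zeros_like(x_ext)
    for step in range(1, max_steps + 1):
        x = np.clip(x + alpha * x_ext + beta * (A @ x), -limit, limit)
        if np.all(np.abs(x) == limit):   # saturated: a corner of the box
            return x, step
    return x, max_steps                  # no corner reached within budget

def abx_discriminable(a, b, rng):
    """Haskins-style rule (Equation 1): A and B count as discriminated if
    they converge to different corners; otherwise guess with p = 0.5."""
    corner_a, _ = bsb(a)
    corner_b, _ = bsb(b)
    if not np.array_equal(corner_a, corner_b):
        return True
    return rng.random() < 0.5
```

Adding zero-mean noise to interpolated inputs (1 − λ)·f0 + λ·f1 and recording which corner each presentation reaches traces out a labeling curve of the kind shown in Figure 3(b), with iteration counts rising near the boundary as a reaction-time proxy.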

My New Fan (English Composition)


Title: My New Fan

Recently, I acquired a new addition to my room that has significantly enhanced my living space: a sleek and efficient fan. Its arrival heralded a new era of comfort and tranquility within my abode. Allow me to delve into the various aspects of this newfound delight.

First and foremost, let me extol the virtues of its design. The fan boasts a contemporary aesthetic, seamlessly blending into the decor of my room. Its slender silhouette occupies minimal space, yet its impact is monumental. Crafted with precision, its smooth contours and minimalist finish evoke a sense of modernity and sophistication.

Functionality is the cornerstone of this appliance, and it does not disappoint. With multiple speed settings, it offers versatility tailored to my specific needs. Whether I seek a gentle breeze to lull me into relaxation or a powerful gust to combat the sweltering heat, this fan delivers with aplomb. Moreover, its oscillation feature ensures even distribution of airflow, creating a refreshing ambiance throughout the room.

Energy efficiency is a crucial consideration in today's eco-conscious world, and this fan embodies sustainability. Equipped with energy-saving technology, it consumes minimal power while delivering maximum performance. This not only reduces my carbon footprint but also translates into tangible savings on utility bills, a testament to its economic viability.

In addition to its functionality, the fan excels in the realm of convenience. Its remote control allows me to adjust settings with ease from the comfort of my bed or desk. Gone are the days of fumbling with cumbersome switches or cords; now, convenience is merely a button press away. Furthermore, its whisper-quiet operation ensures uninterrupted peace, allowing me to work, rest, or sleep without disturbance.

The benefits of this fan extend beyond mere physical comfort; it also contributes to my overall well-being.
By improving air circulation and ventilation, it creates a healthier indoor environment, free from staleness or stuffiness. This, in turn, promotes better respiratory health and enhances cognitive function, enabling me to thrive in my daily endeavors.

In conclusion, the acquisition of my new fan has been a transformative experience, elevating my living space to new heights of comfort and functionality. Its impeccable design, unmatched performance, and myriad features have made it an indispensable asset in my daily life. As I bask in the cool breeze it provides, I am reminded of the simple pleasures that enhance the quality of our existence. Truly, this fan is not merely an appliance but a symbol of comfort, convenience, and well-being.

SLA: A Summary of Important Issues in Second Language Acquisition


SLA Final Exam Outline

Week 9: Chapter 1, Introducing Second Language Acquisition; Chapter 2, Foundations of Second Language Acquisition

PART ONE: Definitions

1. Second Language Acquisition (SLA): a term that refers both to the study of individuals and groups who are learning a language subsequent to learning their first one as young children, and to the process of learning that language.
2. Formal L2 learning: instructed learning that takes place in classrooms.
3. Informal L2 learning: SLA that takes place in naturalistic contexts.
4. First language/native language/mother tongue (L1): A language that is acquired naturally in early childhood, usually because it is the primary language of a child’s family. A child who grows up in a multilingual setting may have more than one “first” language.
5. Second language (L2): In its general sense, this term refers to any language that is acquired after the first language has been established. In its specific sense, this term typically refers to an additional language which is learned within a context where it is societally dominant and needed for education, employment, and other basic purposes. The more specific sense contrasts with foreign language, library language, auxiliary language, and language for specific purposes.
6. Target language: The language that is the aim or goal of learning.
7. Foreign language: A second language that is not widely used in the learners’ immediate social context, but rather one that might be used for future travel or other cross-cultural communication situations, or one that might be studied as a curricular requirement or elective in school with no immediate or necessary practical application.
8. Library language: A second language that functions as a tool for further learning, especially when books and journals in a desired field of study are not commonly published in the learner’s L1.
9. Auxiliary language: A second language that learners need to know for some official functions in their immediate sociopolitical setting.
Or that they will need for purposes of wider communication, although their first language serves most other needs in their lives.10.Linguistic competence: The underlying knowledge that speakers/hearers have of a language. Chomsky distinguishes this from linguistic performance.11.Linguistic performance: The use of language knowledge in actual production.municative competence: A basic tenet (原则、信条、教条) of sociolinguistics defined as “what a speaker needs to know to communicate appropriately within a particular language community” (Saville-Troike 2003)13.Pragmatic competence: Knowledge that people must have in order to interpret and convey meaning within communicative situations.14.Multilingualism: The ability to use more than one language.15.Monolingualism: The ability to use only one language.16.Simultaneous multilingualism: Ability to use more than one language that were acquired during early childhood.17.Sequential multilingualism: Ability to use one or more languages that were learned after L1 had already been established.18.Innate capacity: A natural ability, usually referring to children’s natural ability to learn or acquire language.19.Child grammar: Grammar of children at different maturational levels that is systematic in terms of production and comprehension.20.Initial state: The starting point for language acquisition; it is thought to include the underlying knowledge about language structures and principles that are in learners’ heads at the very start of L1 or L2 acquisition.21.Intermediate state: It includes the maturational changes which take place in “child grammar”, and the L2 developmental sequence which is known as learner language.22.Final state: The outcome of L1 and L2 leaning, also known as the stable state of adult grammar.23.Positive transfer: Appropriate incorporation of an L1 structure or rule in L2 structure.24.Negative transfer: Inappropriate influence of an L1 structure or rule on L2 use. 
Also called interference.25.Poverty-of-the-stimulus: The argument that because language input to children is impoverished and they still acquire L1, there must be an innate capacity for L1 acquisition.26.Structuralism: The dominant linguistic model of the 1950s, which emphasized the description of different levels of production in speech.27.Phonology: The sound systems of different languages and the study of such systems generally.28.Syntax: The linguistic system of grammatical relationships of words within sentences, such as ordering and agreement.29.Semantics: The linguistic study of meaning.30.Lexicon: The component of language that is concerned with words and their meanings.31.Behaviorism: The most influential cognitive framework applied to language learning in the 1950s. It claims that learning is the result of habit formation.32.Audiolingual method: An approach to language teaching that emphasizes repetition and habit formation. This approach was widely practiced in much of the world until at least the 1980s.33.Transformational-Generative Grammar: The first linguistic framework with an internal focus, which revolutionized linguistic theory and had profound effect on both the study of first and second languages. Chomsky arguedeffectively that the behaviorist theory of language acquisition is wrong because it cannot explain the creative aspects of linguistic ability. Instead, humans must have some innate capacity for language.34.Principles and Parameters (model): The internally focused linguistic framework that followed Chomsky’s Transformational-Generative Grammar. 
It revised specifications of what constitutes innate capacity to include more abstract notions of general principles and constraints common to human language as part of a Universal Grammar.35.Minimalist program: The internally focused linguistic framework that followed Chomsky’s Principles and Parameters model.This framework adds distinctions between lexical and functional category development, as well as more emphasis on the acquisition of feature specification as a part of lexical knowledge.36.Functionalism: A linguistic framework with an external focus that dates back to the early twentieth century and has its roots in the Prague School (布拉格学派) of Eastern Europe. It emphasizes the information content of utterances and considers language primarily as a system of communication. Functionalist approaches have largely dominated European study of SLA and are widely followed elsewhere in the world.37.Neurolinguistics: The study of the location and representation of language in the brain, of interest to biologists and psychologists since the nineteenth century and one of the first fields to influence cognitive perspectives on SLA when systematic study began in 1960s.38.Critical period: The limited number of years during which normal L1 acquisition is possible.39.Critical Period Hypothesis: The claim that children have only a limited number of years during which they can acquire their L1 flawlessly; if they suffered brain damage to the language areas, brain plasticity in childhood would allow other areas of the brain to take over the language functions of the damaged areas, but beyond a certain age, normal language development would not be possible. 
This concept is commonly extended to SLA as well, in the claim that only children are likely to achieve native or near-native proficiency in L2.rmation processing (IP): A cognitive framework which assumes that SLA (like learning of other complex domains) proceeds from controlled to automatic processing and involves progressive reorganization of knowledge.41.Connectionism: A cognitive framework for explaining learning processes, beginning in the 1980s and becoming increasingly influential. It assumes that SLA results from increasing strength of associations between stimuli and responses.42.Variation theory: A microsocial framework applied to SLA that explores systematic differences in learner production which depend on contexts of use.43.Accommodation theory: A framework for study of SLA that is based on the notion that speakers usually unconsciously change their pronunciation and even the grammatical complexity of sentences they use to sound more like whomever they are talking to.44.Sociocultural theory (SCT): An approach established by Vygotsky which claims that interaction not only facilitates language learning but is a causative force in acquisition. Further, all of learning is seen as essentially a social process which is grounded in sociocultural settings.45.Ethnography(人种论、民族志) of communication: A framework for analysis of language and its functions that was established by Hymes(1966). It relates language use to broader social and cultural contexts, and applies ethnographic methods of data collection and interpretation to study of language acquisition and use.46.Acculturation(文化适应): Learning the culture of the L2 community and adapting to those values and behavior patterns.47.Acculturation Model/Theory: Schumann’s (1978) theory that identifies group factors such as identity and status which determine social and psychological distance between learner and target language populations. 
He claims these influence outcomes of SLA.
48. Social psychology: A societal approach in research and theory that allows exploration of issues such as how identity, status, and values influence L2 outcomes and why. It has disciplinary ties to both psychological and social perspectives.

PART TWO: Short & Long Answers

Chapter 1

1. What are the similarities and differences between linguists, psycholinguists, sociolinguists and social psycholinguists? (P3)
(1) Linguists emphasize the characteristics of the differences and similarities in the languages that are being learned, and the linguistic competence (underlying knowledge) and linguistic performance (actual production) of learners at various stages of acquisition.
(2) Psychologists emphasize the mental or cognitive processes involved in acquisition, and the representation of languages in the brain.
(3) Sociolinguists emphasize variability in learner linguistic performance, and extend the scope of study to communicative competence (underlying knowledge that additionally accounts for language use, or pragmatic competence).
(4) Social psychologists emphasize group-related phenomena, such as identity and social motivation, and the interactional and larger social contexts of learning.

2. What are the differences between second language, foreign language, library language and auxiliary language? (P4)
(1) A second language is typically an official or societally dominant language needed for education, employment, and other basic purposes. It is often acquired by minority group members or immigrants who speak another language natively. In this more restricted sense, the term is contrasted with other terms in this list.
(2) A foreign language is one not widely used in the learners' immediate social context which might be used for future travel or other cross-cultural communication situations, or studied as a curricular requirement or elective in school, but with no immediate or necessary practical application.
(3) A library language is one which functions primarily as a tool for future learning through reading, especially when books or journals in a desired field of study are not commonly published in the learners' native tongue.
(4) An auxiliary language is one which learners need to know for some official functions in their immediate political setting, or will need for purposes of wider communication, although their first language serves most other needs in their lives.

3. Why are some learners more (or less) successful than others? (P5)
The intriguing question of why some L2 learners are more successful than others requires us to unpack the broad label "learners" for some dimensions of discussion. Linguists may distinguish categories of learners defined by the identity and relationship of their L1 and L2; psycholinguists may make distinctions based on individual aptitude for L2 learning, personality factors, types and strength of motivation, and different learning strategies; sociolinguists may distinguish among learners with regard to social, economic, and political differences and learner experiences in negotiated interaction; and social psychologists may categorize learners according to aspects of their group identity and attitudes toward target language speakers or toward L2 learning itself.

Chapter 2

1. List at least five possible motivations for learning a second language at an older age. (P10)
The motivation may arise from a variety of conditions, including the following:
- Invasion or conquest of one's country by speakers of another language;
- A need or desire to contact speakers of other languages in economic or other specific domains;
- Immigration to a country where use of a language other than one's L1 is required;
- Adoption of religious beliefs and practices which involve use of another language;
- A need or desire to pursue educational experiences where access requires proficiency in another language;
- A desire for occupational or social advancement which is furthered by knowledge of another language;
- An interest in knowing more about peoples of other cultures and having access to their technologies or literatures.

2. What are the two main factors that influence language learning? (P13)
(1) The role of natural ability: Humans are born with a natural ability or innate capacity to learn language.
(2) The role of social experience: Not all of L1 acquisition can be attributed to innate ability, for language-specific learning also plays a crucial role. Even if the universal properties of language are preprogrammed in children, they must learn all of those features which distinguish their L1 from all other possible human languages. Children will never acquire such language-specific knowledge unless that language is used with them and around them, and they will learn to use only the language(s) used around them, no matter what their linguistic heritage. American-born children of Korean or Greek ancestry will never learn the language of their grandparents if only English surrounds them, for instance, and they will find their ancestral language just as hard to learn as any other English speakers do if they attempt to learn it as adults. Appropriate social experience, including L1 input and interaction, is thus a necessary condition for acquisition.

3. What is the initial state of language development for L1 and L2 respectively? (P17-18)
The initial state of L1 learning is composed solely of an innate capacity for language acquisition which may or may not continue to be available for L2, or may be available only in some limited ways. The initial state for L2 learning, on the other hand, has resources of L1 competence, world knowledge, and established skills for interaction, which can be both an asset and an impediment.

4. How do intermediate states develop? (P18-19)
Cross-linguistic influence, or transfer of prior knowledge from L1 to L2, is one of the processes involved in interlanguage development. Two major types of transfer occur: (1) positive transfer, when an L1 structure or rule is used in an L2 utterance and that use is appropriate or "correct" in the L2; and (2) negative transfer (or interference), when an L1 structure or rule is used in an L2 utterance and that use is inappropriate and considered an "error".

5. What is a necessary condition for language learning (L1 or L2)? (P20)
Language input to the learner is absolutely necessary for either L1 or L2 learning to take place. Children additionally require interaction with other people for L1 learning to occur. It is possible for some individuals to reach a fairly high level of proficiency in L2 even if they have input only from such generally non-reciprocal sources as radio, television, or written text.

6. What is a facilitating condition for language learning? (P20)
While L1 learning by children occurs without instruction, and while the rate of L1 development is not significantly influenced by correction of immature forms or by degree of motivation to speak, both the rate and ultimate level of development in L2 can be facilitated or inhibited by many social and individual factors, such as (1) feedback, including correction of L2 learners' errors; (2) aptitude, including memory capacity and analytic ability; (3) motivation, or need and desire to learn; (4) instruction, or explicit teaching in school settings.

7. Give at least two reasons why many scientists believe in some innate capacity for language. (P21-24)
The notion that innate linguistic knowledge must underlie language acquisition was prominently espoused by Noam Chomsky. This view has been supported by arguments such as the following:
(1) Children's knowledge of language goes beyond what could be learned from the input they receive: Children often hear incomplete or ungrammatical utterances along with grammatical input, and yet they are somehow able to filter the language they hear so that the ungrammatical input is not incorporated into their L1 system. Further, children are commonly recipients of simplified input from adults, which does not include data for all of the complexities which are within their linguistic competence. In addition, children hear only a finite subset of possible grammatical sentences, and yet they are able to abstract general principles and constraints which allow them to interpret and produce an infinite number of sentences which they have never heard before.
(2) Constraints and principles cannot be learned: Children's access to general constraints and principles which govern language could account for the relatively short time it takes for the L1 grammar to emerge, and for the fact that it does so systematically and without any "wild" divergences. This could be so because innate principles lead children to organize the input they receive only in certain ways and not others. In addition to the lack of negative evidence, constraints and principles cannot be learned in part because children acquire a first language at an age when such abstractions are beyond their comprehension; constraints and principles are thus outside the realm of learning processes which are related to general intelligence.
(3) Universal patterns of development cannot be explained by language-specific input: In spite of the surface differences in input, there are similar patterns in child acquisition of any language in the world. The extent of this similarity suggests that language universals are not only constructs derived from sophisticated theories and analyses by linguists, but also innate representations in every young child's mind.

8. Linguists have taken an internal and/or external focus to the study of language acquisition. What is the difference between the two? (P25-26)
An internal focus emphasizes that children begin with an innate capacity which is biologically endowed, as well as the acquisition of feature specification as a part of lexical knowledge; an external focus emphasizes the information content of utterances, and considers language primarily as a system of communication.

9. What are the two main factors for the learning process in the study of SLA from a psychological perspective? (P26-27)
(1) Information processing, which assumes that L2 is a highly complex skill, and that learning L2 is not essentially unlike learning other highly complex skills. Processing itself is believed to cause learning.
(2) Connectionism, which does not consider language learning to involve either innate knowledge or abstraction of rules and principles, but rather to result from increasing strength of associations (connections) between stimuli and responses.

10. What are the two foci for the study of SLA from the social perspective? (P27)
(1) Microsocial focus: the concerns within the microsocial focus relate to language acquisition and use in immediate social contexts of production, interpretation, and interaction.
(2) Macrosocial focus: the concerns of the macrosocial focus relate language acquisition and use to broader ecological contexts, including cultural, political, and educational settings.

Week 10
Chapter 5: Social Contexts of Second Language Acquisition

PART ONE: Definitions

1. Communicative competence: A basic tenet of sociolinguistics defined as "what a speaker needs to know to communicate appropriately within a particular language community" (Saville-Troike 2003).
2. Language community: A group of people who share knowledge of a common language to at least some extent.
3. Foreigner talk: Speech from L1 speakers addressed to L2 learners that differs in systematic ways from language addressed to native or very fluent speakers.
4. Direct correction: Explicit statements about incorrect language use.
5. Indirect correction: Implicit feedback about inappropriate language use, such as clarification requests when the listener has actually understood an utterance.
6. Interaction Hypothesis: The claim that modifications and collaborative efforts which take place in social interaction facilitate SLA because they contribute to the accessibility of input for mental processing.
7. Symbolic mediation: A link between a person's current mental state and higher order functions that is provided primarily by language; considered the usual route to learning (of language, and of learning in general). Part of Vygotsky's Sociocultural Theory.
8. Variable features: Multiple linguistic forms (vocabulary, phonology, morphology, syntax, discourse) that are systematically or predictably used by different speakers of a language, or by the same speakers at different times, with the same meaning or function.
9. Linguistic context: Elements of language form and function associated with the variable element.
10. Psychological context: Factors associated with the amount of attention which is being given to language form during production, the level of automaticity versus control in processing, or the intellectual demands of a particular task.
11. Microsocial context: Features of setting/situation and interaction which relate to communicative events within which language is being produced, interpreted, and negotiated.
12. Accommodation theory: A framework for the study of SLA that is based on the notion that speakers usually unconsciously change their pronunciation and even the grammatical complexity of sentences they use to sound more like whomever they are talking to.
13. ZPD: Zone of Proximal Development, an area of potential development where the learner can only achieve that potential with assistance. Part of Vygotsky's Sociocultural Theory.
14. Scaffolding: Verbal guidance which an expert provides to help a learner perform any specific task, or the verbal collaboration of peers to perform a task which would be too difficult for any one of them in individual performance.
15. Intrapersonal interaction: Communication that occurs within an individual's own mind, viewed by Vygotsky as a sociocultural phenomenon.
16. Interpersonal interaction: Communicative events and situations that occur between people.
17. Social institutions: The systems which are established by law, custom, or practice to regulate and organize the life of people in public domains, e.g. politics, religion, and education.
18. Acculturation: Learning the culture of the L2 community and adapting to those values and behavioral patterns.
19. Additive bilingualism: The result of SLA in social contexts where members of a dominant group learn the language of a minority without threat to their L1 competence or to their ethnic identity.
20. Subtractive bilingualism: The result of SLA in social contexts where members of a minority group learn the dominant language as L2 and are more likely to experience some loss of ethnic identity and attrition of L1 skills, especially if they are children.
21. Formal L2 learning: Formal/instructed learning generally takes place in schools, which are social institutions that are established in accord with the needs, beliefs, values, and customs of their cultural settings.
22. Informal L2 learning: Informal/naturalistic learning generally takes place in settings where people contact, and need to interact with, speakers of another language.

PART TWO: Short & Long Answers

1. What is the difference between monolingual and multilingual communicative competence?
Differences between monolingual and multilingual communicative competence are due in part to the different social functions of first and second language learning, and to the differences between learning language and learning culture. The differences in competence between native speakers and non-native speakers include structural differences in the linguistic system, different rules for usage in writing or conversation, and even somewhat divergent meanings for the "same" lexical forms. Further, a multilingual speaker's total communicative competence differs from that of a monolingual in including knowledge of rules for the appropriate choice of language and for switching between languages, given a particular social context and communicative purpose.

2. What are the microsocial factors that affect SLA? (P101-102)
(a) L2 variation; (b) input and interaction; (c) interaction as the genesis of language.

3. What is the difference between linguistic and communicative competence (CC)?
Linguistic competence was defined in 1965 by Chomsky as a speaker's underlying ability to produce grammatically correct expressions; it refers to knowledge of language. Theoretical linguistics primarily studies linguistic competence: knowledge of a language possessed by "an ideal speaker-listener". Communicative competence is a term in linguistics which refers to "what a speaker needs to know to communicate appropriately within a particular language community", such as a language user's grammatical knowledge of syntax, morphology, phonology and the like, as well as social knowledge about how and when to use utterances appropriately.

4. Why is CC in L1 different from L2?
L1 learning for children is an integral part of their socialization into their native language community. L2 learning may be part of second culture learning and adaptation, but the relationship of SLA to social and cultural learning differs greatly with circumstances.

5. What is Accommodation Theory? How does this explain L2 variation?
Accommodation theory holds that speakers (usually unconsciously) change their pronunciation and even the grammatical complexity of sentences they use to sound more like whomever they are talking to. This accounts in part for why native speakers tend to simplify their language when they are talking to an L2 learner who is not fluent, and why L2 learners may acquire somewhat different varieties of the target language when they have different friends.

6. Discuss the importance of input and interaction for L2 learning. How could this affect the feedback provided to students?
(i)
(a) From the perspective of linguistic approaches: (1) behaviorists consider input to form the necessary stimuli and feedback which learners respond to and imitate; (2) Universal Grammar considers exposure to input a necessary trigger for activating internal mechanisms; (3) the Monitor Model considers comprehensible input not only necessary but sufficient in itself to account for SLA.
(b) From the perspective of psychological approaches: (1) the IP framework considers input which is attended to as essential data for all stages of language processing; (2) the connectionist framework considers the quantity or frequency of input structures to largely determine acquisitional sequencing.
(c) From the perspective of social approaches: interaction is generally seen as essential in providing learners with the quantity and quality of external linguistic input which is required for internal processing.
(ii) Other types of interaction which can enhance SLA include feedback from native speakers (NSs) which makes non-native speakers (NNSs) aware that their usage is not acceptable in some way, and which provides a model for "correctness", while children rarely receive such negative evidence.

User Model Acquisition Heuristics Based on Dialogue Acts

USER MODEL ACQUISITION HEURISTICS BASED ON DIALOGUE ACTS

Wolfgang Pohl, Alfred Kobsa and Oliver Kutter
Working Group Knowledge-Based Information Systems
Information Science Department, University of Konstanz
P.O. Box 5560-D73, D-78434 Konstanz, Germany
Tel.: +49-7531-882613, Fax: +49-7531-883065
{pohl,kobsa,kutter}@inf-wiss.uni-konstanz.de

Abstract

A widespread technique for user model acquisition is the use of acquisition heuristics, which are normally employed for inferring assumptions about the user's beliefs or goals from observed user actions. These beliefs or goals can often be characterized as presuppositions of communicative actions that the user performs. In the area of natural-language systems, presupposition analysis techniques have been applied for making assumptions about the dialogue partner based on the types of speech acts that he or she employs. In this paper, we will generalize this approach and investigate the analysis of so-called 'dialogue acts', i.e. communicative actions on the user interface whose execution entails user beliefs or goals as presuppositions of the action. Dialogue act types with schematic presuppositions will be proposed as a means for formulating and generalizing user model acquisition heuristics. Several dialogue act types, both general ones applicable to any interactive system and specialized ones for an adaptive hypertext, are presented. The BGP-MS user modeling shell system contains a dialogue act analysis component that allows the developer of an adaptive application to define relevant dialogue act types and associated presupposition patterns. During run-time, the application can then inform BGP-MS about observed dialogue acts. BGP-MS will instantiate the presupposition patterns of the corresponding dialogue act type and enter them into the current user model.

Keywords: user modeling, user model acquisition, user modeling shell systems, adaptive hypertext, dialogue acts

International Workshop on the Design of Cooperative Systems, Antibes-Juan-les-Pins, France, 471-486, 1995.

1 Introduction: Making Assumptions Based on User Actions

As is the case with knowledge-based systems in general, acquiring and representing knowledge is crucial for user modeling
in interactive software systems. In addition to representation and management mechanisms, user modeling components therefore must include suitable user model acquisition mechanisms (see [Wahlster and Kobsa, 1989; Chin, 1993]). The developed methods can be divided into two groups: those that extract primary assumptions about the user from his/her system input, and those that extract secondary (or derivative) assumptions from primary and secondary assumptions (like forward inferences or stereotype activation).

During the past few years, a number of tool systems for user modeling have been developed (the so-called user modeling shell systems; see [Finin, 1989; Kobsa, 1990; Brajnik and Tasso, 1992; Kay, 1994; Kobsa and Pohl, 1994]). They must provide acquisition, inference and retrieval mechanisms that are often used in user modeling components, and serve as the basis for the development of user modeling components in application systems. To date, however, none of the developed user modeling shell systems has included mechanisms that extract primary assumptions about the user from his/her system input. At first this is surprising, since the acquisition of a user model plays an important role in a user modeling component and therefore should be supported by a shell system.
The omission is understandable, though, if one considers that a user modeling shell system must be domain-independent while heuristics for acquiring primary assumptions concerning the user are mostly domain-dependent. For example, if the user asks the system the following question:

When is the next train to Montreal? [Allen, 1979]

then one would most likely assume that the user wants to go to Montreal on the next train. But this is only true in travel domains. The assumption is no longer valid in rail shipping domains (for example in [Allen and Schubert, 1993]), where it is more likely that the user may just want to ship a container or a freight car to Montreal. However, there are also domain-independent heuristics that may lead from user input to new primary assumptions. The following ones can be found in the literature:

- Correct use: if the user employs objects correctly (e.g. operating system commands, mathematical operations, concepts), then the user is familiar with these objects [Chin, 1989; Nwana, 1991; Sukaviriya and Foley, 1993].
- Incorrect use: if the user uses objects incorrectly, then he/she is not familiar with them [Quilici, 1989; Hirschmann, 1990].
- Request for explanation: if the user requests an explanation for concepts, then he/she is not familiar with them [Chin, 1989; Boyle and Encarnacion, 1994].
- Request for detail information: if the user wants to be informed about objects in more detail, then he/she is familiar with them [Boyle and Encarnacion, 1994].
- Feedback: if user feedback concerning a system output that was based on certain assumptions in the user model is positive/negative, then the plausibility of these assumptions should be increased/decreased [Rich, 1979a; Rich, 1979b].

It seems to be common to at least the first four heuristics that assumptions about the user are derived from observed user actions, and that the assumptions can be understood as prerequisites to the actions. For example, the correct use of an object presumes that the user knows the object. It seems that a wide variety of
domain-independent user model acquisition heuristics follows a common scheme, namely deriving the prerequisites of observed user actions. This reminds one of the presupposition analysis technique that has been applied in natural-language dialogue systems to support the acquisition of a dialogue partner model [Kaplan, 1979; Kobsa, 1983; Kobsa, 1985]: a user utterance is analyzed with respect to the speech acts it verbalizes, and from each speech act the presuppositions are derived that must have been valid for the speaker in order to perform the act correctly. The method is particularly interesting if these derivations can be made without regard to the contents of the speech act, i.e. if they are only determined by its type (like 'question' or 'information'). We generalize the notion of natural-language speech acts to dialogue acts that may occur in human-computer interaction, following other speech-act based approaches in this area [Winograd, 1988; Sitter and Stein, 1992]. A dialogue act is independent of any specific user interface, i.e. it may be performed in a command interface, a direct-manipulative interface, a natural-language interface, etc. A dialogue act type comprises all dialogue acts with structurally equal presuppositions, only differing in the objects of the acts. A dialogue act is then an instance of a dialogue act type. A dialogue act type is normally parametrized and can be associated with a set of presupposition patterns, which schematically describe the presuppositions of all instances of the dialogue act type.
We already saw two examples of dialogue act types above, namely a request for explanation and a request for detail information. A kind of dialogue act analysis can be applied in interactive systems if a set of such types along with their presupposition patterns has been defined: the presuppositions of an observed dialogue act can be computed by suitably instantiating the presupposition patterns of its type. The user modeling shell system BGP-MS [Kobsa and Pohl, 1994] has been equipped with a dialogue act analysis component that supports the formation of primary assumptions about the user. The application system can inform BGP-MS about the dialogue act(s) that underlie an input operation of the user. BGP-MS then automatically enters all relevant user presuppositions of this dialogue act in a suitably instantiated form into the user model. This component saves the application system that utilizes BGP-MS for user modeling from having to derive possible assumptions about the user's knowledge or goals itself. A prerequisite is that the user model developer must introduce all dialogue act types that are relevant in the application to the dialogue act analysis component, along with their presupposition patterns. For this purpose, he can take advantage of the set of pre-defined and application-independent dialogue act types that is offered by BGP-MS. In most cases, however, the developer will have to define additional dialogue act types that may occur in the specific application. This paper describes how dialogue acts and dialogue act analysis can be used as a general mechanism to support the formation of primary assumptions from observed user input in a user modeling shell system. The principles involved in dialogue act analysis will be explained in the next section.
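The core mechanism just described, instantiating parametrized presupposition patterns for an observed dialogue act, can be sketched in a few lines of Python. This is a hypothetical illustration, not BGP-MS code: the function names, the tuple encoding of formulas, and the dictionary layout are invented here, while the pattern contents mirror the act types discussed in this paper.

```python
# Hypothetical sketch: dialogue act types carry parametrized presupposition
# patterns; dialogue act analysis instantiates them for an observed act.
# Pattern variables are written as strings starting with "?".
DACT_TYPES = {
    "REQUEST-FOR-EXPLANATION": {
        "parameters": ["?hotword"],
        "presuppositions": [("B", "M", ("not", ("B", "U", ("concept", "?hotword"))))],
    },
    "YN-ANSWER-YES": {
        "parameters": ["?queried-item"],
        "presuppositions": [("B", "M", ("W", "U", "?queried-item"))],
    },
}

def instantiate(pattern, binding):
    # recursively substitute pattern variables with the observed act's parameters
    if isinstance(pattern, tuple):
        return tuple(instantiate(p, binding) for p in pattern)
    return binding.get(pattern, pattern)

def analyze(act_type, args):
    # compute the presuppositions of one observed dialogue act
    spec = DACT_TYPES[act_type]
    binding = dict(zip(spec["parameters"], args))
    return [instantiate(p, binding) for p in spec["presuppositions"]]
```

For example, `analyze("REQUEST-FOR-EXPLANATION", ["UNIX"])` yields the single mutual-belief formula `("B", "M", ("not", ("B", "U", ("concept", "UNIX"))))`, i.e. it is mutually believed that the user does not know the concept UNIX.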
Subsequently, we will show in more detail how dialogue act types, as generalizations of speech act types, can represent domain-independent user model acquisition heuristics. Examples of dialogue act types will be presented, which were identified by analyzing user interfaces in general as well as, specifically, an adaptive hypertext (for a detailed description of this analysis and all identified dialogue act types see [Kutter, 1994]). Afterwards, we will describe the dialogue act analysis component of the user modeling shell system BGP-MS, and discuss related work and future developments.

2 An Introduction to Dialogue Act Types

2.1 Dialogue Act Types Generalize Speech Act Types

According to Searle [Searle, 1969], every utterance in an interaction can be considered as a speech act. One of the aspects of a speech act is its function or role in the interaction, which is called the illocutive act and is characteristic of each utterance. In general, the illocutive aspect of an utterance can be described by natural-language verbs. Since verbs often have similar meanings, verb classifications should offer a good starting point for categorizing speech acts. A well-known example for English is [Wierzbicka, 1987], who distinguishes the following verb categories, among many others: command/request/order; question; consent/accept; information. These categories can be regarded as speech act types. A basic and important feature of speech acts is that inferences can be made when they occur. On the one hand, the content of a speech act may bear logical consequences. For instance, if somebody asserts "The cat eats the mouse", the asserted proposition logically entails "In a while the mouse will be no more". On the other hand, conclusions may be drawn based solely on the type of a speech act. For instance, the above assertion implies "I think the cat eats the mouse" by virtue of the fact that the person used a specific type of speech act (namely an assertion). The latter inference may be generalized to the rule "If a dialogue partner
asserts P, then it can be inferred that he thinks that P is the case". Note that this inference is independent of the meaning of P and has the form of a scheme in which P can be suitably instantiated for drawing inferences based on a concrete assertion. Since the propositions that can be inferred when a speech act occurs must be true in order that the speech act can be used correctly, we will call them 'presuppositions' of the speech act. Speech act types then have schematic 'presupposition patterns' associated with them. Since our primary interest is not natural-language dialogue but human-computer interaction (HCI) in general, we will henceforth use the more general term 'dialogue act' instead of 'speech act' for referring to communicative actions in man-machine dialogue. One goal of this work then is to find dialogue act types that, like speech act types, have presupposition patterns and therefore allow content-independent derivations from the dialogue acts they subsume. They shall be used as heuristics for drawing useful assumptions about the beliefs or goals of the user of an interactive computer system when he is observed to perform a dialogue act. The assumptions then are made by appropriately instantiating the presupposition patterns of the respective dialogue act type. This application of dialogue act types for user model acquisition will be referred to as 'dialogue act analysis'. Before showing examples of dialogue act types, we first discuss other work in the field of HCI that makes use of speech/dialogue act types.

2.2 Dialogue Acts in Other Work

Winograd and Flores [Winograd and Flores, 1986] use dialogue act types (i.e., speech act types in their terms) such as "request", "promise", "reject" and "accept" to model larger conversation structures in transition networks. An example of a conversation structure is a request for an action ("conversation for action"; CfA). Sitter and Stein [Sitter and Stein, 1992] start with a CfA model and develop a considerably more complex model of information-seeking dialogues. Their
dialogue act types are very similar to those of Winograd and Flores, although they can be further developed and extended to complex dialogue contributions. For example, a "request" act can recursively include a complete dialogue for establishing information about the context of the request. The models developed in each of these studies can be used as a basis for implementing conversation [Winograd, 1988] or dialogue systems [Stein et al., 1991]. In both studies, user or partner models were either not mentioned or even indirectly considered unimportant. Occurring dialogue acts merely cause a state change in the dialogue model. Unfortunately, the dialogue act types used in all these papers appear to be too general to allow precise conclusions regarding the dialogue partner's knowledge, goals, etc., and are therefore not appropriate for our purposes. The VIE-DPM system [Kobsa, 1985] uses presupposition analysis to form assumptions about the user in a natural-language dialogue system. It distinguishes the following dialogue act types: assertion, "yes/no" question, wh-question (which starts with "who", "what", etc.), and command. It is interesting to note that these dialogue act types are very similar to the speech act types mentioned in Section 2.1, one difference being that the category "question" becomes subclassified. The reason for this subclassification is that "yes/no" questions and wh-questions have different presupposition patterns.

3 Dialogue Act Types as User Model Acquisition Heuristics

Since we want to build a user model acquisition mechanism for a generally usable user modeling shell system, we are particularly interested in dialogue act types that are not specific to a single application system or application domain. This section presents examples of such dialogue act types and their presupposition patterns. They may be employed for acquiring assumptions about users of general-purpose interactive computer systems and of hypertext-like systems, respectively. First, however, we have to explain the
formal notation for presupposition patterns that will be used in this paper (with minor modifications, it has also been used in the implementation of dialogue act analysis). In Section 2 we interpreted the statement "The cat eats the mouse" as an "assert" dialogue act with regard to the content eat(cat, mouse). In our notation for presupposition patterns, the symbol ψ (or ψ(x) if a variable is included) refers to the content of a dialogue act. ψ may be preceded by the operators ¬ and ∧ (with their standard meanings) as well as ∃x (with the meaning that some x exists such that ψ(x)). In order to formalize beliefs and goals of the user we employ the modal operators B for believes and W for wants, and index them for modal subject identification with U, S and M for User, System and Mutually. B_S W_U ψ is a simple example of a presupposition pattern and would roughly mean: the system "privately" assumes that the user wants to achieve ψ. In contrast, B_M ψ means that both S and U mutually believe ψ (meaning that not only B_S ψ and B_U ψ hold true, but also B_S B_U ψ, B_U B_S ψ, etc.).

3.1 Dialogue Act Types for Interactive Systems

In a first study, various interactive systems were examined for dialogue acts that are generally applicable. Natural-language speech acts were again the first reference point. Below are a few examples of dialogue act types that seem to occur in many kinds of interactive systems. Their presupposition patterns will first be described in English and then in our formal notation.

WH-QUESTION
Description: A user request for information that can be expressed in English with the interrogatives "who", "which", etc.
Example: Invoking a find command in the Apple Macintosh operating system would correspond to the following question: "Where is the file named XY?".
Presupposition Patterns: It will be mutually assumed that ...
1. ... the user believes that what he wants to know exists;
2. ... the user is not familiar with the desired information;
3. ... the user believes that the system possesses the information;
4. ... the user wants that he and the system both know the desired information.

YN-ANSWER-YES
Description: A positive answer to a "yes/no" question of the system about the user's intentions.
Example: After invoking the formatting command the system asks: "All data will be erased if the disk is formatted. Format the disk anyway [y/n]?". The user enters "y".
Presupposition Pattern: It will be mutually assumed that the user would like to reach the described state of affairs.
Formalized Presupposition Pattern: B_M W_U ψ

AGREE
Description: The user consents to an action announced by the system.
Example: After entering a false command the user is asked whether he/she would like to get help. The user clicks the OK button.
Presupposition Pattern: It will be mutually assumed that the user agrees to reach the described state of affairs; in no way does he desire the opposite.
Formalized Presupposition Pattern: B_M ¬W_U ¬ψ
Remark: It is important to note that the dialogue act types YN-ANSWER-YES and AGREE both belong to the verb category "consent/accept" listed above. However, they differ in their presupposition patterns and therefore have to be separated.

3.2 Dialogue Act Types for Adaptive Hypertext Systems

Unfortunately, general and application-independent dialogue act types do not seem to be very helpful as user model acquisition heuristics, since the assumptions that can be inferred from them using dialogue act analysis are likewise general. Assumptions about the user that are more profitable for the purposes of an adaptive system must be more expressive than those derivable from the AGREE act and less obvious than those following from the YN-ANSWER-YES act. In the latter case an assumption is made about a user goal that immediately afterwards is met by the system, thus rendering the assumption irrelevant. We therefore decided to analyze dialogue act types that are no longer completely domain-independent, but still apply to all interactive systems of a particular application class. In this subsection we will present some dialogue act types representing acquisition heuristics for adaptive
hypertexts as described in [Boyle and Encarnacion, 1994; Kobsa et al., 1994]. Since we concentrated on the interactive behavior of the hypertext (namely the possible user actions and the presentation of the document), these dialogue act types can also be found in other hypertexts showing similar behavior. (We assume that the reader is familiar with the basic concepts of hypertexts like node, link, hotword, and glossary.)

REQUEST-FOR-EXPLANATION (cf. Section 1)
Description: The user can ask the system to explain a hotword of the hypertext by clicking on it.
Example: In a hypertext node describing operating systems, a mouse click on the hotword "UNIX" opens a pop-up menu that offers "explanation" as one choice. If this item is selected, a hypertext node is shown that contains explanatory information about UNIX.
Presupposition Pattern: It can be mutually believed that the user does not know the concept denoted by the hotword or keyword. (In our hypertext, only concepts of the domain become explained, while in other systems also complete propositions may be explained; in this case, the presuppositions would not be restricted to concepts. A hotword is an expression on the lexical level; it refers to the concept it names.)
Formalized Presupposition Pattern: B_M ¬B_U concept(hotword)

Another dialogue act type that implements one of the user model acquisition heuristics mentioned in Section 1 is REQUEST-FOR-DETAILED-INFORMATION. It occurs for example when the "more info" item of a hotword pop-up menu is selected and implies a mutual belief that the user already knows the concept under consideration. Other cases are more difficult: What can be concluded if the user ignores a hotword in the current hypertext node? Does the user know the corresponding concept? Or is he/she just not interested in a deeper understanding of the text, and therefore skipped the hotword? In our system, a good heuristic might be that if it is currently assumed that the user does not know the concept (since it was e.g. derived from a REQUEST-FOR-EXPLANATION), the contrary should be concluded from now on. In order to represent such a heuristic, an if ... then ... construct had to be introduced:

IGNORE-HOTWORD
Description: The user does not perform any action on a hotword in a hypertext node.
Example: A node contains the hotword "UNIX". The user does not use it as a starting point for further navigation.
Presupposition Pattern: If the user is currently assumed not to know the concept denoted by the hotword, then it can be mutually believed from now on that he/she knows it.
Formalized Presupposition Pattern: if B_S ¬B_U concept(hotword) then B_M B_U concept(hotword)

Dialogue acts of this and similar types that correspond to "non-actions" of the user are difficult to detect. IGNORE-HOTWORD dialogue acts can be reported by the application system when the user leaves the current hypertext node: all hotwords that no action was performed upon may be regarded as "ignored". In general, sophisticated observation techniques may be required for a decision about reporting "non-action" dialogue acts (cf. [Kutter, 1994]).

4 Dialogue Act Analysis in BGP-MS

The task of the dialogue act analysis component in BGP-MS is to convert the dialogue acts observed in the user's interaction with an application system into their presuppositions by instantiating the presupposition patterns of their dialogue act types. This component operates in the following way:
1. A library of pre-defined dialogue act types with domain-independent presupposition patterns for each of them has been made available to the developer of the user modeling component of the application system.
2. The developer can both add further application-specific dialogue acts and complement the presupposition patterns of the predefined dialogue acts in an application-specific way.
3. The application system can report observed dialogue acts to BGP-MS.
4. The reported dialogue acts will be converted into their presuppositions by instantiating the presupposition patterns of the corresponding type definition.
Examples of possible pre-defined dialogue acts were given in the previous section. The following
subsections will explain items (2)-(4) in more detail.

4.1 Defining Dialogue Act Types

The range of possible presupposition patterns for dialogue act types is strongly determined by the available knowledge representation language. The most powerful formalism available in BGP-MS is multimodal first-order predicate logic (MM-FOL), which includes first-order logic and allows MM-FOL expressions to be preceded by indexed modal operators, or combined with other expressions by the standard logical connectives. Using multimodal predicate logic means that most of the presupposition patterns listed in Section 3 can remain unchanged in the definition of dialogue act types, and that instantiations of them can be entered as presuppositions at run-time. Only few descriptive elements used in Section 3 must be disregarded, like e.g. the ∃-operator.

Let us take the dialogue act types YN-ANSWER-YES, AGREE, REQUEST-FOR-EXPLANATION, and IGNORE-HOTWORD from Section 3 as examples. When defining a dialogue act type in BGP-MS, its name (:name) and its parameters (:parameters) must be given, and its presupposition patterns (:presupp) must be declared as a list containing Lisp notations of MM-FOL patterns. The pattern variables of Section 3 are replaced by parameter symbols.

(define-d-act :name YN-ANSWER-YES
              :parameters (queried-item)
              :presupp ((B M (W U queried-item))))

(define-d-act :name AGREE
              :parameters (topic)
              :presupp ((B M (not (W U (not topic))))))

(define-d-act :name REQUEST-FOR-EXPLANATION
              :parameters (hotword)
              :presupp ((B M (not (B U (:concept hotword))))))

(define-d-act :name IGNORE-HOTWORD
              :parameters (hotword)
              :presupp ((if (belief S (not (belief U (:concept hotword))))
                            (belief M (belief U (:concept hotword))))))

4.2 Reporting and Processing Observed Dialogue Acts

The application system reports occurring dialogue acts with a message to the dialogue act analysis component, indicating the name of the corresponding dialogue act type along with a list of the concerned facts (which must be ground atoms of predicate logic). The dialogue act analysis
component of BGP-MS will then generate the presuppositions of the dialogue act by instantiating the presupposition patterns in the definition of its dialogue act type with the reported facts. (In BGP-MS, reported dialogue acts may have one or more parameters that specify the facts concerned.) In this way, new assumptions about the user are derived from observed user actions. These assumptions then become entered into the individual user model via bgp-ms-tell, the general input interface of BGP-MS. In order to illustrate this method, we will now analyze possible messages to the dialogue act analysis component using the dialogue act type descriptions of Section 3.

1. Observation: (d-act YN-ANSWER-YES ((formatted disk1)))
   Presupposition: (bgp-ms-tell '(B M (W U (formatted disk1))))
2. Observation: (d-act AGREE ((displayed help-item-5)))
   Presupposition: (bgp-ms-tell '(B M (not (W U (not (displayed help-item-5))))))

Now a possible sequence of two dialogue acts is shown, as can be observed in the adaptive hypertext system (note that the conditional expression in the presupposition of IGNORE-HOTWORD is satisfied after the presupposition of the REQUEST-FOR-EXPLANATION act has been entered by the dialogue act analysis component):

3. Observation: (d-act REQUEST-FOR-EXPLANATION (UNIX))
   Presupposition: (bgp-ms-tell '(B M (not (B U (:concept UNIX)))))
4. Observation: (d-act IGNORE-HOTWORD (UNIX))
   Presupposition: (bgp-ms-tell '(B M (B U (:concept UNIX))))

Figure 1 summarizes the dialogue act analysis of BGP-MS using the example given in item 3 above.
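The two-step hypertext sequence in items 3 and 4 can be mimicked with a toy assumption store. This is a hypothetical Python sketch, not the actual BGP-MS implementation: the class, the tuple encoding of formulas, and the simplification of the B_S condition to a check on the stored mutual belief are all assumptions made for illustration.

```python
# Toy assumption store; tell() stands in for bgp-ms-tell, and formulas
# are encoded as nested tuples.
class UserModel:
    def __init__(self):
        self.assumptions = set()

    def tell(self, formula):
        self.assumptions.add(formula)

    def holds(self, formula):
        return formula in self.assumptions


def report(model, act_type, concept):
    """Convert a reported dialogue act into its presupposition(s)."""
    ignorance = ("B", "M", ("not", ("B", "U", ("concept", concept))))
    knowledge = ("B", "M", ("B", "U", ("concept", concept)))
    if act_type == "REQUEST-FOR-EXPLANATION":
        model.tell(ignorance)
    elif act_type == "IGNORE-HOTWORD":
        # conditional presupposition: only fires if ignorance is currently assumed
        if model.holds(ignorance):
            model.assumptions.discard(ignorance)
            model.tell(knowledge)
```

Reporting REQUEST-FOR-EXPLANATION for "UNIX" enters the mutual ignorance belief; a subsequent IGNORE-HOTWORD for the same concept then satisfies the condition and flips the entry to mutual belief that the user knows the concept, just as in the sequence above.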
The user interface recognizes a mouse click and reports it to the application system, which itself informs the dialogue act analysis component of BGP-MS about the dialogue act that took place, along with all necessary parameters. Then the presuppositions of this dialogue act are determined (using the defined dialogue act types and their presupposition patterns) and entered into the individual user model via bgp-ms-tell. In this specific example, the user of our hypertext system wants an explanation of one of the hotwords of the current node, "UNIX". The hypertext system reports a REQUEST-FOR-EXPLANATION to BGP-MS, and the dialogue act analysis component derives the assumption that it is mutually believed that the user does not know the UNIX operating system. Beyond its immediate reaction (e.g., displaying explanatory text), the application could consider this assumption later and provide explanatory information about UNIX again when it displays the contents of another hypertext node.

Value Chain Model (English Version)


The value chain model is a strategic framework that outlines the key activities involved in a company's production process, from raw materials acquisition to the delivery of the final product or service to the end customer. Developed by Michael Porter in 1985, the value chain model helps businesses identify opportunities for cost reduction, differentiation, and competitive advantage. The value chain is typically divided into two main categories: primary activities and support activities. Primary activities are directly related to the production and delivery of the product, such as inbound logistics, operations, outbound logistics, marketing and sales, and service. Support activities, on the other hand, provide the infrastructure and resources necessary for the primary activities to function effectively, including procurement, human resource management, technology development, and infrastructure. By analyzing each activity within the value chain, companies can pinpoint areas of their operations where they can create the most value and improve overall efficiency. This strategic approach enables firms to increase profitability, enhance customer satisfaction, and gain a sustainable competitive advantage in the marketplace.

The Bardeen Model as a Nonlinear Magnetic Monopole


arXiv:gr-qc/0009077v1 22 Sep 2000

The Bardeen Model as a Nonlinear Magnetic Monopole

Eloy Ayón-Beato and Alberto García
Departamento de Física, Centro de Investigación y de Estudios Avanzados del IPN
Apdo. Postal 14-740, 07000 México DF, MEXICO

The Bardeen model—the first regular black hole model in General Relativity—is reinterpreted as the gravitational field of a nonlinear magnetic monopole, i.e., as a magnetic solution to Einstein equations coupled to a nonlinear electrodynamics.

04.20.Dw, 04.20.Jb, 04.70.Bw

The study on the global regularity of black hole solutions is quite important in order to understand the final state of gravitational collapse of initially regular configurations. The Penrose (weak) cosmic censorship conjecture claims that the singularities predicted by the celebrated "singularity theorems" [1,2] must be dressed by event horizons (cf. [3,4] for recent reviews). The first black hole solutions known in General Relativity were all in accordance with such a point of view. However, the conjecture does not forbid the existence of regular (singularity-free) black holes. In fact, as it was pointed out by Hawking and Ellis [1], and more recently by Borde [5,6], the model proposed by Bardeen [7] at the early stage of investigations on singularities explicitly shows the impossibility of proving more general singularity theorems, if one avoids the assumption of either the strong energy condition or the existence of global hyperbolicity. The Bardeen model [7] is a regular black hole space-time satisfying the weak energy condition. This model has been recently revived by Borde [5,6], who also clarified the avoidance of singularities in this space-time, and in similar examples, as due to a topology change in the evolution of the space-like slices from open to closed [6]. Other regular black hole models were proposed later [5,8-10]. None of these "Bardeen black holes," as they have been called by Borde [6], is an exact solution to Einstein equations; thus, there is no known physical
sources associated with any of them. Regular black hole solutions to Einstein equations with physically reasonable sources have been reported only recently [11-14]. The objective of this contribution is to provide the pioneering Bardeen model [7] with a physical interpretation. The Bardeen model is described by the metric

g = -(1 - 2mr^2/(r^2+g^2)^{3/2}) dt^2 + (1 - 2mr^2/(r^2+g^2)^{3/2})^{-1} dr^2 + r^2 dΩ^2.  (1)

It can be noted that this metric asymptotically behaves as -g_tt = 1 - 2m/r + 3mg^2/r^3 + O(1/r^5); from the 1/r term it follows that the parameter m is associated with the mass of the configuration (incidentally, this fact can also be verified from the explicit evaluation of the ADM mass definition). But the following term goes as 1/r^3, and this does not allow one to associate the parameter g with some kind of "Coulomb" charge as, for instance, in the Reissner-Nordström solution. As a consequence, up to date there has been no known physical interpretation for the regularizing parameter g. The main objective of this work is to show that g is the monopole charge of a self-gravitating magnetic field described by a nonlinear electrodynamics.

The Bardeen model (1) describes a regular space-time; this can be realized from the analytical expressions of its curvature invariants, e.g.

R = 6mg^2 (4g^2 - r^2)/(r^2+g^2)^{7/2},  (2)

R_{μναβ}R^{μναβ} = 12m^2 (8g^8 - 4g^6 r^2 + 47g^4 r^4 - 12g^2 r^6 + 4r^8)/(r^2+g^2)^7.  (4)

In order to study the existence of horizons it is useful to introduce the dimensionless variables x ≡ r/|g| and s ≡ |g|/(2m), in terms of which -g_tt = A(x,s) ≡ 1 - (1/s) x^2/(x^2+1)^{3/2}. As it can be noted from the derivative of the last function, it has a single minimum at x_m = √2. At x_m, for s < s_c = 2/(3√3) the quoted minimum of A(x,s) is negative, for s = s_c the minimum vanishes, and for s > s_c the minimum is positive.
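The sign of this minimum is easy to check numerically. The sketch below is an illustration added here, not part of the original paper; it scans the Bardeen metric function A(r) = 1 - 2mr^2/(r^2+g^2)^{3/2} on a grid and verifies that its minimum changes sign exactly at the critical charge g^2 = (16/27) m^2 quoted in the text (the function names and the grid parameters are choices made for this example).

```python
import math

def A(r, m, g):
    # Bardeen metric function: -g_tt = 1 - 2 m r^2 / (r^2 + g^2)^(3/2)
    return 1.0 - 2.0 * m * r**2 / (r**2 + g**2) ** 1.5

def min_A(m, g, r_max=20.0, n=200000):
    # coarse grid scan for the single interior minimum of A
    return min(A(i * r_max / n, m, g) for i in range(1, n + 1))

m = 1.0
g_crit = math.sqrt(16.0 / 27.0) * m   # extreme case: the two horizons merge
# g < g_crit: minimum of A is negative -> inner and outer horizons exist
# g > g_crit: minimum of A is positive -> no horizon
```

Running `min_A` for charges slightly below, at, and above `g_crit` reproduces the three cases described above: negative minimum, vanishing minimum, positive minimum.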
From the regularity of the curvature invariants (2-4), it follows that for s ≤ s_c the singularities appearing in (1), due to the vanishing of A, are only coordinate singularities describing the existence of event horizons. Consequently, we are in the presence of black hole space-times for g^2 ≤ 4 s_c^2 m^2 = (16/27) m^2. For the strict inequality g^2 < (16/27) m^2, there are inner and event horizons for the Killing field k = ∂/∂t, defined by k^μ k_μ = g_tt = 0. For the equality g^2 = (16/27) m^2, the horizons shrink into a single one, where also ∇_ν(k^μ k_μ) = 0, i.e., this case corresponds to an extreme black hole as in the Reissner-Nordström solution. The extension of the Bardeen metric beyond the horizons, up to r = 0, becomes apparent by passing to the standard advanced and retarded Eddington-Finkelstein coordinates, in terms of which the metric is well-behaved everywhere, even in the extreme case. The maximal extension of the Bardeen metric can be achieved by following the main lines used for the Reissner-Nordström solution, taking care, of course, of the more involved integration in the present case of the tortoise coordinate r* ≡ ∫ A^{-1} dr. The global structure of the Bardeen space-time is similar to the structure of the Reissner-Nordström black hole, as it has been pointed out by previous authors [1,6], except that the usual singularity of the Reissner-Nordström solution, at r = 0, is smoothed out and now it simply corresponds to the origin of the spherical coordinates. This fact implies that the topology of the slices changes from S^2 × IR outside the black hole to S^3 inside of it, as it has been pointed out by Borde [6]. In what follows we will show that it is possible to provide a physical interpretation to the quoted parameter g, and to think of the Bardeen model as a regular black hole solution of General Relativity simply by coupling to the Einstein equations an appropriate nonlinear electrodynamics. In this context g is the monopole charge of a magnetic field ruled by the quoted nonlinear electrodynamics. The dynamics of the
proposed theory is governed by the action

S = ∫ dv ( (1/16π) R - (1/4π) L(F) ),  (6)

where R is the scalar curvature and L is a function of F ≡ (1/4) F_{μν}F^{μν}. The source of the Bardeen model turns out to be the nonlinear electrodynamics with Lagrangian

L(F) = (3/(2sg^2)) ( √(2g^2 F) / (1 + √(2g^2 F)) )^{5/2},  (9)

where s = |g|/(2m). For a static and spherically symmetric configuration,

g = -(1 - 2M(r)/r) dt^2 + (1 - 2M(r)/r)^{-1} dr^2 + r^2 dΩ^2,  (10)

together with the magnetic field ansatz F = f(r) sin(θ) dθ ∧ dφ (12), the closure of the field strength,

0 = dF = f'(r) sin(θ) dr ∧ dθ ∧ dφ,  (13)

allows us to conclude that f(r) = const. = g, where the integration constant has been chosen as g. As it was anticipated above, g is the magnetic monopole charge of the configuration:

(1/4π) ∮_{S_∞} F = (g/4π) ∫_0^π ∫_0^{2π} sin(θ) dθ dφ = g,  (14)

where S_∞ is a sphere at infinity. The tt component of Einstein equations (7) yields

M'(r) = r^2 L(F).  (15)

Substituting L from (9) with F = g^2/2r^4, and using that m = lim_{r→∞} M(r), one can write the integral of (15) as

M(r) = m - 3mg^2 ∫_r^∞ y^2 dy/(y^2+g^2)^{5/2}  (16)
     = m r^3/(r^2+g^2)^{3/2}.  (17)

Substituting M(r) into (10) one finally obtains the Bardeen metric (1).

It can be noted that the proposed source (9) satisfies the weak energy condition. Let X be a time-like field; without loss of generality it can be chosen normal (X^μ X_μ = -1). Using the right-hand side of (7), one can write the local energy density along X as

4π T_{μν} X^μ X^ν = L + E_λ E^λ L_F,  (18)

where E_λ ≡ F_{λμ} X^μ is by definition orthogonal to X, so it is a space-like vector (E_λ E^λ > 0). It follows from (18) that if L ≥ 0 and L_F ≥ 0, the local energy density along any time-like field is non-negative everywhere, which is the requirement of the weak energy condition. For the proposed nonlinear electrodynamics the non-negativeness of these quantities follows from the definition (9); hence, the matter proposed by us as source of the Bardeen model satisfies the weak energy condition.

Summarizing, for the Bardeen metric (1) we found a field source related to a nonlinear electrodynamics given by the Lagrangian (9). The solution to the Einstein equations coupled with the energy-momentum tensor associated to the magnetic strength (12) corresponds to a self-gravitating magnetic monopole charge g (14). The Bardeen model, now raised to the status of an exact solution of Einstein equations, fulfills the weak energy condition and is regular everywhere, although the invariant of the associated electromagnetic field exhibits the usual singular behavior F = g^2/2r^4 of magnetic
monopoles. Finally, we would like to point out that the nonlinear electrodynamics used is stronger than the Maxwell one in the weak field limit, as it can be seen from the expansion of the Lagrangian (9): L(F ≪ 1) ~ F^{5/4}, while for Maxwell electrodynamics L_M(F) = F.

ACKNOWLEDGMENTS

This work was partially supported by the CONACyT Grant 32138E and the Sistema Nacional de Investigadores (SNI). EAB also thanks all the encouragement and guide provided by his recently late father: Erasmo Ayón Alayo.

[5] Borde, A., Phys. Rev., D50, 3392 (1994).
[6] Borde, A., Phys. Rev., D55, 7615 (1997).
[7] Bardeen, J., presented at GR5, Tiflis, U.S.S.R., and published in the conference proceedings in the U.S.S.R. (1968).
[8] Barrabès, C., Frolov, V. P., Phys. Rev., D53, 3215 (1996).
[9] Mars, M., Martín-Prats, M. M., Senovilla, J. M. M., Class. Quant. Grav., 13, L51 (1996).
[10] Cabo, A., Ayón-Beato, E., Int. J. Mod. Phys., A14, 2013 (1999).
[11] Ayón-Beato, E., García, A., Phys. Rev. Lett., 80, 5056 (1998).
[12] Ayón-Beato, E., García, A., Gen. Rel. Gravit., 31, 629 (1999).
[13] Ayón-Beato, E., García, A., Phys. Lett., B464, 25 (1999).
[14] Magli, G., Rept. Math. Phys., 44, 407 (1999).
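The consistency between the magnetic source and the Bardeen mass function can also be cross-checked numerically. The sketch below (added here as an illustration) assumes the Lagrangian L(F) = (3/(2sg^2)) (√(2g^2 F)/(1+√(2g^2 F)))^{5/2} with s = |g|/(2m), the mass function M(r) = m r^3/(r^2+g^2)^{3/2}, and F = g^2/2r^4, and verifies the tt Einstein equation M'(r) = r^2 L(F) by finite differences; the function names are invented for this example.

```python
import math

def L_of_F(F, m, g):
    # assumed nonlinear-electrodynamics Lagrangian of the magnetic source
    s = abs(g) / (2.0 * m)
    u = math.sqrt(2.0 * g * g * F)
    return (3.0 / (2.0 * s * g * g)) * (u / (1.0 + u)) ** 2.5

def M(r, m, g):
    # Bardeen mass function: M(r) = m r^3 / (r^2 + g^2)^(3/2)
    return m * r**3 / (r * r + g * g) ** 1.5

def residual(m, g, r, h=1e-6):
    # |r^2 L(F) - M'(r)| with F = g^2 / (2 r^4), M' by central difference
    F = g * g / (2.0 * r**4)
    lhs = r * r * L_of_F(F, m, g)
    rhs = (M(r + h, m, g) - M(r - h, m, g)) / (2.0 * h)
    return abs(lhs - rhs)
```

At any sample radius the residual is at the level of the finite-difference error, confirming that the assumed source reproduces the Bardeen mass function.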

Computer Methods in Applied Mechanics and Engineering


Topological clustering for water distribution systems analysis
Lina Perelman, Avi Ostfeld
Environmental Modelling & Software, In Press, Corrected Proof, available online 15 February 2011.

HyphArea—Automated analysis of spatiotemporal fungal patterns (Original Research Article)
Tobias Baum, Aura Navarro-Quezada, Wolfgang Knogge, Dimitar Douchkov, Patrick Schweizer, Udo Seiffert
Journal of Plant Physiology, Volume 168, Issue 1, 1 January 2011, Pages 72-78.

Abstract
In phytopathology, quantitative measurements are rarely used to assess crop plant disease symptoms. Instead, a qualitative valuation by eye is often the method of choice. In order to close the gap between subjective human inspection and objective quantitative results, an automated analysis system was developed that is capable of recognizing and characterizing the growth patterns of fungal hyphae in micrograph images. This system should enable the efficient screening of different host-pathogen combinations (e.g., barley—Blumeria graminis, barley—Rhynchosporium secalis) using different microscopy technologies (e.g., bright field, fluorescence). An image segmentation algorithm was developed for gray-scale image data that achieved good results with several microscope imaging protocols. Furthermore, adaptability towards different host-pathogen systems was obtained by using a classification that is based on a genetic algorithm. The developed software system was named HyphArea, since the quantification of the area covered by a hyphal colony is the basic task and prerequisite for all further morphological and statistical analyses in this context. By means of a typical use case the utilization and basic properties of HyphArea could be
It was possible to detect statistically significant differences between the growth of an R. secalis wild-type strain and a virulence mutant. Article Outline Introduction Material and methodsExperimental set-upThe host and pathogensFluorescence image acquisition protocolBright field image acquisition protocolSegmentation of hyphal coloniesAchieving adaptive behaviorImage corpusResultsR. secalis infected material and segmentationB. graminis infected material and segmentationDiscussionAcknowledgementsReferences717 Automated generation of contrapuntal musical compositions using probabilistic logic inDerive Original Research ArticleMathematics and Computers in Simulation , Volume 80, Issue 6, February 2010, Pages 1200-1211Gabriel Aguilera, José Luis Galán, Rafael Madrid, Antonio Manuel Martínez, Yolanda Padilla, Pedro Rodríguez Close preview | Related articles | Related reference work articles Abstract | Figures/Tables | ReferencesAbstractIn this work, we present a new application developed in Derive 6 to compose Purchase$ 31.50counterpoint for a given melody (―cantus firmus‖). The result isnon-deterministic, so different counterpoints can be generated for a fixed melody, all of them obeying classical rules of counterpoint. In the case where the counterpoint cannot be generated in a first step, backtracking techniques have been implemented in order to improve the likelihood of obtaining a result. The contrapuntal rules are specified in Derive using probabilistic rules of a probabilistic logic, and the result can be generated for both voices (above and below) of first species counterpoint.The main goal of this work is not to obtain a ―professional‖ counterpoint generator but to show an application of a probabilistic logic using a CAS tool. 
Thus, the algorithm developed does not take into account stylistic melodic characteristics of species counterpoint, but rather focuses on the harmonic aspect.The work developed can be summarized in the following steps:(1) Development of a probabilistic algorithm in order to obtain anon-deterministic counterpoint for a given melody.(2) Implementation of the algorithm in Derive 6 using probabilistic Logic.(3) Implementation in Java of a program to deal with the input (―cantus firmus‖) and with the output (counterpoint) through inter-communication with the module developed in D ERIVE. This program also allows users to listen to the result obtained.Article Outline1. Introduction1.1. Historical background1.2. ―Cantus Firmus‖ and counterpoint1.3. Work developed1.4. D ERIVE 6 and J AVA2. Description of the algorithm2.1. The process2.2. Rules2.3.Example2.4. Backtracking3. Description of the environment3.1. Menu bar3.2. Real time modifications3.3. Inter-communication with DERIVE3.4. Playing the composition4. Results4.1. Example 1: Above voice against ―Cantus firmus‖4.2. Example 2: Below voice against ―Cantus firmus‖4.3. Example 3: Above and below voices against ―Cantus firmus‖5. 
Conclusions and future workReferences718 Study of pharmaceutical samples by NIR chemical-image and multivariate analysis OriginalResearch ArticleTrAC Trends in Analytical Chemistry , Volume 27, Issue 8, September 2008, Pages 696-713José Manuel Amigo, Jordi Cruz, Manel Bautista, Santiago Maspoch, Jordi Coello, Marcelo BlancoClose preview | Related articles | Related reference work articles Abstract | Figures/Tables | ReferencesAbstractNear-infrared spectroscopy chemical imaging (NIR-CI) is a powerful tool for Purchase$ 31.50providing a great deal of information on pharmaceutical samples, since the NIR spectrum can be measured for each pixel of the image over a wide range of wavelengths.Joining NIR-CI with chemometric algorithms (e.g., Principal Component Analysis, PCA) and using correlation coefficients, cluster analysis, classical least-square regression (CLS) and multivariate curve resolution-alternating least squares (MCR-ALS) are of increasing interest, due to the great amount of information that can be extracted from one image. Despite this, investigation of their potential usefulness must be done to establish their benefits and potential limitations.We explored the possibilities of different algorithms in the global study (qualitative and quantitative information) of homogeneity in pharmaceutical samples that may confirm different stages in a blending process. For this purpose, we studied four examples, involving four binary mixtures in different concentrations.In this way, we studied the benefits and the drawbacks of PCA, cluster analysis (K-means and Fuzzy C-means clustering) and correlation coefficients for qualitative purposes and CLS and MCR-ALS for quantitative purposes.We present new possibilities in cluster analysis and MCR-ALS in image analysis, and we introduce and test new BACRA software for mapping correlation-coefficient surfaces.Article Outline1. Introduction2. Structure of hyperspectral data3. Preprocessing the hyperspectral image4. 
Techniques for exploratory analysis4.1. Principal Component Analysis (PCA)4.2. Cluster analysis4.2.1. K-means algorithm4.2.2. Fuzzy C-means algorithm4.2.3. Number of clusters4.2.3.1. Silhouette index4.2.3.2. Partition Entropy index4.3. Similarity using correlation coefficients5. Techniques for estimating analyte concentration in each pixel 5.1. Classical Least Squares5.2. Multivariate Curve Resolution-Alternating Least Squares5.3. Augmented MCR-ALS for homogeneous samples6. Experimental and data treatment6.1. Reagents and instruments6.2. Experimental6.3. Data treatment7. Results and discussion7.1. PCA analysis7.2. Cluster analysis7.2.1. K-means results for heterogeneous samples7.2.2. FCM results7.3. Correlation-coefficient maps–BACRA results7.4. CLS results7.5. MCR-ALS and augmented-MCR-ALS results8. Conclusions and perspectivesAcknowledgementsReferences719Validation and automatic test generation on UMLmodels: the AGATHA approach Original Research ArticleElectronic Notes in Theoretical Computer Science, Volume66, Issue 2, December 2002, Pages 33-49David Lugato, Céline Bigot, Yannick ValotClose preview | Related articles | Related reference work articlesAbstract | ReferencesAbstractThe related economic goals of test generation are quite important for softwareindustry. Manufacturers ever seeking to increase their productivity need toavoid malfunctions at the time of system specification: the later the defaultsare detected, the greater the cost is. Consequently, the development oftechniques and tools able to efficiently support engineers who are in charge of elaborating the specification constitutes a major challenge whose falloutconcerns not only sectors of critical applications but also all those where poor conception could be extremely harmful to the brand image of a product.This article describes the design and implementation of a set of tools allowingsoftware developers to validate UML (the Unified Modeling Language)specifications. 
This toolset belongs to the AGATHA environment, which is anautomated test generator, developed at CEA/LIST.The AGATHA toolset is designed to validate specifications of communicatingconcurrent units described using an EIOLTS formalism (Extended InputOutput Labeled Transition System). The goal of the work described in thispaper is to provide an interface between UML and an EIOLTS formalism givingthe possibility to use AGATHA on UML specifications.In this paper we describe first the translation of UML models into the EIOLTS formalism, and the translation of the results of the behavior analysis, providedby AGATHA, back into UML. Then we present the AGATHA toolset; wePurchase$ 35.95particularly focus on how AGATHA overcomes several problems of combinatorial explosion. We expose the concept of symbolic calculus and detection of redundant paths, which are the main principles of AGATHA's kernel. This kernel properly computes all the symbolic behaviors of a systemspecified in EIOLTS and automatically generates tests by way of constraintsolving. Eventually we apply our method to an example and explain thedifferent results that are computed.720Text mining techniques for patent analysis OriginalResearch ArticleInformation Processing & Management,Volume 43, Issue5, September 2007, Pages 1216-1247Yuen-Hsien Tseng, Chi-Jen Lin, Yu-I LinClose preview | Related articles | Related reference work articlesAbstract | Figures/Tables | ReferencesAbstractPatent documents contain important research results. However, they arelengthy and rich in technical terminology such that it takes a lot of humanefforts for analyses. Automatic tools for assisting patent engineers or decisionmakers in patent analysis are in great demand. This paper describes a seriesof text mining techniques that conforms to the analytical process used bypatent analysts. 
These techniques include text segmentation, summaryextraction, feature selection, term association, cluster generation, topicidentification, and information mapping. The issues of efficiency andeffectiveness are considered in the design of these techniques. Someimportant features of the proposed methodology include a rigorous approachto verify the usefulness of segment extracts as the document surrogates, acorpus- and dictionary-free algorithm for keyphrase extraction, an efficientco-word analysis method that can be applied to large volume of patents, andan automatic procedure to create generic cluster titles for ease of resultPurchase$ 31.50interpretation. Evaluation of these techniques was conducted. The results confirm that the machine-generated summaries do preserve more important content words than some other sections for classification. To demonstrate the feasibility, the proposed methodology was applied to a real-world patent set for domain analysis and mapping, which shows that our approach is more effective than existing classification systems. The attempt in this paper to automate the whole process not only helps create final patent maps for topic analyses, but also facilitates or improves other patent analysis tasks such as patent classification, organization, knowledge sharing, and prior art searches. Article Outline1. Introduction2. A general methodology3. Technique details3.1. Text segmentation3.2. Text summarization3.3. Stopwords and stemming3.4. Keyword and phrase extraction3.5. Term association3.6. Topic clustering3.6.1. Document clustering3.6.2. Term clustering followed by document categorization3.6.3. Multi-stage clustering3.6.4. Cluster title generation3.6.5. Mapping cluster titles to categories3.7. Topic mapping4. Technique evaluation4.1. Text segmentation4.2. Text summarization 4.2.1. Feature selection4.2.2. Experiment results4.2.3. Findings4.3. Key term extraction and association4.4. Cluster title generation4.5. 
Mapping cluster titles to categories5. Application example5.1. The NSC patent set5.2. Text mining processing5.3. Topic mapping5.4. Topic analysis comparison6. Discussions7. Conclusions and future workAcknowledgementsAppendix A. AppendixAppendix B. AppendixReferences721Incremental bipartite drawing problem OriginalResearch ArticleComputers & Operations Research, Volume 28, Issue 13,November 2001, Pages 1287-1298Rafael Martí, Vicente EstruchClose preview | Related articles | Related reference work articlesAbstract | Figures/Tables | ReferencesAbstractLayout strategies that strive to preserve perspective from earlier drawings arecalled incremental. In this paper we study the incremental arc crossingminimization problem for bipartite graphs. We develop a greedy randomizedPurchase$ 31.50adaptive search procedure (GRASP) for this problem. We have also developed a branch-and-bound algorithm in order to compute the relative gap to the optimal solution of the GRASP approach. Computational experiments are performed with 450 graph instances to first study the effect of changes in grasp search parameters and then to test the efficiency of the proposed procedure.Scope and purposeMany information systems require graphs to be drawn so that these systems are easy to interpret and understand. Graphs are commonly used as a basic modeling tool in areas such as project management, production scheduling, line balancing, business process reengineering, and software visualization. Graph drawing addresses the problem of constructing geometric representations of graphs. Although the perception of how good a graph is in conveying information is fairly subjective, the goal of limiting the number of arc crossings is a well-admitted criterion for a good drawing. Incremental graph drawing constructions are motivated by the need to support the interactive updates performed by the user. 
In this situation, it is helpful to preserve a―mental picture‖ of the layout of a graph over successive drawings. It would not be very intuitive or effective for a user to have a drawing tool in which after a slight modification of the current graph, the resulting drawing appears very different from the previous one. Therefore, generating incrementally stable layouts is important in a variety of settings. Since ―real-world‖ graphs tend to be large, an automated procedure to deal with the arc crossing minimization problem in the context of incremental strategies is desirable. In this article, we develop a procedure to minimize arc crossings that is fast and capable of dealing with large graphs, restricting our attention to bipartite graphs.Article Outline1. Introduction2. Branch-and-bound approach3. GRASP approach3.1. Construction phase3.2. Improvement phase4. Computational experiments5. ConclusionsReferencesVitae722 Applications of vibrational spectroscopy to the analysis of novel coatings Original Research ArticleProgress in Organic Coatings , Volume 41, Issue 4, May2001, Pages 254-260A. J. Vreugdenhil, M. S. Donley, N. T. Grebasch, R. J.Passinault Close preview | Related articles |Related reference work articles Abstract | Figures/Tables | ReferencesAbstractPrecise analysis is essential to the development of novel coating technologiesand to the systematic modification of metal surfaces. Ideally, these analysistechniques will provide molecular information and will be sensitive to changesin the chemical environment of interfacial species. Many of the techniquesavailable in the field of vibrational spectroscopy demonstrate some of thesecharacteristics. 
Examples of the ways in which FT-IR spectroscopy are appliedto the investigation of coatings at AFRL will be described.Attenuated total reflectance (ATR), specular reflectance and photoacousticspectroscopy (PAS) are important techniques for the analysis of surfaces.Purchase $ 31.50Both ATR and PAS can be used to provide depth-profiling information crucialfor the study of coatings. Recent developments in digital signal processing(DSP) and step scan interferometry which have dramatically improved the reliability and ease of use of PAS for depth analysis will be discussed. Article Outline 1. Introduction1.1. ATR theory1.2. PAS theory1.3. IR microscopy2. Experimental2.1. Instrumentation2.2. Samples3. Results and discussion3.1. IR microscopy3.2. Photoacoustic spectroscopy3.3. Variable angle ATR4. ConclusionsReferences 723 Affective disorders in children and adolescents: addressing unmet need in primary caresettings Review ArticleBiological Psychiatry , Volume 49, Issue 12, 15 June 2001, Pages 1111-1120Kenneth B. Wells, Sheryl H. Kataoka, Joan R. Asarnow Close preview | Related articles | Related reference work articlesAbstract | ReferencesAbstractAffective disorders are common among children and adolescents but mayPurchase$ 31.50often remain untreated. Primary care providers could help fill this gap because most children have primary care. Yet rates of detection and treatment for mental disorders generally are low in general health settings, owing to multiple child and family, clinician, practice, and healthcare system factors. 
Potential solutions may involve 1) more systematic implementation of programs that offer coverage for uninsured children; 2) tougher parity laws that offer equity in defined benefits and application of managed care strategies across physical and mental disorders; and 3) widespread implementation of quality improvement programs within primary care settings that enhancespecialty/primary care collaboration, support use of care managers to coordinate care, and provide clinician training in clinically and developmentally appropriate principles of care for affective disorders. Research is needed to support development of these solutions and evaluation of their impacts. Article Outline• Introduction• Impact and appropriate treatment of affective disorders in youths• Unmet need for child mental health care• Primary care and treatment of child mental disorders• Barriers to detection and appropriate treatment in primary care• Child characteristics• Parental characteristics• Clinician and clinician–patient relationship factors• Finding policy solutions to unmet need: coverage for the uninsured and par ity for the insured• Finding practice-based solutions: implementing quality improvement programs• Discussion• Acknowledgements • References724 Orthogonal drawings of graphs with vertex and edgelabels Original Research ArticleComputational Geometry, Volume 32, Issue 2, October2005, Pages 71-114Carla Binucci, Walter Didimo, Giuseppe Liotta, MaddalenaNonatoClose preview | Related articles | Related reference work articlesAbstract | ReferencesAbstractThis paper studies the problem of computing orthogonal drawings of graphswith labels on vertices and edges. Our research is mainly motivated bySoftware Engineering and Information Systems domains, where tools like UMLdiagrams and ER-diagrams are considered fundamental for the design ofsophisticated systems and/or complex data bases collecting enormousamount of information. 
A label is modeled as a rectangle of prescribed widthand height and it can be associated with either a vertex or an edge. Ourdrawing algorithms guarantee no overlaps between labels, vertices, and edgesand take advantage of the information about the set of labels to compute thegeometry of the drawing. Several additional optimization goals are taken intoaccount. Namely, the labeled drawing can be required to have either minimumtotal edge length, or minimum width, or minimum height, or minimum areaamong those preserving a given orthogonal representation. All these goalslead to NP-hard problems. We present MILP models to compute optimaldrawings with respect to the first three goals and an exact algorithm that isbased on these models to compute a labeled drawing of minimum area. Wealso present several heuristics for computing compact labeled orthogonaldrawings and experimentally validate their performances, comparing theirsolutions against the optimum.Purchase$ 31.50725 Finite element response sensitivity analysis of multi-yield-surface J 2 plasticitymodel by direct differentiation method Original Research Article Computer Methods in Applied Mechanics and Engineering ,Volume 198, Issues 30-32, 1 June2009, Pages 2272-2285Quan Gu, Joel P. Conte, AhmedElgamal, Zhaohui YangClose preview | Related articles | Relatedreference work articles Abstract | Figures/Tables | ReferencesAbstractFinite element (FE) response sensitivity analysis is an essential tool for gradient-basedoptimization methods used in varioussub-fields of civil engineering such asstructural optimization, reliability analysis,system identification, and finite element modelupdating. Furthermore, stand-alone sensitivityanalysis is invaluable for gaining insight intothe effects and relative importance of varioussystem and loading parameters on systemresponse. The direct differentiation method(DDM) is a general, accurate and efficientmethod to compute FE response sensitivitiesto FE model parameters. 
In this paper, theDDM-based response sensitivity analysismethodology is applied to a pressureindependent multi-yield-surface J 2 plasticityPurchase$ 35.95material model, which has been used extensively to simulate the nonlinear undrained shear behavior of cohesive soils subjected to static and dynamic loading conditions. The complete derivation of the DDM-based response sensitivity algorithm is presented. This algorithm is implemented in a general-purpose nonlinear finite element analysis program. The work presented in this paper extends significantly the framework of DDM-based response sensitivity analysis, since it enables numerous applications involving the use of the multi-yield-surface J2 plasticity material model. The new algorithm and its software implementation are validated through two application examples, in which DDM-based response sensitivities are compared with their counterparts obtained using forward finite difference (FFD) analysis. The normalized response sensitivity analysis results are then used to measure the relative importance of the soil constitutive parameters on the system response.Article Outline1. Introduction2. Constitutive formulation ofmulti-yield-surface J2 plasticity model andnumerical integration2.1. Multi-yield surfaces2.2. Flow rule2.3. Hardening law2.3.1. Active yield surface update2.3.2. Inner yield surface update3. Derivation of response sensitivity algorithm for multi-yield-surface J2 plasticity model3.1. Introduction3.2. Displacement-based FE response sensitivity analysis using DDM3.3. Stress sensitivity for multi-yield-surface J2 material plasticity model3.3.1. Parameter sensitivity of initial configuration of multi-yield-surface plasticity model3.3.2. Stress response sensitivity3.3.3. Sensitivity of hardening parameters of active and inner yield surfaces4. Application examples4.1. Three-dimensional block of clay subjected to quasi-static cyclic loading4.2. Multi-layered soil column subjected to earthquake base excitation5. 
ConclusionsAcknowledgementsReferencesresults 701 - 725991 articles found for: pub-date > 1999 and tak((Communications or R&D or Engineer or DSP or software or "value-added" or services) and (communication or carriers or (Communication modem) or digital or signal or processing or algorithms or product or development or TI or Freescale) and (multicore or DSP or Development or Tools or Balanced or signal or processing or modem or codec) and (DSP or algorithm or principle or WiMAX or LTE or Matlab or Simulink) and (C or C++ or DSP or FPGA or MAC or layer or protocol or design or simulation or implementation or experience or OPNET or MATLAB or modeling) and (research or field or deal or directly or engaged or Engineers or Graduates))Edit this search | Save this search | Save as search alert | RSS Feed。

MODEL BASED, DETAILED FAULT ANALYSIS IN THE CERN PS COMPLEX EQUIPMENT

M. Beharrell, G. Benincasa, J.M. Bouché, J. Cuperus, M. Lelaizant, L. Merard
PS Division, CERN, CH-1211 Geneva 23, Switzerland

Abstract

In the CERN PS Complex of accelerators, about a thousand pieces of equipment of various types (power converters, RF cavities, beam measurement devices, vacuum systems, etc.) are controlled using the so-called Control Protocol. This Protocol, a model-based equipment access standard, provides, amongst other facilities, a uniform and structured fault description and report feature. The faults are organized into categories according to their severities and are presented at two levels: the first is global and identical for all devices; the second is very detailed and adapted to the peculiarities of each device.

All the relevant information is provided by the equipment specialists and is stored in static and real-time databases; in this way a single set of data-driven application programs can always cope with existing and newly added equipment.

Two classes of applications have been implemented: the first is intended for control room alarm purposes and the second is oriented towards specialist diagnostics. The system is completed by a fault history report facility permitting easy retrieval of faults which have occurred previously, e.g. during the night.

INTRODUCTION

The control system of the CERN PS Complex, with its nine separate accelerators, deals with thousands of pieces of equipment of various types, each with its own control sequences.

The maintenance of such a quantity of hardware requires an appropriate set of tools that, on the one hand, provides a detailed, specific fault report system, preferably organized in a hierarchical structure, and on the other, provides uniformity in the presentation of information.
The Mean Time To Repair (MTTR) a device is in fact strongly dependent on the clarity and unambiguity of the information (fault messages) that is sent by the device and that must be interpreted by the maintenance team.

A particular aspect of fault recovery concerns intermittent faults, i.e. those usually having non-destructive effects, which occur at certain moments and disappear before appropriate treatment can be undertaken. The cumulative effects of these faults can be dangerous because they often precede some major problem in the device; the fault hunting system must therefore provide adequate tools for detecting and reporting them.

Finally, different faults occurring in the same device or in separate devices can be connected with each other or have the same root cause. The detection of the relationship between different faults is greatly helped by flagging fault messages with appropriate time information.

All these considerations have been taken into account in the design of the fault report system presented in this paper.

1. THE PS COMPLEX CONTROL ENVIRONMENT

The nine accelerators of the PS Complex are controlled using the same unique architecture [1]. The control system is based on the so-called "standard model" for controls, i.e. an architecture having two (or three) levels of computing, interconnected by an appropriate LAN.

The first level is composed of powerful workstations running under UNIX and connected with each other and with the second level through an Ethernet LAN using TCP/IP. At the second level we find a series of VME crates (called Device Stub Controllers, DSC) housing 32-bit processors of the 680xx type and running LYNXOS RT.
For certain kinds of equipment a third level of control is used, based on field buses.

Software access to the equipment is performed through standard control modules called Equipment Modules (EM) [2]; these modules, one for each type of equipment, are housed in the DSCs and hide from application programs all the intricacies of the control system. A large ORACLE database is housed in a dedicated server and contains all the information necessary to run the accelerators [3].

In order to improve access speed at run time, two subsets of this database are extracted:
- the Data Base Real Time (DBRT), housed in a local server (one per accelerator), contains, amongst other information, the addresses of all controlled pieces of equipment;
- the Data Table (DT) constitutes an essential part of the EMs, is thus housed in the DSCs, and contains all the information necessary to control each single piece of equipment physically connected to that DSC.

The Alarm System [4] periodically scans (polls) all the equipment of the Complex and reports faulty situations to the workstations. In this context, the so-called Control Protocol [5, 6], a model-based uniform access procedure for equipment, was installed some years ago. All the equipment of the PS Complex has been classified into families, each family containing the devices having similar goals and characteristics (power converters, RF cavities, beam diagnostic instrumentation, vacuum systems, etc.). For each family, static and dynamic behavioral models have been defined and the corresponding control and acquisition parameters have been identified.

The implementation of the Control Protocol is based on two separate software packages exchanging appropriate control and acquisition messages. The first, one package for each family of devices, is hardware independent and is realized in the form of an Equipment Module permitting access to all the equipment of the same family.
The second, one package for each single device, is strongly hardware dependent and implements all the control sequences peculiar to that device. The two packages are largely independent of each other and can be written by the most appropriate persons; only the exchanged control and acquisition messages must conform to the Protocol rules, i.e. contain the parameters identified in the concerned equipment model, expressed in an appropriate formalism.

In the equipment models and, as a consequence, in the contents of exchanged messages, the identification and reporting of fault conditions has received particular care.

2. THE FAULT DESCRIPTION IN THE CONTROL PROTOCOL

In the Control Protocol the faults or, better, the anomalous situations, are described at two levels:
- the first, called QUALIFIER, is a global description, common and identical for all families;
- the second, called DETAILED STATUS, is specific to each piece of equipment.

The QUALIFIER contains six indicators, not mutually exclusive, listed in increasing order of severity:
- WARNING indicates a minor fault not having consequences on the behavior of the device. Nevertheless, it may indicate the beginning of some more serious condition.
- BUSY only indicates that the device is executing some time-consuming task, e.g. a motor is still running or some sub-piece is warming up to the correct temperature.
The Busy condition is accompanied, where possible, by an indication of the number of seconds needed to complete the action.
- RESETTABLE FAULT indicates that a major fault has occurred and prevents normal behavior of the device; recovery from this fault can usually be obtained by executing a computer-controlled reset that tries to restart the hardware and software of the device.
- NON-RESETTABLE FAULT indicates that a major fault has occurred, similar to the previous one, but that no recovery action by computer is possible and the specialist must be called.
- INTERLOCK indicates that a fault occurred, not directly in the concerned device, but in an interconnected system, and that this fault prevents the normal behavior of the device. An example is a bad vacuum condition which can prevent measurements in a beam diagnostic device.
- INTERNAL COMMUNICATION fault (not present for all devices) concerns those devices having distributed hardware interconnected by a local network.

These fault indicators provided by the Qualifier often represent sufficient information on the status of the device for the operators in the control room. However, they do not contain enough information for the device specialist or for a more precise diagnosis. In fact, each indicator is the sum of several possible faults of the same category: for example, the Resettable Fault indicator shows that one or more resettable faults occurred, without indicating which ones. The DETAILED STATUS indicators answer this need.

Five of the fault indicators of the Qualifier (the Busy indicator is not included) have been detailed in order to distinguish up to 32 different causes of a fault. The number 32 is not magic; it is just sufficient in the PS control system environment. The 5 x 32 Detailed Status indicators are specific to each piece of equipment, and in general their meaning differs from one device to another.
The specialist for each device defines the meaning of the 32 indicators for each category of fault, organizes them in increasing order of severity, and provides appropriate error messages for each. A check is done by the controls specialists in order to avoid overly cryptic messages or ambiguities, e.g. two messages referring to the same fault or two faults having the same message. Each of the fault categories is flagged with a timestamp indicating the time of occurrence of the most severe of the reported faults.

The Qualifier information is implemented in the PS control system as six (five for certain families) bits of a word contained in every acquisition message from a device. In this way, each acquisition gives global information on the status of that device. The Detailed Status information is obtained with a specific request and comes as a special acquisition message containing, amongst other information, five (four for certain families) 32-bit words, each bit representing a particular fault (Fig. 1).

3. DATA BASE CLASSIFICATION OF THE FAULT INFORMATION

All the necessary information for the fault report system is contained in two separate databases. A complete list of fault messages for all the PS Complex equipment is housed in the DBRT. The messages are stored and numbered in progressive order as they are provided by the equipment specialists; adding new messages is then a simple operation. Each message starts with an indication of the category (Warning, Resettable Fault, etc.). Appropriate functions exist that, given the fault number, retrieve the corresponding message. To date, about 400 fault messages are stored.

The information specific to each device is housed in the corresponding Data Table in the concerned DSC. This information is stored in the form of 5 (4) groups of 32 integers, one group for each of the previously mentioned categories of faults.
The 32 integers in each group correspond to the 32 possible fault messages defined by the specialist for this specific category and for this device; each integer contains the number of the fault message stored in the DBRT, as mentioned.

4. THE FIRST LEVEL OF APPLICATION - THE ALARM SYSTEM

As mentioned, the Alarm System periodically polls the various devices of the PS Complex and reports the anomalous situations in the form of short messages on the consoles. When the polled device belongs to the class accessed by the Control Protocol, the global fault descriptor Qualifier is acquired first. If one or more of the fault indicators are lit, the function "Firstfault" is called. This function:
i) scans the Qualifier descriptor to identify, amongst the lit fault indicators, the most important in order of severity,
ii) interrogates the concerned device to obtain a Detailed Status Message, as previously described, and, among the 32 fault bits of the selected category of indicator, identifies the most important,
iii) finds the error message number corresponding to the position of the identified bit in the Data Table of the concerned device,
iv) retrieves, using this number as an input parameter, the corresponding fault message from the DBRT and displays it on the console.
In this way the most important fault, amongst the 5 (4) x 32 possible fault conditions in this device, is displayed to the operator by the Alarm System.

5. THE SECOND LEVEL OF APPLICATIONS - THE DETAILED STATUS ACQUISITION

After a fault has been signaled by the Alarm System, somebody, either the specialist or the maintenance person, needs more information about the status of the concerned device. In this case the Detailed Status program is requested.
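The first-level "Firstfault" lookup of the Alarm System can be sketched as follows. The data structures and the sample message are hypothetical stand-ins for the Qualifier bits, the per-device Data Table and the DBRT; only the four-step logic comes from the text.

```python
# Sketch of the "Firstfault" lookup (steps i-iv of the Alarm System).
SEVERITY_ORDER = [            # assumed ordering, most severe first
    "NON RESETTABLE FAULT",
    "RESETTABLE FAULT",
    "INTERLOCK",
    "INTERNAL COMMUNICATION",
    "WARNING",
]

def first_fault(qualifier, detailed_status, data_table, dbrt):
    """Return the message of the most severe fault of a device.

    qualifier       -- dict: category -> bool (indicator lit?)
    detailed_status -- dict: category -> 32-bit word of fault bits
    data_table      -- dict: category -> list of 32 DBRT message numbers
    dbrt            -- dict: message number -> message text
    """
    # i) most severe lit fault indicator
    for category in SEVERITY_ORDER:
        if qualifier.get(category):
            break
    else:
        return None  # no fault indicator lit
    # ii) most severe fault bit of that category (bits assumed ordered
    #     by increasing severity, so take the highest set bit)
    bit = detailed_status[category].bit_length() - 1
    # iii) message number from the device's Data Table ...
    msg_no = data_table[category][bit]
    # iv) ... and message text from the DBRT
    return dbrt[msg_no]

# Hypothetical device with one resettable fault (bit 3) pending:
qualifier = {"RESETTABLE FAULT": True}
detailed = {"RESETTABLE FAULT": 1 << 3}
table = {"RESETTABLE FAULT": list(range(100, 132))}
dbrt = {103: "Power converter: output current out of tolerance"}
print(first_fault(qualifier, detailed, table, dbrt))
```

Because the Data Table only stores message numbers, the per-device part stays small and the message texts live in one place, the DBRT.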
This application is totally data driven, in the sense that all the necessary information is contained in the DBRT and in the Data Tables: no new code or compilation is necessary when equipment is added or deleted in the accelerator complex. In the example of Fig. 2, the case of a beam current transformer (TRAFO) is reported:
i) the application first searches the DBRT to identify all the equipment families in the concerned accelerator using the Control Protocol facility; the complete list is then reported on the display,
ii) after the user has selected a family, the application searches the DBRT once again for all the VME crates (DSCs) containing the concerned kind of device and displays the list,
iii) the user clicks on the name of the DSC containing the hardware of the faulty device; using the DBRT-stored information, the names of the concerned devices (in the example, beam transformers) are displayed on the screen,
iv) finally, the selection of the name of the device causes the request of a Detailed Status Message from the concerned hardware. This message contains (in this case) the 4 x 32 bits of fault information, as previously described. At this point the experienced specialist recognizes the various faults present in the device, often just by looking at the position of the set bits in the message,
v) otherwise, he can click on one group of 32 bits to obtain the messages on the display. To do this the application uses the information contained in the DBRT and in the Data Table, as already explained for the Alarm System. The complete list of the 32 fault messages is displayed, with the messages corresponding to actual faults shown in bold characters.

6. THE FAULT HISTORY APPLICATION

As mentioned in Section 1, certain faults are volatile and others are resettable by the operator before the specialist for the concerned device can be informed of their occurrence.
On the other hand, it can be very useful to keep track of such faults for subsequent investigation. For this reason the Fault History system has been created. The system is composed of two separate entities: a real-time task and an appropriate application. For various reasons, not reported here, almost all the devices of the PS Complex have a real-time task running in the corresponding DSC and executing, among other activities, a periodic (~1.2 sec) acquisition of the relevant parameters, which are subsequently stored in the Data Table.

For the devices using the Control Protocol facility, a complete acquisition message is acquired. In this case, when the specific software of a given device recognizes the occurrence of a fault that needs to be recorded, it sets an appropriate "look at me" flag in the next outgoing acquisition message. The real-time task, upon recognizing this flag, issues a request for a Detailed Status Message to the emitting device. The message, with a timestamp, is subsequently stored in an appropriate area of the Data Table organized in the form of a FIFO ring buffer. One such buffer exists per family of devices and per DSC; it is presently sized to contain 100 Detailed Status Messages. In this way we have a complete record of the last 100 fault situations which have occurred, with their times.

The application program starts in the same way as described in i) and ii) of the previous section. At this point, by selecting the appropriate DSC name, the program interrogates the ring buffer for this DSC and for the concerned family of devices, and displays the list of the recorded faulty elements, with times, in chronological order.
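The per-family, per-DSC history buffer described above is a plain FIFO ring buffer of timestamped Detailed Status Messages. The class and method names in this sketch are assumptions made for the illustration; only the buffer semantics (100 entries, oldest dropped first, scan by device) come from the text.

```python
from collections import deque
from datetime import datetime

class FaultHistory:
    """FIFO ring buffer of timestamped Detailed Status Messages."""

    def __init__(self, size=100):
        self._buf = deque(maxlen=size)   # oldest entries dropped first

    def record(self, device, detailed_status, when=None):
        """Store one timestamped Detailed Status Message."""
        stamp = datetime.now() if when is None else when
        self._buf.append((stamp, device, detailed_status))

    def chronological(self):
        """All recorded fault situations, oldest first."""
        return list(self._buf)

    def filter(self, device):
        """A single device's fault history, in time order."""
        return [entry for entry in self._buf if entry[1] == device]
```

With `size=100` this mirrors the buffer described above; the Filter feature described next is then just a linear scan of the same buffer by device name.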
From this point on the program behaves in the same way as in iv) and v) of the previous section, the only difference being that the Detailed Status Messages are now extracted from the ring buffer and not requested directly of the device.

A useful feature, called Filter, makes it easy to follow the fault behavior of a single device over time; to do this the user has only to provide the name of the inquired device, Fig. 3. The program searches the ring buffer and presents to the user a list of all occurrences of fault situations for that device, chronologically organized. By examining the successive Detailed Status Messages the specialist can extract important information on the fault's evolution.

7. CONCLUSIONS

In the PS Complex there are to date about a thousand pieces of equipment and devices using the Control Protocol; of these, about half are thus far equipped with Detailed Status reports. The standardization in fault reporting presented here has proved to be a good compromise between the rules and constraints needed by any standard and the necessity of implementing it in an accelerator environment where the variety of devices has in effect no limits. The constraints have been limited to the classification of the faults and to the formalism of their representation. This is especially appreciated in the control room, where the operators receive information from thousands of devices. On the other hand, the equipment specialist is free to define the faults to be reported, the order of their severity, the corresponding fault messages, and which faults should be recorded in the history.


Head Model Acquisition from Silhouettes

Kwan-Yee K. Wong, Paulo R. S. Mendonça, and Roberto Cipolla
Department of Engineering, University of Cambridge, Cambridge, CB2 1PZ, UK
[kykw2|prdsm2|cipolla]@eng.cam.ac.uk

Abstract. This paper describes a practical system developed for generating 3D models of human heads from silhouettes alone. The input to the system is an image sequence acquired from circular motion. Both the camera motion and the 3D structure of the head are estimated using silhouettes which are tracked throughout the sequence. Special properties of the camera motion and their relationships with the intrinsic parameters of the camera are exploited to provide a simple parameterization of the fundamental matrix relating any pair of views in the sequence. Such a parameterization greatly reduces the dimension of the search space for the optimization problem. In contrast to previous methods, this work can cope with incomplete circular motion and more widely spaced images. Experiments on real image sequences are carried out, showing accurate recovery of 3D shapes.

1 Introduction

The reconstruction of 3D head models has many important applications, such as video conferencing, model-based tracking, entertainment and face modeling [13]. Existing commercial methods for acquiring such models, such as laser scans, are expensive, time-consuming and cannot cope with low-reflectance surfaces. Image-based systems can easily overcome these difficulties by tracking point features along video sequences [5]. However, this can be remarkably difficult for human faces, where there are not many reliable landmarks with a long life span along the sequence.

In this paper we present a practical system for generating 3D head models from silhouettes alone. Silhouettes are comparatively easy to track and provide useful information for estimating the camera motion [1,10] and reconstruction [12,2,15]. Since they tend to concentrate around regions of high curvature, they provide a compact way of parameterizing the reconstructed surface. In our system, images are acquired by moving the camera along a circular path around the head. This imposes constraints on the fundamental matrix relating each pair of images, simplifying the motion estimation. The system does not require the motion to be a full rotation and the images can be acquired at more widely spaced positions around the subject, an advantage over the technique introduced in [10].

Section 2 presents the theoretical background of motion estimation from silhouettes. The algorithms for model building are described in Section 3, and Section 4 shows the experimental results. Conclusions are given in Section 5.

2 Theoretical Background

The fundamental difficulty in solving the problem of structure and motion from silhouettes is that, unlike point or line features, silhouettes do not readily provide image correspondences that allow for the computation of the epipolar geometry, summarized by the fundamental matrix. The usual solution to this problem is the use of epipolar tangencies [11,3], as shown in Fig. 1. An epipolar tangent point is the projection of a frontier point [3], which is the intersection of two contour generators.

Figure 1. A frontier point is the intersection of two contour generators and is visible in both views. The frontier point projects onto a point on the silhouette which is also on an epipolar tangent.

If 7 or more
epipolar tangent points are available, the epipolar geometry can be estimated. The intrinsic parameters of the cameras can then be used to recover the motion [6,4]. However, the unrealistic demand for a large number of epipolar tangent points makes this approach impractical. By constraining the motion to be circular, a parameterization of the fundamental matrix with only 6 degrees of freedom (dof) is possible [14,5,9]. This parameterization explicitly takes into account the main image features of circular motion, namely the image of the rotation axis, the horizon and a special vanishing point, which are fixed throughout the sequence. This makes it possible to estimate the epipolar geometry by using only 2 epipolar tangencies [9].

In [10], a practical algorithm has been introduced for the estimation of motion and structure from silhouettes of a rotating object. The image of the rotation axis and the vanishing point are first determined by estimating the harmonic homology associated with the image of the surface of revolution spanned by the object. In order to obtain such an image, a dense image sequence from a complete circular motion is required. In this paper, the parameters of the harmonic homology and other motion parameters are estimated simultaneously by minimizing the reprojection errors of epipolar tangents. This algorithm does not require the image of such a surface of revolution and thus can cope with incomplete circular motion and more widely spaced images, an advantage over the algorithm presented in [10].

2.1 Symmetry and Epipolar Geometry in Circular Motion

Consider a pinhole camera undergoing circular motion. If the camera intrinsic parameters are kept constant, the projection of the rotation axis will be a line l_s which is pointwise fixed in each image. This means that, for any point x on l_s, the equation x^T F x = 0 is satisfied, where F is the fundamental matrix related to any image pair in the sequence. For circular motion, all the camera centers lie on a common plane. The image of this plane is a special line l_h, the horizon. Since the epipoles are the images of the camera centers, they must lie on l_h. In general, l_s and l_h are not orthogonal. Another feature of interest is the vanishing point v_x, which corresponds to the normal direction of the plane defined by the camera center and the axis of rotation. The vanishing point and the horizon satisfy l_h^T v_x = 0. A detailed discussion of the above can be found in [14,5,9].

Consider now a pair of cameras related by a rotation about an axis not passing through their centers, and let F be the fundamental matrix associated with this pair. It has been shown that corresponding epipolar lines associated with F are related to each other by a harmonic homology [9], given by

W = I - 2 (v_x l_s^T) / (v_x^T l_s),    (1)

where I is the 3x3 identity matrix.

In [16], an algorithm has been presented for estimating the camera intrinsic parameters from 2 or more silhouettes of surfaces of revolution. For each silhouette, the associated harmonic homology is estimated and this provides 2 constraints on the camera intrinsic parameters:

l_s ~ K^{-T} K^{-1} v_x,    (2)

where K is the camera calibration matrix. Conversely, if the camera intrinsic parameters are known, (2) provides 2 constraints on W, and as a result W has only 2 dof.

2.2 Parameterization of the Fundamental Matrix

In [8,17], it has been shown that any fundamental matrix can be parameterized as F = [e']_x M, where M is any matrix that maps the epipolar lines from one image to the other, and e' is the epipole in the second image. In the special case of circular motion, it follows that

F = [e']_x W.    (3)

Note that F has 6 dof: 2 to fix e', and 4 to determine W. From (2), if the camera intrinsic parameters are known, 2 parameters are enough to define W and thus F will have only 4 dof. An alternative parameterization for the fundamental matrix in the case of circular motion [14,5,9] is given by a weighted combination of the antisymmetric matrix [v_x]_x and the symmetric matrix l_s l_h^T + l_h l_s^T, with the relative weight determined by the angle of rotation.

Since a silhouette has at least two epipolar tangencies (one at its top and another at its bottom), each pair of images contributes at least two measurements. Due to the dependence between the associated fundamental matrices, however, these measurements provide a smaller number of independent constraints on the parameters. As a result, a solution is possible provided that enough images are used.

Figure 3. The parameters of the fundamental matrix associated with each pair of images in the sequence can be estimated from the reprojection errors of epipolar tangents. The solid lines are tangents to the silhouettes passing through the epipoles, and the dashed lines are the epipolar lines corresponding to the tangent points.

The minimization of the reprojection errors will generate a consistent set of fundamental matrices, which, together with the camera intrinsic parameters, can be decomposed into a set of camera matrices describing a circular motion compatible with the image sequence. The algorithm for motion estimation is
summarized in Algorithm 1. Having estimated the motion of the camera, the 3D model can then be reconstructed from the silhouettes using the simple triangulation technique introduced in [15].

Algorithm 1. Motion estimation from silhouettes:
track the silhouettes of the head using cubic B-spline snakes;
initialize l_s, l_h and the angles between the cameras;
while not converged do
  for each image pair do
    form fundamental matrix;
    locate epipolar tangents;
    compute reprojection errors;
  end for
  update parameters to minimize the sum of reprojection errors;
end while

4 Experiments and Results

In order to evaluate the performance of the algorithm described in Section 3, 2 human head sequences, each with 10 images, were acquired using the setup shown in Fig. 4. The camera is mounted on the extensible rotating arm of the tripod, whose height can be adjusted according to the height of the subject. Each image in the sequence was taken after rotating the arm of the tripod by a roughly constant angle, with the subject standing close to the tripod. The intrinsic parameters of the camera were obtained from an offline calibration process. The silhouettes of the heads were tracked using cubic B-spline snakes [2] (see Fig. 5 and 6).

Figure 6. Image sequence (II) used in the experiment, with the silhouettes of the head tracked using cubic B-spline snakes.

The initial guess for the horizon and the image of the rotation axis was picked by observation, and the angles of rotation were initialized accordingly. The sum of the reprojection errors was minimized using the Levenberg-Marquardt algorithm [7]. The reconstructed 3D head models can be found in Fig. 7 and 8. The shapes of the ears, lips, nose and eyebrows demonstrate the quality of the 3D models recovered.

5 Conclusions

In this paper we have presented a simple and practical system for building 3D models of human heads from image sequences. No prior model is assumed, and in fact the system can be applied to a variety of objects. The only constraint on the camera is that it must perform circular motion, though the exact camera orientations and positions are
unknown. Besides, the camera is not required to perform a full rotation and there is no need for a dense image sequence. The silhouettes of the head are the only information used for both motion estimation and reconstruction, circumventing the lack, instability and occlusion of landmarks on faces. The silhouettes also provide a natural and compact way of parameterizing the head model, concentrating contours around regions of high curvature. The experimental results show the accuracy of the acquired model.

Figure 7. Different views of the VRML model obtained from the model building process using the 10 images in Fig. 5.

Figure 8. Different views of the VRML model obtained from the model building process using the 10 images in Fig. 6.

Acknowledgements

P. R. S. Mendonça would like to acknowledge CAPES/Brazilian Ministry of Education for the grant BEX 1165/96-8 that partially funded this research.

References

1. R. Cipolla, K. Åström, and P. J. Giblin. Motion from the frontier of curved surfaces. In Proc. 5th Int. Conf. on Computer Vision, pages 269–275, 1995.
2. R. Cipolla and A. Blake. Surface shape from the deformation of apparent contours. Int. Journal of Computer Vision, 9(2):83–112, 1992.
3. R. Cipolla and P. J. Giblin. Visual Motion of Curves and Surfaces. Cambridge University Press, Cambridge, 1999.
4. O. Faugeras. Stratification of three-dimensional vision: Projective, affine and metric representations. J. Opt. Soc. America A, 12(3):465–484, 1995.
5. A. W. Fitzgibbon, G. Cross, and A. Zisserman. Automatic 3D model construction for turn-table sequences. In 3D Structure from Multiple Images of Large-Scale Environments, European Workshop SMILE'98, Lecture Notes in Computer Science 1506, pages 155–170, 1998.
6. H. C. Longuet-Higgins. A computer algorithm for reconstructing a scene from two projections. Nature, 293:133–135, 1981.
7. D. G. Luenberger. Linear and Nonlinear Programming. Addison-Wesley, USA, second edition, 1984.
8. Q.-T. Luong and O. Faugeras. The fundamental matrix: Theory, algorithms, and stability analysis. Int. Journal of Computer Vision, 17(1):43–75, 1996.
9. P. R. S. Mendonça, K.-Y. K. Wong, and R. Cipolla. Recovery of circular motion from profiles of surfaces. In B. Triggs, A. Zisserman, and R. Szeliski, editors, Vision Algorithms: Theory and Practice, Lecture Notes in Computer Science 1883, pages 151–167, Corfu, Greece, Sep 1999. Springer-Verlag.
10. P. R. S. Mendonça, K.-Y. K. Wong, and R. Cipolla. Camera pose estimation and reconstruction from image profiles under circular motion. In D. Vernon, editor, Proc. 6th European Conf. on Computer Vision, volume II, pages 864–877, Dublin, Ireland, Jun 2000. Springer-Verlag.
11. J. Porrill and S. B. Pollard. Curve matching and stereo calibration. Image and Vision Computing, 9(1):45–50, 1991.
12. R. Vaillant and O. Faugeras. Using extremal boundaries for 3D object modelling. IEEE Trans. on Pattern Analysis and Machine Intell., 14(2):157–173, 1992.
13. T. Vetter. Synthesis of novel views from a single face image. Int. Journal of Computer Vision, 28(2):103–116, June 1998.
14. T. Viéville and D. Lingrand. Using singular displacements for uncalibrated monocular visual systems. In Proc. 4th European Conf. on Computer Vision, volume II, pages 207–216, 1996.
15. K.-Y. K. Wong, P. R. S. Mendonça, and R. Cipolla. Reconstruction and motion estimation from apparent contours under circular motion. In T. Pridmore and D. Elliman, editors, Proc. British Machine Vision Conference, volume 1, pages 83–92, Nottingham, UK, Sep 1999. British Machine Vision Association.
16. K.-Y. K. Wong, P. R. S. Mendonça, and R. Cipolla. Camera calibration from symmetry. In R. Cipolla and R. Martin, editors, The Mathematics of Surfaces IX, pages 214–226, Cambridge, UK, Sep 2000. Institute of Mathematics and its Applications, Springer-Verlag.
17. Z. Zhang. Determining the epipolar geometry and its uncertainty: A review. Int. Journal of Computer Vision, 27(2):161–195, 1998.
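As a closing numerical aside, the harmonic homology used in Section 2.1, W = I - 2 v_x l_s^T / (v_x^T l_s), can be checked in a few lines of plain Python: it is an involution, and it leaves every point of the axis l_s fixed. The coordinates below are arbitrary values chosen for illustration.

```python
# Numerical check of the harmonic homology W = I - 2 v l^T / (v . l)
# in homogeneous 3-vector coordinates (illustrative values only).

def outer(a, b):
    return [[ai * bj for bj in b] for ai in a]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

v = [1.0, 0.2, 0.0]      # vanishing point v_x
l = [1.0, 0.1, -300.0]   # image of the rotation axis l_s
s = sum(vi * li for vi, li in zip(v, l))          # v_x . l_s
I = [[float(i == j) for j in range(3)] for i in range(3)]
P = outer(v, l)
W = [[I[i][j] - 2 * P[i][j] / s for j in range(3)] for i in range(3)]

# W is an involution: W W = I (since (v l^T)^2 = (v . l) v l^T).
WW = matmul(W, W)
assert all(abs(WW[i][j] - I[i][j]) < 1e-6 for i in range(3) for j in range(3))

# Points on l_s are fixed by W: pick x with l_s . x = 0.
x = [0.0, 3000.0, 1.0]
assert abs(sum(li * xi for li, xi in zip(l, x))) < 1e-9
Wx = matvec(W, x)
assert all(abs(Wx[i] - x[i]) < 1e-6 for i in range(3))
```

This is exactly the structure the motion-estimation step relies on: with v_x on the horizon and l_s fixed, only a small number of parameters remain free in the fundamental matrix.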
