

Cognitive Dynamics – Dynamic Cognition?

Reginald Ferber
Fachbereich 2, Universität-GH Paderborn, D-33095 Paderborn, Germany

Abstract: In the last ten years a paradigm shift took place in cognitive science. While during the seventies problems were commonly attacked by symbol processing approaches, in the last decade many researchers employed connectionist models. These models can be seen as dynamical systems on metric spaces. There is not yet a developed theory of the behavior of these systems, but they seem to be a challenge for future research. The purpose of this paper is to introduce the problem and the dynamic approach to it.

1 Cognitive Processes

The subject of cognitive science is the description and simulation of cognitive processes and structures, especially in the areas of memory, problem solving, verbal behavior, and image identification. Some cognitive processes, for example the production and parsing of sentences, seem to employ sophisticated symbol-manipulating operations; other processes, such as image identification or access to word meaning, seem to rely on fast processing of huge amounts of rather vague knowledge that has been learned in many different situations.
This learning is performed in a smooth way, enabling generalization, context sensitivity, and tolerance of noisy inputs. This use of experience makes it necessary to compare situations, i.e. to decide whether a new situation is equal to or resembles an old one. This comparison might be achieved by the use of distance measures which are sensitive to many parameters of the situation. Distances between symbolic objects are rather artificial constructions, while they are elementary for elements of metric spaces.

Another controversy in cognitive science, closely related to the question of symbolic processing, is the question of the degree to which the cognitive system is modular ([8]). A modular view assumes that the cognitive apparatus consists of independent modules between which data are exchanged. This assumption seems to be the natural consequence of a symbol processing approach. On the other hand, it seems difficult to explain the high speed of perception processes with modular symbol processing systems based on rather slow biological neurons. There are also empirical data that seem to contradict a strictly modular approach.

(Footnote 1: This research was supported by the Heinz-Nixdorf-Institut and by the Deutsche Forschungsgemeinschaft (DFG), Kennwort: Worterkennungssimulation. Footnote 2: E-mail: ferber@psycho2.uni-paderborn.de)

To explain these aspects, models with distributed memory and parallel processes have been proposed that can be interpreted as dynamical systems on metric spaces. These models are known under many names, such as connectionism, parallel distributed processing (PDP), and neural networks ([10], [12], [1], [7], [9], [13]), and are defined in many different ways. In the following paragraph some formal definitions are given that capture the central aspects of these models, in order to unify terminology (see also [3]).

2 Neural Networks

The following very general definition includes most of the deterministic models used in the literature. Beside these, there are non-deterministic or probabilistic models.

2.1. Cellular Structure

Let S be a set and I a countable set. A map x: I -> S is called a configuration of values of S on the cells of I; S^I denotes the space of all configurations. For every cell i in I let N(i) be a finite, ordered subset of I, the neighborhood of cell i. The set of all neighborhoods defines a directed graph with node set I and edge set {(j, i) : j in N(i)}, the connection graph, net structure, or grid of the cellular structure. For every cell i let f_i : S^{N(i)} -> S be a local function, and let F be the set of all local functions. Then (S, I, N, F) is called a cellular structure. The map G : S^I -> S^I with G(x)_i = f_i(x restricted to N(i)) is called the global function of the cellular structure. If I is finite, it is called a finite cellular structure; if S is finite, it is called a cellular automaton.

[Figure 1: Three different grid structures. Neighbors are indicated by an arrow from the neighbor to the cell itself. a) Arbitrary grid, b) Rectangular grid with von Neumann neighborhood, c) One-dimensional circular grid.]

The global function defines an autonomous dynamical system on the configuration space. The behavior of the system can be influenced by the structure of the grid and by the nature of the local functions. Both kinds of restrictions are used to construct models of cognitive behavior. The following restriction on the local functions is used frequently:

2.2. Neural Net

1. A cellular structure with S a subset of the real numbers and local functions of the form

    f_i(x_{j_1}, ..., x_{j_k}) = o_i( sum over j in N(i) of w_{ij} x_j )    (1)

with monotonic non-decreasing functions o_i is called a (deterministic) neural net. o_i is called the output function of cell i; w_{ij} is called the weight from cell j to cell i.

2. A function of the form

    f(x_1, ..., x_k) = 1 if sum of w_j x_j >= theta, and 0 otherwise

is called a linear threshold function with weights w_j and threshold theta.

The dynamic behavior of a neural net on a given grid is determined by the output functions and the weights. In many cases the same output function is chosen for all cells; then the behavior of the system depends only on the weights. They can be chosen to achieve a desired behavior of the system. This can be done either in one single step (see for examples [6], [4]) or in a longer adaptation process of small smooth changes, either before the use of the net as a model or even during the use of the system.
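Definition 2.2 can be illustrated with a minimal sketch; the three-cell grid, the weight values, and the threshold below are illustrative choices, not taken from the text:

```python
# Minimal sketch of Definition 2.2: each cell i updates to
#   x_i <- o_i( sum over j in N(i) of w_ij * x_j ),
# with a linear threshold function as the common output function.

def threshold(s, theta=0.5):
    """Linear threshold output function: 1 if s >= theta, else 0."""
    return 1.0 if s >= theta else 0.0

def global_function(x, weights):
    """One application of the global function: all cells update
    simultaneously from the current configuration x.
    weights[i] maps neighbor index j to the weight w_ij."""
    return [threshold(sum(w * x[j] for j, w in weights[i].items()))
            for i in range(len(x))]

# A tiny 3-cell net on an arbitrary grid (illustrative weights).
weights = {0: {1: 1.0}, 1: {0: 0.6, 2: 0.6}, 2: {1: 1.0}}
x = [1.0, 0.0, 0.0]
x = global_function(x, weights)   # cell 1 receives 0.6 >= theta and fires
```

Each application of the global function updates all cells simultaneously from the current configuration, the synchronous dynamics assumed in 2.1.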
This construction of appropriate weights is often called learning. The following restriction on the structure of the net forces a simple dynamical behavior:

2.3. Feed Forward Net

Let (S, I, N, F) be a cellular structure. The set I_0 of cells with empty neighborhood is called the set of input cells. Let I_n be the set of cells that can be reached from the cells of I_0 by passing through exactly n edges of the connection graph. If I_0 and all I_n are disjoint, the grid is called a feed forward grid and the I_n are called layers of the grid. I_0 is called the input layer, the I_n with maximal n is called the output layer, and all layers in between are called hidden layers. A neural net on a feed forward grid is called a feed forward net or a feed forward network.

Feed forward neural nets are used to transform an input pattern of values on the input layer into a pattern of values on the output layer of the grid. A well known example with three layers is the perceptron, developed in 1962 by F. Rosenblatt [14] and extensively studied by M. Minsky and S. Papert in [11]. Other examples with more layers and continuous output functions are back-propagation networks. The name is due to the way in which the weights are computed: First the weights are set to random values.

[Figure 2: A feed forward grid with 5 layers]

Then, using a sample of given input and target patterns as training material, the pattern on the output cells produced by the network from the input pattern is compared with the target pattern. Using a gradient descent method, the weights are changed in such a way that for this input pattern the difference between the output of the net and the target is reduced. To adapt the weights between earlier layers, error values are propagated back to these layers, and a correction for the weights is computed using these error values.
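The training procedure just described (random initial weights, comparison of output and target, gradient descent, back-propagated error values) can be sketched for a minimal three-layer net; the layer sizes, learning rate, random seed, and sigmoid output function are illustrative assumptions, not taken from the text:

```python
import math, random

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def forward(x, W1, W2):
    """Feed forward pass through one hidden layer."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    y = [sigmoid(sum(w * hi for w, hi in zip(row, h))) for row in W2]
    return h, y

def train_step(x, target, W1, W2, lr=0.5):
    """One gradient-descent step: compare output with target,
    propagate error values back, and correct the weights."""
    h, y = forward(x, W1, W2)
    # output-layer error terms
    dy = [(yi - ti) * yi * (1 - yi) for yi, ti in zip(y, target)]
    # hidden-layer error terms, propagated back through W2
    dh = [hi * (1 - hi) * sum(dy[k] * W2[k][j] for k in range(len(dy)))
          for j, hi in enumerate(h)]
    for k in range(len(W2)):
        for j in range(len(h)):
            W2[k][j] -= lr * dy[k] * h[j]
    for j in range(len(W1)):
        for i in range(len(x)):
            W1[j][i] -= lr * dh[j] * x[i]
    return sum((yi - ti) ** 2 for yi, ti in zip(y, target))

random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
W2 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(1)]
errors = [train_step([1.0, 0.0], [1.0], W1, W2) for _ in range(200)]
assert errors[-1] < errors[0]   # the output moves toward the target
```

Iterating the step on a single input-target pair drives the squared error down, which is all the gradient method promises for one pattern.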
(For details compare [9].)

The dynamics of a feed forward net is quite simple: Starting with an arbitrary configuration, the values of the input cells in the first iterate depend only on their local functions, since these have no input (argument). From the second iteration on, the values of the cells in I_1 are constant, since they receive only the constant input from the cells of I_0. In this way the values of the subsequent layers become constant in subsequent iterates. In a net with n layers the iteration sequence reaches the same fixed point from every configuration within n iterations.

3 Example

In the following we shall concentrate on experiments investigating the process of word recognition. One goal of these experiments is to answer the question of modularity of the cognitive system, in this case the question whether there is an independent module functioning as a mental lexicon where the words known by a person are stored. First we shall give a brief description of the experimental situation in which data on human word recognition are collected. Then we shall outline a simulation of such data using a back-propagation network. Finally a dynamic model is proposed.

3.1 Word Recognition and Priming

Word recognition is an experimental paradigm that is used frequently to investigate cognitive processes in verbal behavior. The basic idea is to measure the time people need to respond to the presentation of a written word, the so-called target. The requested reactions are either to name the target (naming experiment), or to decide, by pressing an appropriate button, whether a presented string of characters is a word of the native language of the person or not (lexical decision experiment). In both cases the time elapsing between the onset of the presentation of the target and the onset of the reaction is measured. There are many studies investigating the effect of

- frequency of the target in language,
- regularity of pronunciation,
- length of the target,

and the like. Priming experiments investigate the effect of context on naming
and lexical decision. In this case the presentation of the target is preceded by the brief presentation of another word, the so-called prime. This prime can be related to the target in different ways. It can

- be the same word typed differently (upper vs. lower case) (identity priming),
- be semantically related (semantic or associative priming),
- precede the target frequently in natural language (syntactic priming),
- be similar as a string of characters (graphemic priming).

If the presentation of a target that is related to the preceding prime leads to a quicker reaction, then the mental lexicon is probably not completely modular. The results show complex behavior (see [5], [16] for an overview and references). While some studies found some of the priming effects, others did not. There seem to be many factors influencing the results. At least it seems rather unlikely that a mental lexicon exists that is completely modular.

3.2 A Back-propagation Model

We shall now present a model of word recognition that captures some of the features of a parallel and distributed system.

3.2.1. The Model

In 1989 M. Seidenberg and J. McClelland proposed a "Distributed, Developmental Model of Word Recognition and Naming" [15]. They used a modified back-propagation model and were able to simulate "many aspects of human performance including (a) differences between words in terms of processing difficulty, (b) pronunciation of novel items, (c) differences between readers in terms of word recognition skill, (d) transition from beginning to skilled reading, and (e) differences in performance on lexical decision and naming tasks." [15: page 523]. The net they used consisted of 3 layers: an "orthographic" input layer of 400 cells, a hidden layer with 100 to 200 cells, and an output layer that was divided in two parts: a "phonological" output part with 460 cells and an orthographic part that was similar to the input layer. The phonological part of the output was used to simulate naming data, the orthographic part was used to simulate lexical decision
data. The layers were fully forward connected, i.e. every cell of a layer has all cells of the preceding layer as neighbors.

[Figure 3: The net structure, with an orthographic input layer, an orthographic output part, and a phonological output part.]

An erroneous pattern converges to the correct one; this process should take more time if the error is big. The model was trained with 2,884 stimulus-target pairs, presented from about 14 times for low-frequency words up to 230 times for the most frequent words. With every presentation the weights were changed for the orthographic part of the output and the phonological part of the output. Thus the weights from the input to the hidden layer were trained twice: for the orthographic-to-phonological net and for the orthographic-to-orthographic net.

3.2.2. Remarks

Several remarks can be made on the model described above (3.2.1).

1. The model realizes a "metric" system, since input and output are elements of an n-dimensional space. It can be seen as a continuous mapping from the input space to the output space. This continuity is probably one of the reasons for the ability of the model to exploit regularities of the training material and generalize them to new material.

2. The effectiveness of the continuity in generalization depends on the representation of the input. On the one hand it has to represent enough of the crucial information of the individual input to distinguish it from other inputs; on the other hand it has to generalize over the individual inputs to extract features they have in common. The representations used in the model are very sophisticated, hence a good deal of its power may be due to the "constructed" representations.

3. As the authors mention, the number of cells in the hidden layer has a strong influence on the performance of the model. It determines how much information can be passed through this layer, i.e. how detailed or generalizing the treatment of a single input can be.

4. The special structure of the net, with the hidden layer in common for the orthographic-to-phonological net and the orthographic-to-orthographic net, can be a reason for the model's generalization behavior in the simulation of the
lexical decision task. The representation of the information on the hidden layer has to take account of both the phonological and the orthographic properties of a word.

5. The authors stress the point that their model has no lexicon. But the orthographic-to-orthographic net is a device that reproduces a word from a string of letters. Due to the continuity it is somewhat robust against small perturbations; it will produce the correct output even if only partial information is given as input. Hence, with an appropriate functional definition of a lexicon, it is just a distributed implementation of a mental lexicon, including phonetic influences as described in the last remark (4).

6. The authors view their model as part of a larger model including layers for meaning and context. In the present implementation it is not visible how these additional components should be integrated. Hence the simulation of further processes such as priming is not possible.

7. Because of the feed forward structure of the net, there is no possibility to explain the influence of previous inputs or the stronger influence of a longer input. To the model it makes no difference whether the input is presented once or for a longer time.

4 A Dynamic Model of Word Recognition and Priming

The model outlined in 3.2.1 simulates reaction times by distances between patterns of activities on parts of a neural net and expected target patterns. It is assumed that larger distances result in longer times for the formation of correct patterns as input to the next component of the cognitive system. In the remaining part of the paper we shall outline some ideas how a simulation could work that uses the convergence of a dynamical system on a metric space to simulate word recognition processes.

4.1 Basic Assumptions

First some assumptions are listed that point out the basic principles of the dynamic model.

4.1.1. Cognition as Dynamical Process

The first assumption of the dynamical model is that cognitive processes are simulated by a dynamical system given by
the global function of a neural network. The cognitive states are represented by the configurations; the time course of the process is simulated by the iteration sequence. If the iteration sequence approaches a small attractor, for example a fixed point, this either corresponds to a stable cognitive state, for example the meaning of a word or image, or it is a constant or periodic input to other neural nets stimulating further processes. In both cases the central assumption is that only configuration sequences that have some stability over (a short period of) time can cause something to happen.

4.1.2. Learning: Detecting Regularities in the Input

The second basic idea is that the neural net is slowly but constantly changed by its input in such a way that co-occurring events are associated, i.e. the configurations resulting from frequent and frequently co-occurring events in the input of the system should be stable. This enables the net to "detect" regularities in its input (compare [7]). From the point of view of the dynamical system this means that, by changing the weights of the neural net, the attractors of the global function and their basins of attraction have to be changed in such a way that frequent input patterns become attractors.

4.1.3. Constantly Triggering Input

In contrast to 3.2.1 it is assumed that the grid has no pre-defined structure, especially no feed forward structure, but that the structure develops during learning. It should be not very densely connected and it should contain loops. Input is presented to the net in such a way that the input pattern is added to the activities of the input cells for several applications of the global function; i.e. the system is no longer an autonomous system, but is triggered by the external input. The input cells are only a small fraction of all cells of the net. From this fraction the influence of the input spreads out to the other cells.
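The triggering scheme of 4.1.3, together with the Hebbian association of 4.1.2, can be sketched with a small Hopfield-style net; the stored patterns, the weight construction, and the threshold dynamics are illustrative assumptions (the text does not fix a particular local function):

```python
# Two patterns are "learned" Hebbian-style; external input is then
# added to a few input cells for several update steps, pushing the
# configuration into the basin of the matching attractor.
N = 8
patterns = [[1, 1, 1, 1, -1, -1, -1, -1],
            [1, -1, 1, -1, 1, -1, 1, -1]]

# Hebbian weights: co-occurring values are associated (cf. 4.1.2).
W = [[0.0 if i == j else sum(p[i] * p[j] for p in patterns) / N
      for j in range(N)] for i in range(N)]

def step(x, external=None):
    """One application of the global function; the external input,
    if present, is added to the activities of the input cells."""
    s = [sum(W[i][j] * x[j] for j in range(N)) for i in range(N)]
    if external:
        for i, e in external.items():   # input cells only
            s[i] += e
    return [1 if si >= 0 else -1 for si in s]

x = [-1] * N                        # arbitrary start configuration
for _ in range(5):                  # triggered phase: drive input cells 0..2
    x = step(x, external={0: 2, 1: 2, 2: 2})
for _ in range(5):                  # free phase: autonomous iteration
    x = step(x)
# the net has settled on the first stored pattern
```

Driving the input cells for a few steps moves the configuration into the basin of the matching attractor; in the subsequent autonomous phase the net settles on the stored pattern, in line with 4.1.1.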
There it can match the existing patterns (of the previous attractor) or it can force them to change, moving the system to the basin of a different attractor. This constant triggering allows one, on the one hand, to control the duration and strength of the input; on the other hand, influences of previous inputs are preserved for a while to interact with new influences, as is necessary to simulate priming effects (compare 3.2.2.7).

4.1.4. Subnets and Modularity

The distributed representation of the processed information as a pattern on the cells of the grid allows complicated interactions, including modularization. It is possible that a subset of cells is strongly interconnected but has only a few connections to other cells. Such a subset or subnet could be called a module. It is also possible that the system converges for a while relatively independently on two such subnets towards sub-patterns of different attractors, and that later on conflicts arise between these two sub-patterns. For example there might be subnets incorporating "meaning" and "context", as proposed by [15]. In such a case the configuration coming from the (orthographic) input may converge on one part of the net (say meaning) to one attractor, but on the other part (context) it may converge to another attractor, because the surrounding information points toward a different interpretation. This may lead to a contradiction, and finally one of the two attractors will win.

The idea of shaping attraction basins is very powerful. It opens possibilities for the explanation of many effects in word recognition. On the other hand it is not yet in such a concrete state that any one of these explanations can be more than a hypothesis.

4.2 Simulation of Word Recognition Processes

In terms of this model the processes involved in naming, lexical decision, and priming can be described in the following way:

4.2.1. Naming

For the naming task the system has to stimulate the pronunciation of the written word. In a modular approach it is assumed that this is
done by the production of a phonological code, which in turn is the basis for the generation of a motor code that controls articulation. A comparable system is also possible for the dynamical model, as a cascade of neural nets, one stimulating the next as soon as it has reached a stable state (see also [2]). The dynamic model can explain several other phenomena: Frequent words are named faster, since their attractors are strong; regularly pronounced words are named faster, since their sequences of letters are more frequent and hence lead to faster convergence.

4.2.2. Lexical Decision

The lexical decision task requires distinguishing between character strings representing words and character strings that do not represent words. In general the words used for this purpose are well known, short, and frequent words of the native language of the subject. The non-word strings are constructed in such a way that they have the same length and that they are pronounceable. From 4.1.2 it should follow that there is no attractor for these strings, since they are new to the system and there is no meaning associated with them. Hence in those parts of the grid whose configurations represent meaning there should be no convergence. Of course there can be convergence just by chance, but that is equivalent to a wrong answer of a person.

4.2.3. Priming

Priming effects occur when the system is moved by the influence of the prime towards the attractor of the target: The input of the prime changes the configuration of the net in such a way that, if the following target is related to the prime, the configuration will already be closer to the attractor of the target than it was before the prime influenced the net. Hence the attractor is reached faster than without the prime.

4.2.3.1 Identity priming. If the target is the same word as the prime but written in lower case letters, while the prime was written in upper case letters, most of the patterns induced by the two strings will be the same. Hence the impact of the prime
on the net will be very similar to that of the target.

4.2.3.2 Semantic priming. If the prime and the target are semantically related, they appear more frequently together (see [18]). Hence they can lead to the same attractor concerning "meaning" and "context": the influence of the prime moves the system closer to an attractor that is in many respects also a possible attractor for the target.

4.2.3.3 Syntactic priming is based on frequent co-occurrence of words in language. According to 4.1.2 this should lead to faster convergence.

4.2.3.4 Graphemic priming is based on the similarity of character strings, i.e. the prime is a string of characters in which only very few characters are changed compared to the target. If the strings are entered by activating input cells that represent short sequences (tuples) of characters, most of these tuples will be the same in the prime and the target. Hence a weak form of identity priming will take place.

4.2.4. Priming with Ambiguous Words

Of special interest are experiments with ambiguous targets, i.e. letter strings that have several meanings. In general a semantic priming effect is observed only for the primary meaning, i.e. the more frequent meaning. If the prime has a strong impact towards the less frequent meaning (the secondary meaning), for example if a whole sentence is used to prime that meaning, the reaction is also faster. A closer analysis of the processes ([17]) shows that at first both meanings are activated according to their frequency. While the primary meaning quickly reaches a high availability, the availability of the secondary meaning grows more slowly. After about 300 ms the secondary meaning reaches nearly the same availability as the primary meaning. Afterwards its availability decreases again. These data could be explained by a process like that described in 4.1.4. First there is a relatively independent evolution of patterns on different parts of the net, one representing the primary meaning, one representing the secondary meaning. After a while the
developing patterns grow so large that they get into a conflict in which the pattern of the primary meaning suppresses that of the secondary meaning.

[Figure 4: Two ambiguous figures. Left, the so-called Necker cube: either vertex a or vertex b can be seen as being in front. The figure on the right can be seen either as two black faces or as a white candlestick.]

A similar process could cause the well known switching effects for ambiguous figures like those shown in Figure 4: The two meanings are represented by neighboring attractors of the dynamical system. The influence of additional information moves the system from the basin of one attractor to that of the other.

References

[1] Arbib, M. A. (1987). Brains, Machines, and Mathematics. Springer (expanded edition).
[2] Dell, G. (1986). A spreading-activation theory of retrieval in sentence production. Psychological Review 93(3), 283-321.
[3] Ferber, R. (1992). Neuronale Netze. In Jahrbuch Überblicke Mathematik, S. C. Chatterji, B. Fuchssteiner, U. Kulisch, R. Liedl, & W. Purkert, Eds. Vieweg, Braunschweig/Wiesbaden, pp. 137-157.
[4] Ferber, R. (1992). Vorhersage der Suchwortwahl von professionellen Rechercheuren in Literaturdatenbanken durch assoziative Wortnetze. In Mensch und Maschine - Informationelle Schnittstellen der Kommunikation (Proceedings ISI '92), H. H. Zimmermann, H.-D. Luckhardt, & A. Schulz, Eds. Universitätsverlag Konstanz, pp. 208-218.
[5] Gorfein, D. S., Ed. (1989). Resolving Semantic Ambiguity. Springer-Verlag.
[6] Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences USA 79, 2554-2558.
[7] Kohonen, T. (1988). Self-Organization and Associative Memory. Springer-Verlag, Berlin (second edition).
[8] Levelt, W. J. M. (1991). Die konnektionistische Mode. Sprache & Kognition 10(2), 61-72.
[9] McClelland, J. L., Rumelhart, D. E., & the PDP Research Group (1986). Parallel Distributed Processing: Explorations in the Microstructure of Cognition. The MIT Press, Cambridge, Massachusetts.
[10] McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5, 115-133.
[11] Minsky, M. L., & Papert, S. A. (1988). Perceptrons. The MIT Press, Cambridge, MA (expanded edition; first edition 1969).
[12] Palm, G. (1982). Neural Assemblies: An Alternative Approach to Artificial Intelligence. Springer.
[13] Ritter, H., Martinetz, T., & Schulten, K. (1990). Neuronale Netze. Addison-Wesley.
[14] Rosenblatt, F. (1962). Principles of Neurodynamics: Perceptrons. Spartan, Washington, DC.
[15] Seidenberg, M. S., & McClelland, J. L. (1989). A distributed, developmental model of word recognition and naming. Psychological Review 96(4), 523-568.
[16] Sereno, J. A. (1991). Graphemic, associative, and syntactic priming effects at a brief stimulus onset asynchrony in lexical decision and naming. Journal of Experimental Psychology: Learning, Memory, and Cognition 17(3), 459-477.
[17] Simpson, G. B., & Burgess, C. (1985). Activation and selection processes in the recognition of ambiguous words. Journal of Experimental Psychology: Human Perception and Performance 11, 28-39.
[18] Wettler, M., Rapp, R., & Ferber, R. (1993). Freie Assoziationen und Kontiguitäten von Wörtern in Texten. Zeitschrift für Psychologie 201.

Cognitive Learning Theory


Information-processing theory
visualization
Dual coding theory of memory
The theory hypothesizes that information is retained in long-term memory in two forms: visual and verbal.
Massed practice:
practicing newly learned information intensively until it is thoroughly learned

Distributed practice:
providing practice on newly learned knowledge and skills over an extended period of time, to increase the chances that the knowledge and skills will be retained
sensory register
Inspiration
(educational implication)
People must pay attention to information if they are to retain it.
Working Memory Store
Sensory Input
Elaboration
Elaborative rehearsal: thinking about, or elaborating on, the meaning of the information while rehearsing it ==> transfer to long-term memory (LTM)
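The flow these notes assume (sensory input -> sensory register -> working memory -> long-term memory, gated by attention and elaborative rehearsal) can be caricatured in a toy sketch; the item tuples and the all-or-none gating are illustrative assumptions, not part of the theory:

```python
# Toy sketch of the three-store information-processing model in these
# notes: items enter the sensory register, reach working memory only
# if attended, and transfer to long-term memory only if rehearsed
# elaboratively (maintenance rehearsal alone does not transfer).

def process(items):
    long_term_memory = set()
    for item, attended, elaborated in items:
        # sensory register: everything arrives, but decays immediately
        if not attended:
            continue              # not attended -> lost from the register
        # working memory: elaborative rehearsal transfers the item to LTM
        if elaborated:
            long_term_memory.add(item)
    return long_term_memory

stimuli = [("word", True, True),    # attended and elaborated -> retained
           ("noise", False, True),  # never attended -> lost
           ("digit", True, False)]  # attended, only maintained -> lost
assert process(stimuli) == {"word"}
```

The gating mirrors the two claims above: attention is necessary for retention, and elaboration is what moves material into long-term memory.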

Cognitive Approaches to Second Language Acquisition



McLaughlin cont’d

Learning involves restructuring representations of knowledge.
Provides one explanation for the U-shaped curve.
Provides one explanation for fossilization.
McLaughlin perceives his theory as a "partial account" of SLA, one which needs to be complemented by a linguistic theory that constrains the developing grammatical system.
ACT*

Some criticisms:

Most people agree that not all knowledge must initially be "declarative": knowledge seems also to be acquired "implicitly", through either language-specific mechanisms or other general cognitive systems (pattern recognition, etc.). How can you tell when a process is automatic?


Ex: learning to drive a car with a standard transmission
McLaughlin cont’d

The car example illustrates learning in this information-processing model

A Course in Linguistics, Chapter 6: Language and Cognition (slides)

(1) Access to words
• Steps involved in the planning of words:
• 1. a processing step called conceptualization, ……
• 2. selecting a word that corresponds to the chosen concept
• 3. morpho-phonological encoding
• Generally, morphemes are accessed in sequence, according to
• 1). Serial models ……
• 2). Parallel models ……
• Structural factors in the comprehension of sentences:
• 1). minimal attachment, which defines "structurally simpler"
• 2). "garden path" sentences
• Lexical factors in comprehension:
• Information about specific words is stored in the lexicon.
6.2.3 Language production

• Language production involves ……
• 1. generation of single words
• 2. generation of simple utterances
• Different mappings in language comprehension and in language production
• Discussions:
• A. production of words orally
• B. production of longer utterances
• C. the different representations and processes involved in spoken production

Cognitive Science versus Cognitive Ergonomics: A dynamic tension or an inevitable schism?

Jean-Michel Hoc, Pietro C. Cacciabue, and Erik Hollnagel (Eds.), Expertise and technology: Cognition and human-computer cooperation. Hillsdale, NJ: Erlbaum, 1995. 289 pp. ISBN 0-8058-1511-2. $45 hardback.

Review by Wayne D. Gray & Brian D. Ehret

In their foreword, the editors give a nod toward the "internationally based human-computer interaction (HCI) community, at the level of research as well as at the level of application" (p. xi). This is the community to which the reviewers belong, and as such we eagerly awaited this volume of chapters reporting on the work of the mainly European "large-system" cognitive ergonomics community. After witnessing first-hand the excitement and challenge of applying cognitive theory to HCI, we were more than a little curious as to the successes of and challenges to cognitive theory when it is applied to larger-scale applications. Alas, for the most part, we were disappointed. While this book contains several excellent chapters, on the whole we came away with the impression of a community that has isolated itself from mainstream cognitive theory and has little interest in testing or contributing to that theory. Indeed, it is not clear for whom, other than the large-systems community, this book is intended. It is heavily laden with jargon and makes little attempt to establish contact with any other tradition. In addition, the book is somewhat "user-unfriendly", containing a meager 2.5-page subject index and no author index (references are listed only at the end of the chapter in which they are cited). Although our review of the book is primarily negative, several of the chapters are quite good and would justify having your library buy it.

The book is organized into three sections: the first pertains to cognition in dynamic environments, the second addresses expertise, and the third deals with human-computer cooperation.
These sections are preceded by an overview chapter and followed by a conclusion chapter.

Chapter 1 by Hollnagel, Cacciabue, and Hoc organizes and summarizes the large-system perspective and provides pointers into the literature. Of particular note is the importance given to the role of models and simulations in understanding the complex interactions among humans, systems, and tasks.

Section 1 includes four chapters and seems intended to be the theoretical foundations section. Chapter 2, by Hoc, Amalberti, and Boreham, provides a very high-level discussion of diagnosis that could benefit from a lesser scope and concrete examples. Chapter 3, by Kjaer-Hansen, is a largely out-of-date review of "Unitary Theories of Cognitive Architectures." The two chapters by Cacciabue & Hollnagel (ch. 4) and Woods & Roth (ch. 5) would like to distinguish between their use of computational cognitive modeling and everyone else's. The nub of the distinction seems to be between models of "toy tasks" that the authors claim are based in cognitive theory and models of important, real-world tasks that the authors claim must eschew cognitive theory.

It is interesting that computational cognitive modeling has flourished in the HCI community by taking an approach opposite to that advocated here. In a tradition going back at least to Card, Moran, & Newell (1983), the HCI community has paid close attention to theories of what Cacciabue and Hollnagel refer to as "micro-"cognition, with the successful goal of applying such theories to real-world HCI tasks. In recent years the HCI community has embraced cognitive architectures such as Soar, ACT-R, and construction/integration with emerging success (Kirschenbaum, Gray, & Young, in press).

Although we do not like the distinctions made in these two chapters, we understand the authors' motivations and offer some distinctions of our own.
It is important to distinguish between modeling done for scientific or theoretical purposes versus that done for engineering purposes (the latter has been referred to in the HCI literature as “approximate modeling,” Card, et al., 1983). However, we maintain that approximate models can be built upon the foundations established by scientific or theoretical modeling. Modeling not based upon cognitive theory may work well for complex tasks as long as these tasks involve relatively simple cognition (such as much of expert systems, where the complexity is in the task, not in the head). However, there are dangers to this approach. The entire infrastructure is arbitrary and less constrained than one based upon a cognitive architecture. Also, if more than one cognitive mechanism is required (complex cognition), it is not clear whether the various mechanisms will be able to interact correctly. That degree of coordination would require an architecture. While we do not disagree with their goals, we wish our “large-systems” colleagues were more interested in drawing from and contributing to cognitive theory.

Section 2 looks at the development of competence and expertise. The section begins with a clearly written chapter by Boreham on expert-novice differences in medical diagnosis. The remaining chapters are less successful, tending to share three negative characteristics. First, they seem largely out of touch with the mainstream research on expertise as represented, for example, by the Ericsson & Smith (1991) collection of chapters. Second, many seem intent on developing domain-specific theories that make little contact with existing cognitive theory. Third, in their attempt to make theory-based, taxonomic distinctions, they neglect to include case studies and examples that would make these distinctions concrete.

Section 3 turns to “Cooperation between humans and computers” and contains the best and worst chapters in the book.
In chapter 10, Benchekroun, Pavard, & Salembier present an interesting use of cognitive modeling to predict the influence of a new software system on the communication efficiency of an emergency center. Chapter 11, by Moray, Hiskes, Lee, & Muir, is a lovely chapter that shows the application of the social psychology construct of “trust” to human-machine interaction. We left this chapter inspired to read more of the literature on process control.

Chapter 12, by Rizzo, Ferrante, & Bagnara, presents a collection of categories and anecdotes on human error. Chapter 13, by Millot & Mandiau, promises to compare the distributed AI (DAI) approach and the “more pragmatic human engineering approach” (p. 215) to “implementing a cooperative organization” (p. 215). The experiment presented to this end seems poorly motivated, or maybe just poorly explained.

Hollnagel’s chapter 14 sheds much heat and smoke but little light on a number of tangential issues while demonstrating a lack of understanding for much of contemporary cognitive theory. For example, on page 230 he talks about “the useless automaton analogy” and “A particular case is the use of the information processing metaphor (Newell & Simon, 1972) - or even worse, assuming that a human being is an information processing system (as exemplified by Simon, 1972; Newell, 1990).” Later on the same page he says,

I will not argue that the automaton analogy is ineffectual as a basis for describing human performance per se; I simply take that for granted. (This point of view is certainly not always generally accepted and often not even explicitly stated, for instance, by the mainstream of American Cognitive Science; it is nevertheless a view which is fairly easy to support.)

On page 240 we are subjected to a rather glib and unmotivated, “the lack of proven theories or methods is deplorable. . . .
There are many practitioners, and they all have their little flock of faithful followers” and another, in a similar vein, about AI work on adaptive systems. While we believe that theories exist to be challenged, we also believe that in the scientific community challengers need to substantiate their assertions. Indeed, not only is such substance missing, but in the pages that follow Hollnagel proposes a theoretical explanation that sounds in keeping with the Newell and Simon account.

We had a better time with the next two chapters. Both Boy in chapter 15 and Lind & Larsen in chapter 16 embrace contemporary cognitive theory with interesting results. While clearly discussed, Boy’s theory nevertheless remains vague due to the lack of a worked example. Lind and Larsen work us through a detailed example of one of their multilevel flow models.

The summary chapter is generally well written but, for us, Hollnagel’s smoke (for example, pp. 281-282 and his unmotivated attacks on Simon, page 284) obscures any light the chapter may have been intended to shed.

Conclusions

If the goal of this book was to communicate large-systems cognitive ergonomics to a larger community, then we judge that it has missed its mark. While there are several interesting (e.g., chapters 6, 10, 11, 15, and 16) and intellectually stimulating (e.g., chapters 4 and 5) chapters, the majority are neither. Some seem intended as “in-house” communications, while others build domain-specific theories with numerous abstract theoretical distinctions without ever embedding the distinctions in examples. Worse still, others confuse unsubstantiated assertions and innuendos for intellectual discourse.

Cognitive theory is far from sacrosanct. Indeed, in recent years the dynamism of mainstream cognitive theory has been shown by its adaptation and incorporation of the connectionist challenge from below and its recent response to the challenge of situated action from above (e.g., Vera & Simon, 1993).
We firmly believe that applied cognition must be based upon cognitive theory. Any other approach runs one of two risks: either the applied endeavour becomes bogged down with constructing task-specific theories, or it tends to the vacuous empiricism that is the bane of much human factors work.

References

Card, S. K., Moran, T. P., & Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.

Ericsson, K. A., & Smith, J. (Eds.). (1991). Toward a general theory of expertise: Prospects and limits. New York: Cambridge.

Kirschenbaum, S. A., Gray, W. D., & Young (in press, 1996). Cognitive architectures for human-computer interaction. SIGCHI Bulletin.

Vera, A. H., & Simon, H. A. (1993). Situated action: A symbolic interpretation. Cognitive Science, 17(1), 7-48.

04_CBI_Curriculum


• Lee & VanPatten: Atlas Complex – Whose responsibility is it to learn…? Most instructors “assume that their principal task is one of improving the ways in which they express their expertise…” In moving away from teacher-fronted to teacher-assessed interactions, instructors will necessarily behave in a less Atlas-like way (Lee & VanPatten, 2002).
• Vygotsky: higher-order cognitive functions are culturally mediated by the signs and artifacts emergent of practical activity.
  - Social Semiotic Theory
  - Signs
  - Activity Theory
  - Zone of Proximal Development
  - Distributed Cognition
  - Dialogic Learning
  - Metacognition
- Mastering skills
- Input/output
- Transmission of message
- Filling in information gap
- Understanding multiple signs
- Scaffolding
- Collaborative dialogue
- Relating self to others
- Negotiation of meaning

English essay: The benefits of AI for humanity

Artificial Intelligence: The Boon for Humanity

Artificial Intelligence (AI) has been a topic of great interest and debate in recent years, as its impact on various aspects of our lives becomes increasingly prominent. As we delve deeper into the realm of technological advancements, it becomes evident that AI is not merely a futuristic concept, but rather a tangible reality that is shaping the world around us. In this essay, we will explore the myriad benefits that AI offers to humanity, highlighting its potential to revolutionize our lives and pave the way for a brighter future.

One of the most significant advantages of AI is its ability to enhance human productivity and efficiency. By automating a wide range of tasks, AI can free up valuable time and resources, allowing individuals and organizations to focus on more complex and strategic endeavors. AI-powered software and algorithms can handle repetitive and time-consuming tasks with unprecedented speed and accuracy, reducing the likelihood of human error and improving overall workflow. This increased efficiency can lead to significant cost savings, increased revenues, and a more streamlined decision-making process, ultimately benefiting both businesses and individuals.

Another remarkable aspect of AI is its potential to revolutionize healthcare. AI-powered diagnostic tools can analyze vast amounts of medical data, including patient histories, medical imaging, and laboratory results, to provide more accurate and timely diagnoses. This can lead to earlier detection of diseases, enabling healthcare professionals to intervene and provide treatment more effectively. Furthermore, AI-driven personalized medicine can tailor treatments to an individual's unique genetic makeup, improving the efficacy of medical interventions and reducing the risk of adverse reactions.
The integration of AI in healthcare has the power to save lives, improve patient outcomes, and alleviate the burden on overstretched healthcare systems.

In the realm of education, AI can be a transformative force. AI-powered adaptive learning platforms can personalize the learning experience for each student, adjusting the pace and content based on their individual needs and progress. This can lead to more engaging and effective learning, catering to the unique learning styles and abilities of each student. Additionally, AI can assist educators in grading assignments, providing feedback, and identifying areas where students may require additional support, allowing teachers to focus more on guiding and nurturing their students. The integration of AI in education has the potential to enhance learning outcomes, reduce educational disparities, and provide students with a more enriching and accessible educational experience.

Furthermore, AI can play a crucial role in addressing some of the most pressing global challenges we face today. For instance, AI-powered systems can aid in the development of renewable energy solutions, improving energy efficiency, and optimizing the distribution of resources. AI algorithms can also assist in the analysis of environmental data, enabling more accurate predictions of natural disasters and the development of proactive mitigation strategies. This can lead to the protection of lives, the preservation of critical infrastructure, and the mitigation of the long-term consequences of environmental degradation.

Another area where AI can have a profound impact is in the realm of accessibility and inclusion. AI-powered assistive technologies can empower individuals with disabilities, providing them with tools and solutions that enhance their independence, mobility, and communication abilities.
For example, AI-driven voice recognition and text-to-speech technologies can enable individuals with visual impairments to access digital content more easily, while AI-powered prosthetics can restore mobility and dexterity to those with physical disabilities. By leveraging the capabilities of AI, we can create a more inclusive and accessible world, empowering individuals and ensuring that no one is left behind.

Perhaps one of the most intriguing aspects of AI is its potential to enhance and complement human intelligence. AI systems can process and analyze vast amounts of data, identify patterns, and generate insights that may elude human cognition. When combined with human expertise and decision-making, AI can amplify our problem-solving capabilities, leading to more informed and effective decision-making. This symbiotic relationship between human and artificial intelligence can unlock new frontiers of discovery, innovation, and progress, paving the way for breakthroughs in fields as diverse as scientific research, technological development, and artistic expression.

It is important to acknowledge that the integration of AI into our lives also comes with its own set of challenges and concerns. Issues such as data privacy, algorithmic bias, and the potential displacement of human labor must be carefully addressed. However, these challenges are not insurmountable, and with thoughtful governance, ethical frameworks, and collaborative efforts, we can harness the power of AI while mitigating its risks and ensuring its benefits are equitably distributed.

In conclusion, the benefits of AI for humanity are far-reaching and profound. From enhancing productivity and efficiency to revolutionizing healthcare, education, and global problem-solving, AI holds the promise of transforming our lives in ways that were once unimaginable.
As we continue to embrace and explore the capabilities of AI, it is essential that we do so with a deep sense of responsibility, ensuring that the technological advancements we create serve the greater good of humanity. By harnessing the power of AI and aligning it with our values and aspirations, we can build a future that is more prosperous, inclusive, and sustainable for all.



Selected English Reference Formats: Three Examples

Example 1:

[1] a, b, c, d, and e. TITLE, JOURNAL, vol. 5, no. 5, pp. 5–7, May 2010.

In the format above, the author field is written as follows. One author: a. Two authors: a and b. Three authors: a, b and c. Four or five authors follow the same pattern. More than five authors: a, b, c, d, e, and et al.

The month abbreviations are: Jan. Feb. Mar. Apr. May June July Aug. Sept. Oct. Nov. Dec.

IEEE Trans. is written as, for example: IEEE Trans. Electron Devices; IEEE Int. Conf. Communications.

Some journals instead require the following format:

[1] Duan X Y, Li S, Lucy G, and Jim G. TITLE. Journal of *, 2012, 3(22): 14-26.

As can be seen, Chinese authors' names are written with the surname spelled out in full and capitalized at the start, while the given name, whether two or three characters, is reduced to capitalized initials.

The volume and issue are written together, with "vol." and "no." omitted and the issue number enclosed in parentheses.

As noted above, only common examples are listed here; requirements vary from journal to journal, so simply follow the target journal's guidelines.
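The author-list rules described above for the IEEE-style format can be sketched as a small helper function. This is an illustrative sketch, not part of any real library; the function name `ieee_author_list` is our own invention, and it follows the examples given in the text literally (no comma before "and" for three to five authors, but a comma before "and et al." for more than five).

```python
def ieee_author_list(authors):
    """Format an author list per the IEEE-style rules described above:
    1 author -> "a"; 2 -> "a and b"; 3-5 -> "a, b and c" pattern;
    more than 5 -> first five names followed by ", and et al."
    """
    n = len(authors)
    if n == 1:
        return authors[0]
    if n == 2:
        return authors[0] + " and " + authors[1]
    if n <= 5:
        # comma-separated, with " and " (no comma) before the last name
        return ", ".join(authors[:-1]) + " and " + authors[-1]
    # more than five: keep the first five, then ", and et al."
    return ", ".join(authors[:5]) + ", and et al."
```

For instance, `ieee_author_list(["a", "b", "c"])` yields "a, b and c", matching the three-author example in the text.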

Example 2. English (examples):

[01] Brown, H. D. Teaching by Principles: An Interactive Approach to Language Pedagogy[M]. Prentice Hall Regents, 1994.
[02] Brown, J. S. et al. Situated Cognition and the Culture of Learning[J]. Educational Researcher, 1, 1989.
[03] Chris, Dede. The Evolution of Constructivist Learning Environments: Immersion in Distributed Virtual Worlds[J]. Educational Technology, Sept-Oct, 1995.
[04] Hymes, D. On communicative competence[M]. J. B. Pride; J. Holmes (eds). Sociolinguistics. Harmondsworth: Penguin, 1972.
[05] L. E. Sarbaugh. Intercultural communication[M]. New Brunswick, N.J., U.S.A: Transaction Books, 1988.
[06] Puhl, A. Classroom Assessment[J]. English Teaching Forum, 1997.
[07] Thomas, Jenny. Cross-cultural Pragmatic Failure[J]. Applied Linguistics, 1983, (4): 91-111.
[08] William B Gudykunst. Intercultural communication theory[M]. Beverly Hills, CA: Sage Pub, 1983.

Example 3. English references in APA format:

Book by a single author: Surname, Initial(s). (Year). Title (italicized). City of publication: Publisher.
Sheril, R. D. (1956). The terrifying future: Contemplating color television. San Diego: Halstead.

Book by two or more authors: Surname, Initial(s)., & Surname, Initial(s). (Year). Title (italicized). City of publication: Publisher.
Smith, J., & Peter, Q. (1992). Hairball: An intensive peek behind the surface of an enigma. Hamilton, ON: McMaster University Press.

Article in an edited collection:
Mcdonalds, A. (1993). Practical methods for the apprehension and sustained containment of supernatural entities. In G. L. Yeager (Ed.), Paranormal and occult studies: Case studies in application (pp. 42–64). London: OtherWorld Books.

Journal article (non-continuous pagination):
Crackton, P. (1987). The Loonie: God's long-awaited gift to colourful pocket change? Canadian Change, 64(7), 34–37.

Journal article (continuous pagination): Surname, Initial(s). (Year). Title. Journal name (italicized), volume, pages.
Rottweiler, F. T., & Beauchemin, J. L. (1987). Detroit and Narnia: Two foes on the brink of destruction. Canadian/American Studies Journal, 54, 66–146.

Article in a monthly magazine:
Henry, W. A., III. (1990, April 9). Making the grade in today's schools. Time, 135, 28-31.
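The APA journal-article pattern above (volume alone for continuous pagination, volume(issue) for non-continuous) can likewise be sketched as a small formatter. This is a minimal illustration under the patterns shown in the text; `apa_journal_article` is a hypothetical helper name, and italics are ignored since this is plain text.

```python
def apa_journal_article(authors, year, title, journal, volume, pages, issue=None):
    """Build "Authors (Year). Title. Journal, volume, pages." as shown above.
    When an issue number is given (non-continuous pagination), it is
    appended to the volume in parentheses: volume(issue)."""
    vol = "{}({})".format(volume, issue) if issue is not None else str(volume)
    return "{} ({}). {}. {}, {}, {}.".format(authors, year, title, journal, vol, pages)
```

For example, passing the Rottweiler & Beauchemin entry through this function reproduces the continuous-pagination line shown above, while adding `issue=7` produces the `64(7)` form of the Crackton entry.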

English essay (first-year junior high): The pros and cons of AI

Artificial intelligence (AI) has become an increasingly prevalent force in our modern world, permeating various aspects of our lives and transforming the way we live, work, and interact. As with any technological advancement, AI brings with it a myriad of both benefits and drawbacks that must be carefully considered. In this essay, we will delve into the multifaceted nature of AI, exploring its potential advantages and disadvantages.

One of the primary advantages of AI is its ability to augment and enhance human capabilities. AI-powered systems can perform tasks with remarkable speed, accuracy, and efficiency, far surpassing the limitations of human cognition. This has revolutionized numerous industries, from healthcare to finance, where AI-driven algorithms can analyze vast amounts of data, identify patterns, and make informed decisions with remarkable precision. In the medical field, for instance, AI-powered diagnostic tools can assist healthcare professionals in early detection of diseases, leading to more timely and effective interventions. Similarly, in the financial sector, AI-based trading algorithms can make split-second decisions, responding to market fluctuations with a level of agility and adaptability that would be impossible for human traders.

Moreover, AI has the potential to improve the quality of life for individuals by automating routine tasks and relieving humans of tedious, repetitive work. This frees up time and mental resources, allowing people to focus on more creative, fulfilling, and meaningful endeavors. Household tasks, such as scheduling, cleaning, and home security, can be streamlined through the use of AI-powered smart home technologies, providing greater convenience and a higher quality of life.
Similarly, in the workplace, AI-driven automation can handle administrative duties, data entry, and other mundane tasks, enabling employees to devote their energies to more strategic and innovative projects.

Another significant advantage of AI is its ability to enhance human decision-making. By processing vast amounts of data and identifying complex patterns, AI systems can provide valuable insights and recommendations that can inform and guide human decision-making processes. This is particularly useful in fields where the sheer volume of information can be overwhelming for individuals, such as in scientific research, policy analysis, and strategic planning. AI-powered tools can sift through massive datasets, uncover hidden connections, and present actionable insights that support more informed and data-driven decisions.

However, the rise of AI also brings with it a number of potential drawbacks and challenges that must be addressed. One of the primary concerns is the potential displacement of human workers due to the automation of various tasks and jobs. As AI-powered systems become more sophisticated and capable, they may replace human labor in a wide range of industries, leading to job losses and economic disruption. This can have profound social and economic consequences, particularly for those in low-skilled or routine-based occupations. Policymakers and stakeholders must work collaboratively to develop strategies that mitigate the negative impact of AI-driven automation on employment and ensure a smooth transition to a more technologically advanced workforce.

Another significant concern surrounding AI is the issue of bias and ethical considerations. AI systems are ultimately designed and trained by human beings, and as such, they can inherit and amplify the biases and prejudices present in the data used to train them.
This can lead to discriminatory outcomes, where AI-powered decisions and recommendations perpetuate existing societal biases against certain groups or individuals. Addressing this challenge requires a concerted effort to develop AI systems with robust ethical frameworks, transparency, and accountability mechanisms to ensure that they are aligned with principles of fairness, non-discrimination, and respect for human rights.

Additionally, the increasing reliance on AI-powered systems raises concerns about privacy and data security. As AI applications collect and process vast amounts of personal data, there is a heightened risk of data breaches, unauthorized access, and the potential misuse of sensitive information. Robust data governance frameworks, strong cybersecurity measures, and clear privacy policies must be implemented to protect individuals' right to privacy and safeguard their personal data from malicious actors.

Furthermore, the rapid development and deployment of AI technologies have raised questions about the long-term implications and potential risks of advanced AI systems. Concerns have been raised about the possibility of AI systems becoming increasingly autonomous and self-improving, potentially leading to unintended consequences or even existential risks for humanity. While these concerns may seem speculative, it is crucial that policymakers, researchers, and the public engage in ongoing discussions and proactive measures to ensure the responsible and ethical development of AI technologies.

In conclusion, the advent of AI presents a complex landscape of both advantages and disadvantages. On the one hand, AI has the potential to enhance human capabilities, improve quality of life, and inform more effective decision-making. On the other hand, the challenges posed by AI, such as job displacement, bias, privacy concerns, and long-term existential risks, require careful consideration and proactive measures to address.
As we continue to harness the power of AI, it is essential that we do so with a deep sense of responsibility, ethical reflection, and a commitment to ensuring that the benefits of this transformative technology are equitably distributed and its risks are mitigated. By striking a balance between the opportunities and the challenges of AI, we can work towards a future where this technology empowers and enriches our lives, rather than undermining our well-being or posing existential threats.


ABSTRACT: Among the many contested boundaries in science studies is that between the cognitive and the social. Here, we are concerned to question this boundary from a perspective within the cognitive sciences based on the notion of distributed cognition. We first present two of many contemporary sources of the notion of distributed cognition, one from the study of artificial neural networks and one from cognitive anthropology. We then proceed to reinterpret two well-known essays by Bruno Latour, ‘Visualization and Cognition: Thinking with Eyes and Hands’ and ‘Circulating Reference: Sampling the Soil in the Amazon Forest’. In both cases we find the cognitive and the social merged in a system of distributed cognition without any appeal to agonistic encounters. For us, results do not come to be regarded as veridical because they are widely accepted; they come to be widely accepted because, in the context of an appropriate distributed cognitive system, their apparent veracity can be made evident to anyone with the capacity to understand the workings of the system.

Keywords: Bruno Latour, cognitive versus social, distributed cognition, Edwin Hutchins

Distributed Cognition: Where the Cognitive and the Social Merge
Ronald N. Giere and Barton Moffatt

Social Studies of Science 33/2 (April 2003) 1–10
© SSS and SAGE Publications (London, Thousand Oaks CA, New Delhi)
[0306-3127(200304)33:2;1–10;035120]
www.sagepublications.com

Among the many contested boundaries in science studies is that between the cognitive and the social. One of the most notorious invocations of this boundary occurred in Latour & Woolgar's postscript to the second edition of Laboratory Life [(1986): 280], where they proposed 'a ten-year moratorium on cognitive explanations of science' and promised 'that if anything remains to be explained at the end of this period, we too will turn to the mind!' Here the cognitive and the social are presented as binary opposites, with the social clearly dominant.[1]

Appeals to the cognitive aspects of science occur in several very different contexts. One is found primarily in Anglo-American analytic philosophy of science, where 'cognitive' is associated with concepts such as 'normative' and 'rational', both being intended in a substantive rather than a merely instrumental sense. We mention this philosophical context only to make clear that this is not the notion of the cognitive we are concerned with here. We are concerned, rather, with the empirical conception of the cognitive as it appears in the context of the cognitive sciences, where the focus is on the mechanisms of such cognitive capacities as, for example, vision, memory, language production and comprehension, judgment, and motor control.

Even from the perspective of the cognitive sciences, however, opposition between the cognitive and the social has been pervasive. One of us once published a reply in Social Studies of Science under the title 'The Cognitive Construction of Scientific Knowledge', intending an explicit contrast with the social construction of scientific knowledge.[2] That paper was quite tolerant in not insisting that the cognitive story could be the whole story about any scientific episode. Nevertheless, any social story was viewed as distinct from, and, indeed, supplementary to, the cognitive story. This has been pretty much the standard view among those who have explicitly pursued a 'cognitive approach' to science studies.[3]

Rejection of a sharp boundary between the cognitive and the social can now be found in several quarters, including even philosophy, where 'social epistemology' is becoming a recognized category.[4] Here we will be concerned only with a questioning of the boundary coming from within the cognitive sciences. This questioning consists of an inquiry into forms of distributed cognition.

Distributed Cognition

We will consider just two of many contemporary sources of the notion of distributed cognition within the cognitive sciences. The first is due to McClelland, Rumelhart and their associates in the Parallel Distributed Processing Group in San Diego, CA, USA, during the early 1980s. Among many other things, this group explored the capabilities of networks of simple processors thought to be at least somewhat similar to neural structures in the human brain. It was discovered that what such networks do best is recognize and complete patterns in input provided by the environment. The generalization to human brains is that people recognize patterns through the activation of prototypes embodied as changes in the activity of groups of neurons induced by sensory experience. But if something like this is correct, how do people do the kind of linear symbol processing required for activities such as using language and doing mathematics?

The answer given by McClelland et al. was that people do the kind of cognitive processing required for these linear activities by creating and manipulating external representations. These latter tasks can be done by a complex pattern matcher. Consider the following simple example introduced by McClelland et al. [(1986): 44–48]. Try to multiply two three-digit numbers, say 456 × 789, in your head. Few people can do even this very simple arithmetical operation in their heads. Here is how many of us learned to do it:
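The schoolbook procedure the authors have in mind can be sketched in code (a minimal illustration, not from the paper): each digit-by-digit step is simple enough for a pattern matcher, while the partial products are written down as external digit strings and only then combined.

```python
def long_multiply(a: str, b: str) -> str:
    """Schoolbook multiplication of two digit strings: one partial
    product per digit of b, shifted one column left each row, then summed."""
    partials = []
    for i, digit in enumerate(reversed(b)):
        carry, row = 0, []
        for d in reversed(a):
            prod = int(d) * int(digit) + carry  # single-digit fact + carry
            row.append(str(prod % 10))
            carry = prod // 10
        if carry:
            row.append(str(carry))
        # the shift is recorded externally as trailing zeros on the row
        partials.append("".join(reversed(row)) + "0" * i)
    # the final step operates only on the written-down partial products
    return str(sum(int(p) for p in partials))

print(long_multiply("456", "789"))  # 359784
```

Each intermediate row ('4104', '36480', '319200') exists only on the page, not in the head, which is the sense in which the cognition is distributed across person and external representation.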
