cms---kluckhohn-strodtbeck-cultural-dimensions (Organizational Culture Dimensions)

Kluckhohn and Strodtbeck's value orientation

Kluckhohn and Strodtbeck, after examining hundreds of cultures, reached the conclusion that people turn to their culture for answers to the following questions.

(1) What is the character of human nature? (2) What is the relation of humankind to nature? (3) What is the orientation toward time? (4) What is the value placed on activity? And (5) what is the relationship of people to each other? The answers to these crucial questions serve as the bases for the five value orientations that are at the heart of their approach. These five orientations might best be visualized as points on a continuum. As you move through these five orientations, you will undoubtedly notice some of the same characteristics discussed by Hofstede.

This is very understandable because both approaches are talking about meaningful values found in all cultures. Hence, both sets of researchers were bound to track many of the same patterns.

Human Nature Orientation
1. evil
2.

Heilongjiang Province 2023 English A-Level Exam Questions and Answers

黑龙江省2023年英语a级考试真题及答案全文共3篇示例,供读者参考篇1Black Dragon River Province 2023 English A-Level Exam Questions and AnswersSection A: Reading ComprehensionRead the following article and answer the questions below:Title: The Beauty of Black Dragon RiverLocated in the northeastern part of China, Black Dragon River Province is known for its stunning natural beauty and rich cultural heritage. The province is home to a diverse range of landscapes, including lush forests, majestic mountains, and crystal-clear rivers.One of the most popular attractions in Black Dragon River Province is the Black Dragon River itself, which meanders through the picturesque countryside. The river is famous for its clear blue waters and tranquil atmosphere, making it a favorite spot for locals and tourists alike.In addition to its natural beauty, Black Dragon River Province is also known for its vibrant cultural scene. The province is home to a number of traditional villages where visitors can experience the rich cultural traditions of the region. Local artisans create beautiful handicrafts, such as intricately carved woodwork and brightly colored textiles.Overall, Black Dragon River Province offers visitors a unique blend of natural beauty and cultural richness. Whether you're interested in exploring the great outdoors or immersing yourself in the local culture, this province has something for everyone.Questions:1. Where is Black Dragon River Province located?2. What is the Black Dragon River famous for?3. What can visitors experience in the traditional villages of Black Dragon River Province?4. What makes Black Dragon River Province a unique destination for tourists?Answers:1. Black Dragon River Province is located in the northeastern part of China.2. The Black Dragon River is famous for its clear blue waters and tranquil atmosphere.3. In the traditional villages of Black Dragon River Province, visitors can experience the rich cultural traditions of the region, including beautifully crafted handicrafts.4. Black Dragon River Province offers a unique blend of natural beauty and cultural richness, making it an ideal destination for tourists.Section B: WritingWrite an essay of at least 500 words on the topic "The Importance of Preserving Cultural Heritage."In your essay, be sure to discuss:- Why is it important to preserve cultural heritage?- How can we promote the preservation of cultural heritage?- What are the benefits of preserving cultural heritage for future generations?Section C: Listening ComprehensionListen to the following audio clip and answer the questions below:Audio Clip: The History of Black Dragon River ProvinceQuestions:1. What is one of the main attractions in Black Dragon River Province?2. What is the Black Dragon River famous for?3. How does the speaker describe the cultural scene in Black Dragon River Province?Answers:1. One of the main attractions in Black Dragon River Province is the Black Dragon River itself.2. The Black Dragon River is famous for its clear blue waters.3. The speaker describes the cultural scene in Black Dragon River Province as vibrant and rich.Section D: Grammar and VocabularyComplete the following sentences with the correct grammar and vocabulary:1. I __________ (visit) Black Dragon River Province last summer and I __________ (be) amazed by its beauty.2. The local artisans __________ (create) beautiful handicrafts for generations.3. It is important to __________ (preserve) cultural heritage for future generations to enjoy.Answers:1. visited, was2. have been creating3. 
preserveEnd of ExamWe hope you enjoyed this exam on Black Dragon River Province! Good luck with your studies and future travels.篇2Black Dragon River Province, located in the northeastern part of China, is known for its beautiful scenery, rich history, and diverse culture. In 2023, the province hosted the A-level English exam, which has become a topic of interest among students and educators. The exam tested students' proficiency in various aspects of the English language, including grammar, vocabulary, reading comprehension, and writing skills.Here are the 2023 A-level English exam questions and answers:Grammar Section:1. Complete the sentence: If I ________ more time, I would have finished the project.Answer: had2. Choose the correct form of the verb to complete the sentence: She ________ to Japan last summer.A) goesB) wentC) is goingD) will goAnswer: B) went3. Fill in the blank with the appropriate pronoun: My sister and ________ went to the store.Answer: IVocabulary Section:1. Choose the synonym for "ubiquitous":A) rareB) commonC) uniqueD) exceptionalAnswer: B) common2. What is the opposite of "vibrant"?A) dullB) livelyC) energeticD) colorfulAnswer: A) dullReading Comprehension Section:Read the following passage and answer the questions that follow:"Mount Changbai, also known as Baekdu Mountain, is a majestic peak located on the border between China and North Korea. It is famous for its stunning natural beauty and rich biodiversity. The mountain is also considered sacred by bothChinese and Korean people, with many myths and legends surrounding it. Visitors can enjoy hiking, skiing, and exploring the volcanic landscape of Mount Changbai."1. Where is Mount Changbai located?Answer: Mount Changbai is located on the border between China and North Korea.2. What is Mount Changbai famous for?Answer: Mount Changbai is famous for its stunning natural beauty and rich biodiversity.Writing Section:Write an essay on the following topic:"Describe your dream vacation destination and why you would like to visit it."Answer: My dream vacation destination is the Maldives, a tropical paradise in the Indian Ocean. I have always been fascinated by the crystal-clear waters, white sandy beaches, and luxurious resorts of the Maldives. I would love to spend my days relaxing on the beach, snorkeling in the vibrant coral reefs, and experiencing the warm hospitality of the local people. The Maldives is a place where I can escape from the hustle and bustleof daily life and unwind in a peaceful and tranquil environment. I am drawn to the beauty and serenity of this exotic destination, and I hope to one day make my dream vacation a reality.Overall, the 2023 A-level English exam in Black Dragon River Province challenged students to demonstrate their English language skills in a variety of areas. The exam questions were designed to assess students' understanding of grammar, vocabulary, reading comprehension, and writing abilities. As students prepare for future exams, they can use these sample questions and answers as a guide to improve their English proficiency. Good luck to all the students taking the A-level English exam in the coming years!篇32023 English A-Level Exam Questions and Answers in Heilongjiang ProvincePaper 1: Reading ComprehensionPart A: Multiple Choice QuestionsRead the following passage and answer the questions that follow:The English language has become an essential skill in today's globalized world. It is spoken by over a billion people around the globe, and is the official language in many countries. 
Learning English opens up a world of opportunities, from studying abroad to working in multinational companies.1. What is the main idea of the passage?A. English is an important language to learn.B. English is only spoken in a few countries.C. Learning English has no benefits.D. English is a difficult language to learn.2. How many people speak English worldwide?A. Half a billionB. A billionC. Two billionD. Three billion3. What opportunities can learning English provide?A. Working only in local companiesB. Studying abroadC. Learning only one languageD. Reading foreign literaturePart B: MatchingMatch the words with their definitions:1. Resilient2. Conscientious3. Adaptable4. Tenaciousa. Able to adjust easily to new conditionsb. Showing great care and attention to detailc. Strong and able to recover from difficult situationsd. Holding onto something firmly or persistentlyPaper 2: WritingIn no less than 300 words, write an essay on the importance of learning a second language. Include examples and personal experiences to support your argument.Answer Key:Part A:1. A. English is an important language to learn.2. B. A billion3. B. Studying abroadPart B:1. C. Resilient2. B. Conscientious3. A. Adaptable4. D. TenaciousWe hope that these questions and answers help you in preparing for the English A-Level exam in Heilongjiang Province in 2023. Good luck on your exam!。

思途旅游CMS Tag Reference: Front-End Template Secondary Development Documentation

思途CMS Tag Reference. This document describes the function and usage of the system tags. System tags are stored under the include/taglib/smore/ directory and are named in the format tagname.lib.php.

1. attrgrouplist
Purpose: reads the attribute-group list for tours, hotels, car rentals, attractions, articles, photo albums, and group-buy products. It is normally used together with getattrbygroup on search list pages to display the attributes of the corresponding channel.

Parameters:
typeid: the channel ID whose attributes to load (tour: 1, hotel: 2, car rental: 3, article: 4, attraction: 5, photo album: 6, group buy: 13)
filterid: the attribute-group ID(s) to exclude; separate multiple IDs with commas.

row: the number of records to fetch.

Example: this tag is generally used on search list pages, for instance to load the tour attribute groups for display: typeid=1 reads the tour attribute groups, and filterid='91' excludes the attribute group whose ID is 91. Attribute-group IDs can be looked up on the attribute-group management page in the back end; a sketch of such a call is shown below.
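The screenshot that originally accompanied this example is not reproduced here. As a rough sketch only, and assuming attrgrouplist uses the same {sline:...} block syntax as the getattrbygroup examples later in this document (the [field:title/] and [field:id/] fields inside the loop are assumptions that should be checked against the tag's implementation in include/taglib/smore/attrgrouplist.lib.php):

{sline:attrgrouplist typeid='1' filterid='91' row='10'}
<a data-id="[field:id/]">[field:title/]</a>
{/sline:attrgrouplist}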

2. getattrbygroup
Purpose: reads the attribute list of a given attribute group, identified by its group ID or group name. This tag is generally used together with attrgrouplist to read the information of several attribute groups quickly.

Parameters:
groupname: the name of the attribute group, e.g. “旅行方式” (travel style)
typeid: same as above
groupid: the attribute-group ID.

row: the number of records to fetch.

Template parameters available inside the tag:
[field:title/]: the name of the current attribute
[field:id/]: the ID of the current attribute

Examples:
1. To load the attribute list of the tour attribute group “交通选择” (transport options) on its own, use the following code:

{sline:getattrbygroup typeid='1' groupname='交通选择'}<a data-id="[field:id/]">[field:title/]</a>{/sline:getattrbygroup}

The same effect can also be achieved with groupid:

{sline:getattrbygroup typeid='1' groupid='84'}<a data-id="[field:id/]">[field:title/]</a>{/sline:getattrbygroup}

The groupid value can be found in the attribute configuration page of the corresponding channel in the back end.
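Because the two tags are described as working together, the following combined sketch may be useful. It is an assumption based only on the parameters documented above, not an example taken from the official documentation: whether the inner getattrbygroup call can receive the outer loop's [field:id/] as its groupid depends on how the tag engine parses nested tags, so treat this purely as an illustration of the intended pairing.

{sline:attrgrouplist typeid='1' filterid='91'}
<div class="attr-group">
<h4>[field:title/]</h4>
{sline:getattrbygroup typeid='1' groupid='[field:id/]'}
<a data-id="[field:id/]">[field:title/]</a>
{/sline:getattrbygroup}
</div>
{/sline:attrgrouplist}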

60 Multiple-Choice Questions on English Drama Appreciation for Senior High School

1. In the play "Hamlet", Hamlet is often described as __________.
A. optimistic and brave
B. cautious and hesitant
C. cruel and selfish
D. simple and naive
Answer: B.

This question tests understanding of Hamlet's character in Hamlet. Option A, optimistic and brave, does not match him; throughout his revenge he is full of hesitation and reflection. Option C, cruel and selfish, does not fit his image. Option D, simple and naive, is not accurate either, since he is meticulous and thoughtful. Option B, cautious and hesitant, matches his complicated state of mind when facing revenge.

2. The main character in the drama "Romeo and Juliet" shows a trait of __________.
A. calm and rational
B. passionate and impulsive
C. cold and indifferent
D. timid and cautious
Answer: B.

In Romeo and Juliet, the protagonists are passionate and impulsive. Option A, calm and rational, does not fit the way they risk everything for love. Option C, cold and indifferent, contradicts their ardent love. Option D, timid and cautious, is not their character either; they pursue love bravely.

3. In the classic play, the character who always follows the rules strictly is __________.
A. a rebel
B. a conformist
C. an innovator
D. a free spirit
Answer: B.

This question tests understanding of character types in drama. Option A, a rebel, does not usually follow rules strictly. Option C, an innovator, focuses on creating new things and does not necessarily follow rules strictly.

Kluckhohn and Strodtbeck's Value Orientations

Florence Kluckhohn and Fred Strodtbeck were American anthropologists who were among the earlier scholars to propose a theory of culture.

During the Pacific War, the late Harvard scholar Florence Kluckhohn served on a team of roughly thirty experts assembled by the U.S. Office of War Information to study the values, popular sentiment, and morale of different cultures.

Drawing on its analysis of Japanese psychology and values, the group advised the U.S. government not to attack or abolish the Japanese emperor, and the declaration demanding Japan's unconditional surrender was revised in line with this advice.

Shortly after World War II, Harvard University increased its support for research on cultural value dimensions and, together with the Rockefeller Foundation, funded Kluckhohn and her colleagues to conduct a large-scale study of five different cultural communities within an area some forty miles across in Texas.

A major outcome of this research was the Kluckhohn-Strodtbeck (Kluckhohn & Strodtbeck, 1961) model of five value orientations, published in the book Variations in Value Orientations (1961).

In that book, Florence Kluckhohn adopted the definition of value orientations put forward by her husband, Clyde Kluckhohn.

A value orientation, in this sense, is a "complex but definitely patterned principle, tied to the solution of common human problems, that gives direction and order to human acts and thoughts" (Kluckhohn & Strodtbeck, 1961: 4).

The model comprises five value orientations: human nature orientation, man-nature orientation, time orientation, activity orientation, and relational orientation.

Kluckhohn and Strodtbeck's value orientation theory rests on three basic assumptions: (1) people of all periods and all societies must find solutions to certain common human problems; (2) the available solutions are neither unlimited nor arbitrary, but vary within a range of possible choices, or value orientations;

(3) every value orientation is present in all societies and in all individuals, but societies and individuals differ in which orientations they prefer.

Intercultural Communication 3

Values
Cultural patterns
Social Practices
Norms
Chapter 3 The Hidden Core of Culture
The Definition of Values
• According to the Concise Oxford Dictionary, values are: one's principles or standards; one's judgment of what is valuable or important in life.
Chapter 3 The Hidden Core of Culture
Case study
• Story 1:
During the American Civil War, a very hungry young man collapsed in front of a farm gate. The farmer gave him food, but in return asked the young man to move a pile of wood in his yard; in fact there was no need to move the wood at all. When the young man left, the farmer moved the wood back to its original place. Seeing all this, the farmer's son was confused. Q: Why did the farmer not just give the young man some food? What values underlie the behavior of the old man?

CMUcam3: An Open Programmable Embedded Vision Sensor

CMUcam3:An Open Programmable Embedded Vision SensorAnthony Rowe Adam Goode Dhiraj GoelIllah NourbakhshCMU-RI-TR-07-13May2007Robotics InstituteCarnegie Mellon UniversityPittsburgh,Pennsylvania15213c Carnegie Mellon UniversityAbstractIn this paper we present CMUcam3,a low-cost,open source,embedded com-puter vision platform.The CMUcam3is the third generation of the CMUcamsystem and is designed to provide aflexible and easy to use open source develop-ment environment along with a more powerful hardware platform.The goal of thesystem is to provide simple vision capabilities to small embedded systems in theform of an intelligent sensor that is supported by an open source community.Thehardware platform consists of a color CMOS camera,a frame buffer,a low cost32-bit ARM7TDMI microcontroller,and an MMC memory card slot.The CMUcam3also includes4servo ports,enabling one to create entire,working robots usingthe CMUcam3board as the only requisite robot processor.Custom C code canbe developed using an optimized GNU toolchain and executables can beflashedonto the board using a serial port without external downloading hardware.The de-velopment platform includes a virtual camera target allowing for rapid applicationdevelopment exclusively on a PC.The software environment comes with numer-ous open source example applications and libraries including JPEG compression,frame differencing,color tracking,convolutions,histogramming,edge detection,servo control,connected component analysis,FATfile system support,and a facedetector.1IntroductionThe CMUcam3is an embedded vision sensor designed to be low cost,fully pro-grammable,and appropriate for realtime processing.It features an open source de-velopment environment,enabling customization,code sharing,and community.In the world of embedded sensors,the CMUcam3occupies a unique niche.In this design exercise we have avoided high-cost components,and therefore do not have many of the luxuries that other systems have:L1cache,an MMU,DMA,or a large RAM store.Still,the hardware design of the CMUcam3provides enough processing power to be useful for many simple vision tasks[1],[2],[3],[4],[5].An ARM microcontroller provides excellent performance and allows for the exe-cution of a surprisingly broad set of algorithms.A high speed FIFO buffers images from the CIF-resolution color camera.Mass storage is provided by an MMC socket using an implementation of the FATfilesystem so that thefile written by the CMUcam3 are immediately readable to an ordinary er interaction occurs via GPIO,servo outputs,two serial UARTs,a button,and three colored LEDs.We provide a full C99environment for buildingfirmware,and include libraries such as libjpeg,libpng,and zlib.Additionally,we have developed a library of vision algorithms optimized for embedded processing.Full source is provided for all components under a liberal open source license.The system described in this paper has been implemented and is fully functional. 
The system has passed CE testing and is available from multiple international commer-cial vendors for a cost of approximately US$239.[12]IFigure1:Photograph of the CMUcam3mated with the CMOS camera board.An MMC memory card used for mass storage can be seen protruding on the right side of the board.The board is5.5cm×5.5cm and approximately3cm deep depending on the camera module.1.1Embedded Vision ChallengesEmbedded Vision affords a unique set of functional requirements upon a computational device meant to serve as a visual-sensor.In fact,taken as a general purpose processor, the CMUcam3is rather underpowered compared to desktop computers or even PDAs. However,if examined as a self-contained vision subsystem,several benefits become clear.The system excels in IO-constrained environments.The small size and low power of the CMUcam3enables it to be placed in unique environments,collecting data au-tonomously for later review.If coupled with a wireless network link(such as802.15.4 or GPRS),the CMUcam3can perform sophisticated processing to send data only as needed over a potentially expensive data channel.Its low cost allows the CMUcam3to be purchased in greater quantities than other solutions.This makes the CMUcam3more accessible to a larger community of de-velopers.In several applications,for instance surveillance,reduced cost allows for a meaningful tradeoff between high performance from a single sensor node and the distribution of lower-cost nodes to achieve greater coverage.The CMUcam3also has benefits when used as a self-contained part of a greater system.Because of its various standard communications ports(RS-232,SPI,I2C), adding vision to an existing system becomes straightforward,particularly because the computational overhead is assumed by the separately dedicated CMUcam3processor rather than imposed upon the main processor and its I/O system.IIFinally,having completely open sourcefirmware allowsflexibility and reproducibility—anyone can download and compile the code to run on the hardware or alternatively a desktop computer(using the virtual-cam module).1.2Related WorkThere have been numerous embedded image processing systems constructed by the computer vision community in the service of research.In this section we will presenta selection of systems that have similar design goals to that of the CMUcam3.The Cognachrome[10]system is able to track up to25objects at speeds as highas60Hz.Its drawbacks include cost(more than US$2000),size(four by two by ten inches)and power(more than5×that of the CMUcam3)as limitations when creating small form factor nodes and robots.The Stanford MeshEye[6]was designed for use in low power sensor networks.The design uses two different sets of image sensors,a low resolution pair of sensors is usedto wake the device in the presence of motion,while the second VGA CMOS camera performs image processing.The system is primarily focused on sensor networking applications,and less on general purpose image processing.The UCLA Cyclops[7],also designed around sensor networking applications,usesan8-bit microprocessor and an FPGA to capture and process images.The main draw-backs are low image resolution(128×128)due to limited RAM and slow processingof images(1to2FPS).Specialized DSP based systems like the Bluetechnix[9]Blackfin camera boards provide superior image processing capabilities at the cost of power,price and com-plexity.They also typically require expensive commercial compilers and external de-velopment hardware(i.e.JTAG emulators).In contrast,the CMUcam3’s 
development environment is fully open source,freely available and has built-infirmware loading using a serial port.Various attempts have been made to use general purpose single board computers including the Intel Stargate[8]running Linux in combination with a USB webcam for image processing.Though open source,such systems are quite expensive,large,and demanding of power.Furthermore,USB camera acquired images are typically trans-mitted to the processor in a compressed pressed data results in lossy and distorted image information as well as the extra CPU overhead required to decompressthe data before local processing is possible.The use of slow external serial bus proto-cols including USB v1.0limits image bandwidth resulting in low frame rates.Finally,a number of systems[1],[2],[3]consist of highly optimized software designed to run on standard desktop machines.The CMUcam3is unique in that it targets applications where the use of a standard desktop machine would be prohibitive because of size,cost or power requirements.2CMUcam3In the following section,we will describe and justify the design decisions leading tothe hardware and software architecture of the CMUcam3.III'HEXJ /('V Figure 2:CMUcam3hardware block diagram consisting of three main components:processor,frame buffer and CMOS camera.2.1Hardware ArchitectureAs shown in Figure 2,the hardware architecture for the CMUcam3consists of threemain components:a CMOS camera chip,a frame buffer,and a microcontroller.Themicrocontroller configures the CMOS sensor using a two-wire serial protocol.Themicrocontroller then initiates an image transfer directly from the CMOS camera to theframe buffer.The microcontroller must wait for the start of a new frame to be signaledat which point it configures the system to asynchronously load the image into the framebuffer.Once the CMOS sensor has filled at least 2blocks of frame buffer memory (128bytes),the main processor can begin asynchronously clocking data 8bits at a time outof the image buffer.The end of frame triggers a hardware interrupt at which point themain processor disables the frame buffer’s write control line until further frame dumpsare needed.The CMUcam3has two serial ports (one level shifted),I 2C,SPI,four standardhobby servo outputs,three software controlled LEDs,a button and an MMC slot.Atypical operating scenario consists of a microcontroller communicating with the CMU-cam3over a serial connection.Alternatively,I 2C and SPI can be used,making theCMUcam3compatible with most embedded systems without relying soley on RS-232.The SPI bus is also used to communicate with FLASH storage connected to the MMCslot.This allows the CMUcam3to read and write gigabytes of permanent storage.IVUnlike previous CMUcam systems,all of these peripherals are now controlled by the processor’s hardware and hence do not detract from processing time.The expansion port on the CMUcam3is compatible with various wireless sensor networking motes including the Telos[15]motes from Berkeley.The image input to the system is provided by either an Omnivision OV6620or OV7620CMOS camera on a chip[14].As in the CMUcam and CMUcam2,the CMOS camera is mounted on a carrier board which includes a lens and supporting passive components.The camera board is free running and will output a stream of8-bit RGB or YCbCr color pixels.The OV6620supports a maximum resolution of352×288at50 frames per second.Camera parameters such as color saturation,brightness,contrast, white balance gains,exposure time and output modes are controlled using the 
two-wire SCCB protocol.Synchronization signals including a pixel clock(directly connected to the image FIFO)are used to read out data and indicate new frames as well as horizontal rows.The camera also provides a monochrome analog signal.One major difference between the CMUcam2and the CMUcam3is the use of the NXP LPC2106microcontroller.The LPC2106is a32-bit60MHz ARM7TDMI with built-in64KiB of RAM and128KiB offlash memory.The processor is capable of software controlled frequency scaling and has a memory acceleration module(MAM) which provides it with near single cycle fetching of data from FLASH.A built-in boot loader allows downloading of executables over a serial port without external program-ming hardware.Since the processor uses the ARM instruction set,code can be com-piled with the freely available GNU GCC compiler.Built-in downloading hardware and free compiler support makes the LPC2106an ideal processor for open source de-velopment.The frame buffer on the CMUcam3is a50MHz,1MB AL4V8M440video FIFO manufactured by Averlogic.The video FIFO is important because it allows the camera to operate at full speed and decouples processing on the CPU from the camera’s pixel clock.Running the camera at full frame rate yields better automatic gain and exposure performance due to factory default tuning of the CMOS sensor.Even though pixels can not be accessed in a random access fashion,the FIFO does allow for resetting the read pointer which enables multiple pass image processing.One disadvantage of the LPC2106is that it has relatively slow I/O.Reading a single pixel value can take as long as14clock cycles,of those12are spent waiting on I/O.Software down sampling, operating on a single image channel,or doing software windowing greatly accelerates image processing since skipping a pixel takes ing the FIFO,algo-rithms can be developed thatfirst process a lower resolution image and can later rewind and revisit regions at higher resolutions if more detail is required.For example frame differencing can be performed on a low resolution gray scale image,while frames of interest containing motion can be saved as high resolution color images.Since pro-cessing is decoupled from individual pixel access times,the pixel clock on the camera does not need to be set to the worst case per pixel processing time.This in turn allows for higher frames rates that would not be possible without the frame buffer.In many embedded applications,such as sensor networks,power consumption is an important factor.To facilitate power savings,we provide three power modes of operation(active,idle and power down)as well as the ability to power down just the camera module.In the active mode of operation when the CPU,camera and FIFOVCPU core3.31549.5Frame Buffer525125MMC3.31033na na499.7Table1:This table shows a breakdown of the power consumption of various compo-nents while the camera is fully active.are all fully operating the system consumes500mW of power.Table1shows the distribution of power consumption across the various components.When in an idle state,where RAM is maintained and the camera is disabled,the system consumes around300mW.The transition time between idle and active is on the order of30us. 
For applications where very low duty cycles are required and startup delays of up to1 second can be tolerated,we provide an external power down pin which gates external power to the board bringing the consumption down to nearly zero(25uW).In the power down state of operation,the processor RAM is not maintained and hence camera parameters must be restored by thefirmware at startup.2.2Software ArchitectureStandard vision systems assume the availability of PC-class hardware.Systems such as OpenCV[17],LTI-Lib[19],and MATLAB[13]require megabytes of memory ad-dress space and are written in runtime-heavy languages such as C++and Java.The CMUcam3has only64KiB of RAM and thus cannot use any of these standard vision libraries.To solve this problem,we designed and implemented the cc3vision system as the main software for CMUcam3.We also implement several components on top of cc3 as described in this section.2.2.1The cc3Software Vision SystemThe cc3system is a C API for performing vision and control,optimized for the small environment of the CMUcam3.Features:•Abstraction layer for interfacing with future hardware systems•Modern C99style with consistently named types and functions•Support of a limited number of image formats for simplicity•Documentation provided via Doxygen[18]•Versioned API for future extensibility•virtual-cam module for PC-based testing and debugging(see below)VIcc3is a part of the CMUcam3distribution,and is openly available at the CMUcamwebsite[12].Below is an example of the cc3based source code showing you how totrack a color:int main(void){cc3_image_t img;cc3_color_track_pkt t_pkt;//init filesystem drivercc3_filesystem_init();//configure uartscc3_uart_init(0,CC3_UART_RATE_115200,CC3_UART_MODE_8N1,CC3_UART_BINMODE_TEXT);cc3_camera_init();cc3_camera_set_colorspace(CC3_COLORSPACE_RGB);cc3_camera_set_resolution(CC3_CAMERA_RESOLUTION_LOW);cc3_camera_set_auto_white_balance(true);cc3_camera_set_auto_exposure(true);printf("Enter color bounds to track:");scanf("%d%d%d%d%d%d\n",&t_pkt.lower_bound.chan[CC3_RED_CHAN],&t_pkt.lower_bound.chan[CC3_GREEN_CHAN],&t_pkt.lower_bound.chan[CC3_BLUE_CHAN], &t_pkt.upper_bound.chan[CC3_RED_CHAN],&t_pkt.upper_bound.chan[CC3_GREEN_CHAN], &t_pkt.upper_bound.chan[CC3_BLUE_CHAN]);img.channels=3;img.width=cc3_g_pixbuf_frame.width;img.height=1;img.pix=cc3_malloc_rows(1);while(1){cc3_pixbuf_load();cc3_track_color_scanline_start(t_pkt);while(cc3_pixbuf_read_rows(img.pix,1)){cc3_track_color_scanline(&img,t_pkt);}cc3_track_color_scanline_finish(t_pkt);printf("Color blob found at%d,%d\n",t_pkt.centroid_x,t_pkt.centroid_y);}}VIIThe next example shows how a developer can access raw pixels.The following code section returns the location of the brightest red pixel found in the image:uint8_t max_red,max_red_y,max_red_x;cc3_pixel_t my_pix;max_red=0;cc3_pixbuf_load();while(cc3_pixbuf_read_rows(img.pix,1)){//read a row into the image//picture memory from the camerafor(uint16_t x=0;x<img.width;x++){//get a pixel from the img row memorycc3_get_pixel(&img,x,0,&my_pix);if(my_pix.chan[CC3_CHAN_RED]>max_red){max_red=my_pix.chan[CC3_CHAN_RED];max_red_x=x;max_red_y=y;}}y++;}printf("Brightest Red Pixel:%d,%d\n",max_red_x,max_red_y);2.2.2virtual-camThe virtual-cam module is part of the cc3system as mentioned above.It provides a simulated environment for testing library and project code on any standard PC by compiling with the system’s native GCC compiler.This allows for full use of the PC’s debugging tools to diagnose problems in user code.Oftentimes,a difficult to understand behavior 
observed on the CMUcam3will easily manifest itself as a bad pointer dereference or other easily found bug when run on a standard PC with memory protection.While not all of CMUcam3’s functionality is implemented in virtual-cam(miss-ing features include the hardware-specific components of servo control and GPIO), enough functionality is provided to enable off-line diagnostic testing.2.2.3CMUcam2EmulationThe CMUcam2[20]provides a simple human readable ASCII communication proto-col allowing for interactive control of the camera from a serial terminal program or a micro-controller.The CMUcam2is capable of many functions including in-built color tracking,frame differencing,histogramming as well as binary image transfers.The CMUcam2comes with a graphical user interface running on a PC that allows users to experiment with various functions.The CMUcam3emulates most of the CMUcam2’sVIII(a)(b)(c)(d)Figure3:The following images show the advantage of color tracking in the HSV color space.Figure(a)shows an RGB image,(b)shows the intensity(V)component of the HSV image,(c)shows the Hue and Saturation components of the image without intensity(d)shows the segmented hand with the center of mass in the middle. functions making it a drop-in replacement for the CMUcam2.The CMUcam2emu-lation extends upon the original CMUcam2with superior noisefiltering,HSV color tracking and JPEG compressed image transfers.2.2.4Color TrackingThe original CMUcam tracks color blobs using a simple RGB threshold color model. Though computationally lightweight,it does not adapt well to changing light condi-tions and can only track a single color at one time.The CMUcam3improves tracking performance by providing the option to use the Hue Saturation Value(HSV)color space,provisions for connected component blobfiltering and the ability to track mul-tiple colors.Figure3shows how the HSV color space can remove lighting effects simplifying color segmentation.Since the system is open source,it is simple for end users to further improve color tracking by building more complex color models.2.2.5Frame DifferencingAs an example program to illustrate frame differencing,we provide a simple security camera application.The camera continuously compares the previous image and theIXcurrent image.If an images changes by more than a preset threshold,the image is saved as a JPEG on the MMC card.2.2.6ConvolutionsWe provide a general convolution library that allows custom kernels to be convolved across an image.This can be used for variousfilters that perform tasks like edge detection or blurring.2.2.7CompressionNew to the CMUcam3is the ability to compress images with both libjpeg and ing different destination managers,one can redirect the output of libjpeg to the MMC,serial output,or any other communication bus.Depending on the quality of the image,libjpeg can produce images as small as4KiB.2.2.8Face DetectionThe CMUcam3incorporates the ability to detect faces in plain-background environ-ments.The face detector technique is based on the feature-based approach,proposed by Viola and Jones,in which a cascade of classifiers are trained for Haar-like rectan-gular features selected by AdaBoost[16].The integral image is a key data structure used in Viola-Jones.Unfortunately,it consumes significant memory.Even a low resolution integral image of176×144re-quires about76KiB of memory,far exceeding available memory.Along with memory constraints,the processor lacksfloating point hardware.As a result,two unique customizations were applied to the face detection 
implementation for CMUcam3:•Only a part of the whole image is loaded in main memory at any time.As a consequence,the maximum resolution of a detected face is limited to60×60 pixels.•All the classifier thresholds and corresponding compared values are computed usingfixed point arithmetic,via a binary scaling method.A few other optimizations were made to improve performance:•When scanning sub-windows,neighboring sub-windows are illumination nor-malized with iteratively computed standard deviation(std),instead of being com-puting independently.This can provide a speed up of approximately3×.•Sub-windows that are are too homogeneous(std<14)or too dark or bright (mean<30or mean>200)are discarded immediately,short-circuiting unnec-essary computation in regions unlikely to yield positive detection hits.With these changes,CMUcam3face detection operates on-board at1Hz.X(a)(b)Figure4:Sample output from a modified Viola-Jones face detector.Faces are denoted with boxes.Image(b)shows how texture in the background can occasionally be de-tected as a false positive.2.2.9PollyThe Polly[3]algorithm provides visual navigation information based on color.This navigation was used on the Polly robot to give tours of the MIT AI laboratory in the early90’s.The algorithm originally consisted of three steps:blurring the image,edge detection and generating a free space map starting from the bottom of the image upward towards any edges.Our implementation applies a3x3blur followed by a simple edge detector.We thenfilter out small edges using our connected component module.As can be seen in Figure5the algorithm returns a histogram of the free space in front of the robot.Polly is able to run on-board CMUcam3at4fps,operating on a176x144 image.2.2.10SpoonBotSpoonBot is a small mobile robot consisting of a CMUcam3,two continuous rotation hobby servos mounted to wheels,a four AA battery pack and a micro-servo connected to a plastic spoon.The two hobby servos allow SpoonBot to drive forward,backward and rotate left and right.The rear mounted micro-servo pushes the spoon up and down acting as a tilt degree of freedom.SpoonBot can use the Polly algorithm described above to drive around a table top or it can follow colored objects.All control and navigation is run locally on the CMUcam3,since the board can compute and command servo control signals directly,without the need for conventional robot control hardware. 
3PerformanceIn this section we discuss execution time and memory consumption for various CMU-cam3software components.Depending on the image resolution and complexity of the algorithm,these values can vary significantly.The goal of this section is to provide some intuition for the various types of image processing that are possible using theXIFigure5:Sample output of the Polly algorithm.Thefirst column shows the original image.The second column shows the image after a blurfilter,edge detection and small connected componentfilter.Thefinal column shows the histogram representing free area in front of the camera.Load Frame210ms128ms52ms32msPack PixelsTotal FPSFigure6:Thisfigure compares the execution times of loading a frame,copying the image from the frame buffer to the processor,unpacking the pixels and processing the new frame.JPEG,Track Color(TC)and Track Color in the HSV color space(TC-HSV)are shown at two different resolutions.The numbers in parenthesis represent the frame rate of the operation.cc3_pixbuf_read_rows()function.Operating at a lower resolution obviously decreases the execution time because fewer pixels are fetched.Operating on a single channel instead of three channels provides only a1.625×increase in speed.This in-crease is due to no longer having to read all of the color pixels,however,since the CMOS camera does not have a monochrome output mode,color information must still be clocked out of the FIFO.Thefinal Pack Pixel column shows the time required to convert the GRGB pattern from the camera in memory into the local RGB pixel struc-ture.This corresponds to the cc3_get_pixel()function call.It is possible to greatly reduce the pixel construction time by designing algorithms that operate on the raw memory from the camera.This becomes a trade-off between simple portable code and execution speed.We provide examples of both methodologies for those interested in highly optimized implementations.Figure6shows the relative time consumption of the previously mentioned frame loading operations along with processing times for three different algorithms:JPEG, Track Color and Track Color HSV.The JPEG algorithm in this example compresses aXIIIcolor image in memory and does not write the output to a storage device.The Track Color(TC)and Track Color HSV(TC-HSV)algorithms are profiled directly from the CMUcam2emulation code.Each algorithmfinds the bounding box,centroid and den-sity of a particular color specified.For this test we show the worst-case performance by tracking all active pixels.The Track Color HSV benchmark is identical to Track Color except that it performs a software based conversion from the RGB to HSV color space for each pixel.The general trend found in these plots is that very simple algorithms such as tracking color are mostly I/O limited.For example Track Color spends only 17%of the time on processing.A more complex algorithm,JPEG,spends62%of its time on processing.JPEG also shows an example of where optimized pixel accesses can drastically reduce the pixel packing time.However as can be seen in the JPEG operating on a QCIF image,as resolution decreases these optimizations become less relevant.As previously mentioned,the LPC2106has64KiB of internal RAM and128KiB of ROM.By default,9KiB of RAM is reserved for stack space and9KiB of RAM is used by the core software libraries(including libc buffers).A176×144(QCIF) gray-scale image requires25KiB of RAM,while a100×100RGB image requires 30KiB of memory.All processing on larger sized images must be performed on a section by 
section basis,or using a sliding window scan-line approach.For example, JPEG requires only eight full rows(8KiB)of the image in addition to the storage required for the compressed image(less than12KiB).The code space consumed by most CMUcam3applications is quite small.The full CMUcam2emulation with JPEG compression and the FATfile system requires96KiB of ROM.A simple program that loads images and links in the standard library functions requires52KiB of ROM.The FATfilesystem and MMC driver require an additional12KiB of ROM.4Conclusions and Future WorksThe goal of this work was to design and publicly release a low cost,open source,em-bedded color computer vision platform.The system can provide simple vision capabil-ities to small embedded systems in the form of an intelligent sensor that is supported by an open source community.Custom C code can be developed using an optimized GNU toolchain andflashed onto the board using the serial port without external downloading hardware.The development platform includes a virtual camera target and numerous open source example applications and libraries.The main drawback of the CMUcam3hardware platform is the lack of RAM and computation speed required for many complex computer vision algorithms.We cur-rently have a prototype system using a600MHz Blackfin media processor from Analog Devices.Ideally,we would like to provide a software environment for this new plat-form that is compatible with our existing environment to help reduce the learning curve typically associated with high-end DSP systems.Eventually,applications can be pro-totyped on a PC using our virtual-cam with various hardware deployment options to support that particular application’s needs.Staying true to the spirit of the CMUcam project,we are also developing a simpler and cheaper hardware platform using a lower cost ARM7processor without the frame buffer.This device will be compatible withXIV。
