Research Methods and Usability Guidelines for Ecommerce Web Sites
E-commerce References

E-commerce refers to a business model in which commercial activities are conducted through electronic means such as the Internet.
It has become an important mode of development for modern enterprises, bringing them enormous business opportunities and room for growth.
Over the course of e-commerce's development, many important works have been cited and consulted; this article introduces some classic e-commerce references.
1. Porter, M. E. (2001). Strategy and the Internet. Harvard Business Review. Michael Porter is a leading scholar in strategic management. In this classic article he argues that the Internet is not a threat but an opportunity that changes the rules of business.
The article identifies the competitive opportunities the Internet brings to each industry and applies the five forces model to analyze firms' competitive strategies in the Internet era.
2. Hoffman, D. L., & Novak, T. P. (1996). Marketing in hypermedia computer-mediated environments: Conceptual foundations. Journal of Marketing, 60(3), 50-68. Hoffman and Novak are important scholars in Internet marketing, and their article studies marketing in e-commerce in depth.
It summarizes the characteristics of markets in e-commerce environments, highlights the interactivity and personalization afforded by networked media, and offers firms guidance for online marketing.
3. Rayport, J. F., & Sviokla, J. J. (1994). Managing in the Marketspace. Harvard Business Review. Rayport and Sviokla are important scholars in e-commerce strategy; their article discusses the impact of e-commerce on markets and the resulting transformation of management.
It stresses that firms must shift their management from the traditional marketplace to the marketspace of the Internet era, and it offers practical guidance and strategic recommendations.
4. Li, T., & Li, X. (2011). Information technology and firm profitability: Mechanisms and empirical evidence. MIS Quarterly, 35(4), 1167-1188. Li and Li are important scholars in e-commerce; their article uses empirical evidence to reveal the mechanisms by which information technology affects firm profitability.
Research Methods

1.4 Research reliability
Internal reliability: the extent to which the data collection procedure is consistent and accurate.
Inter-rater reliability: the extent to which different raters agree on the data collected from observation.
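Inter-rater agreement is commonly quantified with Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. Below is a minimal sketch; the two raters' codings are hypothetical data invented for the example.

```python
# Cohen's kappa for two raters -- a common statistic for inter-rater
# reliability. Hypothetical data: two observers coding the same 10
# classroom events as "on"-task or "off"-task.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items both raters coded the same.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

a = ["on", "on", "off", "on", "off", "on", "on", "off", "on", "on"]
b = ["on", "off", "off", "on", "off", "on", "on", "on", "on", "on"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # ~0.47 here
```

A kappa near 1 indicates near-perfect agreement; values near 0 indicate agreement no better than chance.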
Example conclusion: the task-based teaching approach outperforms the traditional teaching approach.
External validity: the extent to which findings can be applied or generalized to situations outside the research context. Factors affecting external validity include:
• Population and subjects
• Descriptive explicitness of the variables
• Contexts of research
• Observer's paradox
• Butterfly effects
• Time
Qualitative vs. quantitative research:
• Qualitative: understanding social phenomena; hypothesis-generating; heuristic probes; context-specific
• Quantitative: relations, effects, and causes; hypothesis-testing; specific variables
Internal consistency reliability
The extent to which all the items in a measurement instrument (e.g., a questionnaire) elicit the same information.
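Internal consistency is conventionally estimated with Cronbach's alpha, computed from the variance of each item and of the total score. A minimal sketch, assuming hypothetical 5-point Likert responses (rows are respondents, columns are questionnaire items):

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / total variance).
def cronbach_alpha(scores):
    k = len(scores[0])                       # number of items
    def var(xs):                             # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical data: 5 respondents answering a 4-item questionnaire.
data = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]
print(f"alpha = {cronbach_alpha(data):.2f}")  # >= 0.7 is often taken as acceptable
```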
E-commerce (Chinese-English Bilingual), Chapter 3

Revenue model: the strategy and techniques by which a company obtains cash flow from its customers for its services.
p106
Revenue models for online business
Web business revenue-generating models
1. Web catalog
2. Digital content
3. Advertising-supported
4. Advertising-subscription mixed
5. Fee-based: fee for transaction / fee for services
These revenue models can work for both sale types: B2C and B2B.
The success of web advertising is limited by two barriers:
First, there is no accepted statistical method for measuring website visits (advertisers and sites sometimes even disagree over the counts). Second, few websites can attract the number of visitors that would interest large advertisers.
Can obtain large advertiser interest by: using a specialized information Web site
p112
Fee-for-Content Revenue Models
Firms owning written information or information rights embrace the Web as a highly efficient distribution mechanism. They use the digital content revenue model and sell subscriptions for access to the information they own.
Research Methods (in English)

There are several research methods commonly used across disciplines. Here are some examples:
1. Experimental research: manipulating variables and measuring their effects on an outcome; it typically involves a control group and an experimental group.
2. Survey research: collecting data through questionnaires or interviews to gather information about the attitudes, opinions, behaviors, or traits of a specific population.
3. Observational research: observing and recording behaviors or phenomena as they naturally occur, without interfering with the variables.
4. Case study research: in-depth investigation and analysis of a specific individual, group, or event, aiming for a deep understanding of the subject.
5. Correlational research: examining the relationship between two or more variables without manipulating them; it seeks to determine whether a relationship exists and how strong it is.
6. Content analysis: analyzing and interpreting the content of documents, media, or texts to uncover patterns, themes, or sentiments.
7. Grounded theory: a qualitative method for developing theories or hypotheses from empirical data analysis.
8. Meta-analysis: combining and re-analyzing data from multiple studies on the same topic to draw conclusions or identify patterns across studies.
9. Action research: collaborating with participants to identify and address problems in real-life settings, aiming to bring about practical and meaningful change.
Note: the above methods are just a few examples; the choice of method depends on the research questions, available resources, and the nature of the study. A minimal illustration of the correlational approach follows.
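As a small illustration of the correlational method above, the sketch below computes Pearson's r by hand for two hypothetical variables (study hours and exam scores); the data are invented for the example.

```python
# Pearson's r measures the strength and direction of a linear
# relationship between two variables, without manipulating either.
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

hours  = [2, 4, 5, 7, 8, 10]          # hypothetical study hours
scores = [55, 60, 62, 70, 74, 80]     # hypothetical exam scores
print(f"r = {pearson_r(hours, scores):.2f}")  # near +1: strong positive relationship
```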
Research Report on User Experience Optimization Strategies for E-commerce Platforms

Contents

Chapter 1: Introduction
  1.1 Research Background
  1.2 Research Objectives
  1.3 Research Methods
Chapter 2: Overview of E-commerce Platform User Experience
  2.1 Definition and importance of user experience
  2.2 Components of e-commerce platform user experience
  2.3 Classification of UX optimization strategies
Chapter 3: Interface Design Optimization
  3.1 Layout (3.1.1 clarify the interface hierarchy; 3.1.2 maintain consistency; 3.1.3 apply grid layouts flexibly)
  3.2 Color schemes (3.2.1 match brand identity; 3.2.2 keep colors balanced; 3.2.3 consider color psychology)
  3.3 Fonts and typography (3.3.1 choose appropriate fonts; 3.3.2 keep fonts consistent; 3.3.3 lay out text sensibly)
  3.4 Animation and interaction (3.4.1 use animation in moderation; 3.4.2 refine animation effects; 3.4.3 improve interaction feedback)
Chapter 4: Navigation Optimization
  4.1 Navigation structure; 4.2 Search function; 4.3 Tags and categories; 4.4 Navigation aids
Chapter 5: Product Display Optimization
  5.1 Product information (5.1.1 completeness; 5.1.2 presentation; 5.1.3 timely updates)
  5.2 Recommendations (5.2.1 algorithms; 5.2.2 content diversity; 5.2.3 result presentation)
  5.3 Ratings and reviews (5.3.1 rating system design; 5.3.2 review moderation; 5.3.3 review display)
  5.4 Filtering and sorting (5.4.1 diverse filter criteria; 5.4.2 flexible sort options; 5.4.3 better sorted results)
Chapter 6: Transaction Flow Optimization
  6.1 Shopping cart (6.1.1 user interface; 6.1.2 ease of operation; 6.1.3 data synchronization)
  6.2 Checkout (6.2.1 shipping-information entry; 6.2.2 order confirmation; 6.2.3 checkout screen)
  6.3 Payment (6.3.1 multiple payment methods; 6.3.2 payment security; 6.3.3 payment success rate)
  6.4 Order handling and after-sales service (6.4.1 processing speed; 6.4.2 service channels; 6.4.3 service workflow)
Chapter 7: User Interaction Optimization
  7.1 Feedback and complaint handling (7.1.1 collection channels; 7.1.2 handling workflow)
  7.2 Community and forums (7.2.1 community atmosphere; 7.2.2 community features)
  7.3 Personalized recommendations (7.3.1 data analysis and mining; 7.3.2 content diversity)
  7.4 User incentives and rewards (7.4.1 reward design; 7.4.2 implementation)
Chapter 8: Platform Performance Optimization
  8.1 Site speed (8.1.1 importance; 8.1.2 strategies)
  8.2 Site stability (8.2.1 overview; 8.2.2 strategies)
  8.3 Data security and privacy protection (8.3.1 importance; 8.3.2 strategies)
  8.4 Mobile experience (8.4.1 importance; 8.4.2 strategies)
Chapter 9: Data Analysis and Optimization
  9.1 User behavior analytics (9.1.1 data sources and collection; 9.1.2 analysis methods; 9.1.3 visualization)
  9.2 User satisfaction surveys and optimization (9.2.1 survey methods; 9.2.2 optimization strategies)
  9.3 Churn analysis and management (9.3.1 churn analysis; 9.3.2 management strategies)
  9.4 Data-driven optimization (9.4.1 data-based personalized recommendation; 9.4.2 data-driven marketing campaigns; 9.4.3 data-driven merchandising; 9.4.4 data-driven service optimization)
Chapter 10: Conclusions and Outlook
  10.1 Conclusions; 10.2 Limitations; 10.3 Future research directions

Chapter 1: Introduction
1.1 Research Background
With the rapid development of Internet technology, e-commerce has become an important driving force of China's economic development.
E-commerce References

With the vigorous growth of the Internet and the rapid advance of technology, e-commerce is becoming an increasingly important business model worldwide.
Its rise has not only profoundly changed how we live but has also had a revolutionary impact on traditional business models.
To better understand the development trends, challenges, and opportunities of e-commerce, many scholars and experts have conducted extensive research and published a range of reference works.
This article introduces some classic references on e-commerce for scholars and practitioners.
1. Internet Marketing: Strategy, Implementation, and Practice. Author: Dave Chaffey. Published: 2019. A classic work in Internet marketing, this book comprehensively covers the strategy, implementation, and practice of marketing online.
It covers many key concepts and techniques, such as search engine optimization (SEO), social media marketing, and content marketing.
For e-commerce practitioners it is an indispensable guide to building effective Internet marketing strategies.
2. E-commerce 2019: Business, Technology, and Society. Authors: Kenneth C. Laudon, Carol Traver. Published: 2019. Positioned as a textbook for undergraduate and graduate students, this book comprehensively covers the business, technological, and social dimensions of e-commerce.
Through case studies and field research, the authors explore important topics such as development trends, business models, electronic payment, security, and privacy.
For scholars and practitioners who study e-commerce or want to understand it, this book is an authoritative reference.
3. Electronic Commerce: A Managerial and Social Networks Perspective. Authors: Efraim Turban, David King, Jae Kyu Lee, Ting-Peng Liang, Deborrah C. Turban. Published: 2015. From a managerial and social-network perspective, this book examines the key issues and emerging trends in e-commerce.
Research and Application of Product Ranking Algorithms on E-commerce Websites

With the rise of e-commerce, more and more consumers choose to shop on e-commerce websites.
These sites, however, carry an enormous range of products, confronting consumers with countless choices.
To help consumers find what they want quickly and conveniently, e-commerce sites employ a series of information filtering and ranking algorithms.
In this article, we take a close look at the research and application of product ranking algorithms on e-commerce websites.
I. What is a product ranking algorithm? A product ranking algorithm sorts the products on a website, arranging them in a particular order according to some formula or model.
Such an algorithm can markedly improve consumers' shopping efficiency and experience, and it brings the site more traffic and sales revenue.
II. Ranking rules. On e-commerce sites, the rules of a product ranking algorithm are usually set by the site's developers according to the site's needs and characteristics.
The most common ranking rules are the following (a minimal sketch of the first appears after this list).
1. Composite ranking. The most common rule, composite ranking, considers a product's attributes as a whole.
It combines several kinds of information, such as a product's sales, price, and ratings, into a single composite score, and it ranks products from the highest score to the lowest.
2. Sales-based ranking. This rule ranks products by how well they sell.
Best-selling products usually signal strong consumer demand, so this rule typically places high-volume products in more prominent positions to attract more shoppers.
3. Review-based ranking. This rule ranks products by consumers' ratings.
When a product is rated highly, it is placed higher so that more consumers can see it.
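Here is a minimal sketch of such a composite rule. The product data, the weights, and the min-max normalization are illustrative assumptions, not any real site's formula.

```python
# Composite ranking: normalize sales, rating, and (inverted) price to
# [0, 1], blend them into one weighted score, then sort by that score.
products = [
    {"name": "A", "sales": 950, "price": 39.0, "rating": 4.6},
    {"name": "B", "sales": 120, "price": 19.0, "rating": 4.9},
    {"name": "C", "sales": 560, "price": 59.0, "rating": 4.2},
]
WEIGHTS = {"sales": 0.5, "rating": 0.3, "price": 0.2}  # assumed weights

def normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.5 for v in values]

sales_n  = normalize([p["sales"] for p in products])
rating_n = normalize([p["rating"] for p in products])
price_n  = normalize([-p["price"] for p in products])  # cheaper = better

for p, s, r, c in zip(products, sales_n, rating_n, price_n):
    p["score"] = WEIGHTS["sales"] * s + WEIGHTS["rating"] * r + WEIGHTS["price"] * c

for p in sorted(products, key=lambda p: p["score"], reverse=True):
    print(f'{p["name"]}: {p["score"]:.2f}')
```

The sales-based and review-based rules above are the special cases in which all the weight is placed on a single signal (e.g., sorting by `p["sales"]` alone).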
III. Applications of ranking algorithms. Ranking algorithms are widely used on e-commerce sites.
Here we introduce some common ones (a sketch of both appears after this list).
1. Heat-based ranking. Heat-based ranking is very simple and easy to implement.
It tallies statistics such as a product's clicks and page views, derives a heat index for each product, and ranks products by that index.
Because it needs little computation and no model building, this approach is widely used on smaller e-commerce sites.
2. Ranking based on collaborative filtering. Collaborative filtering is built on user behavior data.
It focuses on users' historical behavior and recommends products similar to those they have interacted with before.
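The sketch below illustrates both approaches on toy data: a simple weighted heat index, and an item-based collaborative filter that scores candidate products by their cosine similarity to the items already in a user's history. The signal weights and the interaction data are assumptions made for the example.

```python
from math import sqrt

def heat_index(clicks, views, purchases):
    # A simple popularity score: weight stronger signals more heavily.
    return 1.0 * views + 3.0 * clicks + 10.0 * purchases

print(heat_index(clicks=40, views=500, purchases=6))  # -> 680.0

# Item-based collaborative filtering: items are "similar" when the
# same users interacted with both. Toy data: user -> set of item ids.
history = {
    "u1": {"phone", "case", "charger"},
    "u2": {"phone", "charger"},
    "u3": {"laptop", "mouse"},
    "u4": {"phone", "case"},
}

def item_similarity(i, j):
    # Cosine similarity over the sets of users who touched each item.
    users_i = {u for u, items in history.items() if i in items}
    users_j = {u for u, items in history.items() if j in items}
    if not users_i or not users_j:
        return 0.0
    return len(users_i & users_j) / sqrt(len(users_i) * len(users_j))

def recommend(user, k=2):
    seen = history[user]
    candidates = {i for items in history.values() for i in items} - seen
    scored = {c: sum(item_similarity(c, s) for s in seen) for c in candidates}
    return sorted(scored, key=scored.get, reverse=True)[:k]

print(recommend("u2"))  # suggests "case" before unrelated items
```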
No Results Found

Your Search Returned: 0 Results
Improving Digital Library Search Tools
Paul Aumer-Ryan
The University of Texas at Austin
December 13, 2006
INF 397.3

Abstract
“No results found” is a misleading phrase because it masquerades as a definitive answer; in reality, the library being searched may contain content that matches a user’s query. This research seeks to understand the effect of null result sets on search behavior and on the perception of contents in digital libraries. Concepts utilized by social computing frameworks and current-generation eCommerce Web sites will be examined for their efficacy in digital library search interfaces. An experiment is outlined that attempts to flesh out a connection between null result sets and authoritativeness of digital library contents.

Introduction
The poster child for failure in information retrieval is a slew of search results that have nothing to do with the search query presented to the system by the user. If, for example, one were to search a database of household pets for the term “dog” and only cats with the name “dog” were returned, we could easily consider this to be a failure (unless, of course, the database actually did not contain any dogs). Similarly, if one were to perform a Web search on a hypothetical 5-year-old neighbor named Michael Jackson, and the only results returned dealt with the famous musician, there would be limited recourse to explain to the system the intended meaning (see, e.g., McRae-Spencer & Shadbolt, 2006). Much research has been undertaken in the information retrieval (IR) field to address this problem, ranging from the traditional concept of search term vectors, to popularity based on interrelations, to similar word pairings (e.g., synonyms, singular/plural forms of words, and verb tenses). To a large extent these efforts have been tremendously successful; modern search engines—whether they categorize the documents on a private Intranet, the items in a digital library, or the Web at large—are much better at interpreting individual intention from search queries than their predecessors; most inappropriate and unrelated items do not make it into the search results for a given query, and there is a fair amount of trust in modern search engines that the results returned are a faithful representation of the documents in the collection that relate to the search query.

However, there is often an unintended consequence of our desire to perfectly match search results with the semantic intention behind a search query, especially when dealing with imperfect systems and the wide array of individual differences. In a sense, it is the opposite effect from the previously mentioned poster child of failure (i.e., too many results that do not relate to the search query): when an empty result set is returned to a user’s search query. More succinctly, it can be summed up by an often unattractive visual: the “no results found” page returned by a search engine (see Appendix A for sample phrases taken from the “no results found” pages of current academic search tools).

By itself, a “no results found” page is not bad (although it may be poorly designed, or have any number of usability or navigation issues); however, the meaning it conveys may not be the appropriate one.
“No results found” is a misleading phrase because it masquerades as a definitive answer; in reality, the collection being searched may actually contain content that matches a user’s query. Let us pause for a moment and examine the various meanings that “no results found” can have for a digital library user. There is the definition from computer science: “The explicit assemblage of characters you submitted does not occur anywhere in our index of items in our collection.” There is also the anthropomorphic definition, where it is as if the computer is saying: “We don’t understand what you just typed” or “we understand some of the things you typed, but not all of them.” Then there are the potentially unintended and far too abrupt meanings: “We have what you are looking for, but we call it something else”; “We don’t have what you are looking for”; “Go away.” How are we to know how the user interprets these pages? “No results found” seems very authoritative and final; in fact, it is a statement of truth, and it is coming from a computer (and computers tend not to be known for their ability to equivocate).

In examining this problem, it helps to narrow our focus a bit; in this case, we will be restricting our discussion to digital libraries and their search tools. Digital library search routines are presented with a much different and a much more varied subject of focus than their larger siblings, Web search engines. The most obvious difference between the two is collection size, but there are others.

Digital library search engines must be able to appropriately deal with smaller collections. Whereas one of the features that makes Web search engines so appealing is the vast size of the index (thus ensuring, in most cases, that results will be found), digital libraries must return relevant results from a much smaller data set. This makes it all the more important to deal with “no results found” pages, since they are likely to occur more frequently. Additionally, the Web, by its very nature, is interrelated, and this allows for some fascinating conclusions to be drawn about the connections between seemingly unrelated documents (e.g., Google’s PageRank relies in large part on a page’s “popularity” by counting how many other pages link to it). In most digital libraries this is impossible because of the independent nature of the items in the collection. Finally, certain digital libraries—those that are of a single media and document type, say collections of journal articles—have certain benefits compared to their multimedia brethren and the Web at large. Because of their homogeneity, metadata about the items in the collection becomes more meaningful, and further inferences can be drawn that can influence the appropriateness of search results.

Defining the “Digital Library”
Before proceeding, I believe that the term “digital library” deserves some discussion; while it is safe to say that there is a common root that we can all agree on when invoking the term “digital library,” there is enough variation to warrant a specific definition for the purpose of this paper. Herein, a “library” is a collection of described objects that has been purposefully assembled from a larger, potentially unorganized heap.
A “digital library” only adds the distinction that its items be digital or digitized (but be aware that this carries heavy connotations, since digital objects are inherently free from the restraints physical objects have, i.e., they can only exist in one place at a time and hence are not easily duplicated, lent, or categorized). Note that there is no concept of a complete digital library (for in that case, it contains everything and ceases to be a library, for it is no longer representative of anything outside the library). By this definition, the “Web” is not a digital library (it has no inherent organizational scheme, nor is it a subset of a larger collection), but a Web search engine’s (e.g., Google’s) index of Web pages is (it is an organized subset of all potential Web pages). However, because of the massive size of Web search engines and the interrelated nature of the objects they index, it is helpful for the purposes of this paper to consider them in a different category than digital libraries that focus on a smaller and more distinct subset of digital objects.

Models of Search Behavior
In the same way that there are many types of digital libraries, it is important to note that all users of search engines do not behave in the same fashion. Since the “no results found” page relies heavily on the assumption that all searchers are adept at and comfortable with query refinement (e.g., see Velez, Weiss, Sheldon, & Gifford, 1997), it is important to examine if searchers really do fit this mold. The most basic division of searcher behaviors is that between expert and novice, but these are not the only categories. Research by Heinstrom (2003) develops three categories of searchers: Deep Divers (those who follow one topic and are not easily distracted), Broad Scanners (those who look at a variety of resources, but not in too much depth), and Fast Surfers (those who avoid lengthy paragraphs and glean information through rapidity). In these categories, both Broad Scanners and Fast Surfers are unlikely to do much query refinement (i.e., to change their original search query in small increments to either narrow their search results or broaden them).

The other main difference in searcher behavior lies in the intention of the search process. When searchers are searching, they are either actively seeking information (i.e., information seeking; see Marchionini, 1997) or casually exploring (i.e., information encountering; see Erdelez, 2000; 1997). Those that are performing the directed act of information seeking will exhibit different tendencies than those who are merely encountering (e.g., those exploring will be more likely to chase down unlikely sources of information, with the potential for opening up new realms). Similarly, it is expected that those exploring will be less likely to “try again” after receiving no results from a search query, since they are not particularly invested in that particular library and that particular query.

The overriding implication here is that not only do searcher behaviors impact the way searches are performed, but they also affect the way searchers deal with empty result sets.
The conclusion must be that a single-purpose, catch-all page that states that no results were found is improper for many styles of search behavior.

Suggestions in Search Results
A common cause of empty result sets is when the user uses a term that the digital library does not recognize—in this case, “no results found” does not necessarily mean that the library does not contain what is being sought, but rather that it does not understand the query. Examples of this type of misunderstanding include misspellings or use of jargon, abbreviations, or acronyms. Dalianis (2002) discusses the implementation of spelling support (while most Web search engines now include spelling support, most digital libraries are behind the curve and do not). The current method of implementing dictionary support is to place a “did you mean to type X” prominently on the (empty) results page, allowing the user to simply click the correct spelling to get the appropriate results. An alternative is to detect common misspellings and automatically search using the correct spelling, especially in the cases where the original, supposedly misspelled query returned no results (e.g., a user searching for “kats” would be able to see results for “cats”).

Similarly, the implementation of a thesaurus (whether field-specific or general purpose) can help alleviate some inappropriate “no results found” pages (see, e.g., Medelyan & Witten, 2006; Song, Song, Allen, & Obradovic, 2006; Weaver, Strickland, & Crane, 2006). Thesaurus-based searching is one way to see relationships between supposedly distinct objects in a digital library, and can help users make connections that they may not have known about. This can be implemented in much the same way as the spelling correction, with a list of synonyms displayed prominently on the search results page, or by automatically including synonyms in the search results (e.g., a user searching for “cats” could also see results with “kitten” or “feline”).

Another common problem with search results, especially in digital libraries that focus on specialized fields with a hefty amount of jargon, is the recognition of acronyms and abbreviations. For example, most academic journals and conferences have both a full name and a shorter acronym (e.g., JCDL for the Joint Conference on Digital Libraries). When searching a digital library of articles published in journals and conference proceedings, will the same results be returned for both the acronym and the full name? Ideally this should be the case, but it often is not. Many researchers have examined the potential for automatically recognizing acronyms (which would allow us to avoid giving two labels to metadata fields, or to maintain a dictionary that relates acronyms and abbreviations to their full text equivalents), with promising—yet still not widely implemented—results (see, e.g., Dominich, Goth, & Skrop, 2003; Larkey, Ovilvie, Price, & Tamilio, 2000; Yeates, 1999; Taghva & Gilbreth, 1999; Federiuk, 1999).

Finally, a search engine can perform automatic permutations on a search phrase that did not return any results (in other words, the searcher can be offered a list of potential results if certain search terms were removed from the query). At the time of this writing I only know of one “library” that uses this technique, with a fair amount of success: eBay. It is easily applicable to digital library searches, and can help users recognize when the “no results found” page is actually saying, “I understand most of what you typed, but not all of it.”
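To make these suggestion strategies concrete, here is a minimal sketch against a tiny hypothetical in-memory index. It combines a difflib-based “did you mean” step with eBay-style permutations that drop one query term at a time; the documents and vocabulary are invented for the example.

```python
import difflib
from itertools import combinations

# Hypothetical three-document index; a real digital library would use
# its own inverted index and vocabulary.
DOCS = {
    1: "feral cats in urban environments",
    2: "feline behavior and domestic cats",
    3: "training dogs for search and rescue",
}
VOCAB = {w for text in DOCS.values() for w in text.split()}

def search(terms):
    """Return ids of documents containing every term (boolean AND)."""
    return [d for d, text in DOCS.items() if all(t in text.split() for t in terms)]

def suggest(query):
    terms = query.lower().split()
    hits = search(terms)
    if hits:
        return hits, []
    suggestions = []
    # 1. Spelling support: nearest vocabulary word for each unknown term.
    for t in terms:
        if t not in VOCAB:
            for alt in difflib.get_close_matches(t, list(VOCAB), n=1, cutoff=0.7):
                suggestions.append(f'Did you mean "{alt}" instead of "{t}"?')
    # 2. Permutations: drop one term at a time and re-run the search.
    if len(terms) > 1:
        for subset in combinations(terms, len(terms) - 1):
            if search(list(subset)):
                dropped = (set(terms) - set(subset)).pop()
                suggestions.append(f'Try searching without "{dropped}"')
    return hits, suggestions

print(suggest("kats behavior"))
# -> ([], ['Did you mean "cats" instead of "kats"?', 'Try searching without "kats"'])
```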
Social Aspects in Search Results
The final category of research that is applicable to digital library searching that we will examine comes from social computing. With the recent explosion of Web content that relies on social networks (e.g., the popular social bookmarking, media-sharing, and networking sites of the day) and their success in showing how social relationships can be a strong substratum for information relationships, it seems inevitable that digital libraries will embrace social computing styles.

With that in mind, a wave of recent research has indeed focused on social computing in the realm of digital libraries: collaborative filtering (Huang, Li, & Chen, 2005); sharing information encounters (Marshall & Bly, 2004); and peer verification (Schmalbeck, Stuart-Moore, & Evans, 2006), to name a few. It is a short leap to see the potential that social computing methods can have on empty result pages—if other people searched for the same thing or something similar, and had the same difficulties in finding the documents they were looking for, why not learn from that experience and apply it for future users?

We may also ask: why not allow the community to add their own metadata to objects in the digital library? What is lost in consistency and hierarchy can be made up in robustness and variety (not to mention that the original, consistent, and hierarchical metadata can be left untouched). Similarly, there is the potential for involving the community in the ranking of the objects in digital libraries (e.g., this was helpful, that was not).

The only convincing negative answer to these questions revolves around traditional notions of privacy and confidentiality in the library sphere, especially with acts of personalization or peer recommendation. Both of these require some amount of personal information storage by the digital library, and personal information sharing between patrons of the library, and this can be troublesome. However, it is important to note that not all social actions demand privacy in the library sphere—after all, physical libraries require us to attend them in person, which is in some sense an invasion of privacy. It seems that the safest methods include explicit contributions by users to the library—tagging, ranking, and sharing—rather than implicit data mining. A traditional library is in many ways a community center, and we as researchers should not be afraid of making their digital counterparts behave in the same way.

Proposed Study
Previous studies that have examined empty result sets (see DeFelice, Kastens, Rinaldo, & Weatherley, 2006; Kan & Poo, 2005; Zhuang, Wagle, & Giles, 2005) have been more focused on the collections themselves rather than the users encountering the empty result sets. In order to probe the effects that null result sets have on digital library users, I propose a study that addresses two broad questions. First, what are the affective implications of encountering a null result set? Measures of affect are an appropriate indicator of many aspects of the user experience, and bear a relationship to many expectations for a digital library: namely, retention of users, ease-of-use, minimizing extraneous cognitive load, satisfaction, and sense of accomplishment (Dillon, 2001).
By understanding the affective impact that digital library search tools have on end users, both individually and as groups, we can shed some light on user expectations of digital library interfaces as well as improve their efficacy for everyday use.

Second, what impact does the digital library interface have on the interpretation of its contents? An examination of the effect of a digital library’s interface on the perception of its contents may have strong repercussions for digital library designers and administrators. The general assumption is that the content and the interface to the content are separate entities. For example, a research paper can exist in two different digital libraries; one of these libraries can have a highly usable interface, while the other is a disaster of design. However, users who end up finding the research paper in question will treat it the same way, regardless of which library they found it in. In other words, the content in the research paper (or book, video, or any other item in a collection) is self-contained, and not in any way affected by the way in which it arrived in the hands of a reader (in this case, the digital library and its interface). This study seeks to test this notion, with the belief that this is not the case; I believe that a poorly developed user interface, especially in the case of a digital library (where there is a large cache of information completely mediated by a forced interface that separates the user from his or her goal), will have a strong effect on the way a user sees the content within the collection.

The purpose of this study is twofold. First, by further understanding the affective response to elements of digital library interfaces (i.e., search tools), we can expect future digital libraries to be more in line with the emotional responses of their users. Second, by demonstrating a link (or lack thereof) between the digital library interface and the contents within the library, future efforts into digital library design and development will be more grounded by their actual effect on users; in other words, if highly usable digital library interfaces merely offer aesthetic beauty to users, but no real productivity benefits, their importance can be relegated appropriately. On the other hand, if the interfaces do indeed have a strong effect on productivity, and (even more importantly) on the interpretation and respect of the actual items in the collection, then it is further proof of the vast importance of appropriate and beneficial design for the end users of digital libraries.

Method
In this study, a mock digital library will be created as a standardized test bed for research. Efforts will be made to make the digital library both simplistic and clean (to minimize extraneous variables that may affect participants, and to emphasize the benefits of a clean interface) and believable (so participants will be in the mindset that they are using a real digital library).

Participants will interact with the mock digital library via a simple search tool; they will be under the impression that they are evaluating a new digital library and its overall design and responsiveness. They will be given a topic to search for (this topic will be chosen to be academic in nature, but not a topic that is intimately familiar to potential participants) and several comprehension questions to answer regarding the topic.
The mock digital library will contain a small set of documents (30 or fewer) pertaining to the topic at hand and another small set of unrelated documents (30 or fewer); their presentation on the search results screen will be the main manipulation in this study.

Participants will be divided into three groups: the first (control) group will get appropriate results returned to them on the search results pages, regardless of the terms they use in their search query; the second (experimental #1) group will always get an empty result set back in response to their first search query; further refinement of that search query will deliver the appropriate results; finally, the third (experimental #2) group will encounter inappropriate and unrelated results in response to their first search query; further refinement of that query will then deliver the appropriate results. We expect to have at least 50 participants in each group, drawn from a university student population of undergraduates with a small amount of research experience.

The following data will be collected before the experiment is performed: a set of demographic questions (age, gender, college major); a brief affective mood inventory, such as the SAM (Dormann, 2003; Morris, 1995a; 1995b; Lang, 1985); and a survey of the participant’s familiarity with technology and research (experience with computers, experience with research, knowledge of the term “digital library” and a usable definition, previous use of libraries and library materials, both digital and physical).

During the experiment, the following data will be collected: overall time-on-task; search queries typed into the search field; number of mouse clicks; and number of back button presses.

After the experiment, the following data will be collected: a set of questions confirming comprehension of the topic searched for (these questions will be drawn from the appropriate items in the collection to ensure that they were examined); an overall impression of the digital library and the search tool, similar to Toms, Dufour, & Hesemeier (2004); a repeat of the affective measure used to determine mood (e.g., the SAM); and finally, an overall rating of the perceived authoritativeness of the contents in the digital library.

This last question that deals with authoritativeness is the central question of the study; I hope to find a significant difference between the groups (using the analysis of variance statistical test, F). Measures of authority will be similar to Chesney’s (2006) examination of Wikipedia’s credibility by novices and experts.
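As a sketch of how that comparison might be run, the snippet below applies a one-way ANOVA to hypothetical 7-point authoritativeness ratings for the three groups; the numbers are invented for illustration, and SciPy is assumed to be available.

```python
# One-way ANOVA over perceived-authoritativeness ratings for the three
# groups (control, null-result-first, unrelated-results-first).
from scipy.stats import f_oneway

control         = [6, 5, 6, 7, 5, 6, 6, 5]   # hypothetical ratings
null_first      = [4, 3, 5, 4, 3, 4, 5, 4]
unrelated_first = [4, 4, 5, 3, 4, 5, 4, 3]

f_stat, p_value = f_oneway(control, null_first, unrelated_first)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p (e.g., < .05) would indicate that mean ratings differ
# across at least two of the groups, consistent with H2 below.
```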
Expected Findings
I expect the following hypotheses to be supported:
H1: Participants who encounter a null result set after their first search query will take more time to complete their task;
H2: Participants who encounter a null result set after their first search query will rank the results as less authoritative;
H3: Participants who encounter a null result set after their first search query will have a lower opinion of the search tool;
H4: Participants who encounter a null result set after their first search query will exhibit more negative affect (e.g., frustration, anger, distress);
H5: Novice users are more susceptible to H1 – H4 than expert users (Chesney, 2006).
Participants who encounter inappropriate and unrelated results after their first search query are expected to react similarly.

This study is intended to elucidate the dangers of “no results found” responses (especially when they are misleading) by showing their actual effect on potential digital library users. Further, if participants do indeed see the results following a “no results found” page as less authoritative, then this implies that the contents of a digital library are being evaluated not on their own merit, but by the interface’s effect on them. It is also important to note that if users have a lower opinion of a digital library (because of a “no results found” page, or any other reason), they are likely to turn elsewhere for their information needs (in other words, the assumption of the captive user is not necessarily true).

Conclusion
Since the focus here has been so strongly on empty search result pages, a topic which gets little attention both in the design of digital libraries and in the research literature, I thought I would close with a brief discussion of their ephemeral existence. “No results found” pages often disappear during the design and testing phase for a digital library; evidence is readily available by examining any number of null result sets in digital library search tools. Some are grammatically incorrect; others look thrown together; all exhibit terseness unbecoming of a helpful library assistant. Why is this? I would speculate that it is because “no results found” pages are not seen as actual pages or destinations in the digital library; they are merely fleeting error messages that have little impact other than prodding the user to “try again.” I believe there is also the assumption that they occur so infrequently (especially in formulaic testing phases) that it would be a waste of time to make them better or more infrequent. After all, in a world where information overload is so apparent (Levy, 2005) that it has become cliché, why focus on the opposite? But because of their deceptive nature (i.e., no results “found” does not necessarily mean no results “exist”), I believe they deserve our attention until they no longer mislead digital library users.

Appendix A: Sample Phrases from No Results Found Pages
(Note the difference in punctuation, capitalization, and phrase vs. sentence structure.)
• “No results found”
• “No results returned for your criteria.”
• “No results were returned.”
• “Nothing Found”
• “Sorry, no documents were found matching search terms.”
• “There are 0 results”
• “No Results Found.”
• “0 articles with title/keywords/abstract containing *”
• “Your search matched 0 documents.”
• “There are no products that match your search”
• “No videos were found to match your query.”
• “No results were found.”
• “Sorry, your request returned no records.”
• “Results: Not Found”
• “No documents were found for your search.”
• “No Results matching your search term(s) were found.”

References
Anick, P. G., & Tipirneni, S. (1999). The paraphrase search assistant: Terminological feedback for iterative information seeking. Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 153-159.
Bertini, E., Catarci, T., Di Bello, L., & Kimani, S. (2005). Visualization in digital libraries. In Hemmje, Niederee, & Risse (Eds.), Integrated Publication and Information Systems to Information and Knowledge Environments (pp. 183-196). Berlin, Germany: Springer.
Bertuca, D. J. (2001). Letting go of the mouse: Using alternative computer input devices to improve productivity and reduce injury. OCLC Systems & Services, 17(2), 79-83.
Chesney, T. (2006). An empirical examination of Wikipedia’s credibility. First Monday, 11(11).
Coors, V., & Jung, V. (1998). Using VRML as an interface to the 3D data warehouse. Proceedings of the Third Symposium on Virtual Reality Modeling Language.
Cruz, I. F., & Lucas, W. T. (1997). A visual approach to multimedia querying and presentation. Proceedings of the Fifth ACM International Conference on Multimedia, 109-120.
Dalianis, H. (2002). Evaluating a spelling support in a search engine. Proceedings of NLDB-2002, 7th International Workshop on the Applications of Natural Language to Information Systems.
DeFelice, B., Kastens, K. A., Rinaldo, C., & Weatherley, J. (2006). Insights into collections gaps through examination of null result searches in DLESE. Proceedings of the 6th ACM/IEEE-CS Joint Conference on Digital Libraries. doi:10.1145/1141753.1141823