山东建筑大学计算机学院算法分析算法复习题(Yuconan翻译)上课讲义
《计算机算法设计和分析》习题及答案解析

《计算机算法设计与分析》习题及答案

一、选择题
1、二分搜索算法是利用( A )实现的算法。A、分治策略 B、动态规划法 C、贪心法 D、回溯法
2、下列不是动态规划算法基本步骤的是( A )。A、找出最优解的性质 B、构造最优解 C、算出最优解 D、定义最优解
3、最大效益优先是( A )的一种搜索方式。A、分支界限法 B、动态规划法 C、贪心法 D、回溯法
4、回溯法解旅行售货员问题时的解空间树是( B )。A、子集树 B、排列树 C、深度优先生成树 D、广度优先生成树
5、下列算法中通常以自底向上的方式求解最优解的是( B )。A、备忘录法 B、动态规划法 C、贪心法 D、回溯法
6、衡量一个算法好坏的标准是( C )。A 运行速度快 B 占用空间少 C 时间复杂度低 D 代码短
7、以下不可以使用分治法求解的是( D )。A 棋盘覆盖问题 B 选择问题 C 归并排序 D 0/1背包问题
8、实现循环赛日程表利用的算法是( A )。A、分治策略 B、动态规划法 C、贪心法 D、回溯法
9、下面不是分支界限法搜索方式的是( D )。A、广度优先 B、最小耗费优先 C、最大效益优先 D、深度优先
10、下列算法中通常以深度优先方式系统搜索问题解的是( D )。A、备忘录法 B、动态规划法 C、贪心法 D、回溯法
11、备忘录方法是哪种算法的变形?( B )A、分治法 B、动态规划法 C、贪心法 D、回溯法
12、哈夫曼编码的贪心算法所需的计算时间为( B )。A、O(n2^n) B、O(nlogn) C、O(2^n) D、O(n)
13、分支限界法解最大团问题时,活结点表的组织形式是( B )。A、最小堆 B、最大堆 C、栈 D、数组
14、最长公共子序列算法利用的算法是( B )。A、分支界限法 B、动态规划法 C、贪心法 D、回溯法
15、实现棋盘覆盖算法利用的算法是( A )。A、分治法 B、动态规划法 C、贪心法 D、回溯法
16、下面是贪心算法的基本要素的是( C )。A、重叠子问题 B、构造最优解 C、贪心选择性质 D、定义最优解
17、回溯法的效率不依赖于下列哪些因素?( D )A.满足显约束的值的个数 B.计算约束函数的时间 C.计算限界函数的时间 D.确定解空间的时间
18、下面哪种函数是回溯法中为避免无效搜索采取的策略?( B )A.递归函数 B.剪枝函数 C.
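第1题将二分搜索(binary search)归为分治策略。A minimal iterative sketch in Python; the function name and sample values are illustrative, not from the exam:

```python
def binary_search(a, x):
    """Return an index of x in sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2        # split the search range in half
        if a[mid] == x:
            return mid
        elif a[mid] < x:
            lo = mid + 1            # discard the left half
        else:
            hi = mid - 1            # discard the right half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # → 3
```

Each iteration halves the range, giving the O(log n) bound the divide-and-conquer classification refers to.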
2022年山东建筑大学计算机科学与技术专业《数据结构与算法》科目期末试卷A(有答案)

一、选择题
1、无向图G=(V,E),其中V={a,b,c,d,e,f},E={(a,b),(a,e),(a,c),(b,e),(c,f),(f,d),(e,d)},对该图进行深度优先遍历,得到的顶点序列正确的是()。A.a,b,e,c,d,f B.a,c,f,e,b,d C.a,e,b,c,f,d D.a,e,d,f,c,b
2、哈希文件使用哈希函数将记录的关键字值计算转化为记录的存放地址。因为哈希函数不是一对一的关系,所以选择好的()方法是哈希文件的关键。A.哈希函数 B.除余法中的质数 C.冲突处理 D.哈希函数和冲突处理
3、静态链表中指针表示的是()。A.下一元素的地址 B.内存储器的地址 C.下一元素在数组中的位置 D.左链或右链指向的元素的地址
4、用不带头结点的单链表存储队列,其队头指针指向队头结点,队尾指针指向队尾结点,则在进行出队操作时()。A.仅修改队头指针 B.仅修改队尾指针 C.队头、队尾指针都可能要修改 D.队头、队尾指针都要修改
5、已知有向图G=(V,E),其中V={V1,V2,V3,V4,V5,V6,V7},E={<V1,V2>,<V1,V3>,<V1,V4>,<V2,V5>,<V3,V5>,<V3,V6>,<V4,V6>,<V5,V7>,<V6,V7>},G的拓扑序列是()。A.V1,V3,V4,V6,V2,V5,V7 B.V1,V3,V2,V6,V4,V5,V7 C.V1,V3,V5,V2,V6,V7 D.V1,V2,V5,V3,V4,V6,V7
6、已知字符串S为"abaabaabacacaabaabcc",模式串t为"abaabc",采用KMP算法进行匹配,第一次出现"失配"(s_i≠t_j)时,i=j=5,则下次开始匹配时,i和j的值分别是()。
山东建筑大学计算机学院算法分析算法复习题(Yuconan翻译)

1. The O-notation provides an asymptotic upper bound. The Ω-notation provides an asymptotic lower bound. The Θ-notation asymptotically bounds a function from above and below.
2. To represent a heap as an array, the root of the tree is A[1], and given the index i of a node, the indices of its parent, left child, and right child are Parent(i) { return ⎣i/2⎦; }, Left(i) { return 2*i; }, Right(i) { return 2*i + 1; }. 用数组表示堆时,树的根节点是A[1];对于下标为i的节点,其父节点下标为⎣i/2⎦,左孩子下标为2i,右孩子下标为2i+1。
3. Because the heap of n elements is a binary tree, the height of any node is at most Θ(lg n). 因为n个元素的堆是一棵二叉树,任意节点的高度最多是Θ(lg n)。
4. In optimization problems, there can be many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value. We call such a solution an optimal solution to the problem. 在最优化问题中,可能有很多个解,每个解都有一个值,我们希望找到值最优(最大或最小)的解,并称这样的解为该问题的一个最优解。
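The index arithmetic in item 2 can be sketched directly; a 1-indexed toy illustration in Python (slot 0 of the array is unused so the indices match the text):

```python
def parent(i):
    return i // 2          # ⎣i/2⎦

def left(i):
    return 2 * i

def right(i):
    return 2 * i + 1

# Heap stored from index 1: A[1] is the root.
A = [None, 16, 14, 10, 8, 7, 9, 3]
print(A[parent(4)], A[left(2)], A[right(2)])  # → 14 8 7
```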
山东建筑大学计算机学院算法分析第三部分复习

第三部分复习与作业(动态规划和贪心算法)

1. Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to subproblems. Divide-and-conquer algorithms partition the problem into independent subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem. In contrast, dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share subsubproblems.
2. In optimization problems, there can be many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value. We call such a solution an optimal solution to the problem.
3. A problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems.
4. In dynamic programming, we build an optimal solution to the problem from optimal solutions to subproblems.
5. When a recursive algorithm revisits the same problem over and over again, we say that the optimization problem has overlapping subproblems.
6. Z is a subsequence of X if there exists a strictly increasing sequence <i1, i2, ..., ik> of indices of X such that for all j = 1, 2, ..., k, we have x_{ij} = z_j.
7. Let X = <x1, x2, ..., xm> and Y = <y1, y2, ..., yn> be sequences, and let Z = <z1, z2, ..., zk> be any LCS of X and Y.
(1) If xm = yn, then zk = xm = yn and Z_{k-1} is an LCS of X_{m-1} and Y_{n-1}.
(2) If xm ≠ yn, then zk ≠ xm implies that Z is an LCS of X_{m-1} and Y.
(3) If xm ≠ yn, then zk ≠ yn implies that Z is an LCS of X and Y_{n-1}.
8. A greedy algorithm always makes the choice that looks best at the moment. That is, it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution.
9. The greedy-choice property and optimal substructure are the two key ingredients of a greedy algorithm.
10. Greedy-choice property: a globally optimal solution can be arrived at by making a locally optimal (greedy) choice.

1. The Motors Corporation produces automobiles in a factory that has two assembly lines. An automobile chassis enters each assembly line, has parts added to it at a number of stations, and a finished auto exits at the end of the line. Each assembly line has n stations, numbered j = 1, 2, ..., n. We denote the j-th station on line i (where i is 1 or 2) by S_{i,j}. The j-th station on line 1 (S_{1,j}) performs the same function as the j-th station on line 2 (S_{2,j}). The stations were built at different times and with different technologies, however, so that the time required at each station varies, even between stations at the same position on the two different lines. We denote the assembly time required at station S_{i,j} by a_{i,j}. A chassis enters station 1 of one of the assembly lines and progresses from each station to the next. There is also an entry time e_i for the chassis to enter assembly line i and an exit time x_i for the completed auto to exit assembly line i. The time to transfer a chassis away from assembly line i after having gone through station S_{i,j} is t_{i,j}, where i = 1, 2 and j = 1, 2, ..., n - 1. The manufacturing problem is to find the fastest way through the factory. Let f_i[j] denote the fastest possible time to get a chassis from the starting point through station S_{i,j}. Our ultimate goal is to determine the fastest time to get a chassis all the way through the factory, which we denote by f*. Please write its recursive formula, compute the fastest time, and construct the fastest way through the factory for the instance.
2. The matrix-chain multiplication problem can be stated as follows: given a chain <A1, A2, …, An> of matrices, where for i = 1, 2, …, n, matrix Ai has dimension P_{i-1} × P_i, fully parenthesize the product A1 A2 … An in a way that minimizes the number of scalar multiplications. We pick as our subproblems the problems of determining the minimum cost of a parenthesization of Ai Ai+1 … Aj for 1 ≤ i ≤ j ≤ n. Let m[i, j] be the minimum number of scalar multiplications needed to compute the matrix A_{i..j}; for the full problem, the cost of a cheapest way to compute A_{1..n} would thus be m[1, n]. Can you define m[i, j] recursively? Find an optimal parenthesization of a matrix-chain product whose sequence of dimensions is <2,1,3,2,4>.
3. In the longest-common-subsequence (LCS) problem, we are given two sequences X = <x1, x2, ..., xm> and Y = <y1, y2, ..., yn> and wish to find a maximum-length common subsequence of X and Y. Please write its recursive formula and determine an LCS of <1,0,0,1,0,1,0,1> and <0,1,0,1,1,0,1,1,0>.
4. The 0–1 knapsack problem is posed as follows. A thief robbing a store finds n items; the i-th item is worth v_i dollars and weighs w_i pounds, where v_i and w_i are integers. He wants to take as valuable a load as possible, but he can carry at most W pounds in his knapsack for some integer W. Which items should he take? This is called the 0–1 knapsack problem because each item must either be taken or left behind; the thief cannot take a fractional amount of an item or take an item more than once. In the fractional knapsack problem, the setup is the same, but the thief can take fractions of items rather than having to make a binary (0–1) choice for each item. Can the above two problems be solved by a greedy strategy? Why? (P382)
5. What is an optimal Huffman code for the following set of frequencies, based on the 6 numbers? a:45 b:13 c:12 d:16 e:9 f:5
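Problem 5 asks for an optimal Huffman code for <a:45, b:13, c:12, d:16, e:9, f:5>. The greedy construction (repeatedly merge the two lowest-weight trees) can be sketched with Python's `heapq`; exact bit patterns depend on tie-breaking, but the total encoded cost is determined:

```python
import heapq

def huffman(freqs):
    """Greedy Huffman: return {symbol: codeword}."""
    # Entries are (weight, tie-breaker, {symbol: partial code}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)                 # tie-breaker so dicts are never compared
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)   # lightest tree gets prefix bit 0
        w2, _, c2 = heapq.heappop(heap)   # next lightest gets prefix bit 1
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

freqs = {"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}
codes = huffman(freqs)
print(sum(w * len(codes[s]) for s, w in freqs.items()))  # → 224 (optimal cost)
```

The code lengths come out as a:1 and b, c, d:3 with e, f:4, matching the classic tree where a hangs directly off the root.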
山东建筑大学计算机学院算法分析第二部分复习

True-false questions
1. To represent a heap as an array, the root of the tree is A[1], and given the index i of a node, the indices of its parent, left child, and right child are Parent(i) { return ⎣i/2⎦; }, Left(i) { return 2*i; }, Right(i) { return 2*i + 1; }.
2. Min-heaps satisfy the heap property: A[Parent(i)] ≥ A[i] for all nodes i > 1.
3. Because the heap of n elements is a binary tree, the height of any node is at most Θ(lg n).
4. For an array of length n, all elements in the range A[⎣n/2⎦ + 1 .. n] are (one-element) heaps.
5. The running time of building a heap is O(n lg n).
6. The tighter bound on the running time to build a max-heap from an unordered array is linear.
7. The call to BuildHeap() takes O(n) time, and each of the n - 1 calls to Heapify() takes O(lg n) time; thus the total time taken by HeapSort() = O(n) + (n - 1)·O(lg n) = O(n) + O(n lg n) = O(n lg n).
8. A priority queue is a data structure for maintaining a set S of elements, each with an associated value or key.
9. The running time of Quick Sort is O(n lg n) in the average case, and O(n^2) in the worst case.
10. Quick Sort is a divide-and-conquer algorithm. The array A[p..r] is partitioned into two non-empty subarrays A[p..q] and A[q+1..r] such that all elements in A[p..q] are less than all elements in A[q+1..r]; the subarrays are recursively sorted by calls to quicksort.
11. Quick sort, unlike merge sort, has no combining step: the two sorted subarrays already form a sorted array.
12. A decision tree represents the comparisons made by a comparison sort.
13. The asymptotic height of any decision tree for sorting n elements is Ω(n lg n).
14. The running time of Counting sort is O(n + k). But the running time of sorting is Ω(n lg n). So this is a contradiction.
15. The Counting sort is stable.
16. The radix sort can be used on card sorting.
17. In radix sort, sort elements by digit starting with the least significant, using a stable sort (like counting sort) for each stage.
18. In the selection problem (finding the i-th smallest element of a set), there is a practical randomized algorithm with O(n) expected running time.
19. In the selection problem, there is an algorithm of theoretical interest only with O(n) worst-case running time.

1. Write the running time of the Heapify procedure with recurrences. Solve the recurrences with the master method.

Heapify(A, i)
{
    l = Left(i); r = Right(i);
    if (l <= heap_size(A) && A[l] > A[i])
        largest = l;
    else
        largest = i;
    if (r <= heap_size(A) && A[r] > A[largest])
        largest = r;
    if (largest != i) {
        Swap(A, i, largest);
        Heapify(A, largest);
    }
}

Fixing up the relationships between i, l, and r takes Θ(1) time. If the heap rooted at i has n elements, the subtrees at l or r can have at most 2n/3 elements (worst case: bottom row half full). So the time taken by Heapify() satisfies T(n) ≤ T(2n/3) + Θ(1). By case 2 of the master theorem, T(n) = O(lg n).

2. Proof: with the array representation for storing an n-element heap, the leaves are the nodes indexed by ⎣n/2⎦+1, ⎣n/2⎦+2, …, n. (10 points)
A leaf in a heap is a node that has no left child, so for the first leaf we need 2i > n. That is, the leaves are the nodes indexed by i = ⎣n/2⎦+1, ⎣n/2⎦+2, …, n.

3. Heapsort(A)
{
    BuildHeap(A);
    for (i = length(A) downto 2) {
        Swap(A[1], A[i]);
        heap_size(A) -= 1;
        Heapify(A, 1);
    }
}
We know that the call to BuildHeap() takes O(n) time, and each of the n - 1 calls to Heapify() takes O(lg n) time. Thus the total time taken by HeapSort() = O(n) + (n - 1)·O(lg n) = O(n) + O(n lg n) = O(n lg n).

CountingSort(A, B, k)
    for i = 1 to k
        C[i] = 0;
    for j = 1 to n
        C[A[j]] += 1;
    for i = 2 to k
        C[i] = C[i] + C[i-1];
    for j = n downto 1
        B[C[A[j]]] = A[j];
        C[A[j]] -= 1;
Analyze its running time.

Selection problem:
RandomizedSelect(A, p, r, i)
    if (p == r) then return A[p];
    q = RandomizedPartition(A, p, r);
    k = q - p + 1;
    if (i == k) then return A[q];
    if (i < k)
        then return RandomizedSelect(A, p, q-1, i);
        else return RandomizedSelect(A, q+1, r, i-k);

Write an algorithm for the selection problem whose worst-case running time is linear:
1. Divide the n elements into groups of 5.
2. Find the median of each group.
3. Use Select() recursively to find the median x of the ⎣n/5⎦ medians.
4. Partition the n elements around x. Let k = rank(x).
5. if (i == k) then return x;
   if (i < k) then use Select() recursively to find the i-th smallest element in the first partition;
   else use Select() recursively to find the (i-k)-th smallest element in the last partition.
T(n) = T(n/5) + T(7n/10) + Θ(n).
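The CountingSort pseudocode above uses 1-based arrays; a runnable 0-based Python version of the same stable O(n + k) procedure, for keys in 1..k:

```python
def counting_sort(a, k):
    """Stable counting sort for integers in range 1..k; O(n + k) time."""
    count = [0] * (k + 1)
    for x in a:                    # histogram of key values
        count[x] += 1
    for i in range(2, k + 1):      # prefix sums: count[x] = # of keys <= x
        count[i] += count[i - 1]
    out = [0] * len(a)
    for x in reversed(a):          # reverse scan keeps equal keys in order
        out[count[x] - 1] = x
        count[x] -= 1
    return out

print(counting_sort([3, 1, 2, 3, 1], 3))  # → [1, 1, 2, 3, 3]
```

The reverse final scan is what makes the sort stable, which is exactly the property radix sort relies on (true/false items 15 and 17).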
算法设计与分析考试题及答案

____________________________________。
4、若序列X={B,C,A,D,B,C,D},Y={A,C,B,A,B,D,C,D},请给出序列X和Y的一个最长公共子序列________。
5、用回溯法解问题时,应明确定义问题的解空间,问题的解空间至________和________之分。
5、f(n) = 6×2^n + n^2,f(n)的渐进性态f(n) = O(________)。
6、贪心算法总是做出在当前看来________的选择。也就是说贪心算法并不从整体最优考虑,它所做出的选择只是在某种意义上的________。
7、许多可以用贪心算法求解的问题一般具有2个重要的性质:________性质和________性质。
二、简答题(本题 25 分,每小题 5 分)
五、算法理解题(本题5分)设有n=2^k个运动员要进行循环赛,现设计一个满足以下要求的比赛日程表:
①每个选手必须与其他 n-1 名选手比赛各一次; ②每个选手一天至多只能赛一次;
③循环赛要在最短时间内完成。
(1)如果n=2^k,循环赛最少需要进行几天;
(2)当n=2^3=8时,请画出循环赛日程表。
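上面的日程表可以用分治法构造:先排好一半选手的日程,再成块复制、平移得到另一半。A Python sketch (players numbered from 1; column 0 lists the players, columns 1..n-1 are the days):

```python
def schedule(k):
    """Round-robin table for n = 2**k players, built by divide and conquer.
    t[i][j] is the opponent of player i+1 on day j (columns 1..n-1)."""
    n = 2 ** k
    t = [[0] * n for _ in range(n)]
    for i in range(n):
        t[i][0] = i + 1                      # column 0: the player numbers
    half = 1
    while half < n:                          # double the solved block
        for i in range(half):
            for j in range(half):
                t[i + half][j] = t[i][j] + half          # bottom-left block
                t[i][j + half] = t[i + half][j]          # top-right = bottom-left
                t[i + half][j + half] = t[i][j]          # bottom-right = top-left
        half *= 2
    return t

for row in schedule(2):
    print(row)   # n = 4 needs n - 1 = 3 days
```

n = 2^k 名选手每人需与其余 n-1 人各赛一次、每天至多一场,故最少需要 n-1 天;上述构造恰好用 n-1 列完成。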
六、算法设计题(本题15分)分别用贪心算法、动态规划法、回溯法设计0-1背包问题。要求:说明所使用的算法策略;写出算法实现的主要步骤;分析算法的时间复杂度。
七、算法设计题(本题10分)通过键盘输入一个高精度的正整数n(n的有效位数≤240),去掉其中任意s个数字后,剩下的数字按原左右次序将组成一个新的正整数。编程对给定的n和s,寻找一种方案,使得剩下的数字组成的新数最小。
【样例输入】178543 S=4
【样例输出】13
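第七题的经典贪心解法:从左到右扫描,只要还有删除次数,就删掉任何比其后继大的数字。A Python sketch (function name is illustrative; the big integer is handled as a string, which also covers the 240-digit bound):

```python
def smallest_after_removals(n, s):
    """Greedy: delete s digits from the decimal string n to minimize the rest."""
    stack = []
    for d in n:
        # A digit larger than its successor can never help; drop it.
        while s and stack and stack[-1] > d:
            stack.pop()
            s -= 1
        stack.append(d)
    if s:                                  # digits were non-decreasing at the end
        stack = stack[:len(stack) - s]     # so drop from the tail
    return str(int("".join(stack) or "0"))  # int() strips leading zeros

print(smallest_after_removals("178543", 4))  # → 13 (matches the sample)
```

The monotonic-stack form makes each digit enter and leave at most once, so the whole scan is O(len(n)).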
算法设计与分析复习题目及答案

一、选择题
1、二分搜索算法是利用(A)实现的算法。
A、分治策略 B、动态规划法 C、贪心法 D、回溯法
2、下列不是动态规划算法基本步骤的是(B)。A、找出最优解的性质 B、构造最优解 C、算出最优解 D、定义最优解
3、最大效益优先是(A)的一种搜索方式。A、分支界限法 B、动态规划法 C、贪心法 D、回溯法
4、在下列算法中有时找不到问题解的是(B)。A、蒙特卡罗算法 B、拉斯维加斯算法 C、舍伍德算法 D、数值概率算法
5、回溯法解旅行售货员问题时的解空间树是(B)。A、子集树 B、排列树 C、深度优先生成树 D、广度优先生成树
6、下列算法常以自底向上的方式求解最优解的是(B)。A、备忘录法 B、动态规划法 C、贪心法 D、回溯法
7、衡量一个算法好坏的标准是( C )。A 运行速度快 B 占用空间少 C 时间复杂度低 D 代码短
8、以下不可以使用分治法求解的是( D )。A 棋盘覆盖问题 B 选择问题 C 归并排序 D 0/1背包问题
9、实现循环赛日程表利用的算法是(A)。
A、分治策略 B、动态规划法 C、贪心法 D、回溯法
10、下列随机算法中运行时有时候成功有时候失败的是( C )。A 数值概率算法 B 舍伍德算法 C 拉斯维加斯算法 D 蒙特卡罗算法
11、下面不是分支界限法搜索方式的是(D)。A、广度优先 B、最小耗费优先 C、最大效益优先 D、深度优先
12、下列算法常以深度优先方式系统搜索问题解的是(D)。A、备忘录法 B、动态规划法 C、贪心法 D、回溯法
13、备忘录方法是哪种算法的变形?( B )A、分治法 B、动态规划法 C、贪心法 D、回溯法
14、哈夫曼编码的贪心算法所需的计算时间为(B)。A、O(n2^n) B、O(nlogn) C、O(2^n) D、O(n)
15、分支限界法解最大团问题时,活结点表的组织形式是(B)。A、最小堆 B、最大堆 C、栈 D、数组
16、最长公共子序列算法利用的算法是(B)。A、分支界限法 B、动态规划法 C、贪心法 D、回溯法
17、实现棋盘覆盖算法利用的算法是(A)。A、分治法 B、动态规划法 C、贪心法 D、回溯法
算法分析与设计考试复习题及参考答案

D.桶排序
9、以下( A )不一定得到问题的最优解。A.贪心算法 B.回溯算法 C.分支限界法 D.动态规划法
10、以下( C )不包括在图灵机结构中。A.控制器 B.读写磁头 C.计算器 D.磁带
三、简答题(本题20分,每小题5分)
1、设有n=2^k个运动员要进行循环赛,现设计一个满足以下要求的比赛日程表:
时间复杂性指不同输入实例下算法所耗的时间。最坏情况下的时间复杂性取输入实例中最大的时间复杂度:W(n) = max{ T(n,I) },I∈Dn;平均时间复杂性是所有输入实例的处理时间与各自概率的乘积之和:A(n) = ∑P(I)T(n,I),I∈Dn。
6、简述二分检索(折半查找)算法的基本过程。
设输入是一个按非降次序排列的元素表A[i:j]和x,选取A[⎣(i+j)/2⎦]与x比较:如果A[⎣(i+j)/2⎦]=x,则返回⎣(i+j)/2⎦;如果A[⎣(i+j)/2⎦]>x,则在A[i:⎣(i+j)/2⎦-1]中查找x;否则在A[⎣(i+j)/2⎦+1:j]中查找x。上述过程被反复递归调用。
回溯法的搜索特点是什么?
7、背包问题的目标函数和贪心算法最优化量度相同吗?
relax(u, v, w)
    if d[v] > d[u] + w(u, v)
        then d[v] = d[u] + w(u, v)
             p[v] = u

dijkstra(G, w, s)
1. init-single-source(G, s)
2. S = Ø
3. Q = V[G]
4. while Q ≠ Ø
5.     do u = Extract-Min(Q)
6.        S = S ∪ {u}
7.        for each vertex v ∈ Adj[u]   // 所有u的邻接点v
8.           do relax(u, v, w)
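The pseudocode above repeatedly extracts the vertex with minimum d and relaxes its out-edges; a binary-heap Python sketch of the same idea (the adjacency-dict encoding and sample graph are illustrative):

```python
import heapq

def dijkstra(adj, s):
    """Single-source shortest paths, nonnegative weights.
    adj: {u: [(v, w), ...]}; returns {v: distance from s}."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)          # Extract-Min
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip
        for v, w in adj.get(u, []):
            nd = d + w                    # relax edge (u, v)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

adj = {0: [(1, 4), (2, 1)], 1: [(3, 5)], 2: [(1, 2)], 3: []}
print(dijkstra(adj, 0))  # → {0: 0, 2: 1, 1: 3, 3: 8}
```

Instead of a decrease-key operation, this version pushes duplicate entries and discards stale ones on pop, which keeps the heap code simple at O((V + E) log V).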
2、某工厂预计明年有N个新建项目,每个项目的投资额 w[k]及其投
山东建筑大学计算机学院算法分析算法复习题(Yuconan翻译)

1. The O-notation provides an asymptotic upper bound. The Ω-notation provides an asymptotic lower bound. The Θ-notation asymptotically bounds a function from above and below.
2. To represent a heap as an array, the root of the tree is A[1], and given the index i of a node, the indices of its parent, left child, and right child are Parent(i) { return ⎣i/2⎦; }, Left(i) { return 2*i; }, Right(i) { return 2*i + 1; }. 用数组表示堆时,树的根节点是A[1];对于下标为i的节点,其父节点下标为⎣i/2⎦,左孩子下标为2i,右孩子下标为2i+1。
3. Because the heap of n elements is a binary tree, the height of any node is at most Θ(lg n). 因为n个元素的堆是一棵二叉树,任意节点的高度最多是Θ(lg n)。
4. In optimization problems, there can be many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value. We call such a solution an optimal solution to the problem. 在最优化问题中,可能有很多个解,每个解都有一个值,我们希望找到值最优(最大或最小)的解,并称这样的解为该问题的一个最优解。
5. Optimal substructure: an optimal solution to the problem contains within it optimal solutions to subproblems. 一个问题具有最优子结构性质,是指该问题的最优解包含其子问题的最优解。
6. Z is a subsequence of X if there exists a strictly increasing sequence <i1, i2, ..., ik> of indices of X such that for all j = 1, 2, ..., k, we have x_{ij} = z_j. Let X = <x1, x2, ..., xm> and Y = <y1, y2, ..., yn> be sequences, and let Z = <z1, z2, ..., zk> be any LCS of X and Y.
(1) If xm = yn, then zk = xm = yn and Z_{k-1} is an LCS of X_{m-1} and Y_{n-1}.
(2) If xm ≠ yn, then zk ≠ xm implies that Z is an LCS of X_{m-1} and Y.
(3) If xm ≠ yn, then zk ≠ yn implies that Z is an LCS of X and Y_{n-1}.
7. A greedy algorithm always makes the choice that looks best at the moment. That is, it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution. 贪心算法总是做出在当前看来最好的选择。也就是说,它做出局部最优选择,希望这个选择能导致全局最优解。
8.The greedy-choice property and optimal sub-structure are the two key ingredientsof greedy algorithm.贪心选择和最优子结构是贪心算法的两个重要组成部分。
9. When a recursive algorithm revisits the same problem over and over again, we say that the optimization problem has overlapping subproblems. 当一个递归算法反复访问同一个子问题时,我们说该最优化问题具有重叠子问题。
10. Greedy-choice property: a globally optimal solution can be arrived at by making a locally optimal (greedy) choice. 贪心选择性质是指:可以通过做出局部最优(贪心)选择来得到全局最优解。
11. An approach of matrix multiplication can develop a Θ(V^4)-time algorithm for the all-pairs shortest-paths problem, and then improve its running time to Θ(V^3 lg V). 用矩阵乘法的方法可以得到一个求所有顶点对最短路径问题的Θ(V^4)时间算法,改进后的时间复杂度是Θ(V^3 lg V)。
12. The Floyd-Warshall algorithm runs in Θ(V^3) time to solve the all-pairs shortest-paths problem. Floyd-Warshall算法可在Θ(V^3)时间内解决所有顶点对最短路径问题。
13. The running time of Quick Sort is O(n^2) in the worst case, and O(n lg n) in the average case. 快速排序的平均时间复杂度是O(n lg n),最坏时间复杂度是O(n^2)。
14. The MERGE(A, p, q, r) procedure in merge sort takes time Θ(n). 归并排序中MERGE过程所花费的时间是Θ(n)。
15. Given a weighted, directed graph G = (V, E) with source s and weight function w : E → R, the Bellman-Ford algorithm makes |V| - 1 passes over the edges of the graph. 给定带权有向图G = (V, E)、源点s和权函数w : E → R,Bellman-Ford算法对图的所有边做|V| - 1趟松弛。
16. The Bellman-Ford algorithm runs in time O(VE). Bellman-Ford算法的时间复杂度是O(VE)。
17. A decision tree represents the comparisons made by a comparison sort. The asymptotic height of any decision tree for sorting n elements is Ω(n lg n). 决策树表示比较排序中所做的比较;对n个元素排序的任意决策树的渐近高度是Ω(n lg n)。
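Items 15–16 describe the Bellman-Ford structure: |V| - 1 passes of edge relaxation, O(VE) total, plus one extra check for a reachable negative-weight cycle. A Python sketch (vertex numbering and the sample edges are illustrative):

```python
def bellman_ford(n, edges, s):
    """edges: list of (u, v, w); vertices 0..n-1.
    Returns (dist, no_negative_cycle)."""
    INF = float("inf")
    dist = [INF] * n
    dist[s] = 0
    for _ in range(n - 1):                 # |V| - 1 passes over all edges
        for u, v, w in edges:
            if dist[u] + w < dist[v]:      # relax edge (u, v)
                dist[v] = dist[u] + w
    # One more pass: any edge that still relaxes implies a negative cycle.
    ok = all(dist[u] + w >= dist[v] for u, v, w in edges)
    return dist, ok

edges = [(0, 1, 1), (1, 2, -2), (0, 2, 4), (2, 3, 3)]
print(bellman_ford(4, edges, 0))  # → ([0, 1, -1, 2], True)
```

Unlike Dijkstra, the relaxation order is arbitrary, which is why negative edge weights are tolerated and why each edge is relaxed |V| - 1 times rather than once.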
True-false questions
1. An algorithm is said to be correct if, for some input instance, it halts with the correct output. F 如果对某些输入实例,算法停机并给出正确的输出,就认为该算法是正确的。
2. Insertion sort always beats merge sort. F 插入排序总是优于归并排序。
3.Θ(n lg n) grows more slowly than Θ(n2). Therefore, merge sort asymptoticallybeats insertion sort in the worst case. TΘ(n lg n)4.Currently computers are fast and computer memory is very cheap, we have noreason to study algorithms. F5.In RAM (Random-Access Machine) model, instructions are executed withconcurrent operations. F6.The running time of an algorithm on a particular input is the number of primitiveoperations or “steps” executed. T7.Quick sorts, have no combining step: two subarrays form an already-sorted array.T8.The running time of Counting sort is O(n + k). But the running time of sorting isΩ(n lg n). So this is contradiction. F9.The Counting sort is stable. T10.In the selection problem,there is a algorithm of theoretical interest only with O(n)worst-case running time. T11.Divide-and-conquer algorithms partition the problem into independentsubproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem. In contrast, dynamic programming is applicablewhen the subproblems are not independent, that is, when subproblems sharesubsubproblems. T12.In dynamic programming, we build an optimal solution to the problem fromoptimal solutions to subproblems. T13.The best-case running time is the longest running time for any input of size n. F14.When we analyze the running time of an algorithm, we actually interested on therate of growth (order of growth). T15.The dynamic programming approach means that it break the problem into severalsubproblems that are similar to the original problem but smaller in size, solve the subproblems recursively, and then combine these solutions to create a solution to the original problem. T16.Insertion sort and merge sort both use divide-and-conquer approach. F17.Θ(g(n)) = { f (n) : there exist positive constants c1, c2, and n0 such that 0 ≤ c1 g(n)≤ f (n) ≤ c2 g(n) for all n ≥ n0 }18.Min-Heaps satisfy the heap property: A[Parent(i)] ≥ A[i] for allnodes i > 1. 
F19.For array of length n, all elements in range A[⎣n/2⎦ + 1 .. n] are heaps. T20.The tighter bound of the running time to build a max-heap from an unorderedarray isn’t in linear time. F21.The call to BuildHeap() takes O(n) time, Each of the n - 1 calls to Heapify()takes O(lg n) time, Thus the total time taken by HeapSort() = O(n) + (n - 1) O(lg n)= O(n) + O(n lg n)= O(n lg n). T22.Quick Sort is a dynamic programming algorithm. The array A[p..r] is partitionedinto two non-empty subarrays A[p..q] and A[q+1..r], All elements in A[p..q] are less than all elements in A[q+1..r], the subarrays are recursively sorted by calls to quicksort. F23.Assume that we have a connected, undirected graph G = (V, E) with a weightfunction w : E→R, and we wish to find a minimum spanning tree for G. BothKruskal and Prim algorithms use a dynamic programming approach to theproblem. F24.A cut (S, V - S) of an undirected graph G = (V, E) is a partition of E. F25.An edge is a light edge crossing a cut if its weight is the maximum of any edgecrossing the cut. F26.Kruskal's algorithm uses a disjoint-set data structure to maintain several disjointsets of elements. T27.Optimal-substructure property is a hallmark of the applicability of both dynamicprogramming. T28.Dijkstra's algorithm is a dynamic programming algorithm. F29.Floyd-Warshall algorithm, which finds shortest paths between all pairs of vertices ,is a greedy algorithm. F30.Given a weighted, directed graph G = (V, E) with weight function w : E →R, let p= <v1,v2,..., v k_>be a shortest path from vertex v1 to vertex v k and, for any i and j such that 1 ≤i ≤j ≤k, let p ij = <v i, v i+1,..., v j> be the subpath of p from vertex v i to vertex v j . Then, p ij is a shortest path from v i to v j. T31.Given a weighted, directed graph G = (V, E) with weight function w : E →R,Ifthere is a negative-weight cycle on some path from s to v , there exists a shortest-path from s to v. 
F32.Since any acyclic path in a graph G = (V, E) contains at most |V| distinct vertices,it also contains at most |V| - 1 edges. Thus, we can restrict our attention to shortest paths of at most |V| - 1 edges. T33.The process of relaxing an edge (u, v) tests whether we can improve the shortestpath to v found so far by going through u. T34.In Dijkstra's algorithm and the shortest-paths algorithm for directed acyclic graphs,each edge is relaxed exactly once. In the Bellman-Ford algorithm, each edge is also relaxed exactly once . F35.The Bellman-Ford algorithm solves the single-source shortest-paths problem inthe general case in which edge weights must be negative. F36.Given a weighted, directed graph G = (V, E) with source s and weight function w :E →R, the Bellman-Ford algorithm can not return a Boolean value indicatingwhether or not there is a negative-weight cycle that is reachable from the source.F37.Given a weighted, directed graph G = (V, E) with source s and weight function w :E →R, for the Bellman-Ford algorithm, if there is such a cycle, the algorithmindicates that no solution exists. If there is no such cycle, the algorithm produces the shortest paths and their weights. F38.Dijkstra's algorithm solves the single-source shortest-paths problem on a weighted,directed graph G = (V, E) for the case in which all edge weights are negative. F 39.Dijkstra's algorithm solves the single-source shortest-paths problem on a weighted,directed graph G = (V, E) for the case in which all edge weights are nonnegative.Bellman-Ford algorithm solves the single-source shortest-paths problem on aweighted, directed graph G = (V , E ), the running time of Dijkstra's algorithm is lower than that of the Bellman-Ford algorithm. T40. The steps for developing a dynamic-programming algorithm:1. Characterize thestructure of an optimal solution. 2. Recursively define the value of an optimalsolution. 3. Compute the value of an optimal solution in a bottom-up fashion. 4. 
Construct an optimal solution from computed information. T三 Each of n input elements is an integer in the range 0 to k , Design a linear running time algorithm to sort n elements.四Design a expected linear running time algorithm to find the i th smallest element of n elements using divide and conquer strategy.五Write the INSERT-SORT procedure to sort into non-decreasing order. Analyze the running time of it with RAM Model. What ’s the best-case running time, the worst-case running time and average case running time. Write the MERGE-SORTprocedure to sort into non-decreasing order. Give the recurrence for the worst-case running time T(n) of Merge sort and find the solution to the recurrence.六 What is an optimal Huffman code for the following set of frequencies, <a:45, b:13, c:12,d:16,e:9,f:5>七 The traveling-salesman problem (TSP): in the traveling-salesman problem, we are given a complete undirected graph G=(V,E) that has a nonnegative integer cost c(u,v) associated with each edge (u,v)∈E , and we must find a tour of G with minimum cost. The following is an instance TSP. Please compute a tour with minimum cost with greedy algorithm.∞∞∞∞∞69326911699252312514216214八Given items of different values and weights, find the most valuable set of items that fit in a knapsack of fixed weight C .For an instance of knapsack problem, n =8, C =110,value V ={11,21,31,33,43,53,55,65} weight W ={1,11,21,23,33,43,45,55}. Use greedy algorithms to solve knapsack problem.九Use dynamic programming to solve Assembly-line scheduling problem: A Motors Corporation produces automobiles that has two assembly lines, numbered i =1,2. Eachline has n stations, numbered j=1,2…n. We denote the j th station on line i by S ij. The following figure is an instance of the assembly-line problem with costs entry time e i, exit time x i, the assembly time required at station S ij by a ij, the time to transfer a chassis away from assembly line I after having gone through station S ij is t ij. 
Please compute the fastest time and construct the fastest way through the factory of the instance.十. The matrix-chain multiplication problem can be stated as follows: given a chain <A1,A2,…,An>of matrices, where for i=1,2…,n, matrix A i has dimensionP i-1 P i, fully parenthesize the product A1,A2,…,A n in a way that minimizes the number of scalar multiplication. We pick as our subproblems the problems of determining the minimum cost of a parenthesization of A i A i+1 A j for 1 ≤i ≤j ≤n. Let m[i, j] be the minimum number of scalar multiplications needed to compute the matrix A i..j; for the full problem, the cost of a cheapest way to compute A1..n would thus be m[1, n]. Can you define m[i, j] recursively? Find an optimal parenthesization of a matrix-chain product whose sequence of dimensions is <4,3,5,2,3>十一 In the longest-common-subsequence (LCS) problem, we are given two sequences X = <x1, x2, ...,xm> and Y = <y1, y2, ..., yn> and wish to find a maximum-length common subsequence of X and Y. Please write its recursive formula and determine an LSC of Sequence S1=ACTGATCG and sequence S2=CATGC. Please fill in the blanks in the table below.十二 Proof: Any comparison sort algorithm requires Ω(nlgn) comparisons in the worst case.How many leaves does the tree have? (叶节点的数目)–At least n! (each of the n!permutations if the input appears as some leaf) ⇒n! ≤l (至少n! 个,排列)–At most 2hleaves (引理,至多2h个)⇒n! ≤l ≤2h⇒h ≥lg(n!) = Ω(nlgn)十三Proof: Subpaths of shortest paths are shortest paths.Given a weighted, directed graph G = (V, E) with weight function w : E → R, let p=<v1,v2,..., v k> be a shortest path from vertex v1to vertex v k and, for any i and j such that 1 ≤i ≤j ≤k, let p ij= <v i, v i+1,..., v j>be the subpath of p from vertex v i to vertex v j . 
Then, p ij is a shortest path from v i to v j.十四Proof : The worst case running time of quicksort is Θ(n2)十五Compute shortest paths with matrix multiplication and the Floyd-Warshall algorithm for the following graph.十六 Write the MAX-Heapify() procedure to for manipulating max-heaps. And analyze the running time of MAX-Heapify().三(10分)1 CountingSort(A, B, k)2 for i=1 to k3 C[i]= 0;4 for j=1 to n5 C[A[j]] += 1;6 for i=2 to k7 C[i] = C[i] + C[i-1];8 for j=n downto 19 B[C[A[j]]] = A[j];10 C[A[j]] -= 1;四算法描述3分The best-case running time is T(n) = c1n + c2(n - 1) + c4(n - 1) + c5(n - 1) + c8(n - 1) = (c1 + c2 + c4 + c5 + c8)n - (c2+ c4 + c5 + c8). This running time can be expressed as an + b for constants a and b that depend on the statement costs c i; it is thus a linear function of n.This worst-case running time can be expressed as an2 + bn + c for constants a, b, and c that again depend on the statement costs c i; it is thus a quadratic function of n.分析2分算法描述2分Θ(1) if n = 1T(n) =2T(n/2) + Θ(n) if n > 1.递归方程和求解3分五7 RAND-SELECT(A, p, r, i) (5分)if p = r then return A[p]q ← RAND-PARTITION(A, p, r)k ← q – p + 1if i = k then return A[q]if i < kthen return RAND-SELECT(A, p, q – 1, i )else return RAND-SELECT(A, q + 1, r, i – k ) Randomized RANDOMIZED-PARTITION(A; p; r) (5分) { i ←RANDOM(p, r)exchange A[r] ← A[i]return PARTITION(A; p; r)}PARTITION(A; p; r){ x← A[r]i ←p-1for j ← p to r-1do if A[j] ≤ xthen i ←i+1exchange A[i] ↔A[j]exchange A[i+1] ↔ A[r]return i+1}六首先画出它对应的图,加上标号,假设从1出发,每次贪心选择一个权重最小的顶点作为下一个要去的城市。
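For the knapsack instance in 八 (n = 8, C = 110), the value/weight greedy rule can be sketched as follows. For the 0–1 problem this greedy is not guaranteed optimal (that is the point of the question); the sketch simply reports what the heuristic picks:

```python
def greedy_knapsack(values, weights, C):
    """0-1 knapsack by the value/weight greedy rule (a heuristic,
    not guaranteed optimal for the 0-1 variant)."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total_v = total_w = 0
    taken = []
    for i in order:                       # best ratio first
        if total_w + weights[i] <= C:     # take the whole item if it fits
            taken.append(i)
            total_w += weights[i]
            total_v += values[i]
    return total_v, sorted(taken)

V = [11, 21, 31, 33, 43, 53, 55, 65]
W = [1, 11, 21, 23, 33, 43, 45, 55]
print(greedy_knapsack(V, W, 110))  # → (139, [0, 1, 2, 3, 4])
```

For the fractional variant the same ratio ordering, with the last item split to fill the remaining capacity, is provably optimal; for the 0–1 variant dynamic programming over m[i, w] is needed instead.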