Algorithms and Data Analysis (Fall 2020), Assignment 2
北语 Algorithms and Data Analysis (Spring 2018), Assignment 2

(Multiple choice) 1: Which of the following is NOT a typical characteristic of problems solvable by divide and conquer?
A: The problem can be solved easily once its size is reduced to a certain point
B: The problem can be decomposed into several smaller instances of the same problem, i.e., it has the optimal-substructure property
C: The solutions of the subproblems obtained by decomposition cannot be combined into a solution of the original problem
D: The subproblems produced by decomposing the original problem are independent of one another, i.e., they share no common subproblems
Correct answer:
(Multiple choice) 2: Which statement about NP problems is correct?
A: NP problems are all unsolvable problems
B: The class P is contained in the class NP
C: NP-complete problems are a subset of the class P
D: The class NP is contained in the class P
Correct answer:
(Multiple choice) 3: The computing time required by the greedy algorithm for the knapsack problem is
A: O(n·2^n)
B: O(n log n)
C: O(2^n)
D: O(n)
Correct answer:
(Multiple choice) 4: Which description of the branch-and-bound search strategy is incorrect?
A: At an expansion node, all of its child nodes (branches) are generated first
B: The previous expansion node is selected from the current live-node list.
C: To choose the next expansion node effectively and speed up the search, a function value (bound) is computed at every live node
D: Based on the function values, the most promising node in the current live-node list is chosen as the expansion node, so that the search moves toward the branch of the solution space containing an optimal solution and an optimal solution is found as quickly as possible.
Correct answer:
(Multiple choice) 5: Binary search is implemented with which technique? (See the sketch after this question set.)
A: Divide and conquer
B: Dynamic programming
C: The greedy method
D: Backtracking
Correct answer:
(Multiple choice) 6: Which of the following algorithms may sometimes fail to find a solution?
A: Monte Carlo algorithm
B: Las Vegas algorithm
C: Sherwood algorithm
D: Numerical probabilistic algorithm
Correct answer:
(Multiple choice) 7: Which of the following algorithms may return an answer that is not necessarily correct?
A: Monte Carlo algorithm
B: Las Vegas algorithm
C: Sherwood algorithm
D: Numerical probabilistic algorithm
Correct answer:
(Multiple choice) 8: Which of the following is NOT a search strategy used by branch and bound?
A: Breadth-first
B: Least-cost first
C: Maximum-benefit first
D: Depth-first
Correct answer:
(Multiple choice) 9: Branch and bound and backtracking have in common that
A: they have the same solution goal
B: they use the same search order
C: they expand nodes in the same way
D: both are algorithms that search for solutions in the problem's solution-space tree T
Correct answer:
(Multiple choice) 10: In priority-queue branch and bound, the expansion node is selected according to
A: first in, first out
B: last in, first out
C: node priority
D: random choice
Correct answer:
(True/False) 1: The two basic elements of dynamic programming are the optimal-substructure property and the overlapping-subproblems property.
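Question 5 above points to binary search as a classic divide-and-conquer algorithm. The following is a minimal C++ sketch of that idea (the function name and array-based interface are illustrative, not part of the assignment): the current range is split around its middle element and the problem is reduced to one half, for O(log n) comparisons.

#include <vector>

// Binary search over a sorted array: keep halving the search range.
int binarySearch(const std::vector<int>& a, int key) {
    int lo = 0, hi = static_cast<int>(a.size()) - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;    // split the current range in half
        if (a[mid] == key) return mid;   // found
        if (a[mid] < key) lo = mid + 1;  // discard the left half
        else hi = mid - 1;               // discard the right half
    }
    return -1;                           // not present
}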
北语 Online Education, Algorithms and Data Analysis (Spring 2020), Assignment 1 — Answers

(Multiple choice) 1: Which of the following randomized algorithms sometimes succeeds and sometimes fails at run time?
A: Numerical probabilistic algorithm
B: Sherwood algorithm
C: Las Vegas algorithm
D: Monte Carlo algorithm
Correct answer: C
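The distinction behind this question: a Las Vegas algorithm never returns a wrong answer but may fail to produce one (or may run for an unpredictable time), while a Monte Carlo algorithm always answers within bounded time but the answer may be wrong. A toy C++ sketch (the problem and function names are made up purely for illustration): locate an occurrence of a target value by random probing.

#include <cstdlib>
#include <vector>

// Las Vegas style: keep probing until a verified answer is found.
// Always correct; only the running time is random. Assumes the target
// actually occurs in a (otherwise it never terminates).
int lasVegasFind(const std::vector<int>& a, int target) {
    while (true) {
        int i = std::rand() % static_cast<int>(a.size());
        if (a[i] == target) return i;    // verified before returning
    }
}

// Monte Carlo style: a fixed number of probes, so the time is bounded,
// but it may report "not found" (-1) even when the target exists.
int monteCarloFind(const std::vector<int>& a, int target, int trials) {
    for (int t = 0; t < trials; ++t) {
        int i = std::rand() % static_cast<int>(a.size());
        if (a[i] == target) return i;
    }
    return -1;                           // possibly a wrong answer
}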
(Multiple choice) 2: The longest-common-subsequence algorithm is based on
A: Branch and bound
B: Dynamic programming
C: The greedy method
D: Backtracking
Correct answer: B
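As a concrete illustration of why dynamic programming fits the LCS problem, here is a minimal C++ sketch (the function name is chosen for illustration) of the standard O(mn) recurrence: c[i][j] extends the common subsequence when the last characters match, and otherwise takes the better of dropping one character from either string.

#include <string>
#include <vector>
#include <algorithm>

int lcsLength(const std::string& x, const std::string& y) {
    int m = x.size(), n = y.size();
    std::vector<std::vector<int>> c(m + 1, std::vector<int>(n + 1, 0));
    for (int i = 1; i <= m; ++i)
        for (int j = 1; j <= n; ++j)
            c[i][j] = (x[i-1] == y[j-1])
                      ? c[i-1][j-1] + 1                  // last characters match
                      : std::max(c[i-1][j], c[i][j-1]);  // drop one character
    return c[m][n];
}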
(Multiple choice) 3: The matrix-chain multiplication problem can be solved by which design technique?
A: Branch and bound
B: Dynamic programming
C: The greedy method
D: Backtracking
Correct answer: B
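A minimal sketch of the dynamic-programming solution for matrix-chain multiplication (dimensions p[0..n], so matrix i is p[i-1] × p[i]; the names are illustrative): m[i][j] is the fewest scalar multiplications needed for the product A_i…A_j, filled in order of increasing chain length.

#include <vector>
#include <climits>

long long matrixChainOrder(const std::vector<int>& p) {
    int n = static_cast<int>(p.size()) - 1;     // number of matrices
    std::vector<std::vector<long long>> m(n + 1, std::vector<long long>(n + 1, 0));
    for (int len = 2; len <= n; ++len)          // chain length
        for (int i = 1; i + len - 1 <= n; ++i) {
            int j = i + len - 1;
            m[i][j] = LLONG_MAX;
            for (int k = i; k < j; ++k)         // try every split point
                m[i][j] = std::min(m[i][j],
                    m[i][k] + m[k+1][j] + (long long)p[i-1] * p[k] * p[j]);
        }
    return m[1][n];
}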
(Multiple choice) 4: Which of the following is NOT a randomized algorithm?
A: Monte Carlo algorithm
B: Las Vegas algorithm
C: Dynamic programming algorithm
D: Sherwood algorithm
Correct answer: C
(Multiple choice) 5: What do greedy algorithms and dynamic programming have in common?
A: Overlapping subproblems
B: Constructing an optimal solution
C: The greedy-choice property
D: The optimal-substructure property
Correct answer: D
(Multiple choice) 6: Which kind of function does backtracking use to avoid useless search?
A: Recursive function
B: Pruning function
C: Random-number function
D: Search function
Correct answer: B
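To make the role of pruning concrete, here is a small C++ backtracking sketch for the N-queens problem (an assumed example, not part of the assignment): the constraint check acts as the pruning function, cutting off any branch in which two queens already attack each other.

#include <vector>

// col[r] = column of the queen placed in row r.
static bool safe(const std::vector<int>& col, int r, int c) {
    for (int i = 0; i < r; ++i)
        if (col[i] == c || r - i == c - col[i] || r - i == col[i] - c)
            return false;                // same column or same diagonal
    return true;
}

static int placeQueens(std::vector<int>& col, int r, int n) {
    if (r == n) return 1;                // a complete placement found
    int count = 0;
    for (int c = 0; c < n; ++c)
        if (safe(col, r, c)) {           // pruning: skip attacked squares
            col[r] = c;
            count += placeQueens(col, r + 1, n);
        }
    return count;
}

int nQueens(int n) {
    std::vector<int> col(n, -1);
    return placeQueens(col, 0, n);       // number of solutions
}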
(Multiple choice) 7: Which algorithm uses a maximum-benefit-first search strategy?
A: Branch and bound.
Data Structures and Algorithm Analysis in C++ (2nd edition), by Clifford A. Shaffer — Exercise Solutions, Part 2

5 Binary Trees

5.1 Consider a non-full binary tree. By definition, this tree must have some internal node X with only one non-empty child. If we modify the tree to remove X, replacing it with its child, the modified tree will have a higher fraction of non-empty nodes since one non-empty node and one empty node have been removed.

5.2 Use as the base case the tree of one leaf node. The number of degree-2 nodes is 0, and the number of leaves is 1. Thus, the theorem holds.
For the induction hypothesis, assume the theorem is true for any tree with n − 1 nodes.
For the induction step, consider a tree T with n nodes. Remove from the tree any leaf node, and call the resulting tree T′. By the induction hypothesis, T′ has one more leaf node than it has nodes of degree 2.
Now, restore the leaf node that was removed to form T. There are two possible cases.
(1) If this leaf node is the only child of its parent in T, then the number of nodes of degree 2 has not changed, nor has the number of leaf nodes. Thus, the theorem holds.
(2) If this leaf node is the child of a node in T with degree 2, then that node has degree 1 in T′. Thus, by restoring the leaf node we are adding one new leaf node and one new node of degree 2. Thus, the theorem holds.
By mathematical induction, the theorem is correct.

5.3 Base Case: For the tree of one leaf node, I = 0, E = 0, n = 0, so the theorem holds.
Induction Hypothesis: The theorem holds for the full binary tree containing n internal nodes.
Induction Step: Take an arbitrary full tree (call it T) of n internal nodes. Select some internal node x from T that has two leaves, and remove those two leaves. Call the resulting tree T′. Tree T′ is full and has n − 1 internal nodes, so by the Induction Hypothesis E = I + 2(n − 1).
Call the depth of node x d. Restore the two children of x, each at level d + 1. We have now added d to I, since x is once again an internal node. We have added 2(d + 1) − d = d + 2 to E, since we added the two leaf nodes but lost the contribution of x to E. Thus, after the addition the new values satisfy E + (d + 2) = (I + d) + 2(n − 1) + 2, that is, E = I + 2n for the tree with n internal nodes, which is correct. Thus, by the principle of mathematical induction, the theorem is correct.

5.4 (a)
template <class Elem>
void inorder(BinNode<Elem>* subroot) {
  if (subroot == NULL) return;   // Empty subtree, do nothing
  inorder(subroot->left());
  visit(subroot);                // Perform desired action
  inorder(subroot->right());
}

(b)
template <class Elem>
void postorder(BinNode<Elem>* subroot) {
  if (subroot == NULL) return;   // Empty subtree, do nothing
  postorder(subroot->left());
  postorder(subroot->right());
  visit(subroot);                // Perform desired action
}

5.5 The key is to search both subtrees, as necessary.
template <class Key, class Elem, class KEComp>
bool search(BinNode<Elem>* subroot, Key K) {
  if (subroot == NULL) return false;
  if (subroot->value() == K) return true;
  if (search(subroot->right(), K)) return true;
  return search(subroot->left(), K);
}
5.6 The key is to use a queue to store subtrees to be processed.
template <class Elem>
void level(BinNode<Elem>* subroot) {
  AQueue<BinNode<Elem>*> Q;
  Q.enqueue(subroot);
  while (!Q.isEmpty()) {
    BinNode<Elem>* temp;
    Q.dequeue(temp);
    if (temp != NULL) {
      Print(temp);
      Q.enqueue(temp->left());
      Q.enqueue(temp->right());
    }
  }
}

5.7
template <class Elem>
int height(BinNode<Elem>* subroot) {
  if (subroot == NULL) return 0;      // Empty subtree
  return 1 + max(height(subroot->left()), height(subroot->right()));
}

5.8
template <class Elem>
int count(BinNode<Elem>* subroot) {
  if (subroot == NULL) return 0;      // Empty subtree
  if (subroot->isLeaf()) return 1;    // A leaf
  return 1 + count(subroot->left()) + count(subroot->right());
}

5.9 (a) Since every node stores 4 bytes of data and 12 bytes of pointers, the overhead fraction is 12/16 = 75%.
(b) Since every node stores 16 bytes of data and 8 bytes of pointers, the overhead fraction is 8/24 ≈ 33%.
(c) Leaf nodes store 8 bytes of data and 4 bytes of pointers; internal nodes store 8 bytes of data and 12 bytes of pointers. Since the nodes have different sizes, the total space needed for internal nodes is not the same as for leaf nodes. Students must be careful to do the calculation correctly, taking the weighting into account. The correct formula looks as follows, given that there are x internal nodes and x leaf nodes:
(4x + 12x) / (12x + 20x) = 16/32 = 50%.
(d) Leaf nodes store 4 bytes of data; internal nodes store 4 bytes of pointers. The formula looks as follows, given that there are x internal nodes and x leaf nodes:
4x / (4x + 4x) = 4/8 = 50%.

5.10 If equal valued nodes were allowed to appear in either subtree, then during a search for all nodes of a given value, whenever we encounter a node of that value the search would be required to search in both directions.

5.11 This tree is identical to the tree of Figure 5.20(a), except that a node with value 5 will be added as the right child of the node with value 2.

5.12 This tree is identical to the tree of Figure 5.20(b), except that the value 24 replaces the value 7, and the leaf node that originally contained 24 is removed from the tree.

5.13
template <class Key, class Elem, class KEComp>
int smallcount(BinNode<Elem>* root, Key K) {
  if (root == NULL) return 0;
  if (KEComp.gt(root->value(), K))
    return smallcount(root->leftchild(), K);
  else
    return smallcount(root->leftchild(), K) +
           smallcount(root->rightchild(), K) + 1;
}

5.14
template <class Key, class Elem, class KEComp>
void printRange(BinNode<Elem>* root, int low, int high) {
  if (root == NULL) return;
  if (KEComp.lt(high, root->val()))        // all to left
    printRange(root->left(), low, high);
  else if (KEComp.gt(low, root->val()))    // all to right
    printRange(root->right(), low, high);
  else {                                   // Must process both children
    printRange(root->left(), low, high);
    PRINT(root->value());
    printRange(root->right(), low, high);
  }
}

5.15 The minimum number of elements is contained in the heap with a single node at depth h − 1, for a total of 2^(h−1) nodes. The maximum number of elements is contained in the heap that has completely filled up level h − 1, for a total of 2^h − 1 nodes.

5.16 The largest element could be at any leaf node.

5.17 The corresponding array will be in the following order (equivalent to level order for the heap):
12 9 10 5 4 1 8 7 3 2
5 Binary Trees5.18 (a) The array will take on the following order:6 5 3 4 2 1The value 7 will be at the end of the array.(b) The array will take on the following order:7 4 6 3 2 1The value 5 will be at the end of the array.5.19 // Min-heap classtemplate <class Elem, class Comp> class minheap {private:Elem* Heap; // Pointer to the heap arrayint size; // Maximum size of the heapint n; // # of elements now in the heapvoid siftdown(int); // Put element in correct placepublic:minheap(Elem* h, int num, int max) // Constructor{ Heap = h; n = num; size = max; buildHeap(); }int heapsize() const // Return current size{ return n; }bool isLeaf(int pos) const // TRUE if pos a leaf{ return (pos >= n/2) && (pos < n); }int leftchild(int pos) const{ return 2*pos + 1; } // Return leftchild posint rightchild(int pos) const{ return 2*pos + 2; } // Return rightchild posint parent(int pos) const // Return parent position { return (pos-1)/2; }bool insert(const Elem&); // Insert value into heap bool removemin(Elem&); // Remove maximum value bool remove(int, Elem&); // Remove from given pos void buildHeap() // Heapify contents{ for (int i=n/2-1; i>=0; i--) siftdown(i); }};template <class Elem, class Comp>void minheap<Elem, Comp>::siftdown(int pos) { while (!isLeaf(pos)) { // Stop if pos is a leafint j = leftchild(pos); int rc = rightchild(pos);if ((rc < n) && Comp::gt(Heap[j], Heap[rc]))j = rc; // Set j to lesser child’s valueif (!Comp::gt(Heap[pos], Heap[j])) return; // Done37swap(Heap, pos, j);pos = j; // Move down}}template <class Elem, class Comp>bool minheap<Elem, Comp>::insert(const Elem& val) { if (n >= size) return false; // Heap is fullint curr = n++;Heap[curr] = val; // Start at end of heap// Now sift up until curr’s parent < currwhile ((curr!=0) &&(Comp::lt(Heap[curr], Heap[parent(curr)]))) {swap(Heap, curr, parent(curr));curr = parent(curr);}return true;}template <class Elem, class Comp>bool minheap<Elem, Comp>::removemin(Elem& it) { if (n == 0) return false; // Heap is emptyswap(Heap, 0, --n); // Swap max with last valueif (n != 0) siftdown(0); // Siftdown new root valit = Heap[n]; // Return deleted valuereturn true;}38 Chap. 5 Binary Trees// Remove value at specified positiontemplate <class Elem, class Comp>bool minheap<Elem, Comp>::remove(int pos, Elem& it) {if ((pos < 0) || (pos >= n)) return false; // Bad posswap(Heap, pos, --n); // Swap with last valuewhile ((pos != 0) &&(Comp::lt(Heap[pos], Heap[parent(pos)])))swap(Heap, pos, parent(pos)); // Push up if largesiftdown(pos); // Push down if small keyit = Heap[n];return true;}5.20 Note that this summation is similar to Equation 2.5. To solve the summation requires the shifting technique from Chapter 14, so this problem may be too advanced for many students at this time. Note that 2f(n) − f(n) = f(n),but also that:2f(n) − f(n) = n(24+48+616+ ··· +2(log n − 1)n) −n(14+28+316+ ··· +log n − 1n)logn−1i=112i− log n − 1n)= n(1 − 1n− log n − 1n)= n − log n.5.21 Here are the final codes, rather than a picture.l 00h 010i 011e 1000f 1001j 101d 11000a 1100100b 1100101c 110011g 1101k 11139The average code length is 3.234455.22 The set of sixteen characters with equal weight will create a Huffman coding tree that is complete with 16 leaf nodes all at depth 4. Thus, the average code length will be 4 bits. This is identical to the fixed length code. 
Thus, in this situation, the Huffman coding tree saves no space (and costs no space).5.23 (a) By the prefix property, there can be no character with codes 0, 00, or 001x where “x” stands for any binary string.(b) There must be at least one code with each form 1x, 01x, 000x where“x” could be any binary string (including the empty string).5.24 (a) Q and Z are at level 5, so any string of length n containing only Q’s and Z’s requires 5n bits.(b) O and E are at level 2, so any string of length n containing only O’s and E’s requires 2n bits.(c) The weighted average is5 ∗ 5 + 10 ∗ 4 + 35 ∗ 3 + 50 ∗ 2100bits per character5.25 This is a straightforward modification.// Build a Huffman tree from minheap h1template <class Elem>HuffTree<Elem>*buildHuff(minheap<HuffTree<Elem>*,HHCompare<Elem> >* hl) {HuffTree<Elem> *temp1, *temp2, *temp3;while(h1->heapsize() > 1) { // While at least 2 itemshl->removemin(temp1); // Pull first two treeshl->removemin(temp2); // off the heaptemp3 = new HuffTree<Elem>(temp1, temp2);hl->insert(temp3); // Put the new tree back on listdelete temp1; // Must delete the remnantsdelete temp2; // of the trees we created}return temp3;}6General Trees6.1 The following algorithm is linear on the size of the two trees. // Return TRUE iff t1 and t2 are roots of identical// general treestemplate <class Elem>bool Compare(GTNode<Elem>* t1, GTNode<Elem>* t2) { GTNode<Elem> *c1, *c2;if (((t1 == NULL) && (t2 != NULL)) ||((t2 == NULL) && (t1 != NULL)))return false;if ((t1 == NULL) && (t2 == NULL)) return true;if (t1->val() != t2->val()) return false;c1 = t1->leftmost_child();c2 = t2->leftmost_child();while(!((c1 == NULL) && (c2 == NULL))) {if (!Compare(c1, c2)) return false;if (c1 != NULL) c1 = c1->right_sibling();if (c2 != NULL) c2 = c2->right_sibling();}}6.2 The following algorithm is Θ(n2).// Return true iff t1 and t2 are roots of identical// binary treestemplate <class Elem>bool Compare2(BinNode<Elem>* t1, BinNode<Elem* t2) { BinNode<Elem> *c1, *c2;if (((t1 == NULL) && (t2 != NULL)) ||((t2 == NULL) && (t1 != NULL)))return false;if ((t1 == NULL) && (t2 == NULL)) return true;4041if (t1->val() != t2->val()) return false;if (Compare2(t1->leftchild(), t2->leftchild())if (Compare2(t1->rightchild(), t2->rightchild())return true;if (Compare2(t1->leftchild(), t2->rightchild())if (Compare2(t1->rightchild(), t2->leftchild))return true;return false;}6.3 template <class Elem> // Print, postorder traversalvoid postprint(GTNode<Elem>* subroot) {for (GTNode<Elem>* temp = subroot->leftmost_child();temp != NULL; temp = temp->right_sibling())postprint(temp);if (subroot->isLeaf()) cout << "Leaf: ";else cout << "Internal: ";cout << subroot->value() << "\n";}6.4 template <class Elem> // Count the number of nodesint gencount(GTNode<Elem>* subroot) {if (subroot == NULL) return 0int count = 1;GTNode<Elem>* temp = rt->leftmost_child();while (temp != NULL) {count += gencount(temp);temp = temp->right_sibling();}return count;}6.5 The Weighted Union Rule requires that when two parent-pointer trees are merged, the smaller one’s root becomes a child of the larger one’s root. Thus, we need to keep track of the number of nodes in a tree. To do so, modify the node array to store an integer value with each node. Initially, each node isin its own tree, so the weights for each node begin as 1. Whenever we wishto merge two trees, check the weights of the roots to determine which has more nodes. 
Then, add to the weight of the final root the weight of the new subtree.6.60 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15-1 0 0 0 0 0 0 6 0 0 0 9 0 0 12 06.7 The resulting tree should have the following structure:42 Chap. 6 General TreesNode 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15Parent 4 4 4 4 -1 4 4 0 0 4 9 9 9 12 9 -16.8 For eight nodes labeled 0 through 7, use the following series of equivalences: (0, 1) (2, 3) (4, 5) (6, 7) (4 6) (0, 2) (4 0)This requires checking fourteen parent pointers (two for each equivalence),but none are actually followed since these are all roots. It is possible todouble the number of parent pointers checked by choosing direct children ofroots in each case.6.9 For the “lists of Children” representation, every node stores a data value and a pointer to its list of children. Further, every child (every node except the root)has a record associated with it containing an index and a pointer. Indicatingthe size of the data value as D, the size of a pointer as P and the size of anindex as I, the overhead fraction is3P + ID + 3P + I.For the “Left Child/Right Sibling” representation, every node stores three pointers and a data value, for an overhead fraction of3PD + 3P.The first linked representation of Section 6.3.3 stores with each node a datavalue and a size field (denoted by S). Each child (every node except the root)also has a pointer pointing to it. The overhead fraction is thusS + PD + S + Pmaking it quite efficient.The second linked representation of Section 6.3.3 stores with each node adata value and a pointer to the list of children. Each child (every node exceptthe root) has two additional pointers associated with it to indicate its placeon the parent’s linked list. Thus, the overhead fraction is3PD + 3P.6.10 template <class Elem>BinNode<Elem>* convert(GTNode<Elem>* genroot) {if (genroot == NULL) return NULL;43GTNode<Elem>* gtemp = genroot->leftmost_child();btemp = new BinNode(genroot->val(), convert(gtemp),convert(genroot->right_sibling()));}6.11 • Parent(r) = (r − 1)/k if 0 < r < n.• Ith child(r) = kr + I if kr +I < n.• Left sibling(r) = r − 1 if r mod k = 1 0 < r < n.• Right sibling(r) = r + 1 if r mod k = 0 and r + 1 < n.6.12 (a) The overhead fraction is4(k + 1)4 + 4(k + 1).(b) The overhead fraction is4k16 + 4k.(c) The overhead fraction is4(k + 2)16 + 4(k + 2).(d) The overhead fraction is2k2k + 4.6.13 Base Case: The number of leaves in a non-empty tree of 0 internal nodes is (K − 1)0 + 1 = 1. Thus, the theorem is correct in the base case.Induction Hypothesis: Assume that the theorem is correct for any full Karytree containing n internal nodes.Induction Step: Add K children to an arbitrary leaf node of the tree withn internal nodes. This new tree now has 1 more internal node, and K − 1more leaf nodes, so theorem still holds. Thus, the theorem is correct, by the principle of Mathematical Induction.6.14 (a) CA/BG///FEDD///H/I//(b) CA/BG/FED/H/I6.15 X|P-----| | |C Q R---| |V M44 Chap. 
6 General Trees6.16 (a) // Use a helper function with a pass-by-reference// variable to indicate current position in the// node list.template <class Elem>BinNode<Elem>* convert(char* inlist) {int curr = 0;return converthelp(inlist, curr);}// As converthelp processes the node list, curr is// incremented appropriately.template <class Elem>BinNode<Elem>* converthelp(char* inlist,int& curr) {if (inlist[curr] == ’/’) {curr++;return NULL;}BinNode<Elem>* temp = new BinNode(inlist[curr++], NULL, NULL);temp->left = converthelp(inlist, curr);temp->right = converthelp(inlist, curr);return temp;}(b) // Use a helper function with a pass-by-reference // variable to indicate current position in the// node list.template <class Elem>BinNode<Elem>* convert(char* inlist) {int curr = 0;return converthelp(inlist, curr);}// As converthelp processes the node list, curr is// incremented appropriately.template <class Elem>BinNode<Elem>* converthelp(char* inlist,int& curr) {if (inlist[curr] == ’/’) {curr++;return NULL;}BinNode<Elem>* temp =new BinNode<Elem>(inlist[curr++], NULL, NULL);if (inlist[curr] == ’\’’) return temp;45curr++ // Eat the internal node mark.temp->left = converthelp(inlist, curr);temp->right = converthelp(inlist, curr);return temp;}(c) // Use a helper function with a pass-by-reference// variable to indicate current position in the// node list.template <class Elem>GTNode<Elem>* convert(char* inlist) {int curr = 0;return converthelp(inlist, curr);}// As converthelp processes the node list, curr is// incremented appropriately.template <class Elem>GTNode<Elem>* converthelp(char* inlist,int& curr) {if (inlist[curr] == ’)’) {curr++;return NULL;}GTNode<Elem>* temp =new GTNode<Elem>(inlist[curr++]);if (curr == ’)’) {temp->insert_first(NULL);return temp;}temp->insert_first(converthelp(inlist, curr));while (curr != ’)’)temp->insert_next(converthelp(inlist, curr));curr++;return temp;}6.17 The Huffman tree is a full binary tree. To decode, we do not need to know the weights of nodes, only the letter values stored in the leaf nodes. Thus, we can use a coding much like that of Equation 6.2, storing only a bit mark for internal nodes, and a bit mark and letter value for leaf nodes.7Internal Sorting7.1 Base Case: For the list of one element, the double loop is not executed and the list is not processed. Thus, the list of one element remains unaltered and is sorted.Induction Hypothesis: Assume that the list of n elements is sorted correctlyby Insertion Sort.Induction Step: The list of n + 1 elements is processed by first sorting thetop n elements. By the induction hypothesis, this is done correctly. The final pass of the outer for loop will process the last element (call it X). This isdone by the inner for loop, which moves X up the list until a value smallerthan that of X is encountered. At this point, X has been properly insertedinto the sorted list, leaving the entire collection of n + 1 elements correctly sorted. Thus, by the principle of Mathematical Induction, the theorem is correct.7.2 void StackSort(AStack<int>& IN) {AStack<int> Temp1, Temp2;while (!IN.isEmpty()) // Transfer to another stackTemp1.push(IN.pop());IN.push(Temp1.pop()); // Put back one elementwhile (!Temp1.isEmpty()) { // Process rest of elemswhile (IN.top() > Temp1.top()) // Find elem’s placeTemp2.push(IN.pop());IN.push(Temp1.pop()); // Put the element inwhile (!Temp2.isEmpty()) // Put the rest backIN.push(Temp2.pop());}}46477.3 The revised algorithm will work correctly, and its asymptotic complexity will remain Θ(n2). 
However, it will do about twice as many comparisons, since it will compare adjacent elements within the portion of the list already known to be sorted. These additional comparisons are unproductive.

7.4 While binary search will find the proper place to locate the next element, it will still be necessary to move the intervening elements down one position in the array. This requires the same number of operations as a sequential search. However, it does reduce the number of element/element comparisons, and may be somewhat faster by a constant factor since shifting several elements may be more efficient than an equal number of swap operations.

7.5 (a)
template <class Elem, class Comp>
void selsort(Elem A[], int n) {    // Selection Sort
  for (int i=0; i<n-1; i++) {      // Select i'th record
    int lowindex = i;              // Remember its index
    for (int j=n-1; j>i; j--)      // Find least value
      if (Comp::lt(A[j], A[lowindex]))
        lowindex = j;
    if (i != lowindex)             // Add check for exercise
      swap(A, i, lowindex);        // Put it in place
  }
}
(b) There is unlikely to be much improvement; more likely the algorithm will slow down. This is because the time spent checking (n times) is unlikely to save enough swaps to make up.
(c) Try it and see!

7.6
• Insertion Sort is stable. A swap is done only if the lower element's value is LESS.
• Bubble Sort is stable. A swap is done only if the lower element's value is LESS.
• Selection Sort is NOT stable. The new low value is set only if it is actually less than the previous one, but the direction of the search is from the bottom of the array. The algorithm will be stable if "less than" in the check becomes "less than or equal to" for selecting the low key position.
• Shell Sort is NOT stable. The sublist sorts are done independently, and it is quite possible to swap an element in one sublist ahead of its equal value in another sublist. Once they are in the same sublist, they will retain this (incorrect) relationship.
• Quicksort is NOT stable. After selecting the pivot, it is swapped with the last element. This action can easily put equal records out of place.
• Conceptually (in particular, the linked list version) Mergesort is stable. The array implementations are NOT stable, since, given that the sublists are stable, the merge operation will pick the element from the lower list before the upper list if they are equal. This is easily modified to replace "less than" with "less than or equal to."
• Heapsort is NOT stable. Elements in separate sides of the heap are processed independently, and could easily become out of relative order.
• Binsort is stable. Equal values that come later are appended to the list.
• Radix Sort is stable. While the processing is from bottom to top, the bins are also filled from bottom to top, preserving relative order.

7.7 In the worst case, the stack can store n records. This can be cut to log n in the worst case by putting the larger partition on FIRST, followed by the smaller.
Thus, the smaller will be processed first, cutting the size of the next stacked partition by at least half.7.8 Here is how I derived a permutation that will give the desired (worst-case) behavior:a b c 0 d e f g First, put 0 in pivot index (0+7/2),assign labels to the other positionsa b c g d e f 0 First swap0 b c g d e f a End of first partition pass0 b c g 1 e f a Set d = 1, it is in pivot index (1+7/2)0 b c g a e f 1 First swap0 1 c g a e f b End of partition pass0 1 c g 2 e f b Set a = 2, it is in pivot index (2+7/2)0 1 c g b e f 2 First swap0 1 2 g b e f c End of partition pass0 1 2 g b 3 f c Set e = 3, it is in pivot index (3+7/2)0 1 2 g b c f 3 First swap0 1 2 3 b c f g End of partition pass0 1 2 3 b 4 f g Set c = 4, it is in pivot index (4+7/2)0 1 2 3 b g f 4 First swap0 1 2 3 4 g f b End of partition pass0 1 2 3 4 g 5 b Set f = 5, it is in pivot index (5+7/2)0 1 2 3 4 g b 5 First swap0 1 2 3 4 5 b g End of partition pass0 1 2 3 4 5 6 g Set b = 6, it is in pivot index (6+7/2)0 1 2 3 4 5 g 6 First swap0 1 2 3 4 5 6 g End of parition pass0 1 2 3 4 5 6 7 Set g = 7.Plugging the variable assignments into the original permutation yields:492 6 4 0 13 5 77.9 (a) Each call to qsort costs Θ(i log i). Thus, the total cost isni=1i log i = Θ(n2 log n).(b) Each call to qsort costs Θ(n log n) for length(L) = n, so the totalcost is Θ(n2 log n).7.10 All that we need to do is redefine the comparison test to use strcmp. The quicksort algorithm itself need not change. This is the advantage of paramerizing the comparator.7.11 For n = 1000, n2 = 1, 000, 000, n1.5 = 1000 ∗√1000 ≈ 32, 000, andn log n ≈ 10, 000. So, the constant factor for Shellsort can be anything less than about 32 times that of Insertion Sort for Shellsort to be faster. The constant factor for Shellsort can be anything less than about 100 times thatof Insertion Sort for Quicksort to be faster.7.12 (a) The worst case occurs when all of the sublists are of size 1, except for one list of size i − k + 1. If this happens on each call to SPLITk, thenthe total cost of the algorithm will be Θ(n2).(b) In the average case, the lists are split into k sublists of roughly equal length. Thus, the total cost is Θ(n logk n).7.13 (This question comes from Rawlins.) Assume that all nuts and all bolts havea partner. We use two arrays N[1..n] and B[1..n] to represent nuts and bolts. Algorithm 1Using merge-sort to solve this problem.First, split the input into n/2 sub-lists such that each sub-list contains twonuts and two bolts. Then sort each sub-lists. We could well come up with apair of nuts that are both smaller than either of a pair of bolts. In that case,all you can know is something like:N1, N2。
Algorithm Design and Analysis (2nd edition) — Exercise Solutions (editable)

Exercise Solutions for Introduction to the Design and Analysis of Algorithms

Exercises 1.1

4. Design an algorithm that computes ⌊√n⌋ for an arbitrary positive integer n. Besides assignment and comparison, the algorithm may use only the four basic arithmetic operations.
Algorithm:
// Input: a positive integer n
// Output: ⌊√n⌋
Step 1: a ← 1;
Step 2: if a*a ≤ n, go to Step 3; otherwise output a − 1;
Step 3: a ← a + 1, go to Step 2.

5. a. Use Euclid's algorithm to compute gcd(31415, 14142).
b. Estimate how many times faster computing gcd(31415, 14142) with Euclid's algorithm is than with the algorithm that checks the consecutive integers from min{m, n} down to gcd(m, n).
a. gcd(31415, 14142) = gcd(14142, 3131) = gcd(3131, 1618) = gcd(1618, 1513) = gcd(1513, 105) = gcd(105, 43) = gcd(43, 19) = gcd(19, 5) = gcd(5, 4) = gcd(4, 1) = gcd(1, 0) = 1.
b. From part (a), Euclid's algorithm performs 11 divisions to compute gcd(31415, 14142). The consecutive-integer-checking algorithm performs either one or two divisions on each of its 14142 iterations, so it does between 1·14142 and 2·14142 divisions. Hence Euclid's algorithm is between 1·14142/11 ≈ 1300 and 2·14142/11 ≈ 2600 times faster.
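A minimal C++ sketch of the two approaches compared above (the function names are assumed for illustration): Euclid's recursion gcd(m, n) = gcd(n, m mod n) versus checking consecutive integers downward from min(m, n).

// Euclid's algorithm: O(log min(m, n)) divisions.
long long gcdEuclid(long long m, long long n) {
    while (n != 0) {
        long long r = m % n;    // gcd(m, n) = gcd(n, m mod n)
        m = n;
        n = r;
    }
    return m;
}

// Consecutive-integer checking: try t = min(m, n), min(m, n) - 1, ...
// until t divides both; up to two divisions per iteration.
long long gcdConsecutive(long long m, long long n) {
    long long t = (m < n) ? m : n;
    while (m % t != 0 || n % t != 0) --t;
    return t;
}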
6. Prove that the equality gcd(m, n) = gcd(n, m mod n) holds for every pair of positive integers m, n.
Hint: From the definition of division it is easy to show that if d divides both u and v, then d divides u ± v, and that if d divides u, then d divides any integer multiple ku of u.
For any pair of positive integers m, n: if d divides both m and n, then d also divides n and r = m mod n = m − qn; conversely, if d divides both n and r, it also divides m = r + qn and n.
Hence the pairs (m, n) and (n, r) have the same (finite, non-empty) set of common divisors, which in particular includes the greatest common divisor. Therefore gcd(m, n) = gcd(n, r).

7. How does Euclid's algorithm handle a pair in which the first number is smaller than the second? At most how many times can this situation occur while processing such an input?
Hint: For any pair with 0 ≤ m < n, the first iteration of Euclid's algorithm simply swaps m and n, since gcd(m, n) = gcd(n, m); this swap happens only once.

8. a. Over all inputs with 1 ≤ m, n ≤ 10, what is the smallest number of divisions Euclid's algorithm can make? 1.
b. Over all inputs with 1 ≤ m, n ≤ 10, what is the largest number of divisions Euclid's algorithm can make? 5, e.g., gcd(5, 8).

Exercises 1.2

1. The farmer crossing the river: W = farmer, 狼 = wolf, G = goat, C = cabbage.

2. The bridge-crossing problem: 1, 2, 5, 10 denote the four people, f = the flashlight.

4. For arbitrary real coefficients a, b, c, an algorithm finds the real roots of the equation ax^2 + bx + c = 0. Write pseudocode for this algorithm; you may assume sqrt(x) computes the square root.
Algorithm Quadratic(a, b, c)
// Finds the real roots of ax^2 + bx + c = 0
// Input: real coefficients a, b, c
// Output: the real roots, or a message that there are none
if a ≠ 0
    D ← b*b − 4*a*c
    if D > 0
        temp ← 2*a
        x1 ← (−b + sqrt(D)) / temp
        x2 ← (−b − sqrt(D)) / temp
        return x1, x2
    else if D = 0 return −b / (2*a)
    else return "no real roots"
else            // a = 0
    if b ≠ 0 return −c / b
    else        // a = b = 0
        if c = 0 return "all real numbers are roots"
        else return "no real roots"

5. Describe the standard algorithm for converting a decimal integer to binary:
a. in words;
b. in pseudocode.
Answer:
a. Algorithm for converting a positive decimal integer n to binary.
Input: a positive integer n. Output: the binary representation of n.
Step 1: divide n by 2; the remainder becomes K_i (i = 0, 1, 2, ...) and the quotient becomes the new n.
Step 2: if n ≠ 0, repeat Step 1; otherwise go to Step 3.
Step 3: output the digits K_i in order from the highest i to the lowest.
b. Pseudocode:
Algorithm DectoBin(n)
// Converts the decimal integer n to binary
// Input: a positive integer n
// Output: the binary representation of n, stored in the array Bin[1..n]
i ← 1
while n ≠ 0 do
    Bin[i] ← n mod 2
    n ← ⌊n / 2⌋
    i ← i + 1
while i > 1 do
    i ← i − 1
    print Bin[i]

9. Consider the following algorithm, which finds the smallest difference between two elements of an array (algorithm omitted). Improve this algorithm as much as possible.
Algorithm MinDistance(A[0..n−1])
// Input: array A[0..n−1]
// Output: the smallest distance d between two of its elements

Exercises 1.3

1. Consider a sorting algorithm that, for each element of the array to be sorted, counts the number of elements smaller than it and then uses this information to place each element in its proper position in the sorted array.
a. Apply this algorithm to the list "60, 35, 81, 98, 14, 47".
b. Is this algorithm stable?
c. Is it in place?
Answer:
a. The algorithm sorts the list "60, 35, 81, 98, 14, 47" as follows:
b. The algorithm is not stable; consider, for example, sorting the list "2, 2*".
c. The algorithm is not in place; it requires extra space for S and Count[].

4. The classical seven-bridges (Königsberg) problem.

Chapter 2, Exercises 2.1

7. Prove or disprove the following assertions (give a counterexample if false):
a. If t(n) ∈ O(g(n)), then g(n) ∈ Ω(t(n)).
b. Θ(α·g(n)) = Θ(g(n)) for α > 0.
Answer: a. This assertion is correct.
[北语网院] Algorithms and Data Analysis (Fall 2018), Assignment 2 (Answers)

[北京语言大学] Algorithms and Data Analysis (Fall 2018), Assignment 2. Total: 100 points; score: 100.
Question 1: The computing time required by the backtracking algorithm for the 0-1 knapsack problem is
A. O(n·2^n)
B. O(n log n)
C. O(2^n)
D. O(n)
Correct answer:
Question 2: The basic elements of dynamic programming are
A. The optimal-substructure property and the greedy-choice property
B. The overlapping-subproblems property and the greedy-choice property
C. The optimal-substructure property and the overlapping-subproblems property
D. Pre-sorting and recursive calls
Correct answer:
Question 3: Merge sort is implemented using (see the sketch after this question list)
A. Divide and conquer
B. Dynamic programming
C. The greedy method
D. Backtracking
Correct answer:
Question 4: Merge sort makes use of
A. Divide and conquer
B. Dynamic programming
C. The greedy method
D. Backtracking
Correct answer:
Question 5: The chessboard-coverage algorithm is implemented using
A. Divide and conquer
B. Dynamic programming
C. The greedy method
D. Backtracking
Correct answer:
Question 6: The longest common subsequence is computed using
A. Divide and conquer
B. Dynamic programming
C. The greedy method
D. Backtracking
Correct answer:
Question 7: The longest-common-subsequence algorithm is based on
A. Branch and bound
B. Dynamic programming
C. The greedy method
D. Backtracking
Correct answer:
Question 8: Which of the following algorithms may sometimes fail to find a solution?
A. Monte Carlo algorithm
B. Las Vegas algorithm
C. Sherwood algorithm
D. Numerical probabilistic algorithm
Correct answer:
Question 9: When the traveling-salesman problem is solved by backtracking, the solution space tree is
A. A subset tree
B. A permutation tree
C. A depth-first spanning tree
D. A breadth-first spanning tree
Correct answer:
Question 10: Which of the following is a basic element of greedy algorithms?
A. Overlapping subproblems
B. Constructing an optimal solution
C. The greedy-choice property
D. Defining an optimal solution
Correct answer:
Question 11: Backtracking is a search algorithm that is both systematic and able to jump (prune).
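Questions 3-5 above point at divide and conquer; a minimal C++ merge sort sketch (illustrative, not from the assignment) shows the pattern: split the range, sort each half recursively, then merge.

#include <vector>

// Merge sort: divide the range in half, sort each half, merge the results.
static void mergeSort(std::vector<int>& a, std::vector<int>& buf, int lo, int hi) {
    if (hi - lo <= 1) return;                  // 0 or 1 element: already sorted
    int mid = lo + (hi - lo) / 2;
    mergeSort(a, buf, lo, mid);                // conquer the left half
    mergeSort(a, buf, mid, hi);                // conquer the right half
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi)                  // merge the two sorted halves
        buf[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) buf[k++] = a[i++];
    while (j < hi)  buf[k++] = a[j++];
    for (int t = lo; t < hi; ++t) a[t] = buf[t];
}

void mergeSort(std::vector<int>& a) {
    std::vector<int> buf(a.size());
    mergeSort(a, buf, 0, static_cast<int>(a.size()));
}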
北京语言大学 (智慧树/知到), Computer Science and Technology — Algorithms and Data Analysis Online Course, Test Answers 2

Volume 1 — Comprehensive Assessment (15 questions)
1. The Monte Carlo algorithm is which of the following? ()
A. A branch-and-bound algorithm
B. A probabilistic algorithm
C. A greedy algorithm
D. A backtracking algorithm
2. The longest-common-subsequence algorithm is based on ().
A. Branch and bound
B. Dynamic programming
C. The greedy method
D. Backtracking
3. Which of the following is not a randomized algorithm? ()
A. Monte Carlo algorithm
B. Las Vegas algorithm
C. Dynamic programming algorithm
D. Sherwood algorithm
4. Backtracking searches the state space tree in which order? ()
A. Inorder traversal
B. Breadth-first traversal
C. Depth-first traversal
D. Level-order traversal
5. When backtracking prunes branches of the state space tree, two criteria are generally used: constraint conditions and a bound on the objective function. The N-queens problem and the 0/1 knapsack problem are of the two different types: the 0/1 knapsack problem uses both the constraints and the objective-function bound for pruning, while the N-queens problem uses only the constraints. ()
A. False
B. True
6. The computing time required by the greedy algorithm for Huffman coding is (). (See the sketch after this question list.)
A. O(n·2^n)
B. O(n log n)
C. O(2^n)
D. O(n)
7. Which of the following is a basic element of greedy algorithms? ()
A. Overlapping subproblems
B. Constructing an optimal solution
C. The greedy-choice property
D. Defining an optimal solution
8. Which of the following is NOT a condition that must be satisfied to solve a problem by divide and conquer? ()
A. The subproblems must be identical in form
B. The subproblems must not overlap
C. The solutions of the subproblems can be combined
D. The original problem and the subproblems are solved by the same method
9. When the traveling-salesman problem is solved by backtracking, the solution space tree is ().
A. A subset tree
B. A permutation tree
C. A depth-first spanning tree
D. A breadth-first spanning tree
10. Branch and bound mainly comes in two forms: queue-based (FIFO) branch and bound and priority-queue branch and bound. ()
A. False
B. True
11. Algorithm complexity has no distinction between time complexity and space complexity. ()
A. False
B. True
12. The 0/1 knapsack problem can be solved with dynamic programming, backtracking, or branch and bound; of these, dynamic programming does not require sorting, while backtracking and branch and bound do. ()
A. False
B. True
13. The Sherwood algorithm can always obtain a solution to the problem. ()
A. False
B. True
14. A program is a concrete implementation of an algorithm in some programming language. ()
A. False
B. True
15. The time complexity of an algorithm is usually determined by counting loop iterations, the frequency of the basic operation, or computation steps.
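Question 6 above concerns Huffman coding as a greedy algorithm; with a binary min-heap, repeatedly merging the two lightest weights runs in O(n log n). A minimal C++ sketch (illustrative only) computes the total weighted code length of the optimal prefix code from the symbol frequencies.

#include <queue>
#include <vector>
#include <functional>

// Greedy Huffman cost: repeatedly merge the two smallest weights.
// Each merge is a constant number of heap operations, so the total
// time is O(n log n).
long long huffmanCost(const std::vector<long long>& freq) {
    std::priority_queue<long long, std::vector<long long>,
                        std::greater<long long>> pq(freq.begin(), freq.end());
    long long cost = 0;
    while (pq.size() > 1) {
        long long a = pq.top(); pq.pop();    // smallest weight
        long long b = pq.top(); pq.pop();    // second smallest weight
        cost += a + b;                       // weight of the new internal node
        pq.push(a + b);
    }
    return cost;
}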
[奥鹏][北语] Algorithms and Data Analysis, Assignment 2 — Full-mark Reference
Algorithms and Data Analysis, Assignment 2
Which of the following problems cannot be solved by divide and conquer?
A: The chessboard-coverage problem
B: The selection problem
C: Merge sort
D: The 0/1 knapsack problem
Correct answer: D
The longest common subsequence is computed using
A: Divide and conquer
B: Dynamic programming
C: The greedy method
D: Backtracking
Correct answer: B
Breadth-first search is a search strategy of
A: Branch and bound
B: Dynamic programming
C: The greedy method
D: Backtracking
Correct answer: A
Which of the following is a randomized algorithm?
A: The greedy algorithm
B: Backtracking
C: Dynamic programming
D: The Sherwood algorithm
Correct answer: D
Merge sort is implemented using
A: Divide and conquer
B: Dynamic programming
C: The greedy method
D: Backtracking
Correct answer: A
Which of the following is a basic element of greedy algorithms?
A: Overlapping subproblems
B: Constructing an optimal solution
C: The greedy-choice property
D: Defining an optimal solution
Correct answer: C
Which statement about NP problems is correct?
A: NP problems are all unsolvable problems
B: The class P is contained in the class NP
C: NP-complete problems are a subset of the class P
D: The class NP is contained in the class P
Correct answer: B
Which of the following algorithms cannot solve the 0/1 knapsack problem? (See the dynamic-programming sketch after this question list.)
A: The greedy method
B: Dynamic programming
C: Backtracking
D: Branch and bound
Correct answer: A
Which of the following is a basic element of dynamic programming?
A: Defining an optimal solution
B: Constructing an optimal solution
C: Computing an optimal solution
D: The overlapping-subproblems property
Correct answer: D
The criterion for judging how good an algorithm is:
A: It runs fast
B: It uses little space
C: It has low time complexity
D: Its code is short
Correct answer: C
The matrix-chain multiplication problem can be solved with dynamic programming.
A: False
B: True
Correct answer: B
When backtracking searches the solution space tree, the two commonly used pruning functions are the constraint function and the bounding function.
A: False
B: True
Correct answer: B
The large-integer multiplication algorithm is designed with divide and conquer.
A: False
B: True
Correct answer: B
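As a follow-up to the 0/1 knapsack question above: the fractional knapsack yields to a greedy choice, but the 0/1 version needs dynamic programming (or backtracking/branch and bound). A minimal C++ sketch of the O(nC) dynamic program over capacities (the names are illustrative):

#include <vector>
#include <algorithm>

// 0/1 knapsack: dp[c] = best value achievable with capacity c.
// Iterating capacities downward ensures each item is used at most once.
int knapsack01(const std::vector<int>& weight,
               const std::vector<int>& value, int capacity) {
    std::vector<int> dp(capacity + 1, 0);
    for (std::size_t i = 0; i < weight.size(); ++i)
        for (int c = capacity; c >= weight[i]; --c)
            dp[c] = std::max(dp[c], dp[c - weight[i]] + value[i]);
    return dp[capacity];
}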
东师 Algorithm Analysis and Design (Spring 2020), Online Assignment 2 — Answers (19)

(Multiple choice) 1: The definition of a path in a graph is ().
A: A sequence of edges formed by vertices and ordered pairs of adjacent vertices
B: A sequence formed by distinct vertices
C: A sequence formed by distinct edges
D: None of the above
Correct answer: A
(Multiple choice) 2: () is a fairly complete development tool set that includes most of the tools needed across the software life cycle, such as UML tools, source-control tools, an integrated development environment, and so on.
A: VS
B: VM
C: Dev-C++
D: IDE
Correct answer: A
(Multiple choice) 3: Among the following data structures, the non-linear one is ().
A: Circular queue
B: Linked queue
C: Binary tree
D: Linked stack
Correct answer: C
(Multiple choice) 4: Which of the following statements is correct? ()
A: Sequential storage is necessarily contiguous, whereas linked storage need not be contiguous
B: Sequential storage is only for linear structures, and linked storage is only for non-linear structures
C: Sequential storage can hold an ordered list, but linked storage cannot
D: Linked storage saves space compared with sequential storage
Correct answer: A
(Multiple choice) 5: The largest digit in hexadecimal is ().
A: 16
B: 15
C: F
D: E
Correct answer: C
(Multiple choice) 6: In binary, a digit in any position carries over to the next position when the count reaches ().
A: 2
B: 8
C: 9
D: 10
Correct answer: A
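Questions 5 and 6 above are about positional number systems. A minimal C++ sketch (illustrative) produces the binary digits of a non-negative integer by repeated division by 2, i.e., the base-2 carry rule applied in reverse:

#include <string>
#include <algorithm>

// Convert a non-negative integer to its binary representation.
std::string toBinary(unsigned int n) {
    if (n == 0) return "0";
    std::string bits;
    while (n > 0) {
        bits.push_back(char('0' + (n % 2)));  // least significant bit first
        n /= 2;
    }
    std::reverse(bits.begin(), bits.end());   // most significant bit first
    return bits;
}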
(Multiple choice) 7: Double-clicking outside the code editing area (usually in the leftmost margin beside the code) sets a breakpoint.
Algorithms and Data Analysis (Fall 2020), Assignment 2
For the optimal-loading problem solved with a greedy algorithm, the main computational work is sorting the containers by weight in non-decreasing order, so the time complexity of the algorithm is
A: O(n·2^n)
B: O(n log n)
C: O(2^n)
D: O(n)
Answer: B
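A minimal C++ sketch of that greedy strategy (names are illustrative): sort the container weights in increasing order and load the lightest containers first; the sort dominates, giving O(n log n).

#include <vector>
#include <algorithm>

// Greedy optimal loading: maximize the number of containers loaded
// onto a ship of the given capacity by always taking the lightest next.
int maxContainers(std::vector<int> weights, int capacity) {
    std::sort(weights.begin(), weights.end());   // O(n log n), dominant cost
    int loaded = 0;
    for (int w : weights) {
        if (capacity < w) break;                 // no room for the next lightest
        capacity -= w;
        ++loaded;
    }
    return loaded;
}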
Which of the following algorithms may sometimes fail to find a solution?
A: Monte Carlo algorithm
B: Las Vegas algorithm
C: Sherwood algorithm
D: Numerical probabilistic algorithm
Answer: B
The longest-common-subsequence algorithm is based on
A: Branch and bound
B: Dynamic programming
C: The greedy method
D: Backtracking
Answer: B
Which of the following algorithms usually searches for the problem's solutions systematically in a depth-first manner?
A: The memoization method
B: Dynamic programming
C: The greedy method
D: Backtracking
Answer: D
Strassen's matrix multiplication is an algorithm implemented with
A: Divide and conquer
B: Dynamic programming
C: The greedy method
D: Backtracking
Answer: A
An algorithm that systematically searches for the problem's solutions in a depth-first manner is called
A: A branch-and-bound algorithm.