An Approximate Algorithm for Resource Allocation Using Combinatorial Auctions, presented at


An Approximate Algorithm for Resource Allocation Using Combinatorial Auctions

Viswanath Avasarala, Himanshu Polavarapu, Tracy Mullen

School of Information Science and Technology

Pennsylvania State University

{vavasarala, tmullen}@…

himanshu@…

Abstract

Combinatorial auctions (CAs), where users bid on combinations of items, have emerged as a useful tool for resource allocation in distributed systems. However, two main difficulties hinder the adoption of CAs in time-constrained environments. The first is the computational complexity of winner determination. The second is the computational complexity of eliciting utility valuations for all possible combinations of resources for different tasks. To address both issues, we developed a new algorithm, the Seeded Genetic Algorithm (SGA), for finding high-quality solutions quickly. SGA uses a novel representational schema that produces only feasible solutions. We compare the winner determination performance of our algorithm with Casanova, another local stochastic search procedure, on typically hard-to-solve bid distributions, and show that SGA converges to a better solution than Casanova for large problem sizes. However, for many bid distributions, exact winner determination using integer programming approaches is very fast, even for large problem sizes. In these cases, SGA can still provide significant time savings by eliminating the requirement of formulating all possible bids.

1. Introduction

An auction can be defined as “a market institution with an explicit set of rules determining resource allocation and prices on the basis of bids from the market participants” [1]. Market-oriented programming uses market mechanisms like auctions to build computational economies that solve distributed resource allocation problems. In standard auction mechanisms, individual goods are auctioned independently of each other. These mechanisms can lead to inefficient allocations when agent utilities for various goods show strong complementarities. Combinatorial auctions allow agents to bid on a combination of items and express any complementarities directly. For example, consider resource allocation in a sensor network, where the available resources must be allocated to different target tracks. To maintain a target track, not only must measurements be scheduled over the various sensors, but bandwidth for communicating the measurements to the fusion system must also be reserved. Scheduling measurements is of no use to the consumers unless bandwidth to communicate the measurements is also available. Combinatorial auctions can be used in such environments to allow bidders to express these synergistic relationships between goods. Combinatorial auctions have been used for resource allocation in a number of domains with strict real-time constraints, such as sensor management (SM) [2, 3], supply chain management [4], and computer grids [5]. However, two issues have acted as stumbling blocks to the adoption of CAs in time-constrained environments. The first is the computational complexity of finding the optimal allocation in the combinatorial auction, also known as the winner determination problem, which is NP-hard [6]. The second is the computational complexity of eliciting valuations for all possible combinations of resources for different tasks/users.
In this paper, we propose approximate algorithms that reduce the overall computational complexity of using CAs for resource allocation to polynomial run times.

Recent years have seen great strides in research on winner determination in combinatorial auctions. Depending on the distribution from which the bids are generated, either the CABOB algorithm [7] or integer programming approaches using a standard commercial package such as CPLEX [8] have been found to give the best real-time performance. For many realistic distributions, CPLEX and CABOB perform extremely well, with average running times of less than a second for problem sizes with thousands of bids. However, for certain complex bid distributions, neither CPLEX's nor CABOB's real-time performance may be adequate in environments with strict real-time constraints. For example, we tested CPLEX and our implementation of CABOB on these “hard” bid distributions (see [7] for a description of the various bid distributions) with a problem size of 1000 bids and recorded running times of up to 10,000 seconds. Note that our version of CABOB may not be as efficient as the original, but we are mainly looking at order-of-magnitude effects. Casanova, a local stochastic search procedure based on Novelty+ [9, 10], is an approximate winner determination algorithm. Casanova gives good-quality solutions very quickly, even for fairly large problem instances, on a variety of bid distributions. However, in our experiments, we found that Casanova's performance falls significantly as problem size increases.

Standard winner determination algorithms require bids on sets of resources as input. But bid formulation is also potentially computationally expensive [11, 12]. For example, if there are n tasks and m goods, n(2^m − 1) bids may have to be formulated to compute the optimal allocation. Researchers have experimented with many approaches to tackle the complexity of bid formulation, also known as preference elicitation (see [13] for an exhaustive survey). However, except under very restrictive assumptions, preference elicitation is exponential in complexity. In particular, in technical domains like sensor fusion, where human participation in the bidding process is absent, the cognitive limitations that prevent human buyers from bidding on exponentially many bundles are also not present. In these domains, restricted preference elicitation techniques do not work.

To overcome the above-mentioned shortcomings, we formulated a novel genetic-algorithm-based search procedure, the Seeded Genetic Algorithm (SGA), for approximate winner determination in combinatorial auctions. This paper is organized as follows. Section 2 presents the algorithm details. In the results section, we compare the performance of SGA with other winner determination algorithms, both in terms of real-time performance and optimality. The final section describes our conclusions and directions for future work.

2. SGA description

This section formalizes the winner determination problem and provides a brief overview of some previous algorithms, before discussing SGA in detail.

2.1. Winner determination for resource allocation using CAs

Assume that the auctioneer has a set of m goods G = {g_1, …, g_m} to sell, and a set of n bids B = {b_1, …, b_n}. Each bid has the form b_j = (S_j, p_j), where S_j ⊆ G is the bundle of goods requested and p_j is the price the bidder is willing to pay for it. The winner determination problem is to find an allocation of goods that maximizes the overall utility, subject to the constraint that each good can be sold to at most one bid. Formally, this problem can be characterized as [6]:

maximize   Σ_{j=1}^{n} p_j x_j

subject to   Σ_{j : i ∈ S_j} x_j ≤ 1,   ∀ i ∈ {1, …, m}

where x_j is 1 if bid j is accepted into the final allocation and 0 otherwise.

This problem can be solved as a mixed integer program using standard MIP software such as CPLEX. Another example of an exact winner determination procedure is CABOB, which uses a specialized depth-first search that branches on bids. CABOB constructs a bid graph with each bid in the set B as a vertex, and edges between vertices only when the corresponding bids have items in common. CABOB then uses a depth-first search (DFS) to find the optimal solution. Once an initial bid is selected as being IN a possible allocation, the remaining bids whose items do not overlap with it can also be IN the allocation, and these are considered along a particular search path of the DFS. CABOB bounds the DFS by pruning paths that are already dominated.
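As a concrete (and deliberately naive) illustration of the 0/1 formulation above, the following Python sketch enumerates all bid subsets; real solvers such as CPLEX and CABOB prune this search aggressively, and the function name and bid encoding here are our own illustrative choices, not code from the paper:

```python
def solve_wdp(bids):
    """Exact winner determination by exhaustive search over bid subsets.

    bids: list of (bundle, price) pairs, where bundle is a frozenset of goods.
    Runs in O(2^n); for illustration of the objective and constraint only.
    """
    best_value, best_set = 0.0, []
    n = len(bids)
    for mask in range(1 << n):                      # every subset of bids
        chosen = [j for j in range(n) if mask >> j & 1]
        used, value, feasible = set(), 0.0, True
        for j in chosen:
            bundle, price = bids[j]
            if used & bundle:                       # a good sold twice: infeasible
                feasible = False
                break
            used |= bundle
            value += price
        if feasible and value > best_value:
            best_value, best_set = value, chosen
    return best_value, best_set

# Tiny example: four bids over goods a, b, c, d.
bids = [(frozenset("a"), 1.0), (frozenset("ab"), 3.0),
        (frozenset("bc"), 2.5), (frozenset("cd"), 2.0)]
print(solve_wdp(bids))  # → (5.0, [1, 3])
```

Bids 1 and 3 (0-indexed) are disjoint and together yield the maximum revenue of 5.0; any subset touching a good twice is rejected.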

The approximate winner determination search procedure, Casanova, is based on local stochastic search. Casanova starts with an empty allocation and bids are chosen from the unsatisfied bids randomly and added to the allocation vector. The probability that a bid is chosen is determined using its normalized score, calculated as the bid price divided by the product of the number of goods the bid consumes, the bid’s age, and the number of steps since it has last been chosen to be added to the allocation vector.

2.2. SGA

Genetic algorithms (GAs) are stochastic, and their run time is polynomial in the number of bids rather than exponential. To design a fast GA-based algorithm for combinatorial auctions, we initially experimented with a binary-string representation for the GA chromosome. In this representation, a solution is coded as a binary string whose length equals the number of bids in the problem instance. A binary string is decoded into an allocation vector as follows: a bid is labeled winning if the corresponding string bit is one, and losing otherwise. However, this schema produces infeasible solutions in which multiple bids containing a common item are marked as winning. The genetic algorithm had poor convergence, in spite of the use of penalty functions to penalize infeasible allocations. To overcome this problem, we devised a novel representational schema, based on a ranking scheme, that produces only feasible solutions; it is described below.

2.2.1. Representational Schema. In the ranking-based representation schema, a chromosome is represented by a string of numbers, where each number corresponds to the rank of the corresponding bid. The solution represented by a chromosome is decoded in the following manner:

1. Initialize the solution with the lowest-ranked bid (rank 1).

2. Select the bid with the next-lowest rank. Add it to the solution if doing so does not create an infeasible solution; otherwise, proceed to the bid with the next rank.

3. Continue until no bid can be added without creating an infeasible solution.

For example, consider the following scenario with 4 bids and 4 items (Table 1). The allocation vector associated with the chromosome “2-1-3-4” is decoded as follows. The initial allocation vector is empty. Since the lowest-ranked bid is bid 2 (with rank 1), bid 2 is added to the allocation vector. The next-lowest-ranked bid is bid 1 (rank 2). However, bid 1 contains item a, which is also present in bid 2. Since bid 2 has already been labeled winning, bid 1 is labeled losing. The procedure is repeated until the status of the highest-ranked bid (bid 4 in this case) is determined. Thus, the above chromosome corresponds to a solution in which bids 2 and 4 are marked winning and the rest are marked losing.

Chromosome: 2-1-3-4

Bid No.   Items    State
1         {a}      Lose
2         {a,b}    Win
3         {b,c}    Lose
4         {c,d}    Win

Table 1. Example scenario
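The decoding procedure above can be sketched in Python as follows (an illustrative helper, not the authors' implementation; bids are 0-indexed here, so the result [1, 3] corresponds to bids 2 and 4 in Table 1's numbering):

```python
def decode(ranks, bundles):
    """Decode a ranking chromosome into a feasible allocation.

    ranks[b] is the rank of bid b; bundles[b] is its set of items.
    Bids are visited in ascending rank order and added greedily
    whenever they do not reuse an already-sold item.
    """
    winners, used = [], set()
    for b in sorted(range(len(ranks)), key=lambda b: ranks[b]):
        if not (bundles[b] & used):   # no item in this bid is sold yet
            winners.append(b)
            used |= bundles[b]
    return winners

# Table 1 example: chromosome "2-1-3-4".
bundles = [{"a"}, {"a", "b"}, {"b", "c"}, {"c", "d"}]
print(decode([2, 1, 3, 4], bundles))  # → [1, 3]
```

Because every chromosome decodes through this greedy pass, any string of ranks yields a feasible allocation, which is the key property of the schema.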

2.2.2. SGA operators. We used standard crossover, mutation, and tournament selection operators in the genetic algorithm. The crossover operator produces two offspring chromosomes by swapping randomly chosen crossover segments between two parent chromosomes. The mutation operator changes the rank of a randomly chosen bid to a random value. The tournament selection operator replaces all the chromosomes in a given group with an equal number of replicas of the chromosome with the highest fitness in the group.

The starting population used in SGA consists of strings in which the integers 1 through the number of bids are randomly assigned to the different bids. In addition to these operators, we added an operator called the regenerator to prevent degeneracy in the chromosome population. To understand the need for this operator, consider the crossover operation between the following two chromosomes of length four:

a. chromosome A: 2--1--3---4

b. chromosome B: 3--2-- 4---1

If the crossover segment is of length two and the crossover point is chosen to be two, the offspring chromosomes produced from the crossover operation are

c. chromosome C1: 2--2--4---4

d. chromosome C2: 3--1--3---1
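Assuming zero-based list indexing (the crossover point 2 in the example corresponds to index 1 here), the segment-swap crossover can be sketched in Python; the function name and signature are our own illustrative choices:

```python
def crossover(p1, p2, point, length):
    """Segment-swap crossover: exchange ranks[point:point+length]
    between two parent chromosomes, returning two offspring."""
    c1, c2 = p1[:], p2[:]                       # copy parents
    c1[point:point + length] = p2[point:point + length]
    c2[point:point + length] = p1[point:point + length]
    return c1, c2

A = [2, 1, 3, 4]   # chromosome A
B = [3, 2, 4, 1]   # chromosome B
print(crossover(A, B, point=1, length=2))
# → ([2, 2, 4, 4], [3, 1, 3, 1])   i.e. offspring C1 and C2
```

The output reproduces offspring C1 and C2 from the text, including the duplicated ranks that motivate the regenerator operator.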

As shown, the mutation and crossover operators can produce chromosomes in which the same rank appears multiple times. Such chromosomes can still be decoded using heuristics. For example, assign a unique number, bidID, to each bid; if two bids have the same rank, only add the bid with the lower bidID to the allocation vector. However, as the genetic algorithm evolves and the number of generations increases, the degeneracy in the chromosomes increases. The genetic algorithm may then converge prematurely, since chromosomes with limited variability in their bid ranks lose the ability to produce high-quality solutions. To avoid this problem, we devised the regenerator operator. The regenerator takes a degenerate chromosome and produces a healthy chromosome in which no rank is repeated. The algorithm used by the regenerator operator is as follows:

i = 0; r = 1;
while (r <= number of bids) {
    i = i + 1;
    B = set of bids with rank == i;
    while (B is not empty) {
        b = bid with lowest bidID in B;
        rank of b = r;
        remove b from B;
        r = r + 1;
    }
}

The computational complexity of the regenerator operator is linear in the length of the chromosome. The regenerator operator ensures that each bid in the chromosomes has a unique rank and these ranks are distributed between one and the number of bids in the problem instance. This has an added advantage. Regeneration makes the computational complexity of chromosome sorting for the purpose of decoding the allocation associated with it linear in the length of the chromosome. This greatly increases the speed of the genetic algorithm.
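One way to implement the regenerator is a stable sort on (rank, bidID); the Python sketch below is our own illustrative code (it uses an O(N log N) sort for brevity rather than the linear bucket pass described above) and reproduces the behavior on offspring C1 and C2:

```python
def regenerate(ranks):
    """Repair a degenerate chromosome: assign unique ranks 1..N,
    ordered by original rank and breaking ties by bid ID."""
    order = sorted(range(len(ranks)), key=lambda b: (ranks[b], b))
    fresh = [0] * len(ranks)
    for r, b in enumerate(order, start=1):
        fresh[b] = r                     # bid b receives unique rank r
    return fresh

print(regenerate([2, 2, 4, 4]))  # offspring C1 → [1, 2, 3, 4]
print(regenerate([3, 1, 3, 1]))  # offspring C2 → [3, 1, 4, 2]
```

In both outputs every rank from 1 to 4 appears exactly once, and the relative order of the original ranks is preserved, with the lower bidID winning ties.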

2.3. Seeding the GA

If an initial high-quality solution is available, it can be used to seed the genetic algorithm by initializing some members of the initial random population from it. A seed chromosome, seed, is created from an initial solution as follows:

n = size of the initial solution;
for each bid b {
    if (b belongs to the initial solution)
        seed[b] = random number between 1 and n;
    else
        seed[b] = random number between (n + 1) and the number of bids;
}
Regenerate(seed);

Bids in the initial solution receive the lowest ranks, so the decoding procedure of section 2.2.1 considers them first and reproduces the seed solution.

If the GA is seeded with a high-quality initial solution, it converges very rapidly, often improving on the initial seed. Seeding has two uses:

1. If a faster, but less efficient approximate winner determination algorithm is available, the solution provided by this algorithm can be used as a seed for the GA, enabling it to converge more rapidly.

2. Seeding can be used for approximate incremental winner determination. Assume that the auctioneer has calculated an optimal (or high-quality) solution for the bids it has received. If certain bids are retracted or new bids are added (as can happen in the SM scenario), the auctioneer can use the original solution as a seed to obtain a high-quality solution to the new problem very quickly. Linear programming can be used to obtain a quick lower bound for the winner determination problem [7]. For certain bid distributions, this bound yields a very high-quality solution. For example, for the CATS distributions and the weighted random distribution [7], the linear programming solution is usually close in value to the optimal allocation.
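Seed construction can be sketched in Python as follows; `make_seed` and its arguments are hypothetical names, and the regenerator is included so the sketch is self-contained. Bids in the given solution are assigned the lowest ranks so that the rank-based decoder reproduces the seed solution first:

```python
import random

def regenerate(ranks):
    """Assign unique ranks 1..N, ordered by (original rank, bid ID)."""
    order = sorted(range(len(ranks)), key=lambda b: (ranks[b], b))
    fresh = [0] * len(ranks)
    for r, b in enumerate(order, start=1):
        fresh[b] = r
    return fresh

def make_seed(solution, num_bids, rng=random):
    """Build a seed chromosome from an initial solution (a set of bid IDs).

    Solution bids draw raw ranks from [1, n]; all others from [n+1, num_bids].
    Regeneration then turns these into a unique permutation in which every
    solution bid outranks every non-solution bid.
    """
    n = len(solution)
    ranks = [rng.randint(1, n) if b in solution
             else rng.randint(n + 1, num_bids)
             for b in range(num_bids)]
    return regenerate(ranks)

seed = make_seed({1, 3}, num_bids=4)
assert sorted(seed) == [1, 2, 3, 4]       # a valid permutation of ranks
assert {seed[1], seed[3]} == {1, 2}       # solution bids hold the lowest ranks
```

Although the raw ranks are random, regeneration guarantees the invariant checked by the assertions: the seed is always a permutation, and the seeded bids always decode first.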

2.4. Avoiding Explicit Bid Formulation

Consider a resource allocation problem with m resources R = {r_1, …, r_m} and n tasks {t_1, …, t_n}. Let Θ = {θ_1, θ_2, …} be the power set of R. To use a standard combinatorial-auction-based approach for allocating resources to the various tasks, bids of the form b_ij = (θ_i, u_ij), where u_ij is the utility of allocating resource set θ_i to task t_j, must be formulated as a preprocessing step. This is infeasible when the number of items to be allocated is large and the real-time constraints are strict. The representation schema used by SGA can be modified to avoid explicit bid formulation. For these problems, the chromosome is a string of length m, and each chromosome member is a number between 1 and n. For example, consider a three-task, four-item problem. The chromosome “2-1-3-2” represents an allocation in which the bid of task 2 on items 1 and 4, the bid of task 1 on item 2, and the bid of task 3 on item 3 are labeled ‘winning’. To evaluate the fitness of the chromosome, only the utilities of these three bids need to be calculated.

If the size of the population used in SGA is λ_1 and the number of generations SGA is run for is λ_2, the total number of bids that need to be evaluated under this representation schema is λ_1·λ_2·m. For the standard formulation, n(2^m − 1) utility computations are required. In a distributed environment, where task utility information is not available to the resource-allocating algorithm, SGA requires the users to submit λ_1·λ_2·m bids to the auctioneer. To avoid this communication load, users could instead submit their utility function u_i(S) as their bid at the start of the auction, where u_i(S) is the i-th user's reported utility for resource set S. This bidding procedure is similar to the Generalized Vickrey Auction mechanism [14]. Approximate techniques such as function-estimating neural networks can be used to evaluate u_i(S) in polynomial run time when exact formulation is computationally expensive. For an illustration of how machine learning algorithms have been used to estimate u_i(S) in sensor networks using domain-specific knowledge, see [15]. It should be noted that there is no straightforward way to adapt Casanova to avoid an exhaustive bid-formulation stage.
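A sketch of how the implicit representation avoids explicit bids: decoding an item-to-task chromosome directly yields the per-task bundles whose utilities must be evaluated, so only those bundles are ever priced (illustrative Python; the function name is our own):

```python
def bundles_from_chromosome(chrom):
    """Decode the item-to-task chromosome of Section 2.4.

    chrom[i] is the task (1..n) that item i+1 is assigned to.
    Returns a mapping from task to its allocated bundle of items;
    fitness evaluation only needs the utilities of these bundles.
    """
    bundles = {}
    for item, task in enumerate(chrom, start=1):
        bundles.setdefault(task, set()).add(item)
    return bundles

# Three tasks, four items: the chromosome "2-1-3-2" from the text.
print(bundles_from_chromosome([2, 1, 3, 2]))
# → {2: {1, 4}, 1: {2}, 3: {3}}
```

Only three bundle utilities are computed for this chromosome, instead of the n(2^m − 1) = 3·(2^4 − 1) = 45 valuations the standard formulation would require up front.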

3. Results

We performed two sets of experiments to validate the performance of SGA. The first set of experiments compares the performance of SGA and Casanova on hard bid distributions, which are difficult for exact winner determination algorithms (see section 3.1 for details). The second set of experiments compares SGA’s performance to CPLEX on soft bid distributions, which are easy for exact winner determination techniques (see section 3.2 for details).

3.1. Comparison with Casanova

For our experiments on hard distributions, we selected the two of Sandholm's bid distributions [7] that proved toughest for the exact winner determination algorithms:

a. Uniform(n, m, λ): the problem consists of n bids, each with λ items chosen without replacement from the m items. The price is chosen from a uniform distribution on [0, 1]. For our experiments, we used m = n/10 and λ = 5, as in [7].

b. Bounded(n, m, λ1, λ2): the problem consists of n bids. The number of items λ in each bid is randomly chosen between a lower bound λ1 and an upper bound λ2. The price is chosen from a uniform distribution on [0, λ]. For our experiments, we used m = n/10, λ1 = 1, and λ2 = 5, as in [7].

These distributions were the hardest for both CABOB and CPLEX. For example, CPLEX took an average of 4300 (757) seconds on a 2.8 GHz Pentium IV processor for problem sizes of 900 bids for the uniform (bounded) distribution. For larger problems, the run time was much higher; in one sample run, a problem of 1000 bids with the uniform distribution took more than 13 CPU hours. The parameters used for SGA are shown in Table 2.

Population size (> 1400 bids): 6500
Population size (≤ 1400 bids): 6000
Tournament size: 2
Crossover probability: 0.995
Mutation probability: 0.02

Table 2. SGA parameters

Our implementation of CABOB was slower than CPLEX for the above distributions. This is consistent with the reported results for CABOB and CPLEX [7]. We used the same parameters for all the experiments, except the population size. For problem sizes with less than 1400 bids, we used a population size of 6000 chromosomes and for larger problems, we used a population size of 6500. For Casanova, we used the parameters suggested by the authors in the original paper. However, we found that changing the walk probability to 0.4 gives better results and hence wp = 0.4 was used. We tested both Casanova and SGA on large problem sizes, varying the number of bids from 1000 to 2000 bids.

To accurately compare SGA's performance to the optimal solution requires first finding that solution using an exact winner determination algorithm. However, given our large problem sizes, the exact algorithms took days to find the optimum. Instead, we adopted a statistical approach to estimate the quality of the solutions generated by SGA and Casanova. We postulated that the average optimal value for a particular distribution and a given problem size (fixed number of bids and items) is directly proportional to the number of items in the auction. We tested this hypothesis for problem sizes of 500 to 800 bids for both distributions using the following approach. For each problem size, we generated 20 random problems and calculated the average value of the optimal solution. The R-square of the least-squares regression line, with problem size (number of bids) as the predictor and average optimal value as the response, was greater than 0.99 for both distributions. Hence, the linear relation between the two is fairly strong. Figure 1 (Figure 2) shows the regression line and the optimal values of all the points generated for the uniform (bounded) distribution. We did not notice any particular trend in the deviation of the optimal value from its mean based on problem size. To estimate the average optimal value for a given problem size (μ_size), we extrapolate the regression lines in Figure 1 (Figure 2).

Figure 1. Regression line for average optimality versus problem size for uniform distribution.

Figure 2. Regression line for average optimality versus problem size for bounded distribution.

This extrapolation is reasonable, since there was no increasing trend in the deviation of the optimal value from its mean as problem size increased. Figure 3 (Figure 4) gives the average estimated optimality obtained using SGA and Casanova on the uniform (bounded) bid distribution. To generate these results, SGA and Casanova were run on 50 different randomly selected problems for each problem size. The average optimal value for each problem size, μ_size, is calculated by extrapolation using Figure 1 (Figure 2). To calculate the percentage optimality achieved by SGA and Casanova, we divide the mean outcome of the approximate algorithms for a particular problem size by μ_size. Since these results are not exact and depend on the robustness of the extrapolation method used, we also show the 95% confidence interval of the optimal solution for each problem size. The 95% confidence interval is calculated from the 95% confidence interval for μ_size, assuming that the standard deviation of μ_size does not depend on problem size (which is consistent with our experimental results). Figures 3 and 4 show that Casanova is better for smaller problems, while SGA gives better results for larger problem sizes. This is because Casanova's search is primarily memoryless: it periodically discards solutions generated in the past and restarts its search every few seconds. Though the highly greedy approach used by Casanova is advantageous for smaller problems, a memoryless search does not work as well for larger problem sizes. Since SGA and Casanova are stochastic algorithms, we also show a scatter plot of the results obtained for a particular size of the bounded distribution in Figure 5. For a large percentage of problems, SGA gives better results than Casanova.

Figure 3. Estimated percentage optimality (with their 95% confidence intervals) versus problem size for uniform distribution. We used a cut-off time of 200 CPU-Sec on 2.8 GHz Pentium IV processor.

Figure 4. Estimated percentage optimality (with their 95% confidence intervals) versus problem size for bounded distribution. We used a cut-off time of 200 CPU-Sec on 2.8 GHz Pentium IV processor


Figure 5. Correlation of the revenue obtained by SGA and Casanova for the bounded distribution with a problem size of 2000 bids. The line shows a revenue ratio of 1.

3.1.1. Real-Time Performance. We compared the real-time performance of SGA and Casanova for the uniform bid distribution with a problem size of 2000 bids (Figure 6); we found similar results for the bounded distribution. The highly greedy approach used by Casanova results in better real-time performance initially: SGA starts with a randomly initialized population, and the evolution of good-quality solutions requires a fair amount of computation. However, SGA overtakes Casanova on longer time scales and converges to better optimality. The advantages of SGA and Casanova can be combined by using Casanova to seed SGA: Casanova is run until it reaches a plateau in its performance, and the high-quality solutions it provides are then used to seed SGA. Figure 6 shows the real-time performance of SGA and Casanova when run separately. Figure 7 shows the comparative real-time performance of Casanova and SGA when Casanova is used to seed SGA (at t = 70 CPU-sec).

Figure 6. Real-time performance of GA and Casanova on a 2.8GHz Pentium-IV processor (averaged over 20 runs)

Figure 7. Real-time performance of SGA (seeded with Casanova) and Casanova on a 2.8GHz Pentium-IV processor (averaged over 20 runs)

As Figure 7 shows, the combined algorithm has better, or at least equivalent, real-time performance at all times compared to Casanova, and it also converges to better optimality. Interestingly, SGA as a stand-alone algorithm has better overall convergence than SGA seeded with Casanova. This happens because the high-quality solutions provided by Casanova lead to premature convergence in SGA.

3.2. Comparison to CPLEX

For many realistic bid distributions, exact winner determination using a CPLEX-based IP formulation is very fast, even for large problems [7, 8]. To test the performance of SGA on such “soft” bid distributions, we used the resource allocation problems generated by MASM [2, 3]. MASM is a market-based sensor resource allocation mechanism in which the sensor manager generates combinatorial bids for every possible combination of sensors for each task. Due to space constraints, details of MASM and the underlying resource allocation problem are not explained here, but they are available in [15]. The times taken for bid formulation and for winner determination using CPLEX are given in Figures 8 and 9, respectively. Clearly, the bottleneck of using CAs in this domain is the bid formulation step that maps from tasks to sensor resources.

For SGA, a population size of 150 and a number of generations equal to (#bids/1000) were used. The other parameters are as shown in Table 2. Figure 10 compares the total time required for resource allocation using the CPLEX-based approach and the SGA-based approach. Figure 11 shows the average optimality SGA achieved relative to CPLEX. Clearly, for a modest loss of optimality, SGA provides significant savings in the real-time requirements of the allocation algorithm.

Figure 8. Time required for bid formulation (averaged over 10 runs)

Figure 9. Time required for winner determination using CPLEX (averaged over 10 runs)

Figure 10. Comparison of time requirements of SGA and CPLEX-based optimization (averaged over 10 runs)

Figure 11. Optimality achieved by SGA (averaged over 10 runs)

4. Conclusions

The successful application of combinatorial auctions to environments with very strict real-time constraints is contingent on the development of adequate winner determination algorithms. Recent years have seen great strides in this area: optimal winner determination is now possible within a fraction of a second for fairly large problem instances over various bid distributions. However, if the problem instances are too large, or if the bid distributions are too difficult for the complete algorithms, approximate real-time algorithms are the alternative.

We implemented SGA for real-time approximate winner determination in combinatorial auctions and compared its performance with a local stochastic search procedure, Casanova. For the tested problem instances, we found that SGA, on average, converges to better solutions for the larger problem instances. Thus, SGA is also a promising candidate for pruning the search in complete winner determination algorithms. Regarding real-time performance, we found that Casanova performs better in the initial stages but is later outperformed by SGA, so the choice between the two algorithms depends on the real-time demands of the application domain. However, Casanova can be used to seed SGA, yielding a hybrid algorithm that is better than, or at least similar to, Casanova for all time ranges.

The main advantage of SGA is that it does not require the set of all possible bids as an input. As a result, SGA provides significant time-savings over exact winner determination algorithms by avoiding the cost of bid formulation, as illustrated by the results from MASM.

A key issue with SGA is its use in strategic environments. Auction mechanisms like the GVA are not necessarily incentive compatible when approximate winner determination techniques are used [16]. Also, in strategic environments, privacy concerns might prevent agents from revealing their valuation information [14]. This could be a problem if SGA requires users to submit their utility functions as bids. We are currently investigating techniques for adding incentive-compatible measures to approximate winner determination algorithms.

5. References

[1] P. R. Wurman, M. P. Wellman, and W. E. Walsh, “A Parameterization of the Auction Design Space”, Games and Economic Behavior, Vol. 35, 2001, pp. 304-338.

[2] V. Avasarala, T. Mullen, D. L. Hall, and A. Garga, “MASM: A market architecture for sensor management in distributed sensor networks”, SPIE Defense and Security Symposium, Orlando, FL, 2005, pp. 5813-30.

[3] T. Mullen, V. Avasarala, and D. L. Hall, “Customer-Driven Sensor Management”, IEEE Intelligent Systems, Vol. 21, No. 2, March-April 2006, pp. 41-49.

[4] W. E. Walsh, M. Wellman, and F. Ygge, “Combinatorial Auctions for Supply Chain Formation”, In Proc. of the ACM Conf. on Electronic Commerce (EC'00), Oct 2000, pp. 260-269.

[5] A. Das and D. Grosu, “Combinatorial Auction-Based Protocols for Resource Allocation in Grids”, 19th IEEE International Parallel and Distributed Processing Symposium (IPDPS'05), 2005.

[6] M. Rothkopf, A. Pekec, and R. Harstad, “Computationally Manageable Combinatorial Auctions”, Management Science, Vol. 44, No. 8, 1998, pp. 1131-1147.

[7] T. Sandholm, S. Suri, A. Gilpin, and D. Levine, “CABOB: A Fast Optimal Algorithm for Winner Determination in Combinatorial Auctions”, In Proc. IJCAI, 2001, pp. 1002-1008.

[8] A. Andersson, M. Tenhunen, and F. Ygge, “Integer Programming for Combinatorial Auction Winner Determination”, In Proc. Fourth International Conf. on Multi-Agent Systems (ICMAS), IEEE Computer Society, Boston, MA, 2000, pp. 39-46.

[9] H. Hoos, “On the Run-time Behaviour of Stochastic Local Search Algorithms for SAT”, In Proc. AAAI-99, 1999, pp. 661-666.

[10] H. Hoos and C. Boutilier, “Solving Combinatorial Auctions Using Stochastic Local Search”, In Proc. AAAI-00, 2000, pp. 22-29.

[11] D. Parkes, “Optimal Auction Design for Agents with Hard Valuation Problems”, In Agent-Mediated Electronic Commerce Workshop, Stockholm, Sweden, 1999.

[12] K. Larson and T. Sandholm, “Costly Valuation Computation in Auctions”, In Theoretical Aspects of Rationality and Knowledge VIII, Siena, Italy, 2001, pp. 169-182.

[13] V. Conitzer, T. Sandholm, and P. Santi, “Combinatorial Auctions with k-wise Dependent Valuations”, In Proc. of the National Conference on Artificial Intelligence (AAAI), Pittsburgh, PA, 2005, pp. 248-254.

[14] M. Rothkopf, T. Teisberg, and E. Kahn, “Why Are Vickrey Auctions Rare?”, Journal of Political Economy, Vol. 98, No. 1, 1990, pp. 94-109.

[15] V. Avasarala, “Multi-agent Systems for Data-rich, Information-poor Environments”, PhD Thesis, College of IST, Pennsylvania State University, 2006.

[16] D. Lehmann, L. O'Callaghan, and Y. Shoham, “Truth Revelation in Rapid, Approximately Efficient Combinatorial Auctions”, In Proc. 1st ACM Conf. on Electronic Commerce (EC-99), 1999, pp. 96-102.
