Stabilizing Motion Tracking Using Retrieved Motion Priors

Andreas Baak¹, Bodo Rosenhahn², Meinard Müller¹, Hans-Peter Seidel¹
¹Saarland University & MPI Informatik, Saarbrücken, Germany
²Leibniz Universität Hannover, Germany

Abstract

In this paper, we introduce a novel iterative motion tracking framework that combines 3D tracking techniques with motion retrieval for stabilizing markerless human motion capturing. The basic idea is to start human tracking without prior knowledge about the performed actions. The resulting 3D motion sequences, which may be corrupted due to tracking errors, are locally classified according to available motion categories. Depending on the classification result, a retrieval system supplies suitable motion priors, which are then used to regularize and stabilize the tracking in the next iteration step. Experiments with the HumanEva-II benchmark show that tracking and classification are remarkably improved after a few iterations.

1. Introduction

Markerless Motion Capturing (Mocap) is an active field of research in computer vision and graphics [9] with applications in animation (games, avatars), medicine, or sports science. The goal is to determine the 3D positions and orientations as well as the joint angles of a human actor from image data. In such a tracking scenario, it is common to assume as input a sequence of multiview images of the performed motion as well as a surface mesh of the actor's body. Since the pose and joint parameters are usually unknown and have to be computed from the image data, one typically has to cope with high-dimensional search spaces (typically more than 30 dimensions for a full body model). To make the tracking problem feasible, the manifold of all virtually possible configurations is often reduced to a lower-dimensional subspace. One possibility is to explicitly prevent self-occlusions and to impose fixed joint angle limits as suggested in [7, 21]. Another option is to directly learn a mapping from the image or silhouette space to the space of pose configurations [1, 17]. A very popular strategy for restricting the search space is dimensionality reduction, either by linear or by nonlinear projection methods. In [18], the low-dimensional space is obtained via PCA and the motion patterns in this space are structured in a binary tree; in [20] and [24], nonlinear embeddings and Gaussian process dynamical models are used to constrain pose inference and 3D people tracking.

A recent research strand integrates further sources of information into the motion capture process, e.g., by capturing light sources [2] or by using physical models and forces arising from a ground plane [5, 25]. A common problem with learning-based approaches is that the user needs a good guess of the type of pattern to be expected. For instance, if the user knows that the subject is performing a walking pattern, suitable training data is selected and integrated into the tracking system. Current probabilistic learning approaches are limited in their ability to handle large training sets. Only recently, local regression methods have been proposed that allow for coping with a large number of motion patterns within a tracking scenario [23]. In activity recognition, many works rely on 2D descriptors or image silhouettes, as presented in [8, 22]. To the best of our knowledge, no approach has yet used activity recognition for stabilizing a tracking framework.

In this paper, we introduce a tracking framework that combines methods for markerless motion capturing with a retrieval component in an iterative fashion, see Fig. 1. We start the iteration without applying any prior knowledge about the actions to be performed. The tracking system takes a multiview image sequence ('Video') and returns a sequence of joint positions over time ('3D Mocap'). In our system, we rely on a region-based approach performing joint pose estimation and optimized region splitting similar to [16]. However, our general framework also allows for applying other tracking techniques as presented in [3, 6, 14].

Figure 2. Tracking without priors may lead to pose deformations.

Due to noise, occlusions, and other ambiguities in the image data, tracking may fail for parts of the sequence, resulting in corrupted poses, see Fig. 2. However, despite these errors, the overall rough course or at least parts of the motion may still be recognized to a reasonable degree. In the next step, the 3D mocap tracking results are locally classified according to available motion categories. In our approach, these categories are encoded in the form of class motion templates (MTs) as introduced in [10]. MTs show a high degree of robustness under spatial and temporal deformations, while revealing consistent aspects of a motion category. In particular, it turns out that typically occurring tracking errors do not have a significant influence on the classification result using MTs. For example, after recognizing a walking cycle in the tracked sequence, this increase in knowledge about the tracked sequence can be used to allocate a simple prior such as 'left foot moves to the front'. Such priors are integrated into the tracking procedure as regularization terms, and the tracking step is repeated to yield an enhanced tracking result. Iterating such a procedure, as shown by our experiments with the HumanEva-II benchmark, remarkably improves the result after a few iterations.

The remainder of the paper is organized as follows. We first summarize the tracking procedure (Sect. 2) and describe the retrieval component (Sect. 3). In Sect. 4, we explain how to fuse the retrieval results with our tracking procedure. To the best of our knowledge, such an iterative tracking procedure using retrieved motion priors has not been considered before and constitutes the main contribution of our paper. Besides the stabilization of 3D tracking, we also gain a classification which closes the gap between symbolic labels on the motion patterns and the underlying actions. The experiments are presented in Sect. 5. A summary can be found in Sect. 6.

2. Tracking Procedure

The input to our tracking procedure is a data stream of multiview images (obtained by a set of calibrated and synchronized cameras) as well as a surface mesh of the subject to be tracked (obtained by a body laser scanner). We further assume that the mesh is rigged so that all mesh points are associated in a fixed way with the joints of an underlying kinematic chain. Then, the tracking problem consists of computing the configuration parameters (joint angles as well as root orientation and translation) of the kinematic chain from the given image data. Here, the surface mesh should be transformed with the configuration parameters in such a way that the projection of the mesh covers the observed subject in the images as accurately as possible.

2.1. Kinematic Chains

The subject to be tracked is modeled by a so-called kinematic chain, which is generally used to model a flexibly linked rigid body such as a human skeleton [4]. In the following, we use homogeneous coordinates to represent 3D points and exponential functions of twists to represent rigid body motions. The configuration of a kinematic chain can then be described by a consecutive evaluation of exponential functions of twists, see [4]. More precisely, let x ∈ R³ be a 3D coordinate of a joint in the neutral configuration (standard pose) of the kinematic chain. Let X = (x, 1) be the respective homogeneous coordinate and define π as the associated projection with π(X) = x. Furthermore, let ξ be a rigid body motion, which can be represented as ξ = exp(θ ξ̂) with a twist ξ̂ and θ ∈ R. The overall configuration of the kinematic chain is specified by a rigid body motion ξ = exp(θ ξ̂) encoding the root orientation and translation as well as a sequence ξ₁ = exp(θ₁ ξ̂₁), ..., ξₙ = exp(θₙ ξ̂ₙ) of rigid body motions encoding the joint angles. Note that the twists ξ̂₁, ..., ξ̂ₙ are fixed for a specific kinematic chain. Thus, the configuration of a fixed kinematic chain is specified by the following (6 + n) free parameters:

    χ := (ξ, Θ)  with  Θ := (θ₁, ..., θₙ).    (1)

In other words, the configuration parameter vector χ consists of the 6 degrees of freedom for the rigid body motion ξ and the joint angle vector Θ, see also [15]. Now, for a given point x on the kinematic chain, we define J(x) ⊆ {1, ..., n} to be the ordered set that encodes the joint transformations affecting x. Then, for a given configuration parameter vector χ := (ξ, Θ), the point x is transformed according to

    Y = exp(θ ξ̂) ∏_{j ∈ J(x)} exp(θ_j ξ̂_j) X.    (2)
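To make the twist notation concrete, the following minimal Python sketch (our own illustration, not the authors' code; the toy chain and helper names are assumptions) evaluates the forward transform of Eq. (2) with matrix exponentials:

```python
import numpy as np
from scipy.linalg import expm

def twist_matrix(omega, v):
    """4x4 twist matrix (xi hat) from a rotation axis omega and a velocity vector v (both 3D)."""
    wx, wy, wz = omega
    omega_hat = np.array([[0.0, -wz,  wy],
                          [ wz, 0.0, -wx],
                          [-wy,  wx, 0.0]])
    xi_hat = np.zeros((4, 4))
    xi_hat[:3, :3] = omega_hat
    xi_hat[:3, 3] = v
    return xi_hat

def transform_point(x, root, joints, thetas, J_x):
    """Evaluate Eq. (2): root motion followed by the joint motions in the ordered set J(x)."""
    root_twist, root_theta = root
    T = expm(root_theta * root_twist)              # exp(theta * xi_hat) for the root
    for j in J_x:                                  # ordered joints affecting x
        T = T @ expm(thetas[j] * joints[j])
    Y = T @ np.append(x, 1.0)                      # homogeneous coordinate X = (x, 1)
    return Y[:3] / Y[3]                            # projection pi back to R^3

# Toy kinematic chain: a root rotation about the z-axis and one joint rotating about the y-axis.
root = (twist_matrix([0, 0, 1], [0, 0, 0]), np.pi / 2)
joints = {0: twist_matrix([0, 1, 0], [0, 0, 0])}
thetas = {0: np.pi / 4}
print(transform_point(np.array([1.0, 0.0, 0.0]), root, joints, thetas, J_x=[0]))
```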

2.2. Pose Estimation

In our setup, the vector χ is unknown and has to be determined from the image data. In the following, instead of regarding points on the kinematic chain, we use points on the surface mesh. As the mesh is rigged, the mesh points are directly associated with a joint. Given a set of 3D surface mesh points x_i, i ∈ I, we assume for the moment that one knows corresponding 2D coordinates of these points within a given image. In Sect. 2.3, we describe how to obtain such correspondences. Furthermore, we represent each 2D point as a reconstructed projection ray given in 3D Plücker form L_i = (n_i, m_i) [13]. For pose estimation, the basic idea is to apply the (unknown) rigid body motions to the 3D points x_i according to χ and to claim incidence with the reconstructed projection rays. Due to the properties of Plücker lines, this incidence can be expressed as

    π( exp(θ ξ̂) ∏_{j ∈ J(x_i)} exp(θ_j ξ̂_j) X_i ) × n_i − m_i = 0.    (3)

To simultaneously account for the incidences of all points x_i, i ∈ I, one minimizes the following term in a least-squares sense:

    argmin_χ  Σ_{i ∈ I} ‖ π( exp(θ ξ̂) ∏_{j ∈ J(x_i)} exp(θ_j ξ̂_j) X_i ) × n_i − m_i ‖²₂.    (4)

To solve for the unknown parameters in the exponential functions, we linearize each function by using the first two elements of the respective Taylor series: exp(θ ξ̂) ≈ I + θ ξ̂, where I denotes the identity matrix. This leads to three linear equations with 6 + n unknowns for each exponential function. In case of many correspondences (i.e., in case there are many mesh points x_i with correspondences), one obtains an over-determined linear system of equations, which can be solved in the least-squares sense. The approximation errors introduced by the linearization step are handled by applying an iterative computation scheme, see [15] for details.

2.3. Region-based Pose Tracking

In our framework, we use the region-based tracking approach presented in [16], to which we refer for details. Alternatively, one may also use other techniques as presented in [3, 6, 14].

The concept is to estimate pose parameters χ such that the projection of the resulting surface mesh optimally splits the image into a foreground (subject) and a background region. Here, the splitting is regarded as optimal if suitable image features (color, texture) are maximally dissimilar in the two regions with regard to estimated density functions, see [16]. Starting with a first estimate of χ, the transformed mesh points y_i (see Eq. (2)) are projected onto image points p_i, yielding correspondences in a natural way. One then considers only the visible image points p_i that lie on the contour line separating foreground and background. Next, these points are shifted inwards or outwards (orthogonal to the contour line) according to force vectors so that the resulting points, say q_i, better explain the color distributions of the foreground and background regions, see Fig. 3. Finally, using the points q_i with the corresponding mesh points x_i (obtained from the transformed mesh points y_i), one is in a position to apply the minimization (4) to obtain an improved estimate of the pose parameters. The entire process is iterated until convergence, see [16] for details.

Figure 3. Example forces (enlarged force vectors, green) acting on the contour line of the projected surface mesh.

3. Retrieval Component

In this section, we review the concept of motion templates (MTs) in Sect. 3.1, explain our MT-based classification procedure in Sect. 3.2, and finally describe the allocation of priors in Sect. 3.3.

3.1. Motion Templates

We now summarize the main idea of motion templates, referring to [10] for details. As the underlying feature representation, we use the concept of relational features that capture semantically meaningful Boolean relations between specified points of the kinematic chain underlying the mocap data, see [11]. In the following, we use a set of f = 41 relational features, where the first 39 features are defined as in [10], and the last two features express whether the right/left foot is moving to the front. Then, a given mocap sequence is converted into a sequence of f-dimensional Boolean feature vectors in a framewise fashion. Denoting the number of frames by K, we think of the resulting sequence as a feature matrix X ∈ {0,1}^{f×K}. An example is shown in Fig. 4(b), where, for the sake of clarity, we display a subset comprising only 6 of the f = 41 features.
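For illustration, a small Python sketch (our own simplification; the thresholds and the up-axis convention are assumptions, not the exact feature definitions of [10, 11]) of two such Boolean relational features and of stacking them into a feature matrix:

```python
import numpy as np

def foot_on_ground(foot_pos, floor_height=0.0, tol=0.03):
    """Boolean feature: the foot is (approximately) on the ground plane (z is assumed to be up)."""
    return int(foot_pos[2] < floor_height + tol)

def foot_moves_to_front(foot_prev, foot_cur, facing_dir, vel_thresh=0.02):
    """Boolean feature: the foot velocity has a clear component along the body's facing direction."""
    return int(np.dot(foot_cur - foot_prev, facing_dir) > vel_thresh)

def feature_matrix(foot_traj, facing_dirs):
    """Stack framewise Boolean features into a matrix X in {0,1}^(f x K); here f = 2 for brevity."""
    K = len(foot_traj)
    X = np.zeros((2, K), dtype=int)
    for k in range(1, K):
        X[0, k] = foot_on_ground(foot_traj[k])
        X[1, k] = foot_moves_to_front(foot_traj[k - 1], foot_traj[k], facing_dirs[k])
    return X

# Toy trajectory: the foot lifts off the ground while moving forward along the x-axis.
foot = np.array([[0.0, 0, 0.01], [0.05, 0, 0.02], [0.15, 0, 0.10], [0.30, 0, 0.15]])
facing = np.tile(np.array([1.0, 0, 0]), (4, 1))
print(feature_matrix(foot, facing))
# [[0 1 0 0]
#  [0 1 1 1]]
```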

In our scenario, we assume that each motion category is given by a class C consisting of γ ∈ N logically related example motions. To learn a motion class representation that grasps the essence of the class, we compute a semantically meaningful average over the γ feature matrices of the training examples. Here, to cope with temporal variations in the example motions, we use an iterative warping and averaging algorithm [10], which converges to an output matrix X_C referred to as a motion template (MT) for the class C. After a subsequent quantization step, one obtains a quantized MT with values over the set {0, 1, ∗} as indicated by Fig. 4(a). The length (number of columns) of the MT corresponds to the average length of the training motions. The black/white regions in a class MT indicate periods in time (horizontal axis) where certain features (vertical axis) consistently assume the same value zero/one in all training motions, respectively. By contrast, gray regions (corresponding to the wildcard character ∗) indicate inconsistencies mainly resulting from variations in the training motions. In other words, the black/white regions encode characteristic aspects that are shared by all motions, whereas the gray regions represent the class variations coming from different realizations. For further details, we refer to [10].

Figure 4. (a): Marked motion template for the motion class 'walk2StepsLstart'. The feature numbers correspond to the features used in [10]. The annotation for the relational constraint 'leftFootOnGround' is indicated by the green rectangle. (b): Feature matrix for the subsegment of D_EVA consisting of the first three walking cycles. The second walking cycle (frames 160 to 240) has not been tracked correctly. (c): Feature matrix (b) with allocated priors (green rectangles). (d): Feature matrix after regularized tracking using the priors of (c).
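The following Python sketch (our own simplification) shows the quantization idea for already time-aligned training feature matrices: entries that agree across all examples keep their value, all others become the wildcard '∗'. The iterative warping and averaging of [10] that produces the alignment is omitted here.

```python
import numpy as np

def quantized_motion_template(aligned_feature_matrices, consistency=1.0):
    """Combine time-aligned Boolean feature matrices (each f x K) into a quantized MT over {0, 1, '*'}.

    An entry is kept as 0 or 1 only if at least `consistency` (fraction) of the training
    examples agree on that value; otherwise it becomes the wildcard '*'.
    """
    stack = np.stack(aligned_feature_matrices)           # shape (gamma, f, K)
    mean = stack.mean(axis=0)
    mt = np.full(mean.shape, '*', dtype=object)
    mt[mean >= consistency] = 1                           # consistently one in all examples
    mt[mean <= 1.0 - consistency] = 0                     # consistently zero in all examples
    return mt

# Toy example with gamma = 3 training matrices of size f = 2, K = 4:
A = np.array([[1, 1, 0, 0], [0, 0, 1, 1]])
B = np.array([[1, 1, 0, 0], [0, 1, 1, 1]])
C = np.array([[1, 1, 0, 1], [0, 0, 1, 1]])
print(quantized_motion_template([A, B, C]))
# [[1 1 0 '*']
#  [0 '*' 1 1]]   -> agreeing cells stay 0/1, disagreeing cells become the wildcard
```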

3.2. Classification Procedure

Given a mocap sequence D and a specific motion class C, we now define a distance function that reveals all motion subsegments of D correlating with C. Let X ∈ {0, 1, ∗}^{f×K} be a quantized class MT of length K and Y ∈ {0,1}^{f×L} the feature matrix of D of length L. We define for k ∈ [1 : K] and ℓ ∈ [1 : L] a local cost measure c_Q(k, ℓ) between the k-th column X(k) of X and the ℓ-th column Y(ℓ) of Y. Let I(k) := {i ∈ [1 : f] | X(k)_i ≠ ∗}, where X(k)_i denotes the i-th entry of the k-th column of X. Then, if |I(k)| > 0, we set

    c_Q(k, ℓ) = (1 / |I(k)|) Σ_{i ∈ I(k)} |X(k)_i − Y(ℓ)_i|,    (5)

otherwise we set c_Q(k, ℓ) = 0. In other words, c_Q(k, ℓ) only accounts for the consistent entries of X with X(k)_i ∈ {0, 1} and leaves the other entries unconsidered. Based on this local distance measure and a subsequence variant of dynamic time warping (DTW), one obtains a distance function Δ_C : [1 : L] → R ∪ {∞} as described in [10] with the following interpretation: a small value Δ_C(ℓ) for some ℓ ∈ [1 : L] indicates the presence of a motion subsegment of D that is similar to the motions in C, starting at a suitable frame index a < ℓ and ending at frame index ℓ. Here, the starting frame index a can be recovered by a simple backtracking within the DTW procedure. In other words, looking for all local minima of Δ_C below a suitable matching threshold τ > 0, one can identify all subsegments of D that are similar to the class MT. As an example, Fig. 5(a) shows a distance function based on the quantized MT for the class 'walk2StepsLstart' for D = D_EVA. Note that each of the first five local minima (frames 0 to 400) reveals the end of a walking cycle starting with the left foot.

Figure 5. Distance functions Δ_C for D_EVA based on the classes (a) 'walk2StepsLstart' and (b) 'jog2StepsLstart', and (c): combined distance function Δ_min obtained by minimizing (a) and (b).

Now, let C₁, ..., C_P be the available motion classes, where p ∈ [1 : P] denotes the class label of class C_p. Then, given a mocap sequence D of length L, the classification task is to identify all motion subsegments within D that belong to one of the P classes. To this end, we compute a distance function Δ_p := Δ_{C_p} for each class C_p and minimize the resulting functions over p ∈ [1 : P] to obtain a single function Δ_min : [1 : L] → R ∪ {∞}:

    Δ_min(ℓ) := min_{p ∈ [1:P]} Δ_p(ℓ),    ℓ ∈ [1 : L].    (6)

Furthermore, we store for each frame the minimizing index p ∈ [1 : P], yielding a function Δ_arg : [1 : L] → [1 : P] defined by

    Δ_arg(ℓ) := argmin_{p ∈ [1:P]} Δ_p(ℓ).    (7)

The function Δ_arg yields the local classification of the mocap sequence D by means of the class labels p ∈ [1 : P]. Fig. 5 shows an example based on P = 2 motion classes.
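A compact Python sketch of the local cost measure (5) and of combining per-class distance functions into Δ_min and Δ_arg as in (6) and (7); the subsequence DTW that turns c_Q into Δ_C is omitted, and the per-class curves in the toy example below are made up (our own illustration, not the authors' code):

```python
import numpy as np

WILDCARD = -1  # encode the wildcard '*' of a quantized MT as -1, real entries are 0/1

def local_cost(mt_column, feat_column):
    """Local cost c_Q(k, l) of Eq. (5): average mismatch over the non-wildcard entries."""
    consistent = mt_column != WILDCARD
    if not np.any(consistent):
        return 0.0
    return np.abs(mt_column[consistent] - feat_column[consistent]).mean()

def combine_distance_functions(delta_per_class):
    """Eqs. (6) and (7): pointwise minimum and argmin over the per-class distance functions."""
    deltas = np.vstack(delta_per_class)        # shape (P, L)
    delta_min = deltas.min(axis=0)
    delta_arg = deltas.argmin(axis=0) + 1      # class labels p in [1:P]
    return delta_min, delta_arg

# Toy examples:
col_mt = np.array([1, 0, WILDCARD])
col_y = np.array([1, 1, 0])
print(local_cost(col_mt, col_y))              # 0.5 -> one mismatch out of two consistent entries

d1 = np.array([0.9, 0.05, 0.8, 0.9, 0.7])     # e.g. a 'walk' class distance function
d2 = np.array([0.8, 0.60, 0.7, 0.9, 0.04])    # e.g. a 'jog' class distance function
dmin, darg = combine_distance_functions([d1, d2])
print(dmin)   # [0.8  0.05 0.7  0.9  0.04]
print(darg)   # [2 1 2 1 2]  -> low minima point to the walk class (1) and the jog class (2)
```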

3.3. Allocation of Priors

We now explain how to generate suitable motion priors, which can then be used to regularize the tracking process. Recall that a class motion template X_C explicitly encodes characteristic motion aspects (corresponding to black/white regions) that are typically shared by motions of class C. We select some of these aspects by marking suitable entries within the template X_C. These entries are also referred to as MT priors. As an example, consider Fig. 4(a), where the entries of row 26 between columns 22 and 58 are marked by the green rectangle. Feature 26 expresses whether the right foot rests (black, value 0) or assumes a high velocity (white, value 1). Since all entries have the value 0 within the green rectangle, this MT prior basically expresses that the right foot rests (stays on the ground) during this phase of the motion. Note that the MT priors are part of the database knowledge and do not depend on the sequence to be tracked. Now, let D be a mocap sequence of length L obtained from some previous tracking procedure and let Y ∈ {0,1}^{f×L} be the corresponding feature matrix. The goal is to automatically transfer suitable MT priors to the tracked sequence to obtain what we refer to as tracking priors. Let C₁, ..., C_P be the available motion classes with corresponding motion templates X_p = X_{C_p}, p ∈ [1 : P], each equipped with suitable MT priors. We compute the functions Δ_min and Δ_arg as described in Sect. 3.2. Recall that a local minimum ℓ ∈ [1 : L] of Δ_min close to zero indicates the presence of a motion subsegment of D (starting at a suitable frame index a < ℓ and ending at frame index ℓ) that corresponds to motion class Δ_arg(ℓ) ∈ [1 : P]. Therefore, we fix a quality threshold τ > 0 and look for all essential local minima ℓ ∈ [1 : L] with Δ_min(ℓ) < τ. (Here, essential means that we only consider one local minimum within a suitable temporal window to avoid local minima that lie too close to each other.) Using the same DTW procedure as in Sect. 3.2, we then derive an alignment between the motion template X_p with p := Δ_arg(ℓ) and the feature subsequence of Y ranging from a to ℓ. Fig. 4 shows an example, where the alignment is indicated by the red arrows. Note that such an alignment establishes temporal correspondences between semantically related frames and thus allows for transferring the MT priors within X_p to corresponding regions within Y, see Fig. 4(c). These regions, in the following referred to as tracking priors, are then used for regularization in the tracking procedure (Sect. 4).

Figure 6. Pose priors are allocated by looking for subsequences in Y that all align to the same MT X_p.

As an additional stabilizing factor, we take further advantage of several subsequences of Y that all align to the same MT X_p. The idea is to use X_p as a kind of mediator to generate additional priors from the multiply aligned subsequences. We explain this idea by means of a simple example consisting of two subsegments as indicated by Fig. 6. Here, each circle denotes a correspondence (x, ℓ) between frame x in X_p and frame ℓ ∈ [1 : L] of Y. In this example, essential local minima were found for frames 7 and 12 (matching the subsequences ranging from frames 3 to 7 and from 9 to 12). Now, suppose that the green alignment has a cost close to zero (Δ_min(12) ≈ 0). In practice, such an alignment corresponds to a subsequence of Y that does not contain tracking errors. By contrast, suppose that the subsequence corresponding to the red alignment contains some tracking errors, resulting in a higher alignment cost. Then, the idea is to use the poses of the 'green subsequence', referred to as pose priors, to stabilize the tracking of the 'red subsequence'. The correspondence of poses between the subsequences is established via the alignments to X_p, see Fig. 6. For example, frame 9 of Y yields a pose prior for frame 3 of Y since both frames are aligned to the first frame of X_p (indicated by the dashed arrow line). To put it in simple words, we first detect the presence of repetitions within Y by means of the MT-based local classification and then generate pose priors from the established correspondences.

Figure 7. Integration of the constraint 'foot on floor'. Points on the sole are pushed onto the floor.
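To illustrate the transfer of MT priors to tracking priors described in this section, the following Python sketch (our own; it assumes the subsequence-DTW warping path is already available) maps priors, given as marked column ranges of X_p, onto frame ranges of the tracked sequence Y via the alignment path:

```python
def transfer_mt_priors(warping_path, mt_priors):
    """Map MT priors to tracking priors via a DTW warping path.

    warping_path: list of (k, l) pairs aligning column k of the class MT X_p
                  with frame l of the tracked feature matrix Y.
    mt_priors:    dict mapping a prior label (e.g. 'rightFootOnGround') to the
                  set of MT columns k for which the prior is marked.
    Returns a dict mapping each prior label to the sorted list of frames of Y
    to which the prior is transferred (the tracking prior).
    """
    tracking_priors = {}
    for label, mt_columns in mt_priors.items():
        frames = sorted({l for (k, l) in warping_path if k in mt_columns})
        tracking_priors[label] = frames
    return tracking_priors

# Toy example: MT columns 2-4 carry the prior; the path aligns them to frames 10-13 of Y.
path = [(1, 9), (2, 10), (3, 11), (3, 12), (4, 13), (5, 14)]
priors = {'rightFootOnGround': {2, 3, 4}}
print(transfer_mt_priors(path, priors))   # {'rightFootOnGround': [10, 11, 12, 13]}
```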

4. Integration of Allocated Priors in Tracking

As explained in Sect. 3.3, a retrieval component is used to allocate two types of priors to the tracking sequence. In the following, we show how the allocated priors are integrated into a subsequent tracking iteration.

Tracking priors provide information about certain movement behaviors of body parts within a certain motion context. As an example, we consider the tracking prior 'the left foot should be on the floor for a certain frame'. We use soft constraints to integrate this information into the tracking framework, where the influence of a prior can be controlled by a weighting parameter. In particular, soft constraints are formulated as additional equations that are included in the minimization step (4). To implement the example tracking prior, all points y_s, s ∈ S ⊆ I, on the sole of the foot (the plantar surface) are projected onto the ground plane, yielding the points z_s. Then, we claim incidence y_s − z_s = 0 for s ∈ S to push the sole onto the ground plane, see Fig. 7. Using Eq. (2) to express y_s by the underlying kinematic chain, we integrate the set of equations

    π( exp(θ ξ̂) ∏_{j ∈ J(y_s)} exp(θ_j ξ̂_j) X_s ) − z_s = 0,    s ∈ S,    (8)

into the minimization step (4). Note that the unknowns are the same as for (4), as the z_s are considered constants for one frame. In a similar manner, it is straightforward to integrate motion dynamics such as arms swinging forwards or backwards.

Figure 8. Using pose priors to regularize tracking.

Unlike tracking priors, pose priors denote that a certain joint angle configuration Θ at frame ℓ₁ ∈ [1 : L] should also be assumed at frame ℓ₂ ∈ [1 : L] \ {ℓ₁}. For example, consider Fig. 8, where the generated pose prior suggests taking the Θ of frame ℓ₁ = 157 for frame ℓ₂ = 310. To this end, equations similar to Eq. (8) can be integrated into the minimization step (4) to regularize the joint angle configuration at frame ℓ₂ towards Θ in the subsequent tracking iteration.
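As a rough sketch of how such soft constraints enter the optimization (our own linear toy formulation, not the authors' solver): prior equations are simply appended as additional weighted rows to the linearized least-squares system that is solved for the pose update.

```python
import numpy as np

def solve_with_soft_constraints(A_data, b_data, A_prior, b_prior, weight=1.0):
    """Solve a linearized pose update with data terms (Eq. (4)) and weighted prior terms (Eq. (8)).

    A_data @ dchi ≈ b_data   come from the linearized incidence equations,
    A_prior @ dchi ≈ b_prior come from the allocated priors; `weight` controls their influence.
    """
    A = np.vstack([A_data, weight * A_prior])
    b = np.concatenate([b_data, weight * b_prior])
    dchi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dchi

# Toy example with a 2-dimensional unknown: the data term alone is under-determined,
# the prior row regularizes the second parameter towards zero.
A_data = np.array([[1.0, 0.0]])
b_data = np.array([2.0])
A_prior = np.array([[0.0, 1.0]])
b_prior = np.array([0.0])
print(solve_with_soft_constraints(A_data, b_data, A_prior, b_prior, weight=0.5))  # -> [2. 0.]
```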

5. Experiments

In our experiments, we used the HumanEva-II benchmark [19]. Here, a surface model, calibrated multiview image sequences of four cameras, and background images are provided. Note that our region-based pose tracking does not rely on background subtraction; therefore, the background information is not used in our method. Instead, we rely on the image data, the projection matrices, and a mesh model. Due to color similarities and the sparse number of cameras, tracking is challenging and the results are likely to be corrupted if no priors are involved. Tracking results (as 3D marker positions) can be uploaded to a server at Brown University for evaluation. As the sequence has been captured in parallel with a marker-based tracking system (using a Vicon system), an automated script can evaluate the accuracy of a tracking result in terms of relative errors in millimeters. In the HumanEva-II sequence S4, three different actions are performed consecutively, each lasting 6.7 s (400 frames at 60 Hz). A non-professional actor walks in a circle, jogs in a circle, and then balances on each foot. We decided on this sequence for several reasons. Firstly, it is a publicly available benchmark, which allows a quantitative comparison to other existing approaches. Secondly, the sequence contains three different patterns, and we want to test whether our system is able to classify and single out the involved motions (walking and jogging) correctly. Thirdly, walking and jogging are similar patterns, which allows us to get a good feeling for the sensitivity of our approach in classifying similar patterns. Fourthly, the balancing part is not in the database, which means the algorithm should not perform a classification at all, so that the tracking is only driven by the image data without any priors. All these aspects are covered by this sequence.

Figure 9. (a): Δ_min for the initial tracking result. Essential local minima are marked by a cyan dot. (b): Corresponding distance functions Δ_C for six MTs shown in a color-coded fashion. Values greater than τ = 0.07 are drawn in white. (c): Allocated tracking priors. (d)-(f): Corresponding plots after the fourth iteration.

The database knowledge that is used by the retrieval system is generated in a preprocessing step. To this end, we assembled a total of 232 short 3D motion capture clips, which we manually cut out from the freely available HDM05 mocap database [12] (obtained from a Vicon system). The mocap clips, of an average length of 1.1 s, were categorized into P = 6 different motion categories, which are 'walk two steps', 'jog two steps', and 'change from walk to jog', each starting with the left and right foot, respectively.

After extracting the relational features for each example motion at a sampling rate of 60 Hz, we computed a quantized motion template classifier for each of the P motion classes and marked suitable regions in the quantized MTs as MT priors, see Fig. 4(a). In our scenario, we marked MT priors corresponding to 'left/right foot is on the ground', 'left/right foot moves to the front', 'left/right hand moves to the front', and 'left/right elbow is bent'. Note that the set of motion templates along with the MT priors, which constitutes our database knowledge, is independent of the sequence to be tracked and has to be generated only once.

In the initialization step, tracking is performed without using any regularizing priors. The resulting tracking sequence is then locally classified according to the precomputed MTs. In our experiments, a quality threshold of τ = 0.07 turned out to be a robust choice. In Fig. 9(a), we show the resulting Δ_min. Essential local minima below τ = 0.07 are marked by a cyan dot. Note that for the walking part of the benchmark sequence (frames 1 to 400), Δ_min assumes lower values than for the jogging part (frames 400 to 800), which indicates that the walking part contains fewer tracking errors than the jogging part. Note also that for the balancing part (frames 830 to 1200), Δ_min is far above τ, revealing a strong difference to walking or jogging patterns. The function Δ_arg assigns the essential local minima to appropriate motion categories, see Fig. 9(b). Furthermore, each motion subsequence induced by a local minimum is aligned to the corresponding MT. Based on these local alignments, suitable tracking priors are allocated for the next iteration, see Fig. 9(c). Fig. 9(d)-(f) show the corresponding distance functions and allocated priors in the fourth iteration. Note that the local minima of Δ_min shown in Fig. 9(d) have gained a substantial qualitative boost. Furthermore, the occurrences of the different motion categories are revealed in a much more distinctive way in the fourth iteration, compare (e) and (b) of Fig. 9. All this indicates a stabilization of the tracking procedure over the iterations.

Figure 10. Improvements obtained by our iterative tracking procedure. Top: Result without prior knowledge (initialization). Middle: Result after the first iteration. Bottom: Result after the third iteration. The frames from left to right show examples of the walking, jogging, and balancing part (frames 210, 750, and 1170). Several tracking errors (see arms and legs) are corrected.

We now discuss the actual improvements in the tracking results achieved by our iterative approach. Fig. 10 shows representative poses overlaid with the tracking result (indicated by the yellow meshes) after the initial step, the first iteration, and the third iteration. As can be seen, the initial tracking result contains various serious tracking errors such as a swap of the legs or an incorrect inflection of the arms. These errors are corrected within a few iterations. The absolute differences between the 3D joint positions of the tracking result and the ground-truth positions are given in Table 1 and Fig. 11. These numbers were obtained by the automated evaluation system supplied by Brown University [19]. Over the iterations, the average error is reduced from 79 mm to 48 mm, see Table 1. The significant improvements are also indicated by Fig. 11(a).

Figure 11. Framewise tracking error (in millimeters) in the initialization (red), first (blue), and third iteration (black). (a): Without image noise. (b): With Gaussian noise (40 pixels standard deviation) added to each frame.

Table 1. Improvements of the tracking quality over various iterations. Average errors, standard deviations, and maximal errors (in millimeters) over all 1257 frames of the sequence are shown.

                   no additional noise        +40px image noise
                   x̄       σ      max         x̄       σ      max
    Initial step   79.1    26.2   165.9       75.2    28.2   184.0
    Iteration 1    51.8    18.5   134.1       65.0    24.2   144.4
    Iteration 3    47.9    12.8   105.8       51.0    15.5   139.0

In a second experiment, we added Gaussian noise (standard deviation: 40 pixels) to each frame. Two example frames after the initialization and after the third iteration are shown in Fig. 12. Over the iterations, the average error dropped from 75 mm to 51 mm, see Table 1 and Fig. 11(b).

Figure 12. Gaussian noise has been added to each image. Pose overlay of frame 500 after the initial tracking (a) and after the third iteration (b). During the iterations, several tracking errors (see right arm and both legs) are corrected.

These results demonstrate the stabilizing effects achieved by our iterative tracking approach. Note that our framework requires that a sequence is tracked several times. Currently, our tracking implementation requires 7 s per frame, resulting in 2.5 h for the entire 1257 frames. After tracking, the classification and allocation steps require 15 s.

6. Summary

In this paper, we introduced an iterative tracking approach that dynamically integrates motion priors retrieved from a database to stabilize tracking. Intuitively, our idea is to pursue a joint bottom-up and top-down strategy in the sense that we start with a rough initial tracking, which is then improved by incorporating high-level motion cues. These motion cues are allocated based on a local classification of the initial tracking result. In addition to stabilization, the local classification can also be used for automatic motion annotation. By means of the HumanEva-II benchmark, we showed that even simple motion priors lead to significant improvements in the tracking. There are still limitations to our approach. In particular, the presence of strong tracking errors may lead to confusion in the local classification; misallocated priors may then worsen the tracking error. In future work, we plan to develop techniques that can cope with such situations, e.g., by integrating statistical confidence measures for the classification and by simultaneously considering alternative motion priors.

Acknowledgments. The first and second authors are funded by the German Research Foundation (DFG CL 64/5-1, RO 2497/6-1), the third author by the Cluster of Excellence on Multimodal Computing and Interaction.

References

[1] A. Agarwal and B. Triggs. Recovering 3D human pose from monocular images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(1):44–58, Jan. 2006.
[2] A. Balan, L. Sigal, M. Black, and H. Haussecker. Shining a light on human pose: On shadows, shading and the estimation of pose and shape. In Proc. International Conference on Computer Vision, 2007.
[3] M. Bray, P. Kohli, and P. Torr. PoseCut: Simultaneous segmentation and 3D pose estimation of humans using dynamic graph-cuts. In A. Leonardis, H. Bischof, and A. Pinz, editors, Proc. 9th European Conference on Computer Vision, Part II, volume 3952 of LNCS, pages 642–655, Graz, May 2006. Springer.
[4] C. Bregler, J. Malik, and K. Pullen. Twist based acquisition and tracking of animal and human kinetics. International Journal of Computer Vision, 56(3):179–194, 2004.
[5] M. Brubaker, D. J. Fleet, and A. Hertzmann. Physics-based person tracking using simplified lower-body dynamics. In Conference on Computer Vision and Pattern Recognition (CVPR), Minnesota, 2007. IEEE Computer Society Press.
[6] S. Dambreville, A. Yezzi, R. Sandhu, and A. Tannenbaum. Robust 3D pose estimation and efficient 2D region-based segmentation from a 3D shape prior. In Proc. 10th European Conference on Computer Vision, LNCS. Springer, 2008.
[7] L. Herda, R. Urtasun, and P. Fua. Implicit surface joint limits to constrain video-based motion capture. In T. Pajdla and J. Matas, editors, Proc. 8th European Conference on Computer Vision, volume 3022 of LNCS, pages 405–418, Prague, 2004. Springer.
[8] J. Liu and M. Shah. Learning human actions via information maximization. In Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, 2008. IEEE Computer Society Press.
[9] T. B. Moeslund, A. Hilton, and V. Krüger. A survey of advances in vision-based human motion capture and analysis. Computer Vision and Image Understanding, 104(2):90–126, 2006.
[10] M. Müller and T. Röder. Motion templates for automatic classification and retrieval of motion capture data. In Proc. 2006 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 137–146. ACM Press, 2006.
[11] M. Müller, T. Röder, and M. Clausen. Efficient content-based retrieval of motion capture data. ACM Trans. Graph., 24(3):677–685, 2005.
[12] M. Müller, T. Röder, M. Clausen, B. Eberhardt, B. Krüger, and A. Weber. Documentation: Mocap Database HDM05. Computer Graphics Technical Report CG-2007-2, Universität Bonn, June 2007. http://www.mpi-inf.mpg.de/resources/HDM05.
[13] R. Murray, Z. Li, and S. Sastry. A Mathematical Introduction to Robotic Manipulation. CRC Press, Boca Raton, 1994.
[14] B. Rosenhahn, T. Brox, and J. Weickert. Three-dimensional shape knowledge for joint image segmentation and pose tracking. International Journal of Computer Vision, 73(3):243–262, 2007.
[15] B. Rosenhahn, C. Schmaltz, T. Brox, J. Weickert, D. Cremers, and H.-P. Seidel. Markerless motion capture of man-machine interaction. In Conference on Computer Vision and Pattern Recognition (CVPR). IEEE Computer Society Press, 2008.
[16] C. Schmaltz, B. Rosenhahn, T. Brox, D. Cremers, J. Weickert, L. Wietzke, and G. Sommer. Region-based pose tracking. In J. Martí, J. M. Benedí, A. M. Mendonça, and J. Serrat, editors, Pattern Recognition and Image Analysis, volume 4478 of LNCS, pages 56–63, Girona, Spain, June 2007. Springer.
[17] G. Shakhnarovich, P. Viola, and T. Darrell. Fast pose estimation with parameter sensitive hashing. In Proc. International Conference on Computer Vision, pages 750–757, Nice, France, Oct. 2003.
[18] H. Sidenbladh, M. J. Black, and L. Sigal. Implicit probabilistic models of human motion for synthesis and tracking. In A. Heyden, G. Sparr, M. Nielsen, and P. Johansen, editors, Proc. European Conference on Computer Vision, volume 2353 of LNCS, pages 784–800. Springer, 2002.
[19] L. Sigal and M. Black. HumanEva: Synchronized video and motion capture dataset for evaluation of articulated human motion. Technical Report CS-06-08, Brown University, USA, 2006.
[20] C. Sminchisescu and A. Jepson. Generative modeling for continuous non-linearly embedded visual inference. In Proc. International Conference on Machine Learning, 2004.
[21] C. Sminchisescu and B. Triggs. Estimating articulated human motion with covariance scaled sampling. International Journal of Robotics Research, 22(6):371–391, 2003.
[22] D. Tran and A. Sorokin. Human activity recognition with metric learning. In D. A. Forsyth, P. H. S. Torr, and A. Zisserman, editors, Proc. 10th European Conference on Computer Vision, ECCV (1), volume 5302 of LNCS, pages 548–561. Springer, 2008.
[23] R. Urtasun and T. Darrell. Local probabilistic regression for activity-independent human pose inference. In Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, 2008. IEEE Computer Society Press.
[24] R. Urtasun, D. J. Fleet, and P. Fua. 3D people tracking with Gaussian process dynamical models. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 238–245. IEEE Computer Society Press, 2006.
[25] M. Vondrak, L. Sigal, and O. C. Jenkins. Physical simulation for probabilistic motion tracking. In Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, 2008. IEEE Computer Society Press.
