Security Requirements Model for Grid Data Management Systems


Database Administrator Cover Letter (English)


Dear Hiring Manager,

I am writing to express my interest in the Database Administrator position at your esteemed organization. With a solid background in database management and a passion for ensuring data integrity and security, I believe I would be a valuable asset to your team.

I have obtained a Bachelor's degree in Computer Science and have since gained extensive experience in working with various database management systems such as MySQL, PostgreSQL, and MongoDB. During my professional journey, I have honed my skills in database design, installation, configuration, performance tuning, and data backup and recovery.

In my previous role as a Database Administrator at XYZ Corporation, I was responsible for managing a team of database administrators and ensuring the smooth operation of our database systems. I successfully implemented several initiatives that improved the efficiency and reliability of our databases, including the development of a comprehensive backup and recovery plan, the implementation of database performance monitoring tools, and the establishment of data security protocols.

One of my key strengths is the ability to communicate effectively with stakeholders across different levels of an organization. I have experience in collaborating with business analysts, developers, and IT managers to understand their requirements and provide solutions that meet their needs. This has enabled me to develop a strong understanding of both the technical and business aspects of database management.

In addition to my technical expertise, I am also a strong advocate for continuous learning and professional development. I regularly attend industry conferences and workshops to stay up to date with the latest trends and technologies in database management. This commitment to self-improvement has allowed me to expand my knowledge and skills, enabling me to contribute effectively to any organization I work with.

I am confident that my combination of technical skills, experience, and commitment to professional growth makes me a highly suitable candidate for the Database Administrator position at your organization. I am eager to contribute to the success of your team and help drive the growth and efficiency of your database systems.

Thank you for considering my application. I would welcome the opportunity to discuss in more detail how my skills and experience align with the requirements of the position. Please feel free to contact me at your earliest convenience to schedule an interview.

Yours sincerely,
[Your Name]

Cyber Security — Systems Security Management Standard (CIP-007)


A. Introduction

1. Title: Cyber Security — Systems Security Management
2. Number: CIP-007-1
3. Purpose: Standard CIP-007 requires Responsible Entities to define methods, processes, and procedures for securing those systems determined to be Critical Cyber Assets, as well as the non-critical Cyber Assets within the Electronic Security Perimeter(s). Standard CIP-007 should be read as part of a group of standards numbered Standards CIP-002 through CIP-009. Responsible Entities should interpret and apply Standards CIP-002 through CIP-009 using reasonable business judgment.
4. Applicability:
4.1. Within the text of Standard CIP-007, "Responsible Entity" shall mean:
4.1.1 Reliability Coordinator.
4.1.2 Balancing Authority.
4.1.3 Interchange Authority.
4.1.4 Transmission Service Provider.
4.1.5 Transmission Owner.
4.1.6 Transmission Operator.
4.1.7 Generator Owner.
4.1.8 Generator Operator.
4.1.9 Load Serving Entity.
4.1.10 NERC.
4.1.11 Regional Reliability Organizations.
4.2. The following are exempt from Standard CIP-007:
4.2.1 Facilities regulated by the U.S. Nuclear Regulatory Commission or the Canadian Nuclear Safety Commission.
4.2.2 Cyber Assets associated with communication networks and data communication links between discrete Electronic Security Perimeters.
4.2.3 Responsible Entities that, in compliance with Standard CIP-002, identify that they have no Critical Cyber Assets.
5. Effective Date: June 1, 2006

B. Requirements

The Responsible Entity shall comply with the following requirements of Standard CIP-007 for all Critical Cyber Assets and other Cyber Assets within the Electronic Security Perimeter(s):

R1. Test Procedures — The Responsible Entity shall ensure that new Cyber Assets and significant changes to existing Cyber Assets within the Electronic Security Perimeter do not adversely affect existing cyber security controls. For purposes of Standard CIP-007, a significant change shall, at a minimum, include implementation of security patches, cumulative service packs, vendor releases, and version upgrades of operating systems, applications, database platforms, or other third-party software or firmware.
R1.1. The Responsible Entity shall create, implement, and maintain cyber security test procedures in a manner that minimizes adverse effects on the production system or its operation.
R1.2. The Responsible Entity shall document that testing is performed in a manner that reflects the production environment.
R1.3. The Responsible Entity shall document test results.

R2. Ports and Services — The Responsible Entity shall establish and document a process to ensure that only those ports and services required for normal and emergency operations are enabled.
R2.1. The Responsible Entity shall enable only those ports and services required for normal and emergency operations.
R2.2. The Responsible Entity shall disable other ports and services, including those used for testing purposes, prior to production use of all Cyber Assets inside the Electronic Security Perimeter(s).
R2.3. In the case where unused ports and services cannot be disabled due to technical limitations, the Responsible Entity shall document compensating measure(s) applied to mitigate risk exposure or an acceptance of risk.

R3. Security Patch Management — The Responsible Entity, either separately or as a component of the documented configuration management process specified in CIP-003 Requirement R6, shall establish and document a security patch management program for tracking, evaluating, testing, and installing applicable cyber security software patches for all Cyber Assets within the Electronic Security Perimeter(s).
R3.1. The Responsible Entity shall document the assessment of security patches and security upgrades for applicability within thirty calendar days of availability of the patches or upgrades.
R3.2. The Responsible Entity shall document the implementation of security patches. In any case where the patch is not installed, the Responsible Entity shall document compensating measure(s) applied to mitigate risk exposure or an acceptance of risk.

R4. Malicious Software Prevention — The Responsible Entity shall use anti-virus software and other malicious software ("malware") prevention tools, where technically feasible, to detect, prevent, deter, and mitigate the introduction, exposure, and propagation of malware on all Cyber Assets within the Electronic Security Perimeter(s).
R4.1. The Responsible Entity shall document and implement anti-virus and malware prevention tools. In the case where anti-virus software and malware prevention tools are not installed, the Responsible Entity shall document compensating measure(s) applied to mitigate risk exposure or an acceptance of risk.
R4.2. The Responsible Entity shall document and implement a process for the update of anti-virus and malware prevention "signatures." The process must address testing and installing the signatures.

R5. Account Management — The Responsible Entity shall establish, implement, and document technical and procedural controls that enforce access authentication of, and accountability for, all user activity, and that minimize the risk of unauthorized system access.
R5.1. The Responsible Entity shall ensure that individual and shared system accounts and authorized access permissions are consistent with the concept of "need to know" with respect to work functions performed.
R5.1.1. The Responsible Entity shall ensure that user accounts are implemented as approved by designated personnel. Refer to Standard CIP-003 Requirement R5.
R5.1.2. The Responsible Entity shall establish methods, processes, and procedures that generate logs of sufficient detail to create historical audit trails of individual user account access activity for a minimum of ninety days.
R5.1.3. The Responsible Entity shall review, at least annually, user accounts to verify access privileges are in accordance with Standard CIP-003 Requirement R5 and Standard CIP-004 Requirement R4.
R5.2. The Responsible Entity shall implement a policy to minimize and manage the scope and acceptable use of administrator, shared, and other generic account privileges including factory default accounts.
R5.2.1. The policy shall include the removal, disabling, or renaming of such accounts where possible. For such accounts that must remain enabled, passwords shall be changed prior to putting any system into service.
R5.2.2. The Responsible Entity shall identify those individuals with access to shared accounts.
R5.2.3. Where such accounts must be shared, the Responsible Entity shall have a policy for managing the use of such accounts that limits access to only those with authorization, an audit trail of the account use (automated or manual), and steps for securing the account in the event of personnel changes (for example, change in assignment or termination).
R5.3. At a minimum, the Responsible Entity shall require and use passwords, subject to the following, as technically feasible:
R5.3.1. Each password shall be a minimum of six characters.
R5.3.2. Each password shall consist of a combination of alpha, numeric, and "special" characters.
R5.3.3. Each password shall be changed at least annually, or more frequently based on risk.

R6. Security Status Monitoring — The Responsible Entity shall ensure that all Cyber Assets within the Electronic Security Perimeter, as technically feasible, implement automated tools or organizational process controls to monitor system events that are related to cyber security.
R6.1. The Responsible Entity shall implement and document the organizational processes and technical and procedural mechanisms for monitoring for security events on all Cyber Assets within the Electronic Security Perimeter.
R6.2. The security monitoring controls shall issue automated or manual alerts for detected Cyber Security Incidents.
R6.3. The Responsible Entity shall maintain logs of system events related to cyber security, where technically feasible, to support incident response as required in Standard CIP-008.
R6.4. The Responsible Entity shall retain all logs specified in Requirement R6 for ninety calendar days.
R6.5. The Responsible Entity shall review logs of system events related to cyber security and maintain records documenting review of logs.

R7. Disposal or Redeployment — The Responsible Entity shall establish formal methods, processes, and procedures for disposal or redeployment of Cyber Assets within the Electronic Security Perimeter(s) as identified and documented in Standard CIP-005.
R7.1. Prior to the disposal of such assets, the Responsible Entity shall destroy or erase the data storage media to prevent unauthorized retrieval of sensitive cyber security or reliability data.
R7.2. Prior to redeployment of such assets, the Responsible Entity shall, at a minimum, erase the data storage media to prevent unauthorized retrieval of sensitive cyber security or reliability data.
R7.3. The Responsible Entity shall maintain records that such assets were disposed of or redeployed in accordance with documented procedures.

R8. Cyber Vulnerability Assessment — The Responsible Entity shall perform a cyber vulnerability assessment of all Cyber Assets within the Electronic Security Perimeter at least annually. The vulnerability assessment shall include, at a minimum, the following:
R8.1. A document identifying the vulnerability assessment process;
R8.2. A review to verify that only ports and services required for operation of the Cyber Assets within the Electronic Security Perimeter are enabled;
R8.3. A review of controls for default accounts; and,
R8.4. Documentation of the results of the assessment, the action plan to remediate or mitigate vulnerabilities identified in the assessment, and the execution status of that action plan.

R9. Documentation Review and Maintenance — The Responsible Entity shall review and update the documentation specified in Standard CIP-007 at least annually. Changes resulting from modifications to the systems or controls shall be documented within ninety calendar days of the change.

C. Measures

The following measures will be used to demonstrate compliance with the requirements of Standard CIP-007:

M1. Documentation of the Responsible Entity's security test procedures as specified in Requirement R1.
M2. Documentation as specified in Requirement R2.
M3. Documentation and records of the Responsible Entity's security patch management program, as specified in Requirement R3.
M4. Documentation and records of the Responsible Entity's malicious software prevention program as specified in Requirement R4.
M5. Documentation and records of the Responsible Entity's account management program as specified in Requirement R5.
M6. Documentation and records of the Responsible Entity's security status monitoring program as specified in Requirement R6.
M7. Documentation and records of the Responsible Entity's program for the disposal or redeployment of Cyber Assets as specified in Requirement R7.
M8. Documentation and records of the Responsible Entity's annual vulnerability assessment of all Cyber Assets within the Electronic Security Perimeter(s) as specified in Requirement R8.
M9. Documentation and records demonstrating the review and update as specified in Requirement R9.

D. Compliance

1. Compliance Monitoring Process
1.1. Compliance Monitoring Responsibility
1.1.1 Regional Reliability Organizations for Responsible Entities.
1.1.2 NERC for Regional Reliability Organization.
1.1.3 Third-party monitor without vested interest in the outcome for NERC.
1.2. Compliance Monitoring Period and Reset Time Frame
Annually.
1.3. Data Retention
1.3.1 The Responsible Entity shall keep all documentation and records from the previous full calendar year.
1.3.2 The Responsible Entity shall retain security-related system event logs for ninety calendar days, unless longer retention is required pursuant to Standard CIP-008 Requirement R2.
1.3.3 The compliance monitor shall keep audit records for three calendar years.
1.4. Additional Compliance Information
1.4.1 Responsible Entities shall demonstrate compliance through self-certification or audit, as determined by the Compliance Monitor.
1.4.2 Instances where the Responsible Entity cannot conform to its cyber security policy must be documented as exceptions and approved by the designated senior manager or delegate(s). Duly authorized exceptions will not result in non-compliance. Refer to Standard CIP-003 Requirement R3.

2. Levels of Noncompliance
2.1. Level 1:
2.1.1 System security controls are in place, but fail to document one of the measures (M1-M9) of Standard CIP-007; or
2.1.2 One of the documents required in Standard CIP-007 has not been reviewed in the previous full calendar year as specified by Requirement R9; or,
2.1.3 One of the documented system security controls has not been updated within ninety calendar days of a change as specified by Requirement R9; or,
2.1.4 Any one of:
• Authorization rights and access privileges have not been reviewed during the previous full calendar year; or,
• A gap exists in any one log of system events related to cyber security of greater than seven calendar days; or,
• Security patches and upgrades have not been assessed for applicability within thirty calendar days of availability.
2.2. Level 2:
2.2.1 System security controls are in place, but fail to document up to two of the measures (M1-M9) of Standard CIP-007; or,
2.2.2 Two occurrences in any combination of those violations enumerated in Noncompliance Level 1, 2.1.4 within the same compliance period.
2.3. Level 3:
2.3.1 System security controls are in place, but fail to document up to three of the measures (M1-M9) of Standard CIP-007; or,
2.3.2 Three occurrences in any combination of those violations enumerated in Noncompliance Level 1, 2.1.4 within the same compliance period.
2.4. Level 4:
2.4.1 System security controls are in place, but fail to document four or more of the measures (M1-M9) of Standard CIP-007; or,
2.4.2 Four occurrences in any combination of those violations enumerated in Noncompliance Level 1, 2.1.4 within the same compliance period.
2.4.3 No logs exist.

E. Regional Differences

None identified.

Version History

Version  Date  Action  Change Tracking
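The password rules in Requirement R5.3 (minimum six characters; a mix of alpha, numeric, and "special" characters) lend themselves to mechanical enforcement. The following Python sketch is illustrative only: the function name and the exact character classes are assumptions, since the standard does not define "special" characters precisely.

```python
import re

def meets_cip007_r5_3(password: str) -> bool:
    """Check a candidate password against CIP-007-1 R5.3.1/R5.3.2:
    at least six characters, containing alpha, numeric, and
    special (non-alphanumeric) characters."""
    if len(password) < 6:                                  # R5.3.1
        return False
    has_alpha = re.search(r"[A-Za-z]", password) is not None
    has_digit = re.search(r"[0-9]", password) is not None
    has_special = re.search(r"[^A-Za-z0-9]", password) is not None
    return has_alpha and has_digit and has_special         # R5.3.2
```

Note that R5.3.3 (annual rotation, or more frequent rotation based on risk) is a procedural control and cannot be checked by a function like this; it requires tracking password-change dates in the account management records described in R5.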

GM — Supplier Communication Principles


Supplier Communication Policies
Introduction

GM is moving aggressively toward the electronic exchange and sharing of digital product and process data with its suppliers, and the elimination of other forms of exchange such as digital tapes and hard copy. GM is committed to making its collaboration with suppliers a win-win situation, in which the benefits are fairly distributed as well as the expense. The rapid advance of automation technology, including the availability of common operating systems across multiple computer platforms, has driven down costs, easing the investment burden of this new way of doing business. If requested, GM will help obtain provision of related services from third-party providers. Suppliers will use, and will be assisted in using, exchange and collaboration capabilities appropriate for their respective roles.

GM policies that govern program-supplier interactions address math-based data, telecommunications, security, data management, user application software, hardware, and training. In many instances, minimum requirements will be specified or preferences will be noted. Summary information is provided in this document. Details for specific program-supplier interactions will be included in the applicable Statement of Requirements (SOR).

GM will periodically conduct readiness reviews of suppliers to assess their respective ability to comply with the letter and spirit of GM electronic digital data exchange and collaboration objectives. Results will be shared with each supplier, and joint efforts will be identified as appropriate to address shortfalls. Supplier readiness will be a major factor in the award process.

Foreign-Language Source Text for Translation — Database


Database

A database consists of an organized collection of data for one or more uses, typically in digital form. One way of classifying databases involves the type of their contents, for example: bibliographic, document-text, statistical. Digital databases are managed using database management systems, which store database contents, allowing data creation and maintenance, and search and other access.

Architecture

Database architecture consists of three levels: external, conceptual, and internal. Clearly separating the three levels was a major feature of the relational database model that dominates 21st-century databases.

The external level defines how users understand the organization of the data. A single database can have any number of views at the external level. The internal level defines how the data is physically stored and processed by the computing system. Internal architecture is concerned with cost, performance, scalability, and other operational matters. The conceptual level is a level of indirection between internal and external. It provides a common view of the database that is uncomplicated by details of how the data is stored or managed, and that can unify the various external views into a coherent whole.

Database management systems

A database management system (DBMS) consists of software that operates databases, providing storage, access, security, backup, and other facilities. Database management systems can be categorized according to the database model that they support, such as relational or XML; the type(s) of computer they support, such as a server cluster or a mobile phone; the query language(s) that access the database, such as SQL or XQuery; and performance trade-offs, such as maximum scale or maximum speed. Some DBMS cover more than one entry in these categories, e.g., supporting multiple query languages.

Components of DBMS

Most DBMS as of 2009 implement a relational model.
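The external level described above is what SQL views implement in practice: a restricted presentation of the conceptual schema for one class of user. Here is a minimal sketch using Python's built-in sqlite3 module; the table, view, and column names are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Conceptual/internal level: one base table holding all the data.
conn.execute("CREATE TABLE employee (name TEXT, dept TEXT, salary REAL)")
conn.execute("INSERT INTO employee VALUES ('Ada', 'ENG', 95000), ('Bob', 'HR', 60000)")
# External level: a view exposing only the columns one user group may see,
# hiding salary without changing how the data is actually stored.
conn.execute("CREATE VIEW directory AS SELECT name, dept FROM employee")
rows = conn.execute("SELECT * FROM directory").fetchall()
print(rows)
```

Any number of such views can coexist over the same base tables, which is precisely the "any number of views at the external level" property noted above.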
Other DBMS systems, such as Object DBMS, offer specific features for more specialized requirements. Their components are similar, but not identical.

RDBMS components

• Sublanguages — Relational DBMS (RDBMS) include Data Definition Language (DDL) for defining the structure of the database, Data Control Language (DCL) for defining security/access controls, and Data Manipulation Language (DML) for querying and updating data.
• Interface drivers — These drivers are code libraries that provide methods to prepare statements, execute statements, fetch results, etc. Examples include ODBC, JDBC, MySQL/PHP, FireBird/Python.
• SQL engine — This component interprets and executes the DDL, DCL, and DML statements. It includes three major components (compiler, optimizer, and executor).
• Transaction engine — Ensures that multiple SQL statements either succeed or fail as a group, according to application dictates.
• Relational engine — Relational objects such as Table, Index, and Referential integrity constraints are implemented in this component.
• Storage engine — This component stores and retrieves data from secondary storage, as well as managing transaction commit and rollback, backup and recovery, etc.

ODBMS components

Object DBMS (ODBMS) have transaction and storage components that are analogous to those in an RDBMS. Some ODBMS handle DDL, DCL, and update tasks differently. Instead of using sublanguages, they provide APIs for these purposes. They typically include a sublanguage and accompanying engine for processing queries with interpretive statements analogous to but not the same as SQL. Example object query languages are OQL, LINQ, JDOQL, JPAQL, and others. The query engine returns collections of objects instead of relational rows.

Types

Operational database

These databases store detailed data about the operations of an organization. They are typically organized by subject matter and process relatively high volumes of updates using transactions.
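The prepare/execute/fetch cycle that interface drivers expose, and the DDL/DML split among sublanguages, can both be seen in a few lines against Python's built-in sqlite3 driver. The table name and data here are invented for illustration; sqlite3 is just one driver, standing in for ODBC, JDBC, and the rest.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# DDL: define the structure of the database.
conn.execute("CREATE TABLE part (id INTEGER PRIMARY KEY, name TEXT)")
# DML via a prepared statement: the driver compiles the "?" statement
# once and executes it for each parameter tuple.
conn.executemany("INSERT INTO part (name) VALUES (?)",
                 [("bolt",), ("nut",), ("washer",)])
# Query, again parameterized, then fetch the results.
cur = conn.execute("SELECT name FROM part WHERE id > ?", (1,))
result = cur.fetchall()
print(result)
```

DCL (GRANT/REVOKE) is the one sublanguage missing from this sketch, because SQLite is an embedded, single-user engine and does not implement it; server RDBMSs such as PostgreSQL and MySQL do.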
Essentially every major organization on earth uses such databases. Examples include customer databases that record contact, credit, and demographic information about a business's customers; personnel databases that hold information such as salary, benefits, and skills data about employees; manufacturing databases that record details about product components and parts inventory; and financial databases that keep track of the organization's money, accounting, and financial dealings.

Data warehouse

Data warehouses archive historical data from operational databases and often from external sources such as market research firms. Often operational data undergoes transformation on its way into the warehouse, getting summarized, anonymized, reclassified, etc. The warehouse becomes the central source of data for use by managers and other end-users who may not have access to operational data. For example, sales data might be aggregated to weekly totals and converted from internal product codes to use UPC codes so that it can be compared with ACNielsen data.

Analytical database

Analysts may do their work directly against a data warehouse, or create a separate analytic database for Online Analytical Processing. For example, a company might extract sales records for analyzing the effectiveness of advertising and other sales promotions at an aggregate level.

Distributed database

These are databases of local work-groups and departments at regional offices, branch offices, manufacturing plants, and other work sites. These databases can include segments of both common operational and common user databases, as well as data generated and used only at a user's own site.

End-user database

These databases consist of data developed by individual end-users.
Examples of these are collections of documents in spreadsheets, word processing and downloaded files, or even a personal baseball card collection.

External database

These databases contain data collected for use across multiple organizations, either freely or via subscription. The Internet Movie Database is one example.

Hypermedia databases

The World Wide Web can be thought of as a database, albeit one spread across millions of independent computing systems. Web browsers "process" this data one page at a time, while web crawlers and other software provide the equivalent of database indexes to support search and other activities.

Models

Post-relational database models

Products offering a more general data model than the relational model are sometimes classified as post-relational. Alternate terms include "hybrid database", "Object-enhanced RDBMS", and others. The data model in such products incorporates relations but is not constrained by E.F. Codd's Information Principle, which requires that all information in the database must be cast explicitly in terms of values in relations and in no other way.

Some of these extensions to the relational model integrate concepts from technologies that pre-date the relational model. For example, they allow representation of a directed graph with trees on the nodes.

Some post-relational products extend relational systems with non-relational features. Others arrived in much the same place by adding relational features to pre-relational systems. Paradoxically, this allows products that are historically pre-relational, such as PICK and MUMPS, to make a plausible claim to be post-relational.

Object database models

In recent years, the object-oriented paradigm has been applied in areas such as engineering and spatial databases, telecommunications, and various scientific domains. The combination of object-oriented programming and database technology led to this new kind of database.
These databases attempt to bring the database world and the application-programming world closer together, in particular by ensuring that the database uses the same type system as the application program. This aims to avoid the overhead (sometimes referred to as the impedance mismatch) of converting information between its representation in the database (for example, as rows in tables) and its representation in the application program (typically as objects). At the same time, object databases attempt to introduce key ideas of object programming, such as encapsulation and polymorphism, into the world of databases.

A variety of ways have been tried for storing objects in a database. Some products have approached the problem from the application-programming side, by making the objects manipulated by the program persistent. This also typically requires the addition of some kind of query language, since conventional programming languages do not provide language-level functionality for finding objects based on their information content. Others have attacked the problem from the database end, by defining an object-oriented data model for the database, and defining a database programming language that allows full programming capabilities as well as traditional query facilities.

Storage structures

Databases may store relational tables/indexes in memory or on hard disk in one of many forms:

• ordered/unordered flat files
• ISAM
• heaps
• hash buckets
• logically-blocked files
• B+ trees

The most commonly used are B+ trees and ISAM.

Object databases use a range of storage mechanisms. Some use virtual memory-mapped files to make the native language (C++, Java, etc.) objects persistent. This can be highly efficient but it can make multi-language access more difficult. Others disassemble objects into fixed- and varying-length components that are then clustered in fixed-sized blocks on disk and reassembled into the appropriate format in either the client or server address space.
Another popular technique involves storing the objects in tuples (much like a relational database), which the database server then reassembles into objects for the client.

Other techniques include clustering by category (such as grouping data by month, or location), storing pre-computed query results, known as materialized views, and partitioning data by range (e.g., a date range) or by hash.

Memory management and storage topology can be important design choices for database designers as well. Just as normalization is used to reduce storage requirements and improve database designs, denormalization is often used to reduce join complexity and reduce query execution time.

Indexing

Indexing is a technique for improving database performance. The many types of index share the common property that they eliminate the need to examine every entry when running a query. In large databases, this can reduce query time/cost by orders of magnitude. The simplest form of index is a sorted list of values that can be searched using a binary search, with an adjacent reference to the location of the entry, analogous to the index in the back of a book. The same data can have multiple indexes (an employee database could be indexed by last name and by hire date).

Indexes affect performance, but not results. Database designers can add or remove indexes without changing application logic, reducing maintenance costs as the database grows and database usage evolves.

Given a particular query, the DBMS' query optimizer is responsible for devising the most efficient strategy for finding matching data. The optimizer decides which index or indexes to use, how to combine data from different parts of the database, how to provide data in the order requested, etc.

Indexes can speed up data access, but they consume space in the database, and must be updated each time the data are altered. Indexes therefore can speed data access but slow data maintenance.
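The "sorted list searched with binary search" form of index described above can be sketched in a few lines. This is a toy model, not any real DBMS's index structure; the row data and field names are invented for illustration.

```python
import bisect

# A tiny "table": each row is (last_name, ...), identified by its position.
rows = [("Smith", "sales"), ("Jones", "eng"), ("Brown", "hr"), ("Davis", "eng")]

# The simplest index: (key, row_location) pairs kept in sorted order,
# like the index in the back of a book.
index = sorted((last_name, i) for i, (last_name, _) in enumerate(rows))
keys = [k for k, _ in index]

def lookup(last_name):
    """Binary search the index (O(log n)) instead of scanning every row."""
    pos = bisect.bisect_left(keys, last_name)
    if pos < len(keys) and keys[pos] == last_name:
        return index[pos][1]   # the row location the index entry points to
    return None

print(lookup("Jones"))
```

The trade-off stated in the text is visible even here: `index` and `keys` consume extra memory, and every insert into `rows` would require re-sorting or an ordered insert into the index, which is exactly why indexes speed reads but slow writes.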
These two properties determine whether a given index is worth the cost.

Transactions

Most DBMS provide some form of support for transactions, which allow multiple data items to be updated in a consistent fashion, such that updates that are part of a transaction succeed or fail in unison. The so-called ACID rules, summarized here, characterize this behavior:

• Atomicity: Either all the data changes in a transaction must happen, or none of them. The transaction must be completed, or else it must be undone (rolled back).
• Consistency: Every transaction must preserve the declared consistency rules for the database.
• Isolation: Two concurrent transactions cannot interfere with one another. Intermediate results within one transaction must remain invisible to other transactions. The most extreme form of isolation is serializability, meaning that transactions that take place concurrently could instead be performed in some series, without affecting the ultimate result.
• Durability: Completed transactions cannot be aborted later or their results discarded. They must persist through (for instance) DBMS restarts.

In practice, many DBMSs allow the selective relaxation of these rules to balance perfect behavior with optimum performance.

Replication

Database replication involves maintaining multiple copies of a database on different computers, to allow more users to access it, or to allow a secondary site to immediately take over if the primary site stops working. Some DBMS piggyback replication on top of their transaction-logging facility, applying the primary's log to the secondary in near real-time.
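Atomicity, the "succeed or fail in unison" property, is easiest to see with a classic funds transfer. The sketch below uses Python's sqlite3 module; the table and the simulated failure are invented for illustration, and real code would of course not raise the error deliberately.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT, balance REAL)")
conn.execute("INSERT INTO account VALUES ('A', 100), ('B', 0)")
conn.commit()

try:
    # Debit one account; the matching credit to 'B' should follow,
    # but we simulate a failure between the two statements.
    conn.execute("UPDATE account SET balance = balance - 50 WHERE name = 'A'")
    raise RuntimeError("simulated crash before the matching credit to 'B'")
except RuntimeError:
    conn.rollback()  # atomicity: the partial transfer is undone

balance_a = conn.execute(
    "SELECT balance FROM account WHERE name = 'A'").fetchone()[0]
print(balance_a)
```

After the rollback, account 'A' still holds its original balance: the debit never becomes visible on its own, so no money vanishes, which is the consistency guarantee the transaction machinery exists to protect.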
Database clustering is a related concept for handling larger databases and user communities, employing a cluster of multiple computers to host a single database; such a cluster can use replication as part of its approach.

Security

Database security denotes the system, processes, and procedures that protect a database from unauthorized activity. DBMSs usually enforce security through access control, auditing, and encryption:

• Access control manages who can connect to the database via authentication and what they can do via authorization.
• Auditing records information about database activity: who, what, when, and possibly where.
• Encryption protects data at the lowest possible level by storing and possibly transmitting data in an unreadable form. The DBMS encrypts data when it is added to the database and decrypts it when returning query results. This process can occur on the client side of a network connection to prevent unauthorized access at the point of use.

Confidentiality

Law and regulation govern the release of information from some databases, protecting medical history, driving records, telephone logs, etc. In the United Kingdom, database privacy regulation falls under the Office of the Information Commissioner. Organizations based in the United Kingdom and holding personal data in digital format, such as databases, must register with the Office.

Locking

When a transaction modifies a resource, the DBMS stops other transactions from also modifying it, typically by locking it. Locks also provide one method of ensuring that data does not change while a transaction is reading it, or even that it doesn't change until a transaction that once read it has completed.

Granularity

Locks can be coarse, covering an entire database; fine-grained, covering a single data item; or intermediate, covering a collection of data such as all the rows in an RDBMS table.

Lock types

Locks can be shared or exclusive, and can lock out readers and/or writers.
Locks can be created implicitly by the DBMS when a transaction performs an operation, or explicitly at the transaction's request.

Shared locks allow multiple transactions to lock the same resource; the lock persists until all such transactions complete. Exclusive locks are held by a single transaction and prevent other transactions from locking the same resource.

Read locks are usually shared and prevent other transactions from modifying the resource. Write locks are exclusive and prevent other transactions from modifying the resource; on some systems, write locks also prevent other transactions from reading the resource.

The DBMS implicitly locks data when it is updated, and may also do so when it is read. Transactions explicitly lock data to ensure that they can complete without a deadlock or other complication. Explicit locks may also be useful for some administrative tasks.

Locking can significantly affect database performance, especially with large and complex transactions in highly concurrent environments.

Isolation

Isolation refers to the ability of one transaction to see the results of other transactions. Greater isolation typically reduces performance and/or concurrency, leading DBMSs to provide administrative options that reduce isolation. For example, in a database that analyzes trends rather than looking at low-level detail, increased performance might justify allowing readers to see uncommitted changes ("dirty reads").

Deadlocks

Deadlocks occur when two transactions each require data that the other has already locked exclusively. The DBMS detects deadlocks and resolves them by aborting one of the transactions, allowing the other to complete.

From: Wikipedia, the free encyclopedia
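Besides detection and abort, deadlocks can be avoided outright by always acquiring locks in a consistent global order, so no circular wait can form. A sketch using Python threads as stand-ins for transactions (the ordering-by-id convention is one illustrative choice, not how any particular DBMS does it):

```python
# Sketch of deadlock avoidance by consistent lock ordering. Two
# "transactions" request the same pair of locks in opposite orders, but
# each sorts the pair into a canonical order (here, by id) before
# acquiring, so the circular wait that causes deadlock cannot occur.
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def transfer(first, second, tag):
    lo, hi = sorted([first, second], key=id)  # canonical acquisition order
    with lo:
        with hi:
            results.append(tag)

t1 = threading.Thread(target=transfer, args=(lock_a, lock_b, "txn1"))
t2 = threading.Thread(target=transfer, args=(lock_b, lock_a, "txn2"))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))  # ['txn1', 'txn2'] -- both complete, no deadlock
```

Without the sorting step, the two threads could each grab one lock and wait forever on the other, which is exactly the scenario a DBMS deadlock detector must resolve by aborting one transaction.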

Data Security Management Policy (English)

1. Introduction

This Data Security Management Policy outlines the guidelines and procedures for managing data security within our organization. Data security is crucial to protecting the confidentiality, integrity, and availability of sensitive information and to ensuring compliance with relevant regulations and standards. This policy applies to all employees, contractors, and third parties who have access to our organization's data and systems.

2. Definitions

2.1 Data Security - Data security refers to the protection of data from unauthorized access, disclosure, alteration, or destruction.
2.2 Sensitive Information - Sensitive information includes personally identifiable information, financial information, intellectual property, trade secrets, and any other data that requires protection to prevent loss, theft, or misuse.
2.3 Data Breach - A data breach is the unauthorized access, disclosure, or use of sensitive information, leading to a loss of confidentiality, integrity, or availability.

3. Data Security Responsibilities

3.1 Management - Management is responsible for developing, implementing, and enforcing data security policies and procedures, assigning data security responsibilities to employees, and ensuring compliance with relevant laws, regulations, and standards.
3.2 IT Department - The IT department is responsible for implementing technical safeguards, such as firewalls, encryption, access controls, and monitoring tools, to protect data from unauthorized access, disclosure, or alteration.
3.3 Employees - All employees are responsible for following data security policies and procedures, safeguarding sensitive information, using secure passwords, and reporting any suspicious activities or incidents to the data security officer.
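The "secure passwords" obligation above implies that credentials are never stored in plaintext. A minimal sketch of salted password verification using only Python's standard library; the dict-based "user table" and the username are hypothetical, purely for illustration:

```python
# Minimal sketch of password-based authentication. Credentials are stored
# as (salt, PBKDF2 digest) pairs, never as plaintext. The dict "user
# table" and the username below are illustrative assumptions.
import hashlib
import hmac
import os
from typing import Optional

def hash_password(password: str, salt: Optional[bytes] = None):
    """Derive a salted hash; store (salt, digest), never the plaintext."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)

users = {"dba": hash_password("s3cret")}   # username -> (salt, digest)

salt, stored = users["dba"]
print(verify_password("s3cret", salt, stored))   # True: authentication succeeds
print(verify_password("wrong", salt, stored))    # False: access denied
```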
4. Data Security Policies

4.1 Data Classification - Data should be classified based on its sensitivity and importance to the organization, with appropriate security measures implemented to protect data according to its classification.
4.2 Access Controls - Access to sensitive information should be restricted to authorized individuals based on the principle of least privilege, with strong authentication mechanisms, such as passwords, biometrics, or multi-factor authentication, implemented to verify users' identities.
4.3 Encryption - Data should be encrypted both at rest and in transit to protect it from unauthorized access or interception, using strong encryption algorithms and secure key management practices.
4.4 Data Backup and Recovery - Regular backups of critical data should be performed to prevent data loss in the event of a disaster or cyberattack, with backup copies stored securely and tested periodically for integrity and recoverability.
4.5 Incident Response - An incident response plan should be developed to guide the organization's response to data breaches or security incidents, including procedures for identifying, containing, investigating, and reporting incidents to relevant stakeholders.
4.6 Security Awareness Training - All employees should receive regular security awareness training to raise awareness of data security risks, best practices, and policies, with ongoing monitoring and enforcement of compliance with data security policies.
4.7 Vendor Management - Third-party vendors and service providers should be selected based on their data security practices and contractual obligations to protect sensitive information, with regular assessments of their security controls and compliance with data security requirements.
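Section 4.4 calls for backups to be tested periodically for integrity. One common approach, sketched here with the standard library, is to record a SHA-256 checksum when the backup is taken and re-verify it later; the file name and contents are illustrative.

```python
# Sketch of backup integrity checking (cf. policy section 4.4): record a
# SHA-256 checksum at backup time and re-check it before relying on the
# copy. The file name and contents are illustrative assumptions.
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

with tempfile.TemporaryDirectory() as d:
    backup = Path(d) / "db_backup.dump"
    backup.write_bytes(b"-- pretend database dump --")

    recorded = sha256_of(backup)                  # stored alongside the backup
    still_valid = sha256_of(backup) == recorded   # periodic verification

    backup.write_bytes(b"-- corrupted --")        # simulate bit rot or tampering
    after_corruption = sha256_of(backup) == recorded

print(still_valid, after_corruption)  # True False
```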
5. Compliance and Monitoring

5.1 Compliance - The organization should comply with relevant data protection laws, industry regulations, and security standards, such as the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), the Payment Card Industry Data Security Standard (PCI DSS), and ISO/IEC 27001.
5.2 Monitoring - Data security controls should be monitored regularly for effectiveness, with audits, assessments, and penetration tests conducted to identify vulnerabilities, assess risks, and ensure compliance with data security policies.
5.3 Reporting - Security incidents, data breaches, and compliance violations should be reported promptly to the data security officer, management, and relevant authorities, as required by law or contractual obligations, with appropriate measures taken to mitigate risks and prevent future incidents.

6. Conclusion

Data security is paramount to safeguarding sensitive information, maintaining trust with stakeholders, and protecting the organization from financial, legal, and reputational risks. By following this Data Security Management Policy and implementing robust data security controls, our organization can ensure the confidentiality, integrity, and availability of data and demonstrate a commitment to data security best practices and compliance with relevant regulations and standards.
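The reporting obligation in section 5.3 presumes an audit trail of security events. A minimal sketch using Python's logging module; the logger name, event fields, and the incident shown are illustrative assumptions, not part of this policy.

```python
# Minimal sketch of an audit/incident log (cf. policy section 5.3) using
# the standard logging module. The logger name, record format, and the
# who/what/outcome fields are illustrative assumptions.
import io
import logging

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

audit = logging.getLogger("security.audit")
audit.setLevel(logging.INFO)
audit.addHandler(handler)

def report_incident(user: str, action: str, outcome: str) -> None:
    """Record who attempted what and whether it was allowed."""
    audit.warning("user=%s action=%s outcome=%s", user, action, outcome)

report_incident("contractor42", "export_customer_table", "denied")
print("outcome=denied" in stream.getvalue())  # True: the incident was recorded
```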

NI HIL Test System Product Overview

NI Tools for Hardware-in-the-Loop Testing

Overview

The rapid pace of technology is changing test requirements, and modern hardware-in-the-loop (HIL) test systems must be flexible and customizable enough to keep up. Embedded software testers need HIL systems that can keep pace as technologies like advanced driver assistance systems (ADAS), autonomous vehicles, fly-by-wire, and fuel economy techniques make their way into production. Because a one-size-fits-all test solution is now often quickly outdated, NI test systems are built on open, industry-standard platforms that can evolve as test requirements do. NI HIL systems are made up of off-the-shelf components that can fit together into either a turnkey system that you can quickly deploy into your application or a do-it-yourself system that delivers the ultimate in application flexibility and cost optimization.

Contents

System Overview
Turnkey HIL Simulators
Software: VeriStand, Extending VeriStand Capabilities, TestStand, DIAdem
Hardware: PXI, CompactRIO, SLSC
Next Steps

System Overview

With NI's open, flexible HIL solutions, you can make customizations to fit your specific needs. Using a modular architecture, easily upgrade the platform with added functionality to future-proof your test systems and meet the requirements of the most demanding embedded software test applications. Compared to competitors, NI's performance capabilities make it the best option for testing innovative control systems.

Figure 1 below shows how the various hardware and software components are assembled into an HIL system. The system includes software for data management and test automation in addition to the HIL test execution and interface. Below the software are two different hardware platforms for modular I/O, and a third that is used for application-specific switching, loads, and signal conditioning.
This last one is then connected to the device under test (DUT) through various connectivity options, depending on the device.

Figure 1. An HIL system built on NI components offers the ultimate in flexibility because it is built on modular, off-the-shelf components. [Figure: TestStand as test manager; VeriStand as HIL engine and UI; DIAdem for data management; environment simulation and plant model with network buses, analog, camera frames, RF, and digital I/O on PXI or CompactRIO; SLSC (with load plates and physical devices) for signal conditioning, fault insertion, and loads; bulk connector to the DUT (ECU/LRU).]

Turnkey HIL Simulators

For a turnkey system that is ready to deploy but still built to fit your exact application requirements, you can look to NI HIL specialty partners around the globe who build and deliver HIL Simulators. These partners have extensive experience in integrating NI hardware and software as well as the vertical application of HIL in automotive, aerospace, and other industries. These systems are built to exact specifications and architectures defined by NI and on NI components to ensure a consistent quality and user experience anywhere in the world.

With industry-leading customizability, HIL Simulators help you easily adapt to changing test requirements across a wide variety of application areas, including ADAS, electrification, and advanced sensor integration with cameras, radar, and ultrasonics.
Additionally, these simulators provide easier test system commissioning and system integration so you can find more engine control unit (ECU) defects faster.

NI HIL Simulators are built to integrate into your existing workflow while still giving you the flexibility to adapt to future requirements by including features like the following:

∙ Model integration from more than 20 simulation environments, including The MathWorks, Inc. Simulink® software
∙ Fault injection for networks and electrical systems
∙ Electrical load integration
∙ RF and camera I/O to keep pace with the latest industry trends
∙ User-programmable FPGA-based I/O for advanced models
∙ Flexible bus support including CAN, LIN, FlexRay, ARINC, AFDX, and MIL-STD-1553
∙ Unlimited expansion capabilities through multiple system connections

Figure 2. HIL Simulators are turnkey HIL systems built on NI's open platforms and delivered by a vetted NI HIL specialty partner with extensive industry experience.

Software

VeriStand

VeriStand is a software environment for configuring real-time test and HIL applications. Out of the box, VeriStand can help you construct a multicore-ready real-time engine to execute tasks such as real-time stimulus generation, data acquisition for high-speed and conditioned measurements, and calculated channels and custom channel scaling.

VeriStand can also import control algorithms, simulation models, and other tasks from both LabVIEW software and third-party environments. You can monitor and interact with these tasks using a run-time editable user interface that includes tools for value forcing, alarm monitoring, I/O calibration, and stimulus profile editing.
Although you don't need programming knowledge to use VeriStand, you can customize and extend it with a variety of software environments such as LabVIEW, ANSI C/C++, ASAM XIL, and others for modeling and programming. VeriStand is architected to have a real-time engine that runs independently from the user interface to ensure the determinism of the system you are running.

Figure 3. VeriStand is NI's HIL testing software and offers model execution, stimulus generation, data acquisition, customizable UI, and logging, among other features.

Extending VeriStand Capabilities

VeriStand has an open framework that you can use to add specific functionality with LabVIEW. Create add-ons to add support for specific hardware or even create new software functionality that is nonnative. Both NI Alliance Partners and NI have already deployed a number of add-ons that you can download from the community.

VeriStand also provides a .NET-based API that you can use to create custom interfaces for VeriStand or to automate the configuration and/or operation of VeriStand applications. For example, you can use the API library to create a custom configuration window that limits the changes a user can make to a VeriStand application, or to simplify the configuration process by making it possible to specify the application parameters in a spreadsheet. Or you could automate the operation of a VeriStand application or create a completely custom run-time interface. These .NET-based APIs can be used by LabVIEW, TestStand, and a variety of other environments that can use .NET interfaces, like Python.

TestStand

TestStand is ready-to-run test management software designed to help you develop automated test and validation systems faster. You can use TestStand to develop, execute, and deploy test system software. In addition, you can develop test sequences that integrate code modules written in any test programming language.
Sequences also specify execution flow, reporting, database logging, and connectivity to other enterprise systems.

You can use TestStand to automate an HIL test application by calling the VeriStand .NET-based execution API. TestStand can also manage hardware and software from multiple platforms in a single TestStand sequence. For example, it can automate VeriStand real-time sequences running on an NI PXI real-time controller, while simultaneously controlling a third-party instrument using its native IVI driver support. After you run your test, you can log test result information in a customizable report or database automatically. Additionally, systems written in TestStand integrate with existing source code control, requirements management, and data management systems.

You can also integrate TestStand into third-party test management and test generation software platforms. For example, you can use TestStand with MaTeLo from All4Tec to generate automated tests based on your application requirements. Whether you would like to use TestWeaver with VeriStand or MaTeLo with TestStand, you can create a test system that not only automates tests but also automatically generates unit tests designed to make your ECU fail. This helps you focus on testing embedded software thoroughly instead of writing new test scripts from scratch.

Figure 3. TestStand is NI's test automation software and can be used to automate VeriStand for faster and more efficient testing.

DIAdem

Successful engineering enterprises share characteristics such as consistent data logging, analysis, and effective data management. This is especially true of HIL test applications, where design and test teams must collaborate to ensure embedded software and mechanical system product quality. Implementing a standard, automated means of data analysis and report generation helps you view data in a consistent way, improves test efficiency, and makes data much easier to find and interpret.
You can use VeriStand with DIAdem to quickly and easily log data, perform post-processing, and generate reports, all from the VeriStand UI tools.

DIAdem is a single software tool that you can use to quickly locate, load, visualize, analyze, and report measurement data collected during data acquisition and/or generated during simulations. It is designed to help you quickly access, process, and report on large volumes of scattered data in multiple custom formats to make informed decisions.

Figure 4. DIAdem is industry-leading data mining, analysis, visualization, and reporting software.

Hardware

PXI

PXI is a rugged PC-based platform for measurement and automation systems. PXI combines PCI electrical-bus features with the modular, Eurocard packaging of CompactPCI, and then adds specialized synchronization buses and key software features. PXI is both a high-performance and low-cost deployment platform for a wide range of applications that includes HIL test. Developed in 1997 and launched in 1998, PXI is an open industry standard governed by the PXI Systems Alliance (PXISA), a group of more than 70 companies chartered to promote the PXI standard, ensure interoperability, and maintain the PXI specification.

A major benefit of the PXI platform for HIL is the wide range of I/O options that can be integrated into the test system. As the automotive industry evolves to incorporate new technologies, such as automotive radar, cameras with onboard image processing, V2X, and real-time GNSS position tracking, HIL systems must also evolve to verify the safe operation of these technologies. With available vision, high-performance FPGAs, and RF modules that meet the needs of the latest automotive standards, such as 77-82 GHz radar, the PXI platform is uniquely positioned to meet the demands of these new test requirements.

In most cases, PXI is the best solution for HIL testing.
It provides high channel count and density, the broadest availability of I/O, and the highest processing capability available. In addition to high-performance hardware, PXI also offers the best software experience for HIL testing, as it supports the most modeling environments and seamless DAQ hardware support.

Figure 5. PXI is a high-performance, industry-standard I/O platform with built-in real-time processors, user-accessible FPGAs, and modular I/O including network buses, analog and digital I/O, and camera and radar I/O and simulation capabilities.

CompactRIO

CompactRIO is a rugged hardware design in a compact form factor that is ideal for most harsh environments as well as lab settings that require a small physical footprint. Although typically providing lower computational performance than PXI, CompactRIO offers high-performance processing and heterogeneous computing elements, including ARM-based Xilinx Zynq SoCs as well as quad-core Intel Atom processors and Xilinx Kintex-7 FPGAs. CompactRIO also includes signals with measurement-specific signal conditioning, built-in isolation, and industrial I/O.

CompactRIO is an ideal fit for benchtop HIL systems. It doesn't have as many I/O options as PXI, but the compact form factor and lower price make it a great choice for unit testing of ECUs that require integrated processing and FPGA performance.

Figure 6. CompactRIO is a smaller, modular I/O platform well suited to benchtop HIL testing. It also comes with built-in real-time processing, user-accessible FPGAs, and modular I/O.

SLSC

SLSC extends PXI and CompactRIO measurement hardware with high-power relays for signal switching, power loads, and additional inline signal conditioning capability. The system consists of a chassis with built-in active cooling that can host 12 modules. SLSC plug-in modules can operate in the chassis in three different modes: standalone, pass-through, or cascaded.
You can use cascaded mode to implement functionality like signal fault insertion. You can select from a variety of NI and third-party modules, or create your own based on a detailed hardware and software development kit from NI.

SLSC is designed to simplify overall system integration, reducing system point-to-point wiring by accumulating signals and using standard cables. Each SLSC chassis contains an SLSC digital bus that is used to discover, configure, and set parameters on the individual modules. Signals pass through SLSC modules either from the front connector or the rear expansion connector. You also have the flexibility to design your own secondary backplanes to reduce system wiring, using a custom application-specific backplane to integrate the signal flow.

Figure 7. SLSC is an open architecture for standardizing the switches, loads, and signal conditioning that are often custom for each HIL application.

Mettler-Toledo Analytical Balance Selection Guide

Excellence Balances: Guidelines for Selecting the Right Analytical Balance

Your requirements?

Precise weighing is the backbone of many laboratory processes. Non-compliance with defined maximum limits can have disastrous consequences in regulated areas. Measurement sequences must be repeated, and valuable substances are lost. Invalid values can even cause production stops.

Our solutions.

These guidelines are intended to help you select the right balance for your process. /servicexxl

Minimum Sample Weights for maximum profitability!

If you are able to weigh in smaller sample amounts with the same level of confidence, this raises the yield of your substance and reduces costs - sometimes dramatically. And if you are able to weigh the same amount of sample with even greater accuracy, the quality of your weighing results is enhanced. Both of these factors are critical when dealing with precious substances. For these types of application, be sure to choose a balance that offers a very low minimum sample weight as well as a large weighing range, so you can weigh directly into the tare container placed on the balance!

The right balance for your application

XS Analytical Balances for the daily routine - Maximum productivity: innovative ergonomics and high speed for unmatched weighing efficiency.

XP Analytical Balances for secure weighing processes - Security knows no alternative: optimal measuring, user, and data security, combined with high speed and maximum operating comfort.

Microbalances for weighing the smallest samples - Guarantees maximum yield of the smallest samples.

Antistatic kit for XP analytical balances - The fully integrable ionizer generates positive and negative ions which immediately eliminate disruptive electrostatic charge.

PC software LabX balance - For simple and reliable transfer of weighing results into spreadsheet programs.

Bluetooth printer BT-P42 - With wireless connection to the balance.

GxP-compliant reporting - The report header and sample identification are selected with the extremely clear and easy touch screen operation.

Subject to technical changes. © 03/06 Mettler-Toledo GmbH. Printed in Switzerland.

Technical Explanation and Rationale for NERC Electric Reliability Standards CIP-002-1 and CIP-012-1

DRAFT

Cyber Security - Communications between Control Centers
Technical Rationale and Justification for Reliability Standard CIP-012-1
March 2018

Contents: Preface; Introduction; Requirement R1; General Considerations for Requirement R1; Overview of confidentiality and integrity; Alignment with IRO and TOP standards; Identification of Where Security Protection is Applied by the Responsible Entity; Control Center Ownership; References

Preface

The vision for the Electric Reliability Organization (ERO) Enterprise, which is comprised of the North American Electric Reliability Corporation (NERC) and the eight Regional Entities (REs), is a highly reliable and secure North American bulk power system (BPS). Our mission is to assure the effective and efficient reduction of risks to the reliability and security of the grid.

The North American BPS is divided into eight RE boundaries, as shown in the map and corresponding table below. The highlighted areas denote overlap, as some load-serving entities participate in one Region while associated Transmission Owners/Operators participate in another.

FRCC - Florida Reliability Coordinating Council
MRO - Midwest Reliability Organization
NPCC - Northeast Power Coordinating Council
RF - ReliabilityFirst
SERC - SERC Reliability Corporation
SPP RE - Southwest Power Pool Regional Entity
Texas RE - Texas Reliability Entity
WECC - Western Electricity Coordinating Council

Introduction

This document explains the technical rationale and justification for the proposed Reliability Standard CIP-012-1. It will provide stakeholders and the ERO Enterprise with an understanding of the technology and technical requirements in the Reliability Standard. It also contains information on the SDT's intent in drafting the requirements.
This Technical Rationale and Justification for CIP-012-1 is not a Reliability Standard and should not be considered mandatory and enforceable.

On January 21, 2016, the Federal Energy Regulatory Commission (FERC or Commission) issued Order No. 822, approving seven Critical Infrastructure Protection (CIP) Reliability Standards and new or modified terms in the Glossary of Terms Used in NERC Reliability Standards, and directing modifications to the CIP Reliability Standards. Among others, the Commission directed the North American Electric Reliability Corporation (NERC) to "develop modifications to the CIP Reliability Standards to require Responsible Entities1 to implement controls to protect, at a minimum, communication links and sensitive bulk electric system data communicated between bulk electric system Control Centers in a manner that is appropriately tailored to address the risks posed to the bulk electric system by the assets being protected (i.e., high, medium, or low impact)." (Order 822, Paragraph 53)

In response to the directive in Order No. 822, the Project 2016-02 standard drafting team (SDT) drafted Reliability Standard CIP-012-1 to require Responsible Entities to implement controls to protect sensitive Bulk Electric System (BES) data and communications links between BES Control Centers. Due to the sensitivity of the data being communicated between Control Centers, as defined in the Glossary of Terms Used in NERC Reliability Standards, the standard applies to all impact levels (i.e., high, medium, or low impact).

Although the Commission directed NERC to develop modifications to CIP-006, the SDT determined that modifications to CIP-006 would not be appropriate. There are differences between the plan(s) required to be developed and implemented for CIP-012-1 and the protection required in CIP-006-6 Requirement R1 Part 1.10. CIP-012-1 Requirements R1 and R2 protect the applicable data during transmission between two separate Control Centers.
CIP-006 Requirement R1 Part 1.10 protects nonprogrammable communication components within an Electronic Security Perimeter (ESP) but outside of a Physical Security Perimeter (PSP). The transmission of applicable data between Control Centers takes place outside of an ESP. Therefore, the protection contained in CIP-006-6 Requirement R1 Part 1.10 does not apply.

The SDT drafted requirements to provide Responsible Entities the latitude to protect the communication links, the data, or both to satisfy the security objective consistent with the capabilities of the Responsible Entity's operational environment.

1 As used in the CIP Standards, a Responsible Entity refers to the registered entities subject to the CIP Standards.

Requirement R1

R1. The Responsible Entity shall implement one or more documented plan(s) to mitigate the risk of unauthorized disclosure or modification of Real-time Assessment and Real-time monitoring data while being transmitted between any Control Centers. This requirement excludes oral communications.
The plan shall include: [Violation Risk Factor: Medium] [Time Horizon: Operations Planning]

1.1 Identification of security protection used to mitigate the risk of unauthorized disclosure or modification of Real-time Assessment and Real-time monitoring data while being transmitted between Control Centers;
1.2 Identification of where the Responsible Entity applied security protection for transmitting Real-time Assessment and Real-time monitoring data between Control Centers; and
1.3 If the Control Centers are owned or operated by different Responsible Entities, identification of the responsibilities of each Responsible Entity for applying security protection to the transmission of Real-time Assessment and Real-time monitoring data between those Control Centers.

General Considerations for Requirement R1

Requirement R1 focuses on implementing a documented plan to protect information that is critical to the Real-time operations of the Bulk Electric System while in transit between applicable Control Centers. The SDT does not intend for the listed order of the three requirement parts to convey any sequence or significance.

Overview of confidentiality and integrity

The SDT drafted CIP-012-1 to address confidentiality and integrity of Real-time Assessment and Real-time monitoring data. This is accomplished by drafting the requirement to mitigate the risk of unauthorized disclosure (confidentiality) or modification (integrity).
For this Standard, the SDT relied on the definitions of confidentiality and integrity as defined by the National Institute of Standards and Technology (NIST):

• Confidentiality is defined as "Preserving authorized restrictions on information access and disclosure, including means for protecting personal privacy and proprietary information."2
• Integrity is defined as "Guarding against improper information modification or destruction, and includes ensuring information non-repudiation and authenticity."3

The SDT asserts that the availability of this data is already required by the performance obligations of the Operating and Planning Reliability Standards. The SDT drafted CIP-012 to address the data while being transmitted. The SDT maintains that this data resides within BES Cyber Systems and, while at rest, is protected by CIP-003 through CIP-011.

2 NIST Special Publication 800-53A, Revision 4, page B-3
3 NIST Special Publication 800-53A, Revision 4, page B-6

Alignment with IRO and TOP standards

The SDT recognized the FERC reference to additional Reliability Standards and the responsibilities to protect the applicable data in accordance with NERC Reliability Standards TOP-003 and IRO-010. The SDT used these references to drive the identification of sensitive BES data, and chose to base the CIP-012 requirements on the Real-time data specification elements in these standards. This approach provides consistent scoping of identified data and does not require each entity to devise its own list or inventory of this data. Many entities are required to provide this data under agreements executed with their RC, BA, or TOP. The SDT asserts that typically the RC, BA, or TOP will identify all data requiring protection for CIP-012-1 through the TOP-003 and IRO-010 Reliability Standards. However, the SDT noted that there may be special instances during which Real-time Assessment or Real-time monitoring data is not identified by the RC, BA, or TOP.
This would include data that may be exchanged between a Responsible Entity's primary and backup Control Centers.

Identification of Where Security Protection is Applied by the Responsible Entity

The SDT noted the need for a Responsible Entity to identify where it will apply protection for applicable data. The SDT did not specify the location where CIP-012 security protection must be applied, to provide latitude for Responsible Entities to implement the security controls in a manner best fitting their individual circumstances. This latitude ensures entities can still take advantage of security measures, such as deep packet inspection implemented at or near the EAP when ESPs are present, while maintaining the capability to protect the applicable data being transmitted between Control Centers.

The SDT also recognizes that CIP-012 security protection may be applied to a Cyber Asset that is not an identified BES Cyber Asset or EACMS. The identification of the Cyber Asset as the location where security protection is applied does not expand the scope of Cyber Assets identified as applicable under Cyber Security Standards CIP-002 through CIP-011.

The SDT understands that in data exchanges between Control Centers, a single entity may not be responsible for both ends of the communication link. The SDT intends for a Responsible Entity to identify only where it applied security protection. The Responsible Entity should not be held accountable for identifying where a neighboring entity applied security protection at the neighboring entity's facility. A Responsible Entity, however, may decide to take responsibility for both ends of a communication link. For example, it may place a router in a neighboring entity's data center. In a scenario like this, where a Responsible Entity has taken responsibility for applying security protection on both ends of the communication link, the Responsible Entity should identify where it applied security protection at both ends of the link.
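As an illustrative sketch only (CIP-012-1 does not prescribe any particular mechanism), one way to meet the integrity half of the security objective is a keyed MAC applied at each end of the link, so the receiving Control Center can detect unauthorized modification in transit. The shared key and payload below are hypothetical.

```python
# Illustrative sketch only: detecting unauthorized modification of data in
# transit (the NIST "integrity" property cited earlier) with an HMAC.
# CIP-012-1 does not prescribe a mechanism; the shared key and the
# telemetry payload here are hypothetical.
import hashlib
import hmac

shared_key = b"hypothetical-key-agreed-between-control-centers"

def tag(payload: bytes) -> bytes:
    """Keyed MAC the sender attaches to each transmitted payload."""
    return hmac.new(shared_key, payload, hashlib.sha256).digest()

payload = b"real-time telemetry sample"
mac = tag(payload)  # sender transmits payload + mac

# Receiver recomputes the MAC and compares in constant time:
print(hmac.compare_digest(tag(payload), mac))                  # True: unmodified
print(hmac.compare_digest(tag(b"tampered telemetry"), mac))    # False: modified in transit
```

Confidentiality would additionally require encryption; a MAC alone only detects tampering, it does not hide the data.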
The SDT intends for there to be alignment between the identification of where security protection is applied in CIP-012 R1, Part 1.2 and the identification of Responsible Entity responsibilities in CIP-012 R1, Part 1.3.

Control Center Ownership

The requirements address protection for Real-time Assessment and Real-time Monitoring data while being transmitted between Control Centers owned by a single Responsible Entity. They also cover the applicable data transmitted between Control Centers owned by two or more separate Responsible Entities. Unlike protection between a single Responsible Entity's Control Centers, applying protection between Control Centers owned by more than one Responsible Entity requires additional coordination. The requirements do not explicitly require formal agreements between Responsible Entities partnering for protection of applicable data. It is strongly recommended, however, that these partnering entities develop agreements, or use existing ones, to define responsibilities to ensure the security objective is met. An example noted in FERC Order No. 822, Paragraph 59 is, "if several registered entities have joint responsibility for a cryptographic key management system used between their respective Control Centers, they should have the prerogative to come to a consensus on which organization administers that particular key management system."

As an example, the reference model below shows some of the data transmissions between Control Centers that a Responsible Entity should consider to be in-scope. The example does not include all possible scenarios. The solid green lines are in-scope communications.
The dashed red lines are out-of-scope communications. This reference model is an example and does not include all possible scenarios.

References

Here are several references to assist entities in developing plan(s) for protection of communication links:

• NIST Special Publication 800-53A, Revision 4: Assessing Security and Privacy Controls in Federal Information Systems and Organizations
• NIST Special Publication 800-82: Guide to Industrial Control Systems (ICS) Security
• NIST Special Publication 800-175B: Guideline for Using Cryptographic Standards in the Federal Government: Cryptographic Mechanisms
• NIST Special Publication 800-47: Security Guide for Interconnecting Information Technology Systems


SECURITY AND PRIVACY WHITE PAPER
Plantronics Manager Pro
Part 3725-86204-001
Version 08
November 2023

Introduction

This white paper addresses security and privacy-related information regarding Plantronics Manager Pro.

This paper also describes the security features and access controls in HP | Poly's processing of personally identifiable information or personal data ("personal data") and customer data in connection with the provisioning and delivery of Manager Pro, and the location and transfers of personal and other customer data. HP | Poly will use such data in a manner consistent with the HP Privacy Statement and this white paper, which may be updated from time to time. This white paper is supplemental to the HP Privacy Statement. The most current version of this white paper will be available on HP | Poly's website.

Manager Pro is an internet-based subscription service (i.e., Software as a Service, or SaaS) powered by Amazon Web Services (AWS), which provides the ability to manage, monitor, and configure a variety of Plantronics and Poly audio devices. It supports managing headsets, configuring policy, viewing policy compliance status, locking settings, managing by user groups (LDAP/manual), IT troubleshooting, and analysis reporting of assets, usage, conversation, and acoustics.

Note: Although Manager Pro is powered by AWS and used with Plantronics Hub, the scope of this white paper is limited to Manager Pro. Please see here for more details about Plantronics Hub, and for AWS security details see here.

Security at HP | Poly

Security is always a critical consideration for a cloud-based service such as Plantronics Manager Pro. HP | Poly's Information Security Management System (ISMS) has achieved ISO 27001:2013 certification. ISO/IEC 27001 is the most widely accepted international standard for information security best practices.
Product security at HP | Poly is managed through the HP | Poly Security Office (PSO), which oversees secure software development standards and guidelines. The HP | Poly Product Security Standards align with NIST Special Publication 800-53, ISO/IEC 27001:2013, and OWASP for application security. Guidelines, standards, and policies are implemented to provide our developers industry-approved methods for adhering to the HP | Poly Product Security Standards.

Secure Software Development Life Cycle

HP | Poly follows a secure software development life cycle (S-SDLC) with an emphasis on security throughout the product development process. Every phase of the development process ensures security by establishing security requirements alongside functional requirements as part of initial design. Architecture reviews, code reviews, internal penetration testing, and attack surface analysis are performed to verify the implementation.

The S-SDLC implemented by HP | Poly also includes a significant emphasis on risk analysis and vulnerability management. To increase the security posture of HP | Poly products, a defense-in-depth model is systematically incorporated through layered defenses. The principle of least privilege is always followed. Access is disabled or restricted to system services nonessential to standard operation. Standards-based Static Application Security Testing (SAST) and patch management are cornerstones of our S-SDLC.

Privacy by Design

HP | Poly implements internal policies and measures based on perceived risks which meet the principles of data protection by design and data protection by default.
Such measures consist of minimizing the processing of personal data, anonymizing personal data as soon as possible, transparently documenting the functions and processing of personal data, and providing features which enable the data subject to exercise any rights they may have. When developing, designing, selecting, and using applications, services, and products that are based on the processing of personal data or process personal data to fulfill their task, HP | Poly considers the right to data protection with due regard.

Security by Design

HP | Poly follows Security by Design principles throughout the product creation and delivery lifecycle, which includes considerations for confidentiality, integrity (data and systems), and availability. These extend to all systems that HP | Poly uses, both on-premises and in the cloud, as well as to the development, delivery, and support of HP | Poly products, cloud services, and managed services.

The foundational principles which serve as the basis of HP | Poly's security practices include:

1. Security is required, not optional
2. Secure by default, secure by design
3. Defense-in-depth
4. Understand and assess vulnerabilities and threats
5. Security testing and validation
6. Manage, monitor, and maintain security posture
7. End-to-end security: full lifecycle protection

Security Testing

Both static and dynamic vulnerability scanning as well as penetration testing are regularly performed for production releases and against our internal corporate network by both internal and external test teams. Cloud systems are managed by HP | Poly and are updated as needed. Patches are evaluated and applied in a timely fashion based on perceived risk as indicated by CVSSv3 scores.

Change Management

A formal change management process is followed by all teams at HP | Poly to minimize any impact on the services provided to the customers.
All changes implemented for Plantronics Manager Pro go through rigorous quality assurance testing where all functional and security requirements are verified. Once Quality Assurance approves the changes, the changes are pushed to a staging environment for UAT (User Acceptance Testing). Only after final approval from stakeholders are changes implemented in production. While emergency changes are processed on a much faster timeline, risk is evaluated and approvals are obtained from stakeholders prior to applying any changes in production.

Data Processing

HP | Poly does not access any customer's data except as required to enable the features provided by the service. If someone is an individual user and the purchase of Plantronics Manager Pro has been made by their employer as the customer, all of the privacy information relating to personal data in this white paper is subject to their employer's privacy policies as controller of such personal data. Personal data collected and the purposes for which it is collected are listed in the table below.

Purpose of Processing

In general, the data collected by Plantronics Manager Pro is directly related to the level of subscription.
For example, if you are not subscribed to the Call Quality and Analytics Suite, then the data required to populate these reports will not be collected.

While using Plantronics' mobile apps, it is requested to collect location information for the purpose of enabling features in the Plantronics Hub for Android/iOS mobile app, such as the BackTrack™ feature.

PLEASE NOTE: The ability to pseudonymize network username, end user display name, computer hostname, and domain was added as of version 3.13. It is only enabled by default for new tenants. Tenants that were created prior to 3.13 will need to enable the feature manually. Versions of Hub prior to 3.13 do not support this functionality, so in that case the pseudonymization is only performed on the server side.

Source of personal data: Tenant information (disclosed to AWS)
• Categories of PI processed: first/last name; email address; access events (login/logout)
• Business purpose for processing: authenticate and authorize tenant administrative access to the service; deliver the service. These events can be monitored by Poly at an individual (customer) level or in aggregate for understanding administrative behaviors.

Source of personal data: Plantronics Hub for Desktop in a Plantronics Manager Pro environment, version 3.9 and higher (disclosed to AWS)
• Client instance ID; system ID: Poly-assigned identifiers for ensuring a system and an instance of Hub can be associated to a user.
• Network username; end user display name; computer hostname; computer domain: associates a unique user to a device and the device to a system. Note: These data elements are pseudonymized by default beginning in version 3.13.
• LDAP user attributes, including LDAP group membership, username, account name, city, company, country, department, department number, division, employee type, office, state, zip code, display name, telephone number, street address, and title: LDAP attributes are entirely under the control of the administrator. These LDAP attributes are completely optional and are not collected by default. Your company may choose to enable these attributes in the Plantronics Hub collection criteria.
• Plantronics device information, including model ID, product ID, and serial number: required for proper update selection, troubleshooting, and reporting.

Source of personal data: Plantronics Hub for Desktop in a Plantronics Manager Pro environment, prior to version 3.9 (disclosed to AWS)
• End user email; local IP address; network IP address: versions of Hub prior to 3.9 may send these pieces of data. Plantronics Manager Pro does not keep or store this information. Update to the latest version to ensure this data is not sent.
• Call ID: required for the Radio Link Quality report (data sent only with a subscription to the Call Quality and Analytics Suite or better).

Source of personal data: Plantronics Hub for Mobile in a Plantronics Manager Pro environment (disclosed to AWS)
• Client instance ID: Poly-assigned identifier for ensuring an instance of Hub can be associated to a user.
• Network username; mobile device hostname and domain: associates a unique user to a device and the device to a system. Note: These data elements are pseudonymized by default beginning in version 3.13.
• Plantronics device information (model ID, …): required for proper update selection, troubleshooting, and reporting.

How Customer Data is Stored and Protected

All customer data is stored within the AWS data centers on which the service is deployed, using hardware-based AWS EBS volume encryption with Advanced Encryption Standard (AES-256) for data at rest. Customer data is automatically backed up nightly in digital form. Normal access controls of authorized users and data security policies are followed for all backup data. No physical backup media is used. All identifiable customer data is stored solely within the Plantronics Manager Pro system where the tenant resides.
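The per-tenant isolation described here commonly comes down to binding every read and write to a tenant identifier, so one tenant's records are never visible to another. The following is an illustrative sketch of that pattern with a hypothetical schema, not the actual Manager Pro data model:

```python
import sqlite3

# Hypothetical device table; each row carries the owning tenant's ID.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE devices (tenant_id TEXT, serial TEXT)")
conn.executemany(
    "INSERT INTO devices VALUES (?, ?)",
    [("tenant-a", "SN-001"), ("tenant-a", "SN-002"), ("tenant-b", "SN-101")],
)

def devices_for(tenant_id: str) -> list[str]:
    # Every query is parameterized by the caller's tenant ID, mirroring
    # the tenant-specific schemas and document IDs described in the text.
    rows = conn.execute(
        "SELECT serial FROM devices WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()
    return [r[0] for r in rows]

assert devices_for("tenant-a") == ["SN-001", "SN-002"]
assert devices_for("tenant-b") == ["SN-101"]
```

The same effect can be achieved with a separate schema per tenant, as the white paper describes for MySQL; the common point is that the tenant boundary is enforced by the data layer rather than by application convention alone.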
Tenant-specific MySQL data are stored in separate DB schemas. Mongo (NoSQL) documents are stored with tenant-specific IDs.

Data center locations are determined based on customer location and may include the USA, Ireland, and Australia. Backups of identifiable customer data are stored in the same AWS region where the tenant is hosted. Pseudonymized data may be stored in an aggregated reporting data warehouse located in a Virtual Private Cloud in an AWS data center in the USA. This pseudonymized data may be used for product improvement and testing purposes.

NOTE: The use of third-party apps or APIs should be carefully considered, as they may process data in or transfer data to different geographies.

For transferring personal data of EU customers to the US, HP | Poly uses an Intragroup Data Transfer Agreement incorporating the EU Standard Contractual Clauses as the transfer mechanism. We use a combination of administrative, physical, and logical security safeguards and continue to work on features to keep your information safe. Customer data may be accessed by HP | Poly as required to support the service, and access is limited to only those within the organization with the need to access data in order to support the service.

Data Portability

Certain data can be downloaded. For details, please see the Plantronics Manager Pro User Guide.

Data Deletion and Retention

All information collected from the customer is stored in the database with the tenant information configured as the access control mechanism. After a customer's subscription terminates or expires, HP | Poly will delete customer data within 30 days of cancellation of services. All encryption keys are destroyed at time of deletion. HP | Poly may retain customer data for as long as needed to provide the customer with any HP | Poly cloud services for which they have subscribed and for product improvement purposes.
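A 30-day post-cancellation deletion window like the one described above reduces to a simple date comparison when a purge job scans for expired tenants. This is a hedged sketch with hypothetical names, not HP | Poly's actual retention tooling:

```python
from datetime import date, timedelta

RETENTION_DAYS = 30  # delete customer data within 30 days of cancellation

def purge_due(cancelled_on: date, today: date) -> bool:
    """True once a cancelled tenant's data has aged past the retention window."""
    return today >= cancelled_on + timedelta(days=RETENTION_DAYS)

# A tenant cancelled on 1 November is eligible for purge from 1 December.
assert not purge_due(date(2023, 11, 1), date(2023, 11, 15))
assert purge_due(date(2023, 11, 1), date(2023, 12, 1))
```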
When a customer makes a request for deletion to ******************, HP | Poly will delete the requested data within 30 days, unless the data is required to be retained to provide the service to the customer.

Secure Deployment

Plantronics Manager Pro is an internet-based subscription service hosted entirely in AWS. The customer enterprise IT admin and user computers are required to make outbound connections to the Manager Pro tenant instance. All traffic transported between the customer's computers and Manager Pro is always encrypted. The customer admin is responsible for managing the Manager Pro tenant. The customer's administrator can access, view, and manage application audit logs for their tenant and run various reports if the reporting features are enabled, but is not able to access any tenant data directly, as it is stored in AWS using encryption for data at rest. Data can potentially be made visible via third-party partner apps, but only if the customer has provided access.

From an HP | Poly administrative perspective, administrators are required to use strong password authentication, and the HP | Poly DevOps team is required to use multi-factor authentication whenever logging into AWS to manage all deployments of the Manager Pro service. Certificate-based SSH is used to access AWS instances supporting Plantronics Manager Pro. SSH certificates are rotated on a regular basis. Any remote access required by HP | Poly is directly into the AWS instance, not the customer's internal network.

Server Access and Data Security

Plantronics Manager Pro is hosted on AWS. Only authorized staff members with proper access permissions have access to the production servers. HP | Poly also has implemented technical and physical controls designed to prevent unauthorized access to or disclosure of customer content.
In addition, we have systems, procedures, and policies in place to prevent unauthorized access to customer data and content by HP | Poly employees.

Cryptographic Security

While processing all Plantronics Manager Pro data, industry-standard HTTPS over TLS 1.2 is used for data encryption in transit, and hardware-based AWS EBS volume encryption with Advanced Encryption Standard (AES-256) for data at rest. To protect user passwords, the standard bcrypt algorithm is used to securely hash and salt passwords before they are stored in a database.

Key Management

Encryption keys are managed by the AWS Key Management Service (KMS). There is no single super-user key capable of unlocking all data. Each individual region uses separate keys for live and backup data. Customers are not able to host, control, or maintain encryption keys themselves. For more details, please see here.

Password Management

Single sign-on accounts will follow their own corporate password policies. From an HP | Poly administrative perspective, HP | Poly admin accounts require strong password authentication. Local accounts on Plantronics Manager Pro must be configured manually by the customer IT administrator.

Authentication

Plantronics Manager Pro supports the integration of enterprise authentication providers via SAML 2.0. Once configured, Manager Pro can be accessed by selecting the single sign-on (SSO) button in the Manager Pro login dialog (service provider-initiated) or by selecting Manager Pro from your list of identity provider applications (IDP-initiated). Both IDP-initiated and SP-initiated SSO via SAML 2.0 are supported. Supported IDPs that have been tested and confirmed include Ping and ADFS. Other IDPs may work but have not been tested and therefore are not officially supported.
Contact your HP | Poly account representative or your Plantronics reseller to request support for a specific IDP.

From an HP | Poly administrative perspective, HP | Poly administrators are required to use multi-factor authentication as well as strong passwords. However, two-factor authentication is not required or supported for customer use.

Disaster Recovery and Business Continuity

Plantronics Manager Pro is architected to provide high reliability, resiliency, and security. We test our backup and restore process at regular intervals. Our infrastructure runs on fault-tolerant systems to protect the service from failures of individual servers or even entire data centers. The HP | Poly operations team tests disaster recovery measures regularly, and an on-call team is ready to resolve any incidents in the event of such an occurrence. Additionally, HP | Poly administrators manage and maintain the service under the Plantronics Manager Pro Standard Operating Guidelines. Customer data is stored across multiple AWS availability zones within region-specific data centers. When a system outage occurs, we will post a notification on the Plantronics Manager Pro System Status page.

Security Incident Response

The HP | Poly Security Office (PSO) promptly investigates reported anomalies and suspected security breaches on an enterprise-wide level. You may contact the PSO directly at **************************. The PSO team works proactively with customers, independent security researchers, consultants, industry organizations, and other suppliers to identify possible security issues with HP | Poly products and networks. HP | Poly security advisories and bulletins can be found on the HP Customer Support website.

Subprocessors

HP | Poly uses certain subprocessors to assist in providing our products and services. A subprocessor is a third-party data processor who, on behalf of HP | Poly, processes customer data.
Prior to engaging a subprocessor, HP | Poly executes an agreement with the subprocessor that is in accordance with applicable data protection laws. The subprocessor list here identifies HP | Poly's authorized subprocessors and includes their name, purpose, location, and website. For questions, please contact ******************.

Prior to engagement, suppliers that may process data on behalf of HP | Poly must undergo a privacy and security assessment. The assessment process is designed to identify deficiencies in privacy practices or security gaps and make recommendations for reduction of risk. Suppliers that cannot meet the security requirements are disqualified.

Additional Resources

To learn more about Plantronics Manager Pro, visit our product website.

Disclaimer

This white paper is provided for informational purposes only and does not convey any legal rights to any intellectual property in any HP | Poly product. You may copy and use this paper for your internal reference purposes only. HP | POLY MAKES NO WARRANTIES, EXPRESS OR IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS WHITE PAPER. THIS WHITE PAPER IS PROVIDED "AS IS" AND MAY BE UPDATED BY HP | POLY FROM TIME TO TIME. To review the most current version of this white paper, please visit our website.

© 2023 HP, Inc. All rights reserved. Poly and the propeller design are trademarks of HP, Inc. The Bluetooth trademark is owned by Bluetooth SIG, Inc., and any use of the mark by HP, Inc. is under license. All other trademarks are the property of their respective owners.


DSS Professional
Highly Available Security System for Enterprises

Features

Scalable Design, Easy to Grow
With distributed deployment, you can easily expand the supported channels to 20,000 and central storage capacity to 4 PB. The multi-site function allows you to incorporate multiple DSS platforms into one, and conveniently show their information on one PC client. You can access live and recorded videos, real-time and historical events, and more.

AI-Powered Applications, Proactive Security
DSS Professional integrates various AI capabilities that devices have, such as face recognition, automatic number plate recognition, and video metadata. You will be notified immediately when the target you are interested in appears, allowing you or security personnel to take necessary security measures.

Highly Available Technology, More Stable
With hot standby and N+M redundancy, DSS Professional ensures that your business will not be interrupted by failed servers.

Customized Services, Enhanced Competitiveness
We offer services for you to build DSS Professional into your own platform, allowing it to fully suit your needs and give you a competitive edge in the market.

Introduction

Dahua Security System (DSS) Professional is designed for centralized security management. It enhances hardware performance and provides centralized video monitoring, access control, video intercom, alarm controller, POS, radar, and AI features such as face recognition, automatic number plate recognition, and video metadata.

Whether you are a small business with a few cameras, or a global business spread across the globe with over 20,000 cameras, DSS Professional is the right solution for you. Even if your needs change in the future, you can easily scale, upgrade, or add functionalities to DSS Professional so that your needs are met.
Build your security management system on a solid foundation with DSS Professional.

Live View
With its easy-to-use live view, you can both customize and control how you view videos in real time. The layout can also be configured to display videos in different sizes, enabling you to give priority to important areas by placing them in larger windows. You can also remotely control certain devices to perform various actions, such as talking to people through the camera, and unlocking the barrier of a turnstile to grant access to people. If an emergency occurs, manual recording is just a click away, so that you can quickly save that particular part of the video for evidence.

Playback
The playback function allows you to play recorded videos stored on the server and devices in multiple windows. To help you efficiently wade through tons of videos, you can play them 64X faster than the normal speed, skipping parts that you are not interested in, or you can slow them down to 1/64X to focus on important sections. To control the data in the videos, you can add tags to mark relevant content, and you can even lock them to prevent them from being overwritten when the disk space is full. The filter function can also be very helpful when you only need to deal with a specific type of video, or a type of target that appeared in one or more areas.

Video Wall
Video wall is used to display videos on a large screen that consists of many smaller screens. Highly customizable, you can not only configure the layout of the video wall, but you can also display recorded videos and real-time videos to zero in on important details in the video.
With the task function, you can schedule videos from different channels to be displayed on the video wall at specified times or in a loop.

System Architecture
[System architecture diagram: the DSS Professional server connects over the internet and network switches to IP cameras, NVRs, IVSS, IPSAN storage, video wall decoders, keyboards, access controllers, card readers, elevator controllers, video intercom, ANPR barriers, LED displays, and PC and mobile clients.]

Map
The map is a very useful function that allows you to keep track of devices and events through their location information. With it, you can mark a device and immediately know the location of an event when the device triggers an alarm and flashes red on the map. You can also add submaps to different areas. For example, a plan view of a public square can be added to a map to reveal the exact location of people who are inside the public square.

Group Talk
The real-time locations of MPT devices are shown on the map, making it easy for dispatchers to effectively send officers and resources to address issues such as a burglar or duress alarm going off in a building. Dispatchers can start a group talk and engage in a real-time conversation with the officers who were assigned the task to efficiently guide them through the process.

DeepXplore
Powered by AI technology, you can easily search for targets, look for records on them, and even generate tracks of their movement to observe their whereabouts by setting simple search conditions. To gain an overview of the target, you can organize information on them into a case and generate a report.

Event Management
You can monitor and process over 200 types of alarms right from the event center, while it continuously generates statistics. To give you a clear picture of what is happening in your area, the alarm center also displays a variety of useful information, such as the number of alarms that were processed and the type of alarms that are triggered most frequently.
Highly flexible, you also have a selection of predefined alarm types available to you, and the option to not only create your own alarm, but to also manually trigger it to take snapshots and send emails for important events.

Maintenance Center
By just visiting one page, you can stay up to date with information on alerts, devices, servers, and more to instantly recognize issues such as offline devices and abnormal servers. In the maintenance center, switches can also be conveniently configured, and details, such as their network topology, can also be viewed. Scheduled reports are also sent based on the information collected to give you a full picture of how your system is running. Updating is also a breeze, as you can easily update multiple devices in batches when new versions are available.

Access Management

Access Control
Doors and lifts in different areas can all be effectively controlled for added security. A zone-based management model is used, which maintains maps for each zone to make it easy for you to locate access points. Through the use of access rules, you can quickly grant and deny access to people with great efficiency, strengthening the security of each area. From the access panel, you can also view and control the channels of doors and lifts at the same time across different zones to manage access.

Video Intercom
All video intercom devices can be managed directly through one easy-to-use interface that offers two-way communication and remote access control. Through the interface, you can secure access to your premises, and receive calls and emergency reports directly from people on-site. Building management is also very convenient, as you can send group notices to all the indoor monitors, keeping people informed of important events, such as scheduled power outages.

Visitor
DSS Professional offers a complete process to manage visitors, including appointment, registration, access permission authorization, and ending the visit with all permissions canceled.
A complete, detailed record of all visits is available for your review at any time.

Intelligent Analysis
To help build your profits and strengthen your services, the platform provides invaluable information on people on your premises by performing a variety of intelligent analyses and generating heat maps. Through it, you can know the number of people in an area at any given time, where they frequent the most, and precisely when the highest peaks in numbers occur.

Parking Lot Management
From just one platform, you can remotely manage all the devices in your parking lots, such as parking space detectors and ANPR devices, to guide vehicles in an orderly fashion. The visualization function makes it easier for you to drag and drop devices on the visual map of your parking lots. The platform also offers a vehicle search system for vehicle owners to use when they are leaving, to help them quickly locate their transport. Insightful information is also provided in the form of statistics on an easy-to-use dashboard, keeping you up to date on key activities taking place in your parking lots to help you effectively manage them.

Intelligent Inspection
Both your properties and equipment are effectively monitored through our user-friendly platform. The settings can even be customized to meet your particular needs for item inspection. Inspection plans can also be scheduled to capture images and monitor temperatures with HD cameras and thermal imaging technology, to help you quickly identify equipment failures and safety hazards when detected. This type of intelligent inspection greatly improves upon manual methods, increasing the accuracy and efficiency of inspection while reducing labor cost.

Synthesis
DSS Professional is friendly with other systems in your infrastructure. By developing bridges, linkage actions can be flexibly configured on DSS Professional based on the events that are triggered on other platforms.
What's more, DSS Professional can synchronize access control records with the databases of other platforms.

Performance Specification

Server Specification
The following specifications are obtained on servers with the recommended system requirements. Where two values are given, they correspond to a single server and to multiple servers (distributed deployment), respectively.

Media Transmission Server
• Total incoming bandwidth: 600 Mbps / 6,000 Mbps
• Incoming video bandwidth: 600 Mbps / 6,000 Mbps
• Incoming picture bandwidth: 200 Mbps / 2,000 Mbps
• Total outgoing bandwidth: 600 Mbps / 6,000 Mbps
• Outgoing video bandwidth: 600 Mbps / 6,000 Mbps
• Outgoing picture bandwidth: 200 Mbps / 2,000 Mbps

Playback, Storage and Download
• Total storage bandwidth: 600 Mbps / 6,000 Mbps
• Video storage bandwidth: 600 Mbps / 6,000 Mbps
• Picture storage bandwidth: 200 Mbps / 2,000 Mbps
• Prerecording bandwidth for alarm recordings: 400 Mbps / 4,000 Mbps
• Maximum capacity of central storage (IPSAN): 400 TB / 4 PB

Event ④
• Total events ⑤: 300 per second / 600 per second
• Storage of events or alarms without pictures ⑥: 300 per second / 600 per second
• Alarms with snapshots (stored on devices): 300 per second / 600 per second
• Access control events: 300 per second / 600 per second
• Combined events: 100 per second

① All the devices together cannot contain more than 10 million faces when the number of faces in the watch lists is multiplied by the number of devices. For example, if a face watch list with 200,000 faces is sent to 40 devices, you can only send another face watch list with 100,000 faces to 20 devices. Or, you can send a list with 50,000 faces to 20 devices and another list with 100,000 faces to 10 devices.
② The maximum number of devices, including IPC, NVR, and ITC, cannot exceed 2,000 for a single server, and 20,000 for multiple servers.
③ When adding video channels and video devices, such as IPC, NVR, and ITC, to the platform, you cannot add more than 1,000 devices and 2,000 channels for a single server, and 10,000 devices and 20,000 channels for multiple servers.
④ These values represent the maximum number of events that can be triggered at the same time. The numbers are measured based on the peak concurrency tests that were carried out 3 times a day. Each test lasted 20 minutes, with 30% of the peak concurrency being applied to the remaining day.
⑤ The maximum number of events that can be triggered at the same time largely depends on the concurrent write capability of the database.
⑥ For events with snapshots, you must take into account the ability of disks and servers to concurrently write images at the same time. For servers it is 200 Mbps.

DSS Agile 8

DSS Mobile Client Main Functions

Live View
Even when you are away from your computer, you can ensure the safety of your area right on DSS Agile. You can watch real-time videos remotely from up to 16 channels at the same time, with 3 stream types for you to choose from according to the status of your mobile network. PTZ control is also supported so that you can cover most of the area. When anything of interest happens, you can take snapshots or recordings as evidence that are stored on your phone, or send a voice message to deter unwanted activities.

Playback
Videos stored on devices or the server can both be played up to 8X faster or 1/8X slower on DSS Agile. You can also use the manual recording function to record important content and save it to your phone.

Access Control
With DSS Agile, you can remotely monitor and operate all access control devices. For example, you can open a door for someone who has a proven identity, or set a door to be always closed so that no one can access it.

Target Tracking
For suspicious activities, you can locate targets directly in DSS Agile by searching for face recognition records from a period, uploading a face image of a specific target, or searching for capture records of people, non-motor vehicles, and motor vehicles by features.

Event
You can receive and process various types of alarms.
You can also receive alarms when DSS Agile is not running with a subscription button.Video IntercomYou can make calls to and receive calls from master stations, indoor monitors and door stations. After subscribing to offline calls, you will still receive calls even when the App is not running. Also, a complete record of incoming and outgoing calls ensure that you will not miss any important message.Alarm Control PanelAlarm controllers can be remotely operated through DSS Agile to protect areas. You can arm and disarm areas, bypass and isolate zones, display the status of areas and zones in real-time, and filter the status of areas and zones to display information you are specifically interested in.File ManagementSnapshots and videos stored on devices or the server can be managed by deleting them, exporting them to albums, and more. Video downloads can be automatically and manually paused, saving you time from redownloading them when there are connection issues.DSS Agile VDPVisitor ManagementYou can easily manage visitors by registering their information and generating visitor passes with necessary access permissions. When they arrive, they can use the passes to gain access to where you are. DSS Agile VDP will log when visitors begin and end their visits.Intercom MonitoringWhen guests arrive, they can call you on the door station or you can verify their identities through the live video. After confirming they are who you are expecting, you can remotely open the door for them directly on DSS Agile VDP. If you spot any unwanted activities, tap and call the management center to report an emergency.Message CenterThe unlock records and alarm messages on the indoor monitor are fully accessible on DSS Agile VDP, allowing you to identify potential threats and ensure the safety of your residence.1112© 2022 Dahua. All rights reserved. 
Design and specifications are subject to change without notice.The images, specifications and information mentioned in the document are only for reference, and might differ from the actual product.Rev 002.000DSS Mobile Client Requirements。


Security Requirements Model for Grid Data Management Systems *

Syed Naqvi (1,2), Philippe Massonet (1), Alvaro Arenas (2)

(1) Centre of Excellence in Information and Communication Technologies (CETIC), Belgium

{syed.naqvi, philippe.massonet}@cetic.be

(2) CCLRC Rutherford Appleton Laboratory, United Kingdom

{s.naqvi, a.e.arenas}@rl.ac.uk

Abstract. In this paper, we present our ongoing work on a policy-driven approach to the security requirements of grid data management systems (GDMS). We analyse the security functionalities of existing GDMS to identify the shortcomings that our work should address. We then identify a comprehensive set of security requirements for GDMS and present our proposed Security Requirements Model. The derivation of security policies from security requirements, and their subsequent refinement, is also presented. Our approach of addressing modelling issues by providing requirements for expressing security-related quality of service is a key step towards turning storage systems into knowledge representation systems.

Keywords: Grid security, requirements analysis, distributed data management.

1 Introduction

Grids enable access to, and the sharing of, geographically distributed heterogeneous resources such as computation, data and information sources, sensors and instruments, for solving large-scale or complex problems. One of the key Grid applications is emergency response. In such applications, Grids become a critical information infrastructure providing essential information to emergency departments in order to minimise the adverse impacts of potential tragedies. For instance, Grids may be useful in preventing floods, which can be achieved by integrating data from various sources (networks of sensors in a river basin, weather prediction centres, historical flood datasets, topography, population and land use data) for processing in sophisticated numerical flood models. The massive data sets that would need to be accessed and processed would require huge network facilities, data storage, and processing power to deliver accurate predictions. This paper focuses on one element of such critical infrastructure: grid data management systems (GDMS).

* This research work is supported by the European Network of Excellence CoreGRID (project reference number 004265). The CoreGRID webpage is located at www.coregrid.net.

We have carried out a formal analysis of security requirements for semantic grid services to explore how these requirements can be expressed as metadata associated with these services. The analysis also explores the negotiation of QoS parameters in order to reach Service Level Agreements (SLA). This work is being used to gridify the FileStamp distributed file system, which currently uses peer-to-peer technology for the exchange of data resources across distributed sites. In this paper, we present a case study of FileStamp to explain our security requirements model for GDMS.

This paper is organized as follows: an overview of the security functionalities of existing GDMS is given in Section 2. The FileStamp distributed file system is presented in Section 3. Section 4 illustrates our proposed security requirements model. Our approach vis-à-vis the related work is discussed in Section 5. Finally, conclusions are drawn in Section 6, along with an outline of future directions.
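To make the idea concrete, the following sketch shows how security requirements might be attached to a service as metadata and then matched against a provider's offer to reach an SLA. All names, fields, and the negotiation rule here are illustrative assumptions; the paper does not fix a concrete schema, and this is not part of FileStamp or CoreGRID.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SecurityQoS:
    """Security-related QoS parameters carried as service metadata.

    Field names are illustrative, not a standardized schema.
    """
    encryption: frozenset      # acceptable cipher suites, e.g. {"AES-256"}
    authentication: frozenset  # acceptable mechanisms, e.g. {"X.509", "Kerberos"}
    min_integrity: int         # required integrity level, 0 (none) .. 3 (strict)


def negotiate_sla(required: SecurityQoS, offered: SecurityQoS):
    """Return the agreed parameters if the offer satisfies the requirements,
    otherwise None (negotiation fails)."""
    ciphers = required.encryption & offered.encryption
    mechanisms = required.authentication & offered.authentication
    if not ciphers or not mechanisms or offered.min_integrity < required.min_integrity:
        return None
    return {
        "encryption": sorted(ciphers),
        "authentication": sorted(mechanisms),
        "integrity": required.min_integrity,
    }


# A client requiring X.509 and integrity level 2 against a stricter provider:
client = SecurityQoS(frozenset({"AES-256", "AES-128"}), frozenset({"X.509"}), 2)
server = SecurityQoS(frozenset({"AES-256"}), frozenset({"X.509", "Kerberos"}), 3)
sla = negotiate_sla(client, server)
```

The negotiation here is a simple intersection of capabilities; a real SLA negotiation would likely involve several rounds of offers and counter-offers.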

2 Overview of Security Functionalities in GDMS

Grid data management systems [1] offer a common view of storage resources distributed over several administrative domains. The storage resources may be not only disks, but also higher-level abstractions such as files, or even file systems or databases. In this section, an overview of the security functionalities of various existing GDMS is presented:
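The defining property of a GDMS (one logical namespace over storage resources in several administrative domains, each enforcing its own policy) can be sketched as follows. The class, catalog layout, and ACL model are our assumptions for illustration, not any particular system's API.

```python
class GridDataManager:
    """Minimal sketch of a GDMS: a single logical namespace over replicas
    held in several administrative domains (illustrative only)."""

    def __init__(self):
        # logical name -> list of (domain, physical_url) replicas
        self._catalog = {}
        # domain -> set of authorized user identities (stand-in for real policy)
        self._domain_acl = {}

    def register_replica(self, logical_name, domain, physical_url):
        self._catalog.setdefault(logical_name, []).append((domain, physical_url))

    def grant(self, domain, user):
        self._domain_acl.setdefault(domain, set()).add(user)

    def resolve(self, logical_name, user):
        """Return the physical locations the user may access; each domain
        applies its own authorization policy independently."""
        replicas = self._catalog.get(logical_name, [])
        return [url for domain, url in replicas
                if user in self._domain_acl.get(domain, set())]


gdms = GridDataManager()
gdms.register_replica("flood/levels.dat", "cetic.be", "gsiftp://a/levels.dat")
gdms.register_replica("flood/levels.dat", "rl.ac.uk", "gsiftp://b/levels.dat")
gdms.grant("rl.ac.uk", "cn=alice")
# alice sees only the replica in the domain that authorized her
print(gdms.resolve("flood/levels.dat", "cn=alice"))
```

The point of the sketch is that authorization is evaluated per domain, so the same logical name can resolve to different physical sets for different users.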

2.1 ARMADA

Using the Armada framework [2], grid applications access remote data sets by sending data requests through a graph of distributed application objects. The graph is called an armada and the objects are called ships.

Armada provides authentication and authorization services through a security manager known as the harbor master. Before installing an untrusted ship on a harbor, the harbor master authenticates the client wishing to install the ship and authorizes the use of the host resources based on the identity of the client and on the security policies set by the host.

The harbor master uses authentication mechanisms, provided by the host machine, to identify clients that wish to install ships on the harbor. The host provides mechanisms that implement the security policies set by the host administrator. The options for implementing authentication include using SSH or the Kerberos authentication service.

The most common approaches used to protect system resources from untrusted code are hardware protection (e.g., running the untrusted code in a separate Unix process), software fault isolation (SFI) [3], verification of assembly code [4-5], and use of a type-safe language (e.g., Java or Modula-3 [6]). Hardware protection requires untrusted code to run in a separate address space from the harbor. While this clearly protects the harbor from the client code, the overhead of communicating through nor-
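The harbor master flow described above (authenticate the client, then check host-set policy before installing a ship) can be sketched as follows. This is a toy model: the class, the credential table, and the policy format are our assumptions, not Armada's actual interfaces.

```python
class HarborMaster:
    """Toy model of Armada's harbor master: authenticate the client, then
    authorize ship installation against the host's policy."""

    def __init__(self, host_policy, authenticator):
        # host_policy: verified identity -> set of resources the client may use
        self._policy = host_policy
        # authenticator: credential -> verified identity or None
        # (stands in for the host-provided SSH or Kerberos mechanism)
        self._authenticate = authenticator

    def install_ship(self, credential, requested_resources):
        identity = self._authenticate(credential)
        if identity is None:
            raise PermissionError("authentication failed")
        allowed = self._policy.get(identity, set())
        if not requested_resources <= allowed:
            denied = requested_resources - allowed
            raise PermissionError(f"{identity} not authorized for {denied}")
        return f"ship installed for {identity}"


# Toy credential table standing in for a Kerberos/SSH authentication service.
users = {"ticket-42": "alice"}
hm = HarborMaster({"alice": {"cpu", "disk"}}, users.get)
print(hm.install_ship("ticket-42", {"disk"}))
```

Note that the two checks are deliberately separate, mirroring the paper's description: authentication establishes who the client is, while authorization applies the policies set by the host administrator to that identity.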
