CDMC Key Controls

  • Data Control Compliance
  • Ownership Field
  • Authoritative Data Sources and Provisioning Points
  • Data Sovereignty and Cross-Border Movement
  • Cataloging
  • Classification
  • Entitlements and Access for Sensitive Data
  • Data Consumption Purpose
  • Security Controls
  • Data Protection Impact Assessments
  • Data Retention, Archiving and Purging
  • Data Quality Measurement
  • Cost Metrics
  • Data Lineage

This document is a constituent part of the Cloud Data Management Capabilities (CDMC™) framework (“the Framework”) and is provided as a free license to any organization registered with EDM Council Inc. (“EDM Council”) as a recipient (“Recipient”) of the document. While this is a Free License available to both members and non-members of the EDM Council, acceptance of the CDMC Terms of Use is required to protect the Recipient’s use of proprietary EDMC property and to notify the Recipient of future updates to the Framework.

CDMC™ and all related materials are the sole property of EDM Council Inc. All rights, titles and interests therein are vested in the EDM Council. The Framework and related material may be used freely by the Recipient for their own internal purposes. It may only be distributed beyond the Recipient’s organization with prior written authorization of EDM Council. The Framework may only be used by the Recipient for commercial purposes or external assessments if the Recipient’s organization has entered into a separate licensing and Authorized Partner Agreement with EDM Council governing the terms for such use.

Please accept these CDMC™ Terms of Use by registering at
https://app.smartsheet.com/b/form/b3c66d25074f4422be037da82e64b65f

The Cloud Data Management Capabilities (CDMC™) framework defines the capabilities necessary to manage and control data in the cloud effectively. Its creation represents an important milestone in the global adoption of industry best practices for data management. The framework has been produced by the CDMC Work Group that was formed by the EDM Council in May 2020 with over 200 participants from over 70 organizations, including major consumers and providers of cloud services and technology in addition to leading advisory firms. The full framework will be published in September 2021.

This supplementary document is intended primarily for cloud service and technology providers. It summarizes and elaborates on the key controls required by organizations, equivalent to those implemented in their on-premises environments. It also highlights opportunities to support these controls with automation. Support of the controls and automation will streamline the adoption of cloud services.

The framework addresses the control of data in cloud, multi-cloud and hybrid-cloud environments. Controls that address technology risks in other areas such as software development and service management are not within the scope of the document.

Many of the controls refer to being applicable to sensitive data. Each organization will have a scheme for classifying their sensitive and important data and will determine the specific classifications to which the controls must be applied. Examples of classifications that may be in scope include:

  • Personal Information (PI) / Sensitive Personal Data
  • Personally Identifiable Information (PII)
  • Client Identifiable Information
  • Material Non-Public Information (MNPI)
  • Specific Information Sensitivity Classifications (such as ‘Highly Restricted’ and ‘Confidential’)
  • Critical Data Elements used for important business processes (including regulatory reporting)
  • Licensed data

CDMC Key Controls

Control 1: Data Control Compliance

Component

1.0 Governance & Accountability

Capability

1.1 Cloud Data Management Business Cases are Defined and Governed

Control Description

Data Control Compliance must be monitored for all data assets containing sensitive data via metrics and automated notifications. The metrics must be calculated from the extent of implementation of the CDMC Key Controls specified in subsequent sections.

Risks Addressed

An organization does not set or achieve its value and risk mitigation goals for cloud management. Data is uncontrolled and consequently is at risk of not being fit-for-purpose, late, missing, corrupted, leaked and in contravention of data sharing and retention legislation.

Drivers / Requirements

Organizations are required to demonstrate adequate control of data being created in or migrated to the cloud.

Legacy / On-Premises Challenges

Significant tranches of on-premises data do not have data management applied to them and consequently do not realize maximum value for the organization or can potentially pose an unquantified risk.

When moving data to a new cloud environment, it is critical that organizations actively assess and apply the appropriate levels of data management to achieve their stated outcomes, apply controls to achieve this and measure compliance and value realization with those outcomes.

Automation Opportunities
  • Where evidence of the existence of controls can be gathered automatically (including those controls referenced in subsequent sections of this document), the Data Control Compliance metrics may be calculated automatically.
  • Where the metrics fall below specified thresholds, alerts should be generated with automated notification to specified stakeholders.
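As an illustration, the metric calculation and alerting above might be sketched as follows. The control names, threshold value and alert format are assumptions for the example, not part of the CDMC specification:

```python
def compliance_metric(control_evidence):
    """Fraction of CDMC Key Controls for which implementation evidence
    was gathered automatically for a data asset."""
    return sum(1 for ok in control_evidence.values() if ok) / len(control_evidence)

def compliance_alerts(control_evidence, threshold=0.9):
    """Return alert messages for stakeholder notification when the
    compliance metric falls below a (hypothetical) policy threshold."""
    score = compliance_metric(control_evidence)
    if score < threshold:
        failing = sorted(k for k, ok in control_evidence.items() if not ok)
        return [f"Compliance at {score:.0%} (threshold {threshold:.0%}); "
                f"missing evidence: {', '.join(failing)}"]
    return []

# Example: evidence automatically gathered for four controls on one asset.
evidence = {"cataloging": True, "classification": True,
            "ownership": False, "retention": True}
alerts = compliance_alerts(evidence, threshold=0.9)
```

In practice the evidence dictionary would be populated from the automated checks described under the other controls in this document.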
Benefits

Cloud data is demonstrably controlled and supports the cloud data management business cases and risk mitigation requirements of the organization.

Summary

Organizations can demonstrate an awareness of the intended outcomes of cloud data management and focus on quantifiable value realization and risk mitigation.


Control 2: Ownership Field

Component

1.0 Governance & Accountability

Capability

1.2 Data Ownership is Established for both Migrated and Cloud-generated Data

Control Description

The Ownership field in a data catalog must be populated for all sensitive data or otherwise reported to a defined workflow.

Risks Addressed

Accountability for decisions on and control of sensitive data is not defined. Sensitive data is not effectively owned and consequently is at risk of not being fit for purpose, late, missing, corrupted, leaked and in contravention of data sharing and retention legislation.

Drivers / Requirements

Organizations have policies that require explicit ownership of data that is classified as sensitive.

Legacy / On-Premises Challenges

Significant amounts of legacy data do not have ownership recorded.

Automation Opportunities

The Ownership field in a data catalog must eventually be populated for sensitive data that is migrated to or generated within the cloud.

  • Automatically trigger workflows to enforce population when new data assets are created.
  • Provide the capability to automate workflows to review and update ownership periodically for sensitive data or when an owner leaves the organization or moves within the organization.
  • Automatically trigger escalation workflows to address population gaps.
  • Implement ownership recommendations driven by the nature of data and ownership of similar data.
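A minimal sketch of how population gaps might be detected and escalated; the catalog field names and the notification callback are illustrative assumptions:

```python
def ownership_gaps(catalog):
    """Identify sensitive catalog entries whose Ownership field is empty."""
    return [entry["name"] for entry in catalog
            if entry.get("sensitive") and not entry.get("owner")]

def escalate(catalog, notify):
    """Trigger a (hypothetical) escalation workflow for each gap found."""
    for name in ownership_gaps(catalog):
        notify(f"Ownership missing for sensitive data asset '{name}'")

# Example catalog with one unowned sensitive asset.
catalog = [
    {"name": "trades", "sensitive": True, "owner": "alice"},
    {"name": "clients", "sensitive": True, "owner": None},
    {"name": "reference", "sensitive": False, "owner": None},
]
messages = []
escalate(catalog, messages.append)
```

A real implementation would run this on asset creation and on a schedule, routing messages into the organization's workflow tooling.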
Benefits

Increased compliance with data ownership policy.

Summary

Infrastructure that supports the completion of data ownership information for sensitive data drives policy compliance.


Control 3: Authoritative Data Sources and Provisioning Points

Component

1.0 Governance & Accountability

Capability

1.3 Data Sourcing and Consumption are Governed and Supported by Automation

Control Description

A register of Authoritative Data Sources and Provisioning Points must be populated for all data assets containing sensitive data or otherwise must be reported to a defined workflow.

Risks Addressed

Architectural strategy for an organization is not fully defined. Authorized sources have not been defined or suitably controlled.

Data is duplicative and/or contradictory, resulting in process breaks, architectural inefficiencies, increased cost of ownership and accentuated operational risks across all dependent business processes.

Drivers / Requirements

An important responsibility of a data owner is to designate the authoritative data sources and provisioning points for a specific scope of data.

Policy controls require a data asset to be identified as authoritative or not when it is shared.

Legacy / On-Premises Challenges

Identification and remediation of the use of non-authoritative sources or copies of data require significant manual effort.

Automation Opportunities
  • Automatically enforce the labeling of sources of data as authoritative or non-authoritative.
  • Control the consumption of sensitive data from sources that are non-authoritative.
  • Default the labeling of sources to non-authoritative until reviewed and updated by the data owner.
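These opportunities might be sketched as follows; the register API and field names are hypothetical:

```python
class SourceRegister:
    """Minimal register of data sources. Labels default to
    non-authoritative until reviewed by the data owner."""
    def __init__(self):
        self.sources = {}

    def add(self, name):
        # Default: non-authoritative and not yet reviewed.
        self.sources[name] = {"authoritative": False, "reviewed": False}

    def mark_authoritative(self, name):
        # Data owner review promotes the source to authoritative.
        self.sources[name] = {"authoritative": True, "reviewed": True}

    def may_consume(self, name, sensitive):
        """Sensitive data may only be consumed from authoritative sources."""
        src = self.sources[name]
        return src["authoritative"] or not sensitive

reg = SourceRegister()
reg.add("trade_feed_copy")     # stays non-authoritative by default
reg.add("trade_feed")
reg.mark_authoritative("trade_feed")
```

Blocked consumption attempts could additionally feed a retirement workflow for the non-authoritative copies.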
Benefits

Infrastructure that can run automated workflows to identify and retire non-authoritative data offers cost savings by eliminating the manual effort involved in this work.

Summary

Data assets automatically tagged as authoritative or non-authoritative will greatly simplify policy compliance and eliminate the manual costs of controlling data sourcing and consumption.


Control 4: Data Sovereignty and Cross-Border Movement

Component

1.0 Governance & Accountability

Capability

1.4 Data Sovereignty and Cross-Border Data Movement are Managed

Control Description

The Data Sovereignty and Cross-Border Movement of sensitive data must be recorded, auditable and controlled according to defined policy.

Risks Addressed

Data can be stored, accessed and processed across multiple physical locations in cloud environments, increasing the risk of breaches to jurisdictional laws, security and privacy rules, or regulation.

Breaches can result in various penalties, including fines, reputational damage, legal action and removal of licenses.

Drivers / Requirements

The data owner should understand the jurisdictional implications of cross-border data movement and any region-specific storage and usage rules for a particular dataset. Policy-specified controls must be applied when establishing cross-border data sharing agreements to support requests to use data from a particular location.

Legacy / On-Premises Challenges

Maintaining data about the physical location of data stores and processes is a significant undertaking and applying rules consistently across multiple different technologies is prohibitive.

Automation Opportunities
  • Automatically capture and expose the physical location of all storage, usage and processing infrastructure applying to a cataloged dataset.
  • Provide the ability to trigger cross-border data sharing agreement workflows (for international data transfer and international data requests).
  • Automatically trigger regional storage, processing and usage constraints, with the ability to escalate to a data owner where required.
  • Automatically audit, and allow a workflow to be triggered, when sensitive data is accessed from a location without a data sharing agreement.
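One way the location checks and auditing might look as a sketch; the region names, agreement structure and audit format are assumptions:

```python
def movement_permitted(dataset_region, access_region, agreements):
    """Cross-border access requires a data sharing agreement covering the
    (origin, destination) pair; same-region access is permitted."""
    if dataset_region == access_region:
        return True
    return (dataset_region, access_region) in agreements

# Illustrative agreement allowing EU-to-US transfer of one scope of data.
agreements = {("eu-west", "us-east")}
audit_log = []

def audited_access(dataset_region, access_region):
    """Record every cross-border access decision for later audit."""
    allowed = movement_permitted(dataset_region, access_region, agreements)
    audit_log.append((dataset_region, access_region, allowed))
    return allowed
```

Denied accesses (`allowed == False`) would be the trigger point for the escalation workflow to the data owner.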
Benefits

Reducing the manual processing and audit of data sharing agreements will significantly reduce the cost and risk of data processing in the cloud.

Summary

Codifying and automatically applying jurisdictional data management rules and cross-border sharing agreements will significantly reduce the risk of processing data in the cloud. This will increase the adoption of cloud services and reduce complexity in the day-to-day processing of data in the cloud.


Control 5: Cataloging

Component

2.0 Cataloging & Classification

Capability

2.1 Data Catalogs are Implemented, Used, and Interoperable

Control Description

Cataloging must be automated for all data at the point of creation or ingestion, with consistency across all environments.

Risks Addressed

The existence, type and context of data are not identified, resulting in the inability of all other controls to be applied that are dependent on the data scope.

Data is uncontrolled and consequently is at risk of not being fit for purpose, late, missing, corrupted, leaked and in contravention of data sharing and retention legislation.

Drivers / Requirements

Organizations must ensure the necessary controls are in place for large or complex workloads that involve sensitive data such as client identifiers and transactional details.

Knowledge of all data that exists is foundational to ensuring that all sensitive data has been identified.

Legacy / On-Premises Challenges

Organizations cannot scan and catalog the significant variety of data assets that exist in legacy on-premises environments. Without comprehensive catalogs of all existing data, organizations cannot be confident that all sensitive data within their data assets have been identified.

Automation Opportunities
  • Ensure that catalog entries are generated for all data migrated to or created in the cloud.
  • Ensure catalog entries are generated for data in development, test and production environments and for both online and archived data.
  • Generate evidence of the comprehensiveness of the data catalog.
  • Implement APIs and support open data standards for metadata sharing and catalog interoperability (refer to the CDMC Information Model).
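A sketch of catalog-entry generation at ingestion together with simple coverage evidence; the entry fields shown are illustrative and are not the CDMC Information Model:

```python
import datetime

def ingest(catalog, name, environment, source):
    """Create a catalog entry automatically at the point of ingestion."""
    entry = {
        "name": name,
        "environment": environment,   # development, test or production
        "source": source,
        "cataloged_at": datetime.datetime.now(
            datetime.timezone.utc).isoformat(),
    }
    catalog[name] = entry
    return entry

def coverage(catalog, known_assets):
    """Evidence of comprehensiveness: fraction of known assets cataloged."""
    return sum(1 for a in known_assets if a in catalog) / len(known_assets)

catalog = {}
ingest(catalog, "trades", "production", "upstream-feed")
ingest(catalog, "clients", "production", "crm-export")
```

The coverage figure is the kind of evidence of comprehensiveness the bullet above refers to; here two of three known assets are cataloged.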
Benefits

An organization can guarantee that all data has been cataloged and can use this as the foundation on which to automate and enforce controls based on the metadata in the catalog.

Summary

The catalog is the infrastructure that describes what data exists, how much there is and how many different types there are. It is the foundation of all the other controls.


Control 6: Classification

Component

2.0 Cataloging & Classification

Capability

2.2 Data Classifications are Defined and Used

Control Description

Classification must be automated for all data at the point of creation or ingestion, and must always be on.

  • Personally identifiable information (PII) auto-discovery
  • Information sensitivity classification auto-discovery
  • Material Non-Public Information (MNPI) auto-discovery
  • Client identifiable information auto-discovery
  • Organization-defined classification auto-discovery
Risks Addressed

Sensitive data is not classified, resulting in the inability of all other controls to be applied that are dependent on the classification.

Data is uncontrolled and consequently is at risk of not being fit for purpose, late, missing, corrupted, leaked and in contravention of data sharing and retention legislation.

Drivers / Requirements

Information sensitivity classification (ISC) is required by most organizations’ information security policies. An organization is required to know whether data is highly restricted (HR), confidential (C), internal use only (IUO) or public (P), and whether it is sensitive.

Knowing whether data is sensitive is the foundation of most other controls in the framework. This requires certainty that all data has been cataloged and certainty that the sensitivity of the data has been determined.

Legacy / On-Premises Challenges

The variety of data assets in legacy environments impacts the ability to ensure that all data has been identified. Sensitive data may exist in data assets that have not been identified.

Classification of data assets is often manual and can be both error-prone and expensive. Even where assets are identified, there may be gaps or errors in the classification.

The proliferation of copies of data in legacy environments can lead to classifications in data sources not being carried through to copies of the data.

Automation Opportunities
  • Apply classification processing to all data migrated to or created in the cloud.
  • Use automated data classification to identify the classification that applies.
  • Support organization-specified classification schemes.
  • Default classifications to the highest level until explicitly reviewed and changed.
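A toy sketch of auto-classification with default-to-highest behavior; the regex patterns and classification labels are illustrative assumptions and far simpler than a production classifier:

```python
import re

# Illustrative detection patterns only; real auto-discovery is far richer.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us-ssn-like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def detected_pii(values):
    """Return the names of PII patterns found in sample values."""
    return {name for v in values for name, pat in PII_PATTERNS.items()
            if pat.search(v)}

def classify(values):
    """Suggest a classification from auto-discovery; the effective
    classification stays at the highest level until explicitly reviewed."""
    suggested = ("Highly Restricted" if detected_pii(values)
                 else "Internal Use Only")
    effective = "Highly Restricted"  # default-to-highest until reviewed
    return suggested, effective
```

The review step would let a data owner replace the defaulted effective classification with the confirmed one.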
Benefits

The operations team responsible for classifying data is expensive. Auto-classification can significantly streamline this work and reduce the amount of manual effort required to perform the function.

Summary

Auto-classification of data provides confidence that all sensitive data has been identified and can be controlled.


Control 7: Entitlements and Access for Sensitive Data

Component

3.0 Accessibility & Usage

Capability

3.1 Data Entitlements are Managed, Enforced, and Tracked

Control Description
  1. Entitlements and Access for Sensitive Data must default to creator and owner until explicitly and authoritatively granted.
  2. Access must be tracked for all sensitive data.
Risks Addressed

Access to data is not sufficiently restricted to those who should be authorized. This could result in data leakage, reputational damage, regulatory censure, criminal manipulation of business processes, or data corruption.

Data is uncontrolled and consequently is at risk of not being fit for purpose, late, missing, corrupted, leaked and in contravention of data sharing and retention legislation.

Drivers / Requirements

Once the auto-classifier has identified sensitive data assets, enhanced controls should be placed on those data assets, including how entitlements are granted.

Which users have access to data, and how frequently they access it, needs to be tracked.

Legacy / On-Premises Challenges

It is difficult to track which data consumers are using which data assets unless tracking is turned on and is consistent across all the data in the catalog.

Automation Opportunities
  • Automate the defaulting of entitlements to restrict access to the creator and owner until explicitly and authoritatively granted to others.
  • Automatically track which users have access to which data and how frequently they access it, and store that information in a data catalog.
  • Provide all data owners access to the usage tracking tool.
  • Hold entitlements as metadata to enable their use by any tool used to access the data.
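The entitlement defaulting and access tracking above might be sketched as follows; the class and method names are hypothetical:

```python
class Entitlements:
    """Entitlements default to the creator and owner; every access
    attempt on the sensitive asset is tracked."""
    def __init__(self, asset, creator, owner):
        self.asset = asset
        self.granted = {creator, owner}   # default: creator and owner only
        self.access_log = []

    def grant(self, grantor, user):
        """Only existing grantees may authoritatively grant access."""
        if grantor not in self.granted:
            raise PermissionError(
                f"{grantor} cannot grant access to {self.asset}")
        self.granted.add(user)

    def access(self, user):
        """Check entitlement and record the attempt for usage tracking."""
        allowed = user in self.granted
        self.access_log.append((user, allowed))
        return allowed

ent = Entitlements("client_positions", creator="etl-service", owner="alice")
ent.grant("alice", "bob")   # the owner explicitly grants access to bob
```

Held as catalog metadata, the same entitlement record could be enforced by any tool used to access the data.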
Benefits

Tracking of data consumption enables consumption-based allocation of costs. Automation can reduce the cost of performing these allocations manually.

Summary

At a minimum, entitlements and access for sensitive data should automatically default to the creator and owner of the data until they grant permissions to others. Once others have access to the data, monitoring should be in place to track who is using it and how frequently. Costs can then be correctly allocated.


Control 8: Data Consumption Purpose

Component

3.0 Accessibility & Usage

Capability

3.2 Ethical Access, Use, & Outcomes of Data are Managed

Control Description

Data Consumption Purpose must be provided for all data sharing agreements involving sensitive data. The purpose must specify the type of data required and include country or legal entity scope for complex international organizations.

Risks Addressed

Data is shared or used in an uncontrolled manner with the result that the producer is not aware of how it is being used and cannot ensure it is fit for the intended purpose.

Data is not shared in compliance with the ethical, legislative, regulatory and policy framework within which the organization operates.

Drivers / Requirements

There are emerging ethical-use frameworks and guidelines that include specifications for what should happen when the use of data changes.

Legacy / On-Premises Challenges

It is difficult for humans to recognize when the use of data has shifted to a new kind of processing that may require specific authorization under a regulatory or legal basis.

Automation Opportunities
  • Record data access tracking information for sensitive data.
  • Enforce the capture of purpose, for example, integrated with model governance.
  • Provide alerts to the data owner or data governance teams when there is an additional use case for existing user access to sensitive data.
  • Recognize when specific technologies are employed (e.g., machine learning) and leverage usage and cost tracking to highlight potential new use cases.
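A sketch of purpose-change alerting under assumed data structures for agreements and access events:

```python
def purpose_alerts(agreements, access_events):
    """Flag accesses whose declared purpose is not covered by the data
    sharing agreement for that (consumer, dataset) pair."""
    alerts = []
    for consumer, dataset, purpose in access_events:
        agreed = agreements.get((consumer, dataset), set())
        if purpose not in agreed:
            alerts.append(
                f"New or changed use by {consumer} on {dataset}: {purpose}")
    return alerts

# Illustrative agreement: the risk team may use trades for one purpose.
agreements = {("risk-team", "trades"): {"regulatory reporting"}}
events = [("risk-team", "trades", "regulatory reporting"),
          ("risk-team", "trades", "ML model training")]
alerts = purpose_alerts(agreements, events)
```

The second event would route to the data owner or data governance team for review before the new use is permitted.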
Benefits

Streamlined ethical data accountability for data that is accessed for new purposes.

Summary

A data sharing agreement between a consumer and the authoritative source expresses the intent to use the data for a specific purpose. Automated tracking and monitoring of data consumption purpose can alert data owners and data governance teams when there is new or changed use.


Control 9: Security Controls

Component

4.0 Protection & Privacy

Capability

4.1 Data is Secured, and Controls are Evidenced

Control Description
  1. Appropriate Security Controls must be enabled for sensitive data.
  2. Security control evidence must be recorded in the data catalog for all sensitive data.
Risks Addressed

Data is not contained within the parameters determined by the legislative, regulatory or policy framework within which the organization operates. Data loss or breaches of privacy requirements result in reputational damage, regulatory fines and legal action.

Drivers / Requirements

The sensitivity level of the data dictates what level of encryption, obfuscation and data loss prevention should be enforced. The requirements for security controls and data loss prevention become increasingly stringent as the sensitivity level of the data increases.

Legacy / On-Premises Challenges

It is difficult to ensure that encryption is always on for sensitive data.

Automation Opportunities
  • Provide security control capabilities, including encryption, masking, obfuscation and tokenization, that are turned on automatically based on the sensitivity of a dataset.
  • Automate recording of the application of security controls.
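A sketch of sensitivity-driven control enablement with evidence recording; the mapping of classifications to controls is an illustrative assumption, not a prescribed standard:

```python
# Illustrative mapping from sensitivity classification to required controls.
REQUIRED_CONTROLS = {
    "Highly Restricted": {"encryption", "tokenization",
                          "data loss prevention"},
    "Confidential": {"encryption", "masking"},
    "Internal Use Only": {"encryption"},
    "Public": set(),
}

def apply_controls(entry):
    """Enable the controls required by the entry's classification and
    record the evidence on the catalog entry itself."""
    required = REQUIRED_CONTROLS[entry["classification"]]
    entry["controls"] = sorted(required)
    entry["evidence"] = {c: "enabled" for c in required}
    return entry

entry = apply_controls({"name": "client_pii",
                        "classification": "Highly Restricted"})
```

The recorded evidence is what an auditor would read from the catalog instead of commissioning a forensic review.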
Benefits

Evidence that the appropriate level of encryption is on and has been consistently applied is easy to produce.

During a security audit, a data owner has a list of their data and how much of it is sensitive. For every piece of sensitive data, they can provide evidence that the data is encrypted and that a data loss prevention regime is in place for every compute environment in which it resides.

Delivering security control evidence through the catalog, rather than performing a forensic cyber review, is a cost savings opportunity: a full-time team of employees typically handles this work.

Summary

Automation that enforces and records the appropriate encryption level based on a data asset’s sensitivity level ensures security compliance and reduces the manual effort of providing evidence of the controls.


Control 10: Data Protection Impact Assessments

Component

4.0 Protection & Privacy

Capability

4.2 A Data Privacy Framework is Defined and Operational

Control Description

Data Protection Impact Assessments (DPIAs) must be automatically triggered for all personal data according to its jurisdiction.

Risks Addressed

Data is not secured to a level appropriate for the nature and content of the dataset. This results either in data being secured at greater cost and inconvenience than required, or in data loss or breaches of privacy requirements resulting in reputational damage, regulatory fines and legal action.

Drivers / Requirements

If a dataset is classified as containing personal information, an organization needs to be able to demonstrate, in certain jurisdictions, that it has performed a data protection impact assessment on it.

Legacy / On-Premises Challenges

Identifying the DPIAs that need to be performed can be challenging, and initiating and completing a DPIA for every data asset classified as containing personal information is a very expensive workflow.

Automation Opportunities
  • Automatically initiate Data Protection Impact Assessments based on factors such as the geography of the data infrastructure, classification of the data or the specified consumption purpose.
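A minimal sketch of automatic DPIA triggering; the jurisdiction list and entry fields are assumptions for the example, not legal guidance:

```python
# Jurisdictions assumed (for illustration) to require a DPIA.
DPIA_JURISDICTIONS = {"EU", "UK"}

def dpia_required(entry):
    """A DPIA is triggered when a dataset contains personal data and
    resides in a jurisdiction whose law requires an assessment."""
    return entry["personal_data"] and entry["jurisdiction"] in DPIA_JURISDICTIONS

def trigger_dpias(catalog):
    """Return the assets for which a DPIA workflow should be initiated."""
    return [e["name"] for e in catalog if dpia_required(e)]

catalog = [
    {"name": "eu_customers", "personal_data": True, "jurisdiction": "EU"},
    {"name": "us_customers", "personal_data": True, "jurisdiction": "US"},
    {"name": "eu_prices", "personal_data": False, "jurisdiction": "EU"},
]
```

A fuller version would also consider the specified consumption purpose and the geography of the processing infrastructure, as the bullet above notes.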
Benefits

Evidence that all privacy requirements have been met for sensitive data is easy to produce since DPIAs are automatically initiated.

Cost savings opportunities arise from more efficient identification of the need for DPIAs.

Summary

Automatically enforcing a DPIA on data that is classified as personal ensures policy compliance and reduces the manual labor costs of that function.


Control 11: Data Retention, Archiving and Purging

Component

5.0 Data Lifecycle

Capability

5.1 The Data Lifecycle is Planned and Managed

Control Description

Data Retention, Archiving, and Purging must be managed according to a defined retention schedule.

Risks Addressed

Data is not removed in line with the legislative, regulatory or policy requirements of the organization's environment, leading to increased storage costs, reputational damage, regulatory fines, and legal action.

Drivers / Requirements

Organizations have a master retention schedule that determines how long data needs to be retained, based on its classification, in each jurisdiction in which it was created.

Legacy / On-Premises Challenges

Organizations will have huge repositories of historical data, often retained to support the requirements of potential future audits. Data sets in different jurisdictions will have different retention schedules. It is difficult to comply with these requirements manually since different applicable legal requirements can modify the retention schedule.

Automation Opportunities
  • Automate data retention, archiving and purging processing based on the data’s jurisdiction, purpose and classification and according to a defined retention schedule.
  • Collect and provide evidence of the data retention, archiving and purging plan and execution.
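The retention decision might be sketched as follows; the schedule values and the archive threshold are illustrative assumptions, not a recommended policy:

```python
import datetime

# Retention periods in days per record class; values are illustrative.
RETENTION_SCHEDULE = {"trade records": 7 * 365, "marketing": 2 * 365}

def disposition(entry, today, archive_after_fraction=0.5):
    """Decide whether to retain, archive or purge a data asset based on
    its age against the retention schedule."""
    limit = RETENTION_SCHEDULE[entry["record_class"]]
    age = (today - entry["created"]).days
    if age >= limit:
        return "purge"
    if age >= limit * archive_after_fraction:
        return "archive"
    return "retain"

today = datetime.date(2021, 9, 1)
old = {"record_class": "marketing", "created": datetime.date(2019, 1, 1)}
mid = {"record_class": "marketing", "created": datetime.date(2020, 6, 1)}
new = {"record_class": "marketing", "created": datetime.date(2021, 6, 1)}
```

Logging every decision alongside the schedule entry it was derived from would provide the execution evidence mentioned above.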
Benefits

Automatically retaining, archiving or purging data based on its classification and associated retention schedule will reduce the manual effort required to perform this function and ensure policy compliance.

Summary

Organizations with this automation and control can provide the necessary evidence to verify that their data is being retained, archived or purged based on the retention schedule of its classification.


Control 12: Data Quality Measurement

Component

5.0 Data Lifecycle

Capability

5.2 Data Quality is Managed

Control Description

Data Quality Measurement must be enabled for sensitive data, with metrics distributed when available.

Risks Addressed

Data is not consistently fit for the organization's purposes, resulting in the inability to provide expected client service, process breaks, an inability to demonstrate risk management, inefficiencies, and a lack of trust in the data and in decisions based on flawed information.

Drivers / Requirements

Data quality metrics enable data owners and data consumers to determine whether data is fit-for-purpose. That information needs to be visible to both data owners and data consumers.

Legacy / On-Premises Challenges

The limited application of data quality management in many legacy environments results in a lack of transparency on the quality of data and an inability for data consumers to determine whether it is fit-for-purpose. Data owners may not be aware of data quality issues.

Automation Opportunities
  • Automatically deliver data quality metrics to data owners and data consumers.
  • Make data quality metrics available in the data catalog.
  • Automatically alert data owners to data quality issues.
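As a sketch, one simple data quality metric (completeness) with owner alerting; the threshold is an assumed policy value and real measurement would cover many more dimensions:

```python
def completeness(records, field):
    """Fraction of records with a non-empty value for the given field,
    one simple data quality metric among many possible ones."""
    return (sum(1 for r in records if r.get(field) not in (None, ""))
            / len(records))

def quality_alerts(records, field, threshold=0.95):
    """Alert the data owner when completeness drops below the threshold."""
    score = completeness(records, field)
    if score < threshold:
        return [f"Completeness of '{field}' at {score:.0%}, "
                f"below {threshold:.0%}"]
    return []

records = [{"client_id": "C1"}, {"client_id": "C2"},
           {"client_id": ""}, {"client_id": "C4"}]
alerts = quality_alerts(records, "client_id")
```

Publishing the score into the data catalog would give consumers the fitness-for-purpose visibility described above.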
Benefits

Data consumers can determine whether data is fit-for-purpose. Data owners are aware of data quality issues and can drive their prioritization and remediation.

Summary

Providing clarity on data quality, and support to ensure data is fit-for-purpose, will help data owners address data quality issues.


Control 13: Cost Metrics

Component

6.0 Data & Technical Architecture

Capability

6.1 Technical Design Principles are Established and Applied

Control Description

Cost Metrics directly associated with data use, storage, and movement must be available in the catalog.

Risks Addressed

Costs are not managed, detrimentally impacting the commercial viability of the organization.

Drivers / Requirements

As the cloud changes the cost paradigm from Capex to Opex, organizations require additional visibility on where data movement, storage and usage costs are incurred.

Poor architectural choices concerning data placement can incur additional ingress or egress charges. For example, extra compute costs will be incurred when running data warehouse workloads on OLTP infrastructure.

Legacy / On-Premises Challenges

On-premises environments have had limited need to manage data processing or storage costs at the data asset level.

There is no line-item costing on the assets in a data catalog, so organizations cannot run a cost analysis to understand where their data management costs are specifically incurred.

Automation Opportunities
  • Automatically track data assets' movement, storage, and usage costs and make this information available via the data catalog.
  • Support automated policy-driven cost management and optimization of data processing.
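A sketch of aggregating cost events into catalog line items; the event shape and the cost categories are assumptions for the example:

```python
def attach_costs(catalog, cost_events):
    """Aggregate (asset, category, amount) cost events into line items
    on the corresponding catalog entries."""
    for asset, category, amount in cost_events:
        line_items = catalog[asset].setdefault("cost_line_items", {})
        line_items[category] = line_items.get(category, 0.0) + amount
    return catalog

catalog = {"trades": {"name": "trades"}}
# Illustrative billing events for one data asset.
events = [("trades", "storage", 12.50),
          ("trades", "egress", 3.75),
          ("trades", "storage", 2.50)]
attach_costs(catalog, events)
```

With consumption tracking in place (Control 7), the same line items could be allocated back to the consumers who incurred them.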
Benefits

Data owners can understand who is using what data, the frequency of that access and the cost incurred to provide that data.

Summary

The financial operations infrastructure of cloud service providers is robust enough to identify the accounts and operations that incur costs and to associate those costs with specific data assets as line items in the data catalog.


Control 14: Data Lineage

Component

6.0 Data & Technical Architecture

Capability

6.2 Data Provenance and Lineage are Understood

Control Description

Data Lineage information must be available for all sensitive data. At a minimum, this must include the source from which the data was ingested or in which it was created in a cloud environment.

Risks Addressed

Data cannot be determined as having originated from an authoritative source, resulting in a lack of trust in the data, an inability to meet regulatory requirements, and inefficiencies in the organization's system architecture.

Drivers / Requirements

Organizations need to trust data being used and confirm that it is being sourced in a controlled manner.

Regulated organizations produce lineage information as evidence that the information on regulatory reports has been taken from an authoritative source for that type of data.

Consumers of sensitive data must be able to evidence sourcing of data from an authoritative source, for example, by showing lineage from the authoritative source or providing the provenance of the data from a supplier.

Legacy / On-Premises Challenges

Lineage information is produced manually by tracing the flow of data through systems from source to consumption. The cost of this approach and the consequences of producing incorrect data can be significant.

Automation Opportunities
  • Record ingestion source of all data of specific classifications migrated to the cloud.
  • Record source-to-target lineage of all movement of data of specific classifications within the cloud environment.
  • Record destination lineage of all data of specific classifications egressing from the cloud (whether to on-premises or another cloud).
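The lineage recording and tracing might be sketched as follows; the node names and the backing structure are illustrative:

```python
def record_move(lineage, source, target):
    """Record one source-to-target hop of data movement."""
    lineage.setdefault(target, set()).add(source)

def origins(lineage, node):
    """Walk lineage hops backwards to the ultimate ingestion sources."""
    parents = lineage.get(node)
    if not parents:
        return {node}   # no recorded parents: this is an origin
    found = set()
    for parent in parents:
        found |= origins(lineage, parent)
    return found

# Illustrative hops: ingestion, curation, then a regulatory report feed.
lineage = {}
record_move(lineage, "on-prem-trades", "cloud-raw")
record_move(lineage, "cloud-raw", "cloud-curated")
record_move(lineage, "cloud-curated", "regulatory-report")
```

Tracing a report back to an origin known to be authoritative (Control 3) is exactly the evidence regulated organizations need to produce.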
Benefits

Evidence of the data lineage for regulatory reports is easy to produce. Major financial organizations incur significant costs producing this information manually and retrospectively.

Summary

Automatically tracking lineage information for data that feeds regulatory reports would streamline evidencing the reports' data and eliminate the manual labor cost of producing that information.


This document is a constituent part of the CDMC™ framework focusing on the key controls for effective management of data risk in cloud, multi-cloud and hybrid environments. This section provides a summary of additional parts of the overall framework.

Full documentation of the 6 components, 14 capabilities and 37 sub-capabilities of the CDMC framework, along with the 14 controls presented in this document. This 150+ page document details the objectives of each sub-capability and presents best practice advice written from both the data practitioner and cloud service and technology provider perspectives. A set of questions, artifacts and scoring guidance for each sub-capability provide the basis for organizations to perform capability assessments.

Reference: CDMC Framework Version 1.1 – published September 2021

Specifications of tests of the 14 key controls within the framework to form the basis of certification of cloud products and services against the framework.

Reference: CDMC Controls Testing Procedures V1.1 – to be published Q4 2021

An ontology that draws on and combines related open frameworks and standards to describe the information required to support cloud data management. This provides a foundation for the interoperability of data catalogs and the automation of controls across cloud service and technology providers.

Reference: CDMC Information Model Version 1.1 – to be published Q4 2021

A standard set of over 150 data management terms, with definitions and commentary for each.

Reference: https://www.dcamportal.org/glossary/

Feedback on the document should be contributed via the Cloud Data Management Interest Community on EDMConnect: https://edmconnect.edmcouncil.org/clouddatamanagementinterestcommunity/home

For further information on the CDMC initiative please visit: https://edmcouncil.org/page/CDMC.

Any enquiries regarding EDM Council membership or CDMC Authorized Partnership should be directed to info@edmcouncil.org.
