
Notice

Copyright © TM Forum 2020. All Rights Reserved.

This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published, and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this section are included on all such copies and derivative works. However, this document itself may not be modified in any way, including by removing the copyright notice or references to TM FORUM, except as needed for the purpose of developing any document or deliverable produced by a TM FORUM Collaboration Project Team (in which case the rules applicable to copyrights, as set forth in the TM FORUM IPR Policy must be followed) or as required to translate it into languages other than English.

The limited permissions granted above are perpetual and will not be revoked by TM FORUM or its successors or assigns.

This document and the information contained herein is provided on an "AS IS" basis and TM FORUM DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY OWNERSHIP RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Direct inquiries to the TM Forum office:

181 New Road, Suite 304
Parsippany, NJ 07054 USA
Tel No. +1 973 944 5100
Fax No. +1 973 998 7916
TM Forum Web Page: www.tmforum.org


Executive Summary

Model Driven Development (MDD) has proven to be most useful for the development of standard business software applications. The Functional Architecture of the Open Digital Architecture (ODA) is a standard for refactoring legacy BSS/OSS for digital business and cloud-native environments. With the five functional blocks defined in the Functional Architecture (versus the two in legacy BSS/OSS), demanding requirements on business technology applications, such as reusability, adaptability, flexibility, robustness and security, are increasingly part of how the functionality of a software application is evaluated.

The purpose of this guide is to standardize the modeling of Security, Privacy, Integrity and Trust directly into the design and implementation of ODA objects: Components, environments and data (at rest and in transit).
While methodologies such as threat modeling can help obtain this understanding from a component's and/or environment's design, it can be difficult to map this understanding accurately to an implementation. This difficulty suggests the need for a technique that provides a better understanding of the trust boundaries that exist within an ODA implementation.

This introductory guide establishes a methodology for mapping security, privacy, integrity and trust requirements to an ODA component, its environment and the data it handles. It is intended to foster the embedding of Security-by-design and Privacy-assurance as core tenets of ODA implementation. Security and privacy, the main concerns of this guide, have usually been deferred to platform-level solutions, e.g. by configuring roles. While this can sometimes meet near-term needs, it is not sufficient: it can lead to highly complex security policy implementations and impair enforcement. The non-functional impact of this approach tends to inhibit performance and break the reusability of components.

Component reusability is highest when both the functional and the non-functional requirements match. The modern software world has adopted middleware platforms as a means to meet demanding requirements such as adaptability, flexibility, robustness and security. This model-driven security and privacy framework is offered as a living model for harnessing tested models and best practices to assure that ODA concepts are realizable from design through implementation and operations.


1. Introduction

Security across BSS and OSS is receiving closer attention as the use of diverse data in business and operations grows exponentially. The majority of security problems encountered have been the result of poor design practices, where important security functionality was not properly integrated into the design. Increasing the value garnered from data availability and data access has become a key objective when selecting the optimal method for handling security concerns such as identity and access management, for example deciding when coarse-grained and/or fine-grained access control is adequate to address needs in tandem with performance and core business objectives. These objectives are also shaped by operating policies, the information system environment, and regulatory regimes that bear on overall security and privacy.

A critical aspect of any security and privacy review is the modeling of trust boundaries. This knowledge allows an audit to understand domains of trust, or the lack of them, and how they influence one another. Without modeling this knowledge, an audit generally cannot easily identify and mitigate risks. As such, trust boundaries play an important role in effectively characterizing the threats that exist, and also facilitate implementing appropriate controls to address both the functional and non-functional needs of an information system's operation.

1.1. Scope

Security and privacy protection are critical to the service industry, and particularly to service providers across the industry. Capturing and modeling security and privacy requirements in the early stages of system development is essential to provide reliability and manageability assurance of security and privacy for both stakeholders and customers. "Security-by-design" and "Privacy-by-design" have become essential for digital-native businesses, and the shift from "add it later" to including these requirements as part of the design is therefore mandatory for business assurance.

The guide takes an approach that first identifies and builds a repository of Security, Privacy, Trust and Integrity ontologies for the domain-specific semantics of models. The target is to identify the common semantics through which Security, Privacy and Trust/Integrity properties can be embedded into design, development and operations. Agreeing on a common language for describing ODA security and privacy metamodels will enable the semantics to be understood and modeled using a metamodel correlated to the ontology.

The confluence of Security and Privacy concepts has functional and non-functional implications for ODA information systems. Implementation, runtime and operations stages are no longer linear. DevOps models now need to factor in security, which requires a model-driven approach to effectively address privacy and security targets.

1.2. Objectives

The objective of this guide is to ensure that ODA-compliant information systems are implemented with fewer security and privacy problems. The guide should also enable the triaging of risks, impacts and controls in a way that facilitates timely remediation in an easy and agile manner.

The approach taken in formulating this guide proposes the use of expressive UML-based techniques to construct and transform security and privacy design and operating concerns into models, applying a model-driven architecture approach to generate technical implementation requirements in an automated way.

2. ODA Concerns for Security & Privacy Metamodel Development

The Security and Privacy Metamodel development leverages prior work done at NIST, W3C, CNCF and other reputable organizations to derive the applicability of best practices, based on the intent established by TM Forum members as requirements for ODA Security-by-design and Privacy-by-design. The ability to describe Security and Privacy needs for the different parts of ODA Functions, ODA Components, ODA Environments etc., in a way that addresses functional and non-functional requirements while meeting digital business needs and ecosystem play, is therefore a key concern.

Risk is the primary concern for both Security and Privacy, and it is modeled as a multiplier effect of Vulnerability, Threat and Impact. To effectively address risk concerns, the context of an ODA object, use case and environment must be well-defined. Context must also address the circumstances of the collection, generation, processing, disclosure and retention of data/information, and the operations needed to maintain their life-cycle.
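As a minimal illustration of this multiplier view of risk (assuming a simple normalized 0-to-1 scale for each factor, which this guide does not prescribe), a hypothetical scoring function might look like this:

```python
def risk_score(threat: float, vulnerability: float, impact: float) -> float:
    """Illustrative risk as the product of threat, vulnerability and impact.

    Assumption: each factor is normalized to a 0..1 scale; the guide does not
    prescribe a scale or a scoring method.
    """
    return threat * vulnerability * impact

# Example: likely threat (0.8), moderate exposure (0.5), severe impact (0.9)
print(risk_score(0.8, 0.5, 0.9))  # approximately 0.36
```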

In establishing context for this guide, the focus is on the ODA Component and ODA Environment with regard to the various trust domains. Some general concerns and implications regarding both privacy and security are captured as assumptions in order to help set the agenda for modeling and validating Security and Privacy requirements.

The guide will continue to capture concerns as they emerge, and factor them into standard language with their inter-relationships on an ongoing basis.

Generalized high-level ODA Security and Privacy implications for a business

Engagement (Service Experience)
  • Security issues: Attack on Service Availability, Service integrity etc.
  • Privacy issues: Communications
  • Implications: Lack of accurate information due to inadequate access to data, and poor quality of data that impairs experience (Business & Operations Risk)

Party
  • Security issues: Attack on Confidential information, reducing accountability etc.
  • Privacy issues: Territories, Identities, Communications
  • Implications: Unauthorized disclosure and data loss due to hacked or stolen credentials (passwords or access keys) to systems, accounts and services (Business & Operations Risk)

Core Commerce
  • Security issues: Attack on Availability, Compromising information integrity etc.
  • Privacy issues: Identities, Transactions, Communications
  • Implications: Disclosure of commercially sensitive information (conversations, recordings etc.) and access to their channels by unauthorized users

Integration & Decoupling
  • Security issues: Attack on Availability and integrity etc.
  • Privacy issues: Communications, Mobility
  • Implications: Data loss due to access by unauthorized users (Business & Operations Risk)

Production
  • Security issues: Attack on Availability and integrity etc.
  • Privacy issues: Communications
  • Implications: Risk of data loss and disclosure via illicit activities that disrupt the normal functioning of devices/networks

Sustainability
  • Security issues: Attack on Availability and integrity etc.
  • Privacy issues: Communications
  • Implications: Risk of low quality and quantity of data and information via physical damage or changes to smart things' settings or properties

2.1. Security

 


 

2.2. Privacy and Confidentiality

Privacy is assured if the purpose relating to the use of the data, the visibility (who is allowed to access it), the granularity (how detailed the information is) and the retention (the period of data storage) are all clearly communicated and fully revealed to the data owners as part of the execution of the governance around the data.
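As an illustrative sketch (the field names and types below are assumptions, not a TM Forum data model), the four disclosure attributes can be captured as a simple record that travels with the governed data:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass(frozen=True)
class PrivacyNotice:
    """Hypothetical record of the four disclosure attributes described above."""
    purpose: str          # why the data is used
    visibility: tuple     # who is allowed to access it
    granularity: str      # how detailed the retained information is
    retention: timedelta  # how long the data is stored

notice = PrivacyNotice(
    purpose="billing and invoicing",
    visibility=("billing-service", "customer-care"),
    granularity="aggregated monthly usage",
    retention=timedelta(days=365),
)
print(notice.purpose, notice.retention.days)
```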


2.3. Integrity and Trust

Developing trustworthy software systems is increasingly a challenge, as trustworthiness depends on trust relationships that are usually "assumed" and not adequately analyzed during the conceptualization and design of systems. Appropriate analysis of trust relationships and trust boundaries, and appropriate justification of the relevant trust assumptions, can result in systems that are designed to survive failure.

The meta-model for trust allows designers and developers to capture a comprehensive view of trust scenarios and incorporate them into the core principles of ODA. The meta-model includes a set of trust-based concepts which support the development of ODA trust boundaries for the Canvas and Components.

Integrity management provides a means by which distributed systems, such as those present in cloud environments, can assess the trustworthiness of environments and/or components. Root of Trust Installation (ROTI) has been a foundation for high-integrity systems and asserts the integrity of the target software perimeter. By incorporating ROTI as a standard means to measure and manage the integrity of software "systems", consumers of ODA Canvases and/or components can verify integrity guarantees.
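A minimal sketch of the integrity-verification idea, assuming a SHA-256 digest anchored in a root of trust; the function names, the artifact name and the choice of digest are illustrative, not part of the ROTI specification:

```python
import hashlib

def measure(path: str) -> str:
    """Return the SHA-256 digest of an artifact, e.g. a packaged component image."""
    with open(path, "rb") as artifact:
        return hashlib.sha256(artifact.read()).hexdigest()

def verify_integrity(path: str, trusted_digest: str) -> bool:
    """Compare the measured digest against a reference anchored in the root of trust."""
    return measure(path) == trusted_digest

# Usage (hypothetical artifact and reference value):
# verify_integrity("oda-component.tar.gz", trusted_digest="3b5d...e81")
```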

2.3.1. Integrity Metamodel

(Ongoing work - Future work will incorporate integrity views from Developer, Provider, Consumer, Host points of view with implications on pipeline and platform operating models. This will capture concepts for Entity integrity, as well as Referential integrity.)


2.3.2. Trust Metamodel

(Ongoing Work)

3. Security and Privacy (SecPriv) Principles

3.1. ODA Component SecPriv Principles

3.1.1. High-level Security Concepts & Principles


Each concept below is described against the following columns: Implications (Principle, Requirement), Inclusions, ODA - Environment Attributes, and ODA - Component Attributes.

3.1.1.1. Confidentiality

  • Implications (Principle, Requirement) and Inclusions: Refer to IG1187 ODA Enterprise Risk Assessment R19.0.1
  • ODA - Environment Attributes: Ongoing study
  • ODA - Component Attributes: Ongoing study

3.1.1.2. Integrity

  • Implications (Principle, Requirement) and Inclusions: Refer to IG1187 ODA Enterprise Risk Assessment R19.0.1
  • ODA - Environment Attributes: Ongoing study
  • ODA - Component Attributes: Ongoing study

3.1.1.3. Accountability

  • Implications (Principle, Requirement) and Inclusions: Refer to IG1187 ODA Enterprise Risk Assessment R19.0.1
  • ODA - Environment Attributes: Ongoing study
  • ODA - Component Attributes: Ongoing study

3.1.1.4. Trust

  • Implications (Principle, Requirement) and Inclusions: Refer to IG1187 ODA Enterprise Risk Assessment R19.0.1
  • ODA - Environment Attributes: Ongoing study
  • ODA - Component Attributes: Ongoing study

*For more on Security implications please refer to IG1178 - ODA Governance and Security Risk Assessment

3.1.2. Privacy Principles

The European Union GDPR establishes a set of key requirements for Privacy:

  • Lawfulness, fairness and transparency
  • Purpose limitation
  • Data minimization
  • Accuracy
  • Storage limitation
  • Integrity and confidentiality (security)
  • Accountability


The Privacy concepts in this section will be mapped to the Privacy Concepts identified here, along with the other regional (Canada, US, Australia etc.) Privacy requirements in order to ensure completeness.


Each privacy concept below is mapped to regional privacy requirements (EU GDPR, US CCPA, Australia PPA, Canada PIPEDA) and described against the following columns: Implications (Principle, Requirement), Inclusions (Examples), ODA - Environment Attributes, and ODA - Component Attributes.

3.1.2.1. Data Quality

  • EU GDPR Accuracy

Processing of information within and between ODA Function blocks, which serve as trust boundaries, must comply with quality requirements.

Any identifiable personal information is kept only to the extent that it is adequate, relevant, not excessive, correct and accurate for the purpose for which it is collected and subsequently processed.

Inclusions: Identifiable personal data; Storage; Clearing; Lifecycle management of data; Error handling; Authorizations; Inspection. ODA - Environment Attributes: Ongoing study; ODA - Component Attributes: Ongoing study.

3.1.2.2. Transparency

  • EU GDPR Lawfulness, fairness and transparency.

The source or provider of data must know what is done with their data. A legislative framework supports the collection of data. Provision of information about the collection, through direct or indirect forms, with the stated purpose and need for capturing it, including via third-party ODA Component or Canvas integration. ODA - Environment Attributes: Ongoing study; ODA - Component Attributes: Ongoing study.

3.1.2.3. Intents and Notification

  • EU GDPR Lawfulness, fairness and transparency.
  • EU GDPR Purpose limitation.

Notify the conditions of collection together with the intent. Timely notification; Nature of processing; Contact information for the data controller; Categories of resources; Description of safeguards. ODA - Environment Attributes: Ongoing study; ODA - Component Attributes: Ongoing study.

3.1.2.4. Finality

  • EU GDPR Purpose Limitation
  • EU GDPR Storage limitation
Information capture is for agreed processing purposes only. Legal justification to process or share data; Leverage rules for data management based on intent; Anonymize. ODA - Environment Attributes: Ongoing study; ODA - Component Attributes: Ongoing study.

3.1.2.5. Processing Grounds

  • EU GDPR Purpose Limitation
  • EU GDPR Integrity and confidentiality
Information is exchanged only on grounds of legitimate use. Consent management; Legal requirement; Public interests; Protection of vital interests of the data owner. ODA - Environment Attributes: Ongoing study; ODA - Component Attributes: Ongoing study.

3.1.2.6. Rights

  • EU GDPR Lawfulness, fairness and transparency.
  • EU GDPR Purpose limitation.
  • EU GDPR Accountability.

The owner of data has rights to access, processing and rectification of their own information. Consent management; Legal requirement; Data processing; Data storage; Data sharing; Data use. ODA - Environment Attributes: Ongoing study; ODA - Component Attributes: Ongoing study.

3.1.2.7. Security

  • EU GDPR Integrity and confidentiality
Collection, processing and storage implement appropriate technical and organizational measures to protect "owner data" against loss or unlawful processing. Security Technology; Security Policies; Legal jurisdiction mandate for processing and sharing data; Technology Adequacy; Standards; Personnel Requirements; Data Destruction; Security Policy; Contingency Plans; Safeguards. ODA - Environment Attributes: Ongoing study; ODA - Component Attributes: Ongoing study.

3.1.2.8. Accountability

  • EU GDPR Accountability.
The processor or handler of "owned data" is accountable for compliance. Well-defined data management services and organization. ODA - Environment Attributes: Ongoing study; ODA - Component Attributes: Ongoing study.

3.1.2.9. Openness

  • EU GDPR Lawfulness, fairness and transparency
Policies and procedures for data management are readily available at all times to "owners of data". Access to openly publicized policies and procedures for the management of data; Ecosystem business; Customers; Partners; Suppliers; Legal authorities. ODA - Environment Attributes: Ongoing study; ODA - Component Attributes: Ongoing study.

3.1.2.10. Transfer

  • EU GDPR Purpose Limitation
Limit transfer of "owner data" outside of the operating legal jurisdiction. Consent management; Contract fulfillment; Legal and lawful jurisdictions. ODA - Environment Attributes: Ongoing study; ODA - Component Attributes: Ongoing study.

3.1.2.11. Anonymity

  • EU GDPR Data minimization
  • EU GDPR Storage limitation.
Transform information that contains "owner data" so as to make identification impossible. Protect the "owner of data's" information; Anonymizers; Apply pseudo-anonymity (see the sketch below). ODA - Environment Attributes: Ongoing study; ODA - Component Attributes: Ongoing study.
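As one possible realization of pseudo-anonymity (an assumption for illustration, not a mandated technique), direct identifiers can be replaced with keyed hashes so that re-identification requires a secret held outside the dataset:

```python
import hashlib
import hmac

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (pseudonym).

    Only the holder of secret_key could re-link the pseudonym; destroying the
    key after transformation moves the data closer to full anonymity.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"msisdn": "+15551234567", "usage_gb": 42}          # hypothetical record
record["msisdn"] = pseudonymise(record["msisdn"], b"key-held-outside-the-dataset")
```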

3.2. ODA SecPriv Requirement Management

(future work)

3.3. Integrated SecPriv Metamodel

(future work)

4. SecPriv Ontology for ODA

This section answers the need for a common language for an ODA Security & Privacy ontology. It includes the basic concepts and their relationships, and describes the main ideas. With the creation of this cohesive ODA Security and Privacy ontology, information security in an ODA implementation can be communicated, designed, developed and shared efficiently, with one understanding and a standard language. The section also provides the hierarchy of the ontology and the information it records for specific concepts. The hierarchy and information objects can be used to categorize, for example, threats or countermeasures according to their security goal, asset or defense strategy by trust domain.

ODA, with its focus on digital ecosystems, digital business, digital services and digital operations, requires a security ontology for an all-digital target architecture that fulfills requirement management around the ODA realization lifecycle. The ontology includes a gamut of security and privacy architecture operationalization best practices, while capturing the relationships between the entries within the ontology. This ensures that information security professionals can make faster and better decisions regarding design and implementation needs to improve operations and transition. It helps to simplify the relationships between incidents, events and concepts and provides valuable insight.
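To make the hierarchy concrete, the fragment below encodes a slice of the concept/category/scheme structure from section 4.1.1 as a nested Python dictionary; the entries mirror the headings that follow, and the helper function is a hypothetical illustration of how the hierarchy could be queried when tagging threats or controls:

```python
# A fragment of the Security ontology hierarchy (concept -> category -> schemes),
# mirroring the headings in section 4.1.1.
security_ontology = {
    "Threat": {
        "Malware": ["Virus", "Worm", "Spyware", "Trojan", "Rootkits",
                    "Ransomware", "Adware", "Malvertising"],
        "Phishing": ["Email", "Spear", "Whaling", "Smishing and Vishing", "Angler"],
    },
    "Vulnerability": {
        "Command Injection": ["OS Command Injection", "SQL Command Injection"],
        "Weak Password": ["Short", "Bio-data", "Dictionary based"],
    },
    "Control": {
        "Technical controls": ["Asset controls", "Threat Boundaries",
                               "Maintenance Controls", "Contingency Plan"],
    },
}

def categories(concept: str) -> list:
    """List the categories recorded under a top-level concept."""
    return sorted(security_ontology.get(concept, {}))

print(categories("Threat"))  # ['Malware', 'Phishing']
```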

4.1.1. Security Ontology

Each of the concepts outlined here is defined and described within the domain and context of security. As a note, where the term host is used, it refers to a computer program, a computing device, or a computing service node.

Concept: Definition

4.1.1.1. Threat

A potential cause of an incident that may result in harm to information systems and an organization.
Categories Description Schemes
4.1.1.1.1. Malware

Malicious software: an unauthorized computing program designed to interfere with the regular functioning of a computing device.

Malware may cause damage to the host or to the user's reputation.

Malware is generally applicable to all threats to hosts' or users' "safety".

4.1.1.1.1.1. Virus - A virus is malware that is triggered by the activation of its host and "infects" other hosts or degrades digital assets' performance.
4.1.1.1.1.2. Worm - A worm is a self-replicating malware that propagates within a "network", mostly without end-user action.
4.1.1.1.1.3. Spyware - Spyware is malware that, without authorization, monitors and captures the activities of a target host.
4.1.1.1.1.4. Trojan - A trojan is a malicious host that masquerades as a legitimate host.
4.1.1.1.1.5. Rootkits (a.k.a. Hybrid Exotic forms) - Rootkits are malicious hosts that have the hybrid characteristics of multiple malware schemes, such as Trojan, Worm and/or Virus.
4.1.1.1.1.6. Ransomware - Ransomware is malware that takes a host's assets hostage.
4.1.1.1.1.7. Adware - A malware that exposes the host to unwanted advertising.
4.1.1.1.1.8. Malvertising - This is malware that uses legitimate Ads or Ad networks to covertly infect other hosts.
4.1.1.1.2. Botnets
An army of hosts orchestrating a network activity in concert.
4.1.1.1.2.1. Distributed denial of service - a network activity meant to disrupt network connectivity and services
4.1.1.1.2.2. Spamming or traffic monitoring - a network activity meant to sniff, hijack or create a pathway to a service, host or network
4.1.1.1.2.3. Key logging - an activity to retrieve keystrokes from a host
4.1.1.1.2.4. Mass identity theft - an activity by botnets to perform multi-host identity theft.
4.1.1.1.2.5. Pay per click abuse - an activity to bait a user of a host to click on an Ad
4.1.1.1.2.6. Botnet spread - an activity to spread botnets
4.1.1.1.2.7. Adware - an activity to attract users to an Ad.
4.1.1.1.3. Hacking
Finding and exploiting weaknesses in information systems.
4.1.1.1.3.1. Ethical hacking - an exploit meant to improve the security profile of a host or network.
4.1.1.1.3.2. Malicious hacking - an exploit meant to cause harm
4.1.1.1.4. Pharming
An act to manipulate traffic from a legitimate host in order to gain access to information.
4.1.1.1.4.1. Malware - Using malicious software to reroute traffic to illegitimate hosts/network
4.1.1.1.4.2. DNS Cache Poisoning - Poisoning temporary DNS records on a host or network to redirect traffic to illegitimate hosts/network
4.1.1.1.5. Phishing
Deceitful electronic communication meant to induce the disclosure of confidential information
4.1.1.1.5.1. Email - a deceitful electronic mail communication
4.1.1.1.5.2. Spear - a targeted deceitful electronic communication
4.1.1.1.5.3. Whaling - a deceptive electronic communication targeting privileged or high-profile users
4.1.1.1.5.4. Smishing and Vishing - socially engineered deceptive phishing by SMS and telephone
4.1.1.1.5.5. Angler - using information leveraged from readily available sources to socially engineer the disclosure of information.
4.1.1.1.6. Spam
An unsolicited message
4.1.1.1.6.1. Malspam - malware spread by spam
4.1.1.1.6.2. E-mail spam - spam spread by e-mail medium
4.1.1.1.6.3. SMS spam - spam spread by SMS medium
4.1.1.1.7. Spoofing
An act of disguising a communication from an unknown source as being from a known, trusted source.
4.1.1.1.7.1. ARP Spoofing - linking an unauthorized MAC address with a real and legitimate IP address of a target in order to intercept data intended for the target.
4.1.1.1.7.2. DNS Spoofing - an act to disguise DNS translation in order to reroute DNS translation to a different target
4.1.1.1.7.3. IP Address Spoofing - Replicating a target IP to re-use in communication
4.1.1.1.8. Eavesdropping
The unauthorized real-time drop-in to a communication.
4.1.1.1.8.1. Passive - an unauthorized real-time read-only/listen-only access to a communication
4.1.1.1.8.2. Active - an unauthorized real-time read/write or active participation in a communication
4.1.1.1.9. Backdoor
A method by which to bypass or get around normal security measures in a host or network.
4.1.1.1.9.1. Intentional - A backdoor deliberately implemented and disguised to bypass security measures.
4.1.1.1.9.2. Accidental - A backdoor that is unintentionally discovered due to ongoing use.
4.1.1.1.10. Hijacking
Exploitation of a valid computer session to gain unauthorized access to information or services in a host or network.
4.1.1.1.10.1. Active - taking over an ongoing user's session by surreptitiously obtaining the session identification and masquerading as the authorized user.
4.1.1.1.10.2. Passive - monitoring traffic on an ongoing user's session while masquerading as the authorized user.


Concept: Definition

4.1.1.2. Vulnerability

The state of an information system being exposed to the possibility of "harm" or "attack" either virtually or physically.
Categories Description Schemes
4.1.1.2.1. Handshake vulnerability
A vulnerability that allows an unauthorized actor to read encrypted network traffic.
4.1.1.2.1.1. TLS/SSL - The use of an interoperable handshake between services/clients that supports older SSL versions for compatibility with legacy systems (e.g. POODLE); a configuration sketch follows this list.
4.1.1.2.1.2. WPA2
4.1.1.2.1.3. TCP Split
4.1.1.2.1.4. Triple Handshake
4.1.1.2.1.5. KRACK
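One possible mitigation, sketched with Python's standard ssl module (an assumption for illustration; the guide does not mandate a particular stack), is to refuse handshakes below TLS 1.2 so that downgrade attacks such as POODLE cannot force a weaker protocol version:

```python
import ssl

# Build a client-side context that refuses legacy SSL / early-TLS handshakes,
# so a downgrade attack cannot force a weaker protocol version.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_2
```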
4.1.1.2.2. Encryption vulnerability
This is a vulnerability of a particular encryption algorithm as implemented in a software program.
4.1.1.2.2.1. Insecure Cryptographic Storage - Storage of cryptographic keys in a way that is devoid of security awareness.
4.1.1.2.2.2. Missing encryption keys - lack of correct encryption of sensitive or important data
4.1.1.2.2.3. Design Vulnerability - Application of cryptographic algorithms with accompanying weaknesses in security design, implementation and installation.
4.1.1.2.2.4. Brute force cracking - This is an encryption vulnerability exploited through repeated "trial and error" attempts.
4.1.1.2.2.5. Bad Keys - A poor-quality encryption key; such a key can easily be deciphered.
4.1.1.2.3. Command Injection

This is a vulnerability that can lead to an application passing unsafe user-supplied data to a system shell or interpreter; a mitigation sketch follows this list.

4.1.1.2.3.1. OS Command Injection
4.1.1.2.3.2. SQL Command Injection
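A minimal sketch of the standard mitigations, using parameterized queries and argument lists instead of string-built commands; the table and host names are hypothetical:

```python
import sqlite3
import subprocess

def find_customer(conn: sqlite3.Connection, name: str):
    # Unsafe alternative: f"SELECT * FROM customer WHERE name = '{name}'" would let
    # crafted input rewrite the statement (SQL command injection).
    # Safe: the driver binds the value as data, never as SQL text.
    return conn.execute("SELECT * FROM customer WHERE name = ?", (name,)).fetchall()

def ping(host: str) -> str:
    # Safe: arguments are passed as a list and no shell interprets the input,
    # so shell metacharacters in `host` cannot inject extra OS commands.
    result = subprocess.run(["ping", "-c", "1", host], capture_output=True, text=True)
    return result.stdout
```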
4.1.1.2.4. Buffer overflow (overrun)
This is a vulnerability that can lead to arbitrary code execution. Vulnerable software attempts to store more data in a buffer (memory store) than its capacity, causing the buffer to overflow.
4.1.1.2.4.1. Stack-based
4.1.1.2.4.2. Heap-based
4.1.1.2.4.3. Barriers
4.1.1.2.5. Missing Authentication
This is a vulnerability where software does not perform any authentication for functionality that should require a provable user identity.
4.1.1.2.5.1. Missing authentication
4.1.1.2.5.2. Malformed authentication
4.1.1.2.6. Missing Authorization
This is a vulnerability where software does not perform any authorization check for access to resources or before performing an action.
4.1.1.2.6.1. Missing authorization
4.1.1.2.6.2. Malformed authorization
4.1.1.2.7. Untrusted Redirection

Untrusted redirection vulnerabilities arise when an application incorporates user-controllable data into the target of a redirection in an unsafe way. An attacker can construct a URL within the application that causes a redirection to an arbitrary external domain; a validation sketch follows this list.

4.1.1.2.7.1. Open redirect - This is a vulnerability where a URL is redirected to an untrusted site.
4.1.1.2.7.2. Unvalidated redirect - This is a vulnerability where an application accepts untrusted input for a redirect target.
4.1.1.2.7.3. Unvalidated redirect and forward - This is a vulnerability where an application accepts untrusted input that causes the request to be redirected to a URL contained within that input.
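A hedged sketch of one common countermeasure, validating the redirect target against an allow-list of hosts before issuing the redirect; ALLOWED_HOSTS and the helper name are hypothetical:

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"shop.example.com", "account.example.com"}   # hypothetical allow-list

def safe_redirect_target(url: str, default: str = "/") -> str:
    """Return the requested URL only if it is relative or points at an allowed host."""
    parsed = urlparse(url)
    if not parsed.netloc:                                     # relative path: stays on-site
        return url
    if parsed.scheme in ("http", "https") and parsed.hostname in ALLOWED_HOSTS:
        return url
    return default                                            # otherwise fall back to a safe page

print(safe_redirect_target("https://evil.example.net/phish"))  # "/"
```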
4.1.1.2.8. Weak Password
A weak password is short, common, a system default, or a password that can be easily and/or rapidly guessed through a simple or brute-force attack. It may be short, use dictionary words or proper names, be based on the user's bio-data such as their name, or be a common variation of these; a checking sketch follows this list.
4.1.1.2.8.1. Short - This is a weak password with less than 12 characters in length
4.1.1.2.8.2. Bio-data - This is a weak password that is based directly on the person's bio-data, e.g. name or date of birth
4.1.1.2.8.3. Dictionary based - This is a weak password based on a dictionary word
4.1.1.2.8.4. Location based - This is a weak password based on location
4.1.1.2.8.5. Combination of short, bio-data and dictionary based - This is a weak password based on a combination of any of the above.
4.1.1.2.8.6. Credential re-use - A weak password inferred by re-use of an existing password.
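A simple sketch of the checks named above (length, dictionary words, bio-data); the word list is a stand-in for a real dictionary and the thresholds are illustrative:

```python
COMMON_WORDS = {"password", "welcome", "qwerty", "letmein"}   # stand-in for a dictionary list

def is_weak(password: str, bio_data: list) -> bool:
    """Flag passwords matching the weak-password schemes above (illustrative thresholds)."""
    lowered = password.lower()
    if len(password) < 12:                                            # Short
        return True
    if any(word in lowered for word in COMMON_WORDS):                 # Dictionary based
        return True
    if any(item and item.lower() in lowered for item in bio_data):    # Bio-data
        return True
    return False

print(is_weak("Summer-Welcome-2020", ["John", "1990-05-01"]))  # True (dictionary word)
```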
4.1.1.2.9. Malware
Refer to 4.1.1.1.1
4.1.1.2.10. Broken Algorithms
Well-known techniques may exist to defeat a non-standard cryptographic algorithm. -
4.1.1.2.11. Untrusted Inputs (Improper Input)
Receiving inputs without validating them, or incorrectly validating them, before processing data. -
4.1.1.2.12. Unrestricted upload
Allowing inputs or transfer without restricting the type or volume of input. -
4.1.1.2.13. Path traversal (Directory traversal)
Allowing arbitrary access to files or directories outside the intended scope via an application. -
4.1.1.2.14. Bots
An application programmed to do automated tasks.
4.1.1.2.14.1. Spider bot - a.k.a. crawlers; provide automated content indexing
4.1.1.2.14.2. Scraper bot - bots copying and saving content
4.1.1.2.14.3. Spam bots - bots gathering specific address content to use for spam
4.1.1.2.14.4. Messaging bots - bots generating messages. (e.g. Social media bots)
4.1.1.2.14.5. Download bots - bots used to copy software programs
4.1.1.2.14.6. Purchasing Bots - bots used to purchase or reserve sellable items. E.g. Tickets
4.1.1.2.15. Man-in-the-middle (MITM)
An actor intercepting, relaying and possibly altering traffic between two endpoints without their knowledge
4.1.1.2.15.1. Eavesdropping - The actor proxies messages between two endpoints.
4.1.1.2.15.2. Impersonating - Intercepting and changing messages between two end points to compromise the inputs and output.
Concept: Definition

4.1.1.3. Control

Safeguards and countermeasures put in place to minimize risks (physical, or virtual or both).
Categories Description Schemes
4.1.1.3.1. Management controls

The security controls that focus on the management of risk and the management of information system security.

4.1.1.3.1.1. Security Policies and Procedures
4.1.1.3.1.2. Security Audits (Accounting, Trails, Checklists, Supervision)
4.1.1.3.1.3. Security Standards
4.1.1.3.1.4. Awareness and Training
4.1.1.3.1.5. Risk Management
4.1.1.3.2. Operational controls

The security controls that are primarily implemented and executed by people (as opposed to systems). Controls that facilitate effectiveness of operations.

4.1.1.3.2.1. Access (Identification and Authentication)
4.1.1.3.2.2. Configuration controls
4.1.1.3.2.3. Monitoring controls
4.1.1.3.2.4. Incident response
4.1.1.3.2.5. Contingency Plan
4.1.1.3.2.6. Test
4.1.1.3.3. Technical controls
The security controls that are primarily implemented and executed by the system through the system's hardware, software, or firmware.
4.1.1.3.3.1. Asset controls (e.g. Media protection, Person
4.1.1.3.3.2. Threat Boundaries
4.1.1.3.3.3. Maintenance Controls
4.1.1.3.3.4. Contingency Plan


Concept: Definition

4.1.1.4. Attack

An unauthorized access with the intent of gaining computing advantage, stealing, damaging, or exposing data from an information system.
Categories Description Schemes
4.1.1.4.1. Active

An attack used to modify communication content.

4.1.1.4.1.1. Replay - capture and retransmit to create unauthorized traffic
4.1.1.4.1.2. Masquerade - pretend to be a legitimate user
4.1.1.4.1.3. Modification of content - modify, delay or reorder content to produce an unauthorized effect.
4.1.1.4.1.4. Preventing normal behavior - drive a host or network service into bad behavior
4.1.1.4.2. Passive


An attack used to obtain information from targeted host or networks without affecting them.
4.1.1.4.2.1. Release of communication content - An attack to reveal sensitive information
4.1.1.4.2.2. Traffic analysis - An attack that infers information from patterns of communication, even when the content is masked



Concept: Definition

4.1.1.5. Impact

The extent to which a threat can affect the security state of a host or network.
Categories Description Schemes
4.1.1.5.1. Financial loss

Damage to wealth

4.1.1.5.1.1. Organizational - as a result of fines, market trust etc.
4.1.1.5.1.2. Compensational - as a result of suffering a threat etc.
4.1.1.5.2. Operational loss
Damage to operating capability.
4.1.1.5.2.1. Production output loss
4.1.1.5.2.2. Service availability loss
4.1.1.5.2.3. Service data loss
4.1.1.5.3. Reputation loss
Damage to reputation and lowered opinion score
4.1.1.5.3.1. Lack of service - customers no longer wanting to do business with entity due to concerns of instability of service
4.1.1.5.3.2. Lack of employee - loss of data can lead to employee apathy and act as a barrier to acquiring new employees
4.1.1.5.3.3. Lack of customer information - failing to protect customer data leads to reputation loss
4.1.1.5.4. Property loss
Damage to valued assets, such as product designs and trade secrets.

-

4.1.2. Privacy Ontology

Each of the concepts here is defined within the context of privacy.

Concept Definition Generic Scheme Application Specific Scheme

4.1.2.1. Data Exposure

This is when data or information is accessed, or is available for access, without authorization.

  • 4.1.2.1.1.1. Data leak (inside-out)
  • 4.1.2.1.1.2. Data breach (outside-in)
  • 4.1.2.1.1.3. Improper disposal
  • Sensitive Data Exposure
  • Location Data Exposure
  • Unauthorized disclosure
  • Accidental exposure
  • Physical theft
  • Hacking

4.1.2.2. Data Theft

This is the act of stealing information from an unknowing victim with the intent of compromising privacy.
  • 4.1.2.2.1.1. Physical theft
  • 4.1.2.2.1.2. Hacking
  • Phishing
  • Social Engineering
  • Skimming

4.1.2.3. Information Hacking

This is the use of unauthorized technique(s) to gain access to protected information.
  • 4.1.2.3.1.1. Phishing
  • 4.1.2.3.1.2. Social engineering
  • 4.1.2.3.1.3. Skimming
  • Virus

4.1.2.4. Information Sharing

This is the act of distributing data or information to others.
  • 4.1.2.4.1.1. Information owner
  • 4.1.2.4.1.2. Information Classification
  • 4.1.2.4.1.3. Consent to share

4.1.2.5. System failure

This is the performance or state of a system which results in unpredictable output.
  • 4.1.2.5.1.1. Process failure
  • 4.1.2.5.1.2. Interaction failure
  • 4.1.2.5.1.3. Interface failure
  • 4.1.2.5.1.4. Deployment failure

4.1.3. Benefits of ODA SecPriv Ontology

This ODA SecPriv taxonomy will enable information/cyber security professionals implementing ODA in ecosystems, or across different organizations and geographical regions, to communicate faster and more efficiently using standard protocols. The ODA SecPriv ontology is instrumental in facilitating the description of critical vulnerabilities, risk exposures and remediation techniques. As techniques for attack and defense change, TM Forum members can update and exchange information using the SecPriv metamodel and ontology and reduce widespread impact. The ODA SecPriv ontology is also helpful in enhancing ODA component definition, ODA component capability modularization, functional decoupling and integration requirements (using the ODA Threat Boundary model), and in spotting weak points in the ODA Engagement Management function with humans and Things. By employing the ODA SecPriv ontology, members can deploy resources managing ODA implementations more efficiently while discovering new digital technology products and capabilities.

5. Approach to Realize SecPriv Metamodel of ODA Component and ODA Canvas

With reference to ODA Enterprise Risk Assessment (IG1187), section 4, figure 4-2 (ODA Functional Architecture Logical Threat Boundaries), all Function blocks in ODA have unique and common security and privacy design patterns that the metamodel helps to enumerate. This section identifies the common and unique SecPriv design considerations that can enable implementation and runtime needs.

fig 5.0 - SecPriv Scope Reference to ODA FA Trust Boundaries and Trust Domains

5.1. SecPriv, Integrity and Trust Requirements Modeling

The Information Systems Security Risk Management (ISSRM) model, along with the ontologies for Security, Privacy and Integrity, helps in evaluating the Security, Privacy and Trust concerns across trust boundaries. ISSRM was chosen to highlight the importance of Security-by-design and Privacy-by-design, with both concepts emphasizing the need to capture these concerns at the design and development stages.


For more detail on the activities in ISSRM, refer to IG1187 ODA Enterprise Risk Assessment for R20.0 (Risk Assessment Cookbook section). The decision points in the ISSRM modeling approach hint at the need to be systematic in reviewing the applicability of controls to identified vulnerabilities and threats.


5.1.1. Engagement Management and Party Management Functions (Trust Boundary 4)

5.1.1.1. Scenario 1: ODA-EM <-> ODA-PM SecPriv, Integrity and Trust Requirements Model


fig 5.1 ODA FA EM and PM Component and Environment


Common SecPriv Vulnerability Concerns

Vulnerabilities (by Environment, Component and Data)

Security

  Environment:
  • Unrestricted upload
  • Untrusted Redirection
  • Missing Authentication
  • Path Traversal
  • Command injection
  • BOT
  • Broken Algorithm

  Component:
  • Unrestricted upload
  • Untrusted Redirection
  • Missing Authentication
  • Path Traversal
  • Command injection
  • BOT
  • Untrusted Input
  • MITM Attack
  • Broken Algorithm

  Data:
  • Handshake
  • Encryption
  • Command injection
  • Buffer overflow
  • Missing Authentication
  • Missing Authorization
  • Broken Algorithm

Privacy

  Environment:
  • Information sharing
  • Information hacking
  • Data Theft
  • System failure
  • Data exposure (SoR)

  Component:
  • Information sharing
  • Information hacking
  • Data Theft
  • System failure
  • Data exposure (SoR)

  Data:
  • Information hacking
  • Data Theft
  Ongoing study

Integrity

  Environment: Ongoing study

  Component: Ongoing study

  Data:
  • Command injection
  • Broken Algorithm
  • Handshake
  Ongoing study

Trust

  Environment: Exploiting any of the vulnerabilities leads to Trust concerns. Ongoing study

  Component: Exploiting any of the vulnerabilities leads to Trust concerns. Ongoing study

  Data: Exploiting any of the vulnerabilities leads to Trust concerns. Ongoing study

 

Counter Measures

Controls (by Environment, Component and Data)

Security

  Environment:
  • Secure Network Communication (VPN, DNSSEC, HTTPS, S-MIME, SAML, X.509 etc.)
  • Access Control
  • Monitoring/Scanning
  • Backups
  Ongoing study

  Component:
  • Secure Network Communication (VPN, DNSSEC, HTTPS, S-MIME, SAML, X.509 etc.)
  • Standards (RSA, DSA, SHA, AES/DES, etc.)
  • Trust Management (Policy
  Ongoing study

  Data:
  • Access Control (DB, File etc.)
  • Message Digest, Checksum/Digital signing
  • DB Encryption
  Ongoing study

Privacy

  Environment:
  • Secure Network Communication (VPN, DNSSEC, HTTPS, S-MIME, SAML, X.509 etc.)
  • Access Control
  Ongoing study

  Component:
  • Secure Network Communication (VPN, DNSSEC, HTTPS, S-MIME, SAML, X.509 etc.)
  • Access Control
  Ongoing study

  Data:
  • Secure Network Communication (VPN, DNSSEC, HTTPS, S-MIME, SAML, X.509 etc.)
  • Access Control
  Ongoing study

Integrity

  Environment:
  • Trust Management (Role, Activity, Policy, Compliance, Credential assertions)
  Ongoing study

  Component:
  • Trust Management (Role, Activity, Policy, Compliance, Credential assertions)
  Ongoing study

  Data:
  • Trust Management (Role, Activity, Policy, Compliance, Credential assertions)
  Ongoing study

Trust

  Environment:
  • Trust Management (Role, Activity, Policy, Compliance, Credential assertions)
  • Key Management (Authentication)
  Ongoing study

  Component:
  • Trust Management (Role, Activity, Policy, Compliance, Credential assertions)
  • Key Management (Authentication)
  Ongoing study

  Data:
  • Trust Management (Policy, Compliance, Credential assertions)
  • Access Control
  Ongoing study

 

5.1.1.2. Scenario 2: ODA-EM <-> ODA-CCM SecPriv, Integrity and Trust Requirements Model

(Ongoing Study)

5.1.1.3. Scenario 3: ODA-EM <-> ODA-P SecPriv, Integrity and Trust Requirements Model

(Ongoing Study)

5.1.1.4. Scenario 4: ODA-EM <-> ODA-IM SecPriv, Integrity and Trust Requirements Model

(Ongoing Study)

5.1.2. Party Management and Core Commerce Management Function (Trust Boundary 5)

(Ongoing Study)

5.1.3. Party Management and Production Function - (Trust Boundary 6)

(Ongoing Study)

5.1.4. Party Management and Intelligence Management (Trust Boundary 7)

(Ongoing Study)

5.1.5. Core Commerce Management and Production (Trust Boundary 6)

(Ongoing Study)

5.1.6. Core Commerce Management and Intelligence Management (Trust Boundary 7)

(Ongoing Study)

5.1.7. Production and Intelligence Management Function- (Trust Boundary 7)

(Ongoing Study)

5.2. Unique ODA SecPriv, Integrity and Trust Requirement model

(Ongoing Study)

5.3. Common SecPriv, Integrity and Trust Requirement model

(Ongoing Study)

6. Conclusion

The conclusion is work in progress; it will be finalized based on the modeling scenarios. While traditional thinking has framed the implementation of ODA in a single environment, this is unlikely to be the case. Private, public, hybrid, and serverless implementation strategies point to the need to address security, privacy, integrity and trust with a comprehensive array of solutions. Initial insights emerging from this work suggest that each ODA Function block serves as a trust domain with a set of requirements that facilitate functional and non-functional management of security, privacy, integrity and trust.

The metamodels provided here for the Security, Privacy, Integrity and Trust contexts will be used to help identify specifications that support the definition of design and implementation objectives for ODA Components, ODA Environments and data handling within an ODA environment. As the ODA FA advances the Lego-like framework for adjacent Function blocks to interact directly with each other, effectively understanding how Components and Environments within these trust boundaries are designed and implemented may result in some common as well as unique specifications.

Join the ongoing study.

7.  Appendix A: ODA Security Management

Rephael Benhamo 

There is a need to incorporate cognitive functions in the ODA AN architecture, as well as control loop functions, to enable the following:

  • Ensure a more detailed analysis of the ODA environment's state and historical data
  • Test and enhance cyber situational awareness and improve intrusion detection testing
  • Learn complex attack patterns from historical data and generate generic rules that allow detection of variations of known attacks
  • Leverage several inference engines for the different cognitive ODA environment management functions and security analytics on top of the data structure
  • Retrieve data from log-created knowledge into the Knowledge source; for example, the Orient function obtains information about the historical behavior of a managed entity


Cognitive Function in the AN Architecture Control Loop

  • The Observe function refers to the cognitive monitor that performs intelligent probing. For instance, when the network is overloaded, the Observe function may decide to reduce the probing rate and instead perform regression for data prediction. The Observe function should also be able to determine what, when and where to monitor.
  • The Orient function detects and predicts changes in the domain environment (e.g., faults, policy violations, frauds, performance degradation and attacks).
  • The Decide function develops an intelligent automated engine that reacts to changes in the entity domain by selecting and composing changes, extracting and organizing knowledge (e.g., plans, tracks, execution traces), and eventually decides on a course of action.
  • The Act function schedules the generated execution and determines the course of action taken should the execution fail. The Act agent exploits past successful experiences to generate optimal execution policies and explores new actions should the execution fail. A minimal skeleton of this loop is sketched below.
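A minimal skeleton of this Observe-Orient-Decide-Act loop, with hypothetical placeholder objects (entity, knowledge, plan) standing in for real managed entities and inference engines; none of these names are TM Forum-defined APIs:

```python
# Hypothetical placeholders: `entity` exposes probe(), `knowledge` exposes
# classify(), and a plan exposes execute().

def observe(entity):
    """Cognitive monitor: decide what, when and where to probe, then collect data."""
    return {"metrics": entity.probe()}

def orient(observation, knowledge):
    """Detect or predict changes (faults, policy violations, frauds, attacks)."""
    return knowledge.classify(observation)

def decide(findings):
    """Select and compose a reaction plan from the organized knowledge."""
    return findings.get("recommended_plan")

def act(plan):
    """Schedule the execution; past experience would guide recovery if it fails."""
    if plan is not None:
        plan.execute()

def control_loop(entity, knowledge, iterations=1):
    for _ in range(iterations):
        act(decide(orient(observe(entity), knowledge)))
```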

              

7.1. ODA Security Management Model

An important aspect of the ODA security management model is ensuring that the management interface itself is secured. This is primarily achieved by limiting access to authorized users; however, that alone cannot guarantee safety against malicious authorized users. It involves defining and mathematically formulating the users, roles, and attributes within DTCLA such that the security digital thread can be monitored and traced. The set of users is denoted as U and can be represented as:

U = {u1, u2, ..., un}

Each user can have one or more roles as defined by the security framework. The set of roles is denoted as R and can be represented as:

R = {r1, r2, ..., rm}

Additionally, users may have various attributes associated with them that provide additional information about the user. The set of attributes is denoted as A and can be represented as:

A = {a1, a2, ..., ak}

These attributes can include information such as user demographics, organizational affiliations, or specific permissions granted to the user. Let us define the components, C, and the relationships in the control model:

C = {c1, c2, ..., cl}

C represents the components used for modeling the attributes, A. Each attribute can be derived from a combination of components:

a ⊆ C, for each a in A

Permissions are defined as

P = {p1, p2, ..., pq}

which are derived from a user's role and attributes. These permissions are limited by the user's attributes and specify the access to objects.

Objects are defined as

O = {o1, o2, ..., os}

and the allowed operations as

OP = {op1, op2, ..., opt}

and each object is associated with a component to link the relevant data to the component it belongs to:

obj: O → C

This leads to the definition of permissions as

P ⊆ O × OP

whereby it can be concluded that each permission is associated with an object. The n-m relation of users to roles is expressed by

UR ⊆ U × R

where U represents users and R represents roles. The relationship between users and attributes can be described as

UA ⊆ U × A

Mapping the role-attribute combination to a user is defined as a function R × A → 2^U.

Mapping the role-attribute combination to permissions is defined as a function R × A → 2^P.

These definitions help establish the relationships and mappings between users, roles, attributes, permissions, and objects as outlined in TMF TR284G and the references specified, TMF720 Digital Identity Management API, and TMF GB1008.
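A minimal, hypothetical encoding of these relations (illustrative names, not a TMF-specified API) shows how a role-based permission can be refined by a user attribute before access is granted:

```python
# Hypothetical, minimal encoding of the user/role/attribute/permission relations above.
user_roles = {"alice": {"csr"}}                                     # UR ⊆ U × R
user_attrs = {"alice": {"region": "EU", "clearance": "standard"}}   # UA ⊆ U × A
role_permissions = {"csr": {("customer_record", "read")}}           # P ⊆ O × OP, per role

def is_permitted(user: str, obj: str, operation: str, required_region: str) -> bool:
    """Grant access only if some role of the user carries the permission and the
    user's attributes satisfy the additional (attribute-based) constraint."""
    role_ok = any((obj, operation) in role_permissions.get(role, set())
                  for role in user_roles.get(user, set()))
    attr_ok = user_attrs.get(user, {}).get("region") == required_region
    return role_ok and attr_ok

print(is_permitted("alice", "customer_record", "read", required_region="EU"))  # True
```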

8. Appendix B: Terms & Abbreviations Used within this Document

8.1. Terminology

Term

Definition

Source

Security-by-design

Security by design is an approach to software and hardware development that seeks to make systems as free of vulnerabilities and impervious to attack as possible through such measures as continuous testing, authentication safeguards and adherence to best programming practices.

An emphasis on building security into products counters the all-too-common tendency for security to be an afterthought in development.

Tech Target, https://whatis.techtarget.com/definition/security-by-design

Privacy-by-design Privacy by Design is a framework based on proactively embedding privacy into the design and operation of IT systems, networked infrastructure, and business practices.

Deloitte,

https://www2.deloitte.com/content/dam/Deloitte/ca/Documents/risk/ca-en-ers-privacy-by-design-brochure.PDF

ODA Component Refer to IG1171 ODA Component Definition R19.0.0 TM Forum
ODA Environment Refer to IG1171 ODA Component Definition R19.0.0 TM Forum
Confidentiality The process of and obligation to keep a transaction, documents, etc., private and secret, i.e., confidential; the right to withhold information, e.g. medical information, from others. Oxford University Press
Integrity The quality of being honest and having strong moral principles Oxford University Press
Accountability The fact of being responsible for your decisions or actions and expected to explain them when you are asked. Oxford University Press
Trust The belief that somebody/something is good, sincere, honest, etc. and will not try to harm or trick you. Oxford University Press
Data Quality Data quality is when Data is fit for the purposes that data consumers want to apply it to.

DAMA, The Data Management Community

https://www.dama.org/cpages/home

Transparency The quality of something, such as a situation or an argument, that makes it easy to understand Oxford University Press
Finality The quality of being final and impossible to change Oxford University Press
Trust Boundary Refer to IG1187 ODA Enterprise Risk Assessment for R20.0 TM Forum
Trust Domain Trust Domains describe a perimeter where data or activities maintain one level of "trust". It establishes a logical boundary within which a function trusts all sub-functions (including data).
X.509 An X.509 certificate is a digital certificate that uses the widely accepted international X.509 public key infrastructure (PKI) standard to verify that a public key belongs to the user, computer or service identity contained within the certificate.

Tech Target



8.2. Abbreviations & Acronyms

Abbreviation/Acronym

Abbreviation/Acronym Spelled Out

Definition

Source

SecPriv Security and Privacy

NIST

National Institute of Standards and Technology

Go to: https://www.nist.gov/ U.S Department of Commerce
W3C

World Wide Web Consortium

Go to: https://www.w3.org/


CNCF

Cloud Native Computing Foundation

Go to: https://www.cncf.io/


ROTI Root of Trust Installation A set of systems and mechanisms that enable verification that a resource has been handled (developed, installed, instantiated etc.) correctly. It binds the state of a "system" to its owner/origins.  Detailed Reference: https://bit.ly/36dRHue
ISSRM Information Systems Security Risk Management model

DNSSEC Domain Name System Security Extensions An extension to Domain Name System that provides a way to authenticate DNS response data. Internet Society: https://www.internetsociety.org/deploy360/ dnssec/basics/
HTTPS Hypertext Transfer Protocol Secure HTTPS is a secure version of the HTTP protocol that uses the Secure Socket Layer protocol for encryption and authentication.  SSL https://www.ssl.com/faqs/what-is-https/
S-MIME Secure Multipurpose Internet Mail Extensions A standard for public key encryption and signing of MIME data. IETF RFC 3369 3370 3850 3851
SAML Security Assertion Markup Language Security Assertion Markup Language (SAML, pronounced SAM-el) is an open standard for exchanging authentication and authorization data between parties, in particular, between an identity provider and a service provider. Reference: http://docs.oasis-open.org/security/saml/Post2.0/sstc-saml-tech-overview-2.0.html
RSA Rivest–Shamir–Adleman

RSA (Rivest–Shamir–Adleman) is one of the first public-key cryptosystems and is widely used for secure data transmission.

RSA is an asymmetric encryption algorithm.

Reference: RSA (cryptosystem), Wikipedia

https://en.wikipedia.org/wiki/RSA_(cryptosystem)

SHA Secure Hash Algorithms The Secure Hash Algorithms are a family of cryptographic hash functions published by the National Institute of Standards and Technology as a U.S. Federal Information Processing Standard, including: SHA-0: A retronym applied to the original version of the 160-bit hash function published in 1993 under the name "SHA".

Reference: Secure Hash Algorithms Wikipedia

https://en.wikipedia.org/wiki/Secure_Hash_Algorithms

AES/DES Advanced Encryption Standard The Advanced Encryption Standard, also known by its original name Rijndael, is a specification for the encryption of electronic data established by the U.S. National Institute of Standards and Technology in 2001. 

Reference: Advanced Encryption Standard Wikipedia

https://en.wikipedia.org/wiki/Advanced_Encryption_Standard

BOT roBOT An autonomous program on a network (especially the Internet) that can interact with computer systems or users, especially one designed to respond or behave like a player in an adventure game Oxford Languages (Google)

9. Appendix C: References

Reference

Description

Source

Brief Use Summary










10. Administrative Appendix

This Appendix provides additional background material about the TM Forum and this document. In general, sections may be included or omitted as desired; however, a Document History must always be included.

10.1. Document History

10.1.1. Version History


Version Number

Date Modified

Modified by:

Description of changes

0.8 26-Sep-2020 Emmanuel A. Otchere Ongoing team review updates
0.9 28-Sep-2020 Emmanuel A. Otchere Added Appendix for Abbreviations and Definitions of Key Terms
1.0.0 02-Oct-2020 Alan Pope Final edits prior to publication
1.0.0 02-Oct-2020 Rephael Benhamo Added section 7.1 ODA Security Management Model

10.1.2. Release History


Release Status Date Modified Modified by: Description of changes
Pre-production 02-Oct-2020
Alan Pope
Initial release



10.2. Company Contact Details

Company

Team Member

Title

Email

Huawei Technologies Co. Ltd Emmanuel A. Otchere

Oracle Alexander Rockel



10.3. Acknowledgments

This document was prepared by the members of the TM Forum  team:

  • Brian Burton, Vodafone, Contributor
  • Abinash Vishwakarma, Netcracker, Participant
  • Konstantin Petrosov, T-systems.com, Participant


©  TM Forum 2020. All Rights Reserved.