Data Models and Interface Specification of the Framework
Editors: Ahmed Bouabdallah, Institut Mines Telecom (Telecom Bretagne)
Yudani Riobò, Quobis
Deliverable nature: (R) Document, Report
Dissemination level: (Confidentiality)
Public (PU)
Contractual delivery date:
31/08/2015
Actual delivery date: 6/09/2015
Suggested readers: Service providers’ designers and developers
Version: Release 1.0
Total number of pages: 154
Keywords: Tree data structure, REST paradigm, Interfaces, UML, JSON
Abstract
This document describes in detail the data models to be used to describe Hyperties and the interfaces for the reTHINK architecture defined in deliverable D2.1 Architecture Definition. The Hyperty data model will be used to describe Hyperties capabilities and it will be used by the Governance Directory Service to support publication and discovery of Hyperties. The defined interfaces will be implemented by Hyperties, Management and Messaging Nodes.
This document contains material, which is the copyright of certain reTHINK consortium parties, and may not be reproduced or copied without permission. The information contained in this document is the proprietary confidential information of the reTHINK consortium and may not be disclosed except in accordance with the grant agreement and consortium agreement.
The commercial use of any information contained in this document may require a license from the proprietor of that information. Neither the reTHINK consortium as a whole, nor a certain part of the reTHINK consortium, warrant that the information contained in this document is capable of use, nor that use of the information is free from risk, accepting no liability for loss or damage suffered by any person using this information.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 645342. This publication reflects only the author’s view and the European Commission is not responsible for any use that may be made of the information it contains.
Impressum
Full project title: Trustful hyper-linked entities in dynamic networks
Short project title: reTHINK
Number and title of work-package: WP2 – Overall Architecture
Number and title of tasks: T2.3 Data Models, T2.4 Interfaces Design
Document title: Data Models and Interface Specification of the Framework
Editors: Ahmed Bouabdallah, IMT - TB; Yudani Riobò, Quobis
Work-package leader: Eric Paillet, Orange Labs
Copyright notice
© 2015 Participants in project reTHINK
This work is licensed under the Creative Commons “Attribution-NonCommercial-NoDerivatives 4.0 International” License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0/
The reTHINK project describes a framework that provides solutions to manage real time communication capabilities, both human to human and machine to machine. This framework will integrate and react to contextual information in a secure and privacy-respectful way, while it intends to meet the requirements derived from the Use Cases described in deliverable D1.1. Deliverable D2.1 describes the general architecture of the reTHINK framework. The reTHINK architecture combines web technologies and web attitudes with a trusted worldwide cooperative service delivery model. It allows for different service provider ecosystems to develop communication applications over the Internet, rather than over privately managed core networks.
The data model must encompass all information underlying the reTHINK architecture; its goal is to provide a consistent and global description of the web-based communication capabilities called Hyperties, together with their associated framework. The main part of this data model concerns the Hyperties themselves and the structure of the information about them that has to be locally maintained by the various entities involved in some part of the lifecycle of any Hyperty. The stages a Hyperty passes through involve three specific roles: the service provider, the service developer and the consumer. This document specifies the data models to be used for the Identity Management, Registry, User Account management, Catalogue and Messaging Services functionalities.
This document also gathers and defines the interfaces that will be used by Hyperties to interact with the different elements of the reTHINK architecture. The interfaces are defined from a logical point of view, and the scope of this document is to provide a high-level specification of them. Additionally, the document covers initial security considerations. Although the reTHINK project aims to leave the choice of protocol open to the implementer, this document includes examples of real protocols that can be used to implement the interfaces at the application level of the TCP/IP stack.
The principle of Hypermedia as the Engine of Application State is applied in the design of the reTHINK architecture, as the application and the Hyperties dealing with the Catalogue, the Registry, and the rest of the services will have to dynamically discover properties that change over time. This principle is a constraint of REST application architectures that distinguishes them from most other network application architectures: a client interacts with a network application entirely through hypermedia provided dynamically by application servers. As a result, the operations that can be performed on a data object attribute change depending on the status of that resource, and the available operations are discovered from the resource representation returned by the server.
Finally the document provides the foundations of the governance framework which requires the definition of the data model underlying the Hyperty administrative domain. Its articulation with the user identity data model establishes the link with the structures described previously.
The term “Hyperty” stands for “Hyperlink entity”. A Hyperty is a reusable script that implements service logic and is deployable in a runtime environment, on an end-user device that can interwork with a web browser or in a native application. Hyperties may also reside on a networked server that acts as a user agent or as a server-based endpoint.
Hyperty instance: A Hyperty instance represents a user in the reTHINK framework, and therefore determines the user's presence. Hyperties can then be instantiated on end-user equipment, connected objects (IoT) and servers.
Identity Provider: An Identity Provider is a service provider that verifies users' identities.
This document defines in detail the data models to be used to describe Hyperties. The Hyperty data model will be used to describe Hyperties’ capabilities and it will be used by the Governance Directory Service to support publication and discovery of Hyperties. It also provides a detailed design of reTHINK interfaces (including messages) to be implemented by main sub-systems notably Hyperties, Management, and Messaging Nodes.
The goal of this work is to define the data models covering all information underlying the reTHINK architecture, and the interfaces for all the systems involved in the architecture. The scope of this document is to justify the selection of the notation used to express the data model, to specify the data models, and to define the interfaces. The work is based on the architecture provided by deliverable D2.1 of the project.
1.2 About this document
This document is the deliverable D2.2 of the project, which is one of three deliverables of the architecture work:
Table 1 : Framework Architecture Deliverables
Deliverable Name Milestone Deliverable Description
D2.1 Framework architecture definition
M7 Defines technical requirements, describes the main concepts and the overall architecture of reTHINK framework.
D2.2 Data Models and Interface Specification of the framework
M8 Details the data models used to describe Hyperties and Governance policies and the interfaces of the core framework.
D2.3 Final design of the Architecture
M21
The initial architecture will be updated according to adjustments made for implementation purposes and also new requirements coming from the feedback provided by the trials.
The document summarizes the work of two tasks (highlighted) out of four:
Table 2 : Framework Architecture Tasks
Task Type Partners Description
Task 2.1 Technical Requirements Derivation
ORANGE (lead), DTAG, PTIN, Fraunhofer, Apizee, IMT, INESC, TUB
Derive technical requirements for the reTHINK architecture, with input from WP1 Use cases and Business models. Each requirement will be classified and allocated to the project iteration where it will be addressed.
Task 2.2 Reference Architecture Design
ORANGE, DTAG, PTIN, Fraunhofer, Apizee, IMT (lead), INESC, QUOBIS, TUB
Design and maintain the reTHINK reference architecture and associated concepts, describing the reTHINK main sub-systems with their main dependencies and interfaces. Identify the main entity roles, in particular the Hyperty and Manager roles. Demonstrate the power of the architecture and its possibilities beyond the existing state of the art.
Task 2.3 Data Models
ORANGE, DTAG, PTIN, Fraunhofer, Apizee, IMT (lead), INESC, TUB
Produce detailed data models that describe Hyperties and Governance policies. These will be used to describe Hyperty capabilities and to support the Governance Directory Service (to publish and discover Hyperties). The Governance policy data model may introduce a new policy descriptor language for QoS and security rules.
Task 2.4 Interfaces Design
This task will provide a detailed design of reTHINK interfaces (including messages) to be implemented by the main sub-systems, notably Hyperties, Management and Messaging Nodes.
The previous deliverable D2.1 describes the general architecture of the reTHINK framework. The reTHINK architecture combines web technologies and web attitudes with a trusted worldwide cooperative service delivery model. It allows for different service provider ecosystems to develop communication applications over the Internet, rather than over privately managed core networks.
The architecture fulfils the following architectural goals:
Transferring session control to the endpoints, while using trusted, well maintained CSP software.
Enabling internet based communication to be enhanced with greater QoS and security.
Allowing flexible means of interworking with minimal CSP involvement, to gain easy implementation and fast universal deployment.
Providing greater service mobility and user choice via independent identity management, and extending identity services with Social Networks style enhancements
2.1 Actors and components
The reTHINK framework identifies 5 main actors:
The User, which can be an individual, organization or organization unit that consumes a service delivered by a Service Provider, e.g., Citizens, Tourists, Event Agency.
The Identity Provider (IdP), which creates, maintains, and manages identity information for service consumers, provides consumer authentication, and allows other service providers to perform trusted verification and to discover the destination addresses of called users.
The Communication Service Provider (CSP), which manages the delivery of basic H2H communication service such as voice, video, and textual communication to customers as well as M2M services.
The Network Service Provider (NSP), which sells bandwidth or network access by providing direct internet backbone access. In the reTHINK framework, they also provide Specialized Network Services, like ensuring the appropriate QoS and Security Level.
The Application Service Provider (ASP), which represents an organization that manages the delivery of a service to a service consumer.
The relationship between the different actors and their respective roles is described by the following diagram:
Figure 1 : Actors relationships in the reTHINK framework
2.2 Architecture Overview
To support the communication between entities, the reTHINK framework defines the concept of “Hyperties” - standing for “Hyperlink entities”.
A Hyperty is a reusable script that implements service logic and is deployable in a runtime environment, on an end-user device that can interwork with a web browser or in a native application. Hyperties may also reside on a networked server that acts as a user agent or as a server-based endpoint.
A Hyperty instance represents a user in the reTHINK framework, and therefore determines its presence. Hyperties can be instantiated on end-user equipment, connected objects (IoT) and servers.
A User registers (enrols or subscribes) with a Service Provider in order to use it; this is done once. When the User logs in to the service, a service instance (Hyperty instance) is created and registered in the Registry for as long as the User stays logged in. When the User logs out, the Hyperty is de-registered and destroyed. This Hyperty registration is performed each time the service is used.
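The registration lifecycle described above can be sketched in TypeScript. The Registry class and the login/logout helpers below are illustrative assumptions for this sketch, not the reTHINK runtime API:

```typescript
// Hypothetical sketch of the Hyperty registration lifecycle: log in creates
// and registers a Hyperty instance, log out de-registers and destroys it.

interface HypertyInstance {
  id: string;
  user: string;
}

class Registry {
  private instances = new Map<string, HypertyInstance>();

  register(instance: HypertyInstance): void {
    this.instances.set(instance.id, instance);
  }

  deregister(id: string): void {
    this.instances.delete(id);
  }

  isRegistered(id: string): boolean {
    return this.instances.has(id);
  }
}

// Logging in creates a Hyperty instance and registers it in the Registry ...
function login(registry: Registry, user: string): HypertyInstance {
  const instance = { id: `${user}-hyperty-1`, user };
  registry.register(instance);
  return instance;
}

// ... and logging out de-registers the instance again.
function logout(registry: Registry, instance: HypertyInstance): void {
  registry.deregister(instance.id);
}
```

The instance's presence in the Registry thus mirrors the user being logged in, which is why a Hyperty instance determines the user's presence.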
The identity of an End-User is verified by the Identity Management service (IdP), which also provides End-User authentication, authorisation, and access to End-User profile information. Users may utilise an independent identity provider and provide their security tokens to the CSP, but the CSP has to verify the user further, to determine whether, and at what level, service delivery is authorised.
The following picture summarises all the interactions between the components of the reTHINK framework and how Hyperties are produced, catalogued, discovered, downloaded and registered:
Users uploading personal details to IdP and enquiring on other users’ destinations URL address
Discovery of appropriate Hyperty service type via the Catalogue of Hyperties
User/device logging on to the CSP service and registering the Hyperty instance on the Registry
Downloading the Hyperty software to the devices for endpoint-based session control
Messaging Services provide real-time, message-oriented communication functionalities used by user services to communicate (Message Routing). They should support different communication patterns, including publish/subscribe. The main functional components of the Messaging Service are:
routing
communication set-up
access control to resources
session management
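The publish/subscribe pattern mentioned above can be sketched as follows. The class and method names are illustrative assumptions, not the reTHINK Messaging Node API:

```typescript
// Minimal sketch of publish/subscribe message routing: a Messaging Service
// keeps track of subscribers per topic and routes published messages to them.

type Handler = (msg: string) => void;

class MessagingService {
  private subscribers = new Map<string, Handler[]>();

  subscribe(topic: string, handler: Handler): void {
    const handlers = this.subscribers.get(topic) || [];
    handlers.push(handler);
    this.subscribers.set(topic, handlers);
  }

  // Message routing: deliver the message to every subscriber of the topic.
  publish(topic: string, msg: string): number {
    const handlers = this.subscribers.get(topic) || [];
    handlers.forEach(h => h(msg));
    return handlers.length; // number of deliveries performed
  }
}
```

A real Messaging Node would additionally handle communication set-up, access control to resources and session management, which this sketch omits.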
Governance provides the management of Hyperties life-cycle, accounts for users (including devices and “things”) with user preferences, and the management of business partnership for both network providers and application providers.
2.3 Communication Management
Hyperty instances can communicate with each other in two ways:
Message communication where asynchronous messages are exchanged among Hyperty instances.
Stream Communication where media streams are established among Hyperty instances to support audio, video and data communication among Hyperty instances. In principle, this is supported by WebRTC standards.
Different communication types can be identified:
communication with messaging services e.g. via web sockets
communication with rest API as a client and/or a server
"pure" P2P communication on top of WebRTC Data Channels
One of the goals of the reTHINK framework is to enable seamless communications between users, by achieving interoperability between communication services, without waiting for new standards.
Hyperties leverage the ProtoFly concept to interoperate with each other without the need to standardise protocols, relying instead on more flexible standardised runtime APIs.
The Hyperty interoperability mechanism is based on an approach that combines the standardisation of a common Data Object Model, on which a set of generic operations (such as CRUD operations) can be applied, with the capability to extend such standard Data Objects with non-standardised resources, keeping the model open and allowing richer interoperability when needed.
Hyperties are described with a descriptor that follows a data resource tree schema (the Hyperty schema); generic CRUD operations (Hyperty Operations) are applied on each resource and its associated attributes.
Hyperty schemas are provided by the Catalogue functionality or by the Hyperty instance itself, avoiding the need to always discover instances through the backend and enabling direct discovery between runtime devices.
Hyperty schemas will have a minimum set of data to enable ad-hoc interoperation between different Hyperties. Such a model can be extended with extra resources without breaking interoperability with fully standardised Hyperty descriptors.
A Hyperty compliance check should be performed during the Discovery process by matching the Hyperty descriptors. The compliance check is independent of the Hyperty type, since there is a valid, standard schema to describe a descriptor. Interoperability is thus determined by the match between Hyperty descriptors, not by the Hyperty type.
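This descriptor-based compliance check can be sketched in a few lines. The descriptor shape and field names below are assumptions for illustration; the actual Hyperty schema is specified later in the document:

```typescript
// Sketch of a compliance check: two Hyperties interoperate if their
// descriptors both expose the required standard resources, regardless of
// their Hyperty type.

interface HypertyDescriptor {
  type: string;
  resources: string[]; // standard resources plus optional extensions
}

// Match on descriptors, not on type: every resource listed in `required`
// must appear in both descriptors.
function compliant(a: HypertyDescriptor, b: HypertyDescriptor,
                   required: string[]): boolean {
  return required.every(r =>
    a.resources.includes(r) && b.resources.includes(r));
}
```

Note that extra, non-standard resources in either descriptor do not affect the result, which is exactly the extensibility property described above.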
The Hyperty interoperability mechanism guarantees both simplicity and extensibility: non-standardised data objects, or pointers to data schemas defined elsewhere, can be added without compromising compliance with the standard Hyperty schema.
The main interfaces and functionalities used to support interoperability are shown in the next figure:
Figure 3 : Main Interfaces and Functionalities used to Support Hyperty Interoperability
2.5 Protocol on the fly - ProtoFly
Extending the Signalling On-the-fly (SigOfly) concept introduced in the WONDER project [39], the ProtoFly concept supports code on demand in Web runtimes (e.g. JavaScript) by dynamically selecting, loading, and instantiating the most appropriate protocol at runtime. This enables protocols to be selected at runtime rather than at design time, enabling protocol interoperability among distributed services, promoting loosely coupled service architectures, saving the resources otherwise spent on protocol gateways in the services middleware, and minimising standardisation efforts.
Protofly involves the following concepts:
Protocol Stub: The implementation of the protocol stack e.g. JavaScript file that can be dynamically loaded and used to support interoperability between distributed services.
Messaging Services: Services that are provided by the service provider’s domain server to route messages between distributed Hyperties and other network-side services.
Communication Hyperty Messaging Server: A server within the communication Hyperties that supports the exchange of messages between the distributed services, as well as messaging towards the CSP platform.
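The core ProtoFly idea, selecting and loading the protocol stub at runtime instead of fixing it at design time, can be sketched as below. The stub catalogue, protocol names and interface are illustrative assumptions, not a specified reTHINK API:

```typescript
// Sketch of runtime protocol stub selection (ProtoFly). In a browser the
// stub would be a dynamically loaded JavaScript file fetched from the
// Catalogue; here a lookup table stands in for that download step.

interface ProtocolStub {
  connect(): void;
  send(msg: string): string; // returns a wire-format string for illustration
}

const stubCatalogue: { [protocol: string]: () => ProtocolStub } = {
  "matrix-like": () => ({
    connect: () => {},
    send: (msg) => `matrix:${msg}`,
  }),
  "sip-like": () => ({
    connect: () => {},
    send: (msg) => `sip:${msg}`,
  }),
};

// Select, instantiate and connect the most appropriate stub at runtime.
function loadStub(protocol: string): ProtocolStub {
  const factory = stubCatalogue[protocol];
  if (!factory) throw new Error(`no stub for ${protocol}`);
  const stub = factory();
  stub.connect();
  return stub;
}
```

The calling Hyperty only depends on the `ProtocolStub` interface, so no protocol gateway is needed between services that load different stubs.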
The data model must encompass all information underlying the reTHINK architecture; its goal is to provide a consistent and global description of the web-based communication capabilities called Hyperties, together with their associated framework. The main part of this data model therefore concerns the Hyperties themselves, together with the structure of the information about them that has to be locally maintained by the various entities involved in some part of the lifecycle of any Hyperty.
The various stages a Hyperty passes through involve three specific roles: the service provider, the service developer and the consumer. The Hyperty life cycle makes it possible to organize their activities globally and precisely [1]. The figure below gives an abstract view of the Hyperty life cycle, which we can use to roughly delimit the perimeter of the data model.
Figure 5 : Data model perimeter
3.2 A data model reference paradigm: ETSI M2M and oneM2M Data Representation
Many approaches could be used to express the data model. As we appraise in this section, however, ETSI M2M and oneM2M provide a recent and promising framework to investigate. The ETSI M2M xSCL (Service Capability Layer) handles resources associated with the system's entities following the RESTful paradigm [18, 19]. Applications at different nodes rely on the SCLs to exchange data with each other, monitor other applications, or control devices. ETSI specifications have focused on the hierarchical representation of M2M resources, as well as on standard APIs for accessing them via the CRUD (Create, Retrieve, Update and Delete) verbs. The Reachability, Addressing, and Repository (xRAR) capability is the cornerstone of ETSI M2M platforms, responsible for data storage and exchange between applications and SCLs. This capability also includes the subscription/notification mechanism, which enables applications to receive event notifications from gateways, and it supports information searching based on defined criteria.
Similarly, OneM2M supports the Hypermedia as the Engine of Application State (HATEOAS) with REST to enhance service discoverability and extensibility in the future [20]. oneM2M specified the Application and Service Layer Management (ASM) function for handling software configuration, execution, troubleshooting, and upgrading at Application Entities (AEs) and CSEs by utilizing the device management (DMG) functions.
Additionally, oneM2M specified the Data Management and Repository (DMR) CSF for data storage and mediation functions, the Discovery (DIS) CSF for information searching, and the Group Management (GMG) CSF to enable the M2M System to perform bulk operations on multiple devices, applications or resources that are part of a group.
All entities in the oneM2M System, such as AEs, CSEs, data, etc. are represented as resources that can be accessible through REST APIs. oneM2M identifies three categories of resources:
1. Normal resources: Include the complete set of representations of data, which constitutes the base of the information to be managed.
2. Virtual resources: Used to trigger processing and/or retrieve results, but they do not have a permanent representation in a CSE.
3. Announced resources: A resource at a remote CSE that is linked to the original resource that has been announced; it maintains some of the characteristics of the original resource.
A resource is identified by a unique identifier (URI) and carries a set of defined attributes, which can be of two types:
1. Attribute: meta-data that provides properties associated with a resource representation.
2. Sub-Resource: A resource that has a containment relationship with the addressed (parent) resource. The parent resource representation contains references to the children. The lifetime of the sub-resource is linked to the parent's resource lifetime.
Attributes can be:
RW: read/write by client
RO: Read-Only by client, set by the server
WO: Write-once, can be provided at creation, but cannot be changed anymore
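These three access modes map naturally onto TypeScript's type system, which the document later adopts: RO and WO attributes become `readonly` fields set by the server or at creation time. This mapping is an illustration, not part of the ETSI or oneM2M specifications:

```typescript
// Illustrative mapping of oneM2M-style attribute access modes to TypeScript.

interface Resource {
  label: string;                 // RW: client may read and write
  readonly creationTime: string; // RO: set by the server, read-only to clients
  readonly resourceID: string;   // WO: provided once at creation, then fixed
}

// Only the server-side factory can populate the RO and WO attributes.
function createResource(id: string, label: string): Resource {
  return {
    label,
    creationTime: new Date().toISOString(),
    resourceID: id,
  };
}
```

After creation, the compiler rejects any client-side write to `creationTime` or `resourceID`, while `label` stays writable.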
The resource trees in the ETSI and oneM2M specifications have a lot in common; however, a number of differences can be listed:
1. oneM2M has defined a set of new resources to handle the additional capabilities and functionalities specified by the Common Service Functions (CSFs), such as the locationPolicy, statsCollect and request resources.
2. Resource URLs in oneM2M are shorter, since the usage of collection resources (i.e., the applications, containers and contentInstances resources) is omitted.
3. In addition to the parent-child relation between resources, oneM2M specifies linking resources in a non-hierarchical way.
4. Additional attributes are defined for the application, container and contentInstance resources; for example, the ontologyRef attribute is used to link a resource to a predefined ontology.
In summary, the ETSI and oneM2M data models are suitable candidates to represent reTHINK data and resources in a well-known, standardised way.
We specify our data model at two levels. A high-level view provides the foundations of an abstract specification of the data model; the well-recognized, consensual semi-formal notation UML will be used to develop this level. We also provide a low-level expression of our data model, which ideally should be as close as possible to the implementation. There are, however, several interesting notations that can be used to express this low level. We identify several potential candidates and compare them according to a set of criteria fitted to the project objectives. A last but not least requirement concerns the mapping between the two descriptions, which has to be mechanically ensured. At this stage of the project, a translating tool has been developed and is currently under test. The soundness and completeness of the mapping implemented by this tool will be investigated in the next phase of the project.
4 Selecting a low level notation to express the data model
We provide below a state of the art of the potential candidate notations for expressing the low-level data model. We define and apply a set of criteria to appraise these candidates, and finally identify the notation that best satisfies the criteria.
4.1 JSON schema
4.1.1 Description
JSON Schema is a declarative language for defining structures and constraints. It is similar to XSD, but clearer and both human- and machine-readable. Here is a basic example of a JSON Schema:
{
    "$schema": "http://json-schema.org/draft-04/schema#",
    "title": "Product",
    "description": "A product from the catalog",
    "type": "object",
    "properties": {
        "id": { "type": "integer" },
        "name": { "type": "string" },
        "price": { "type": "number" }
    }
}
The above schema has five properties called keywords. The title and description keywords are descriptive only, in that they do not add constraints to the data being validated; they state the intent of the schema (that is, this schema describes a product). A possible instance for this schema is given as follows:
{
"id": 1,
"name": "A green door",
"price": 12.50,
"tags": ["home", "green"],
"warehouseLocation": {
"latitude": 54.4,
"longitude": -32.7
}
}
The type keyword is fundamental to JSON Schema. It specifies the data type for a schema. At its core, JSON Schema defines the following basic types: string, number, object, array, boolean, null
The $schema keyword is used to declare that a JSON fragment is actually a piece of JSON Schema. It also declares which version of the JSON Schema standard that the schema was written against.
The $ref keyword is used as a type value, referring to a complex type defined in a $schema piece. $ref can also be used with a format called JSON Pointer. For example, let us start with a schema that defines an address. Since we are going to reuse this schema, it is customary (but not required) to put it in the parent schema under a key called definitions:
{
"definitions": {
"address": {
"type": "object",
"properties": {
"street_address": { "type": "string" },
"city": { "type": "string" },
"state": { "type": "string" }
},
"required": ["street_address", "city", "state"]
}
}
}
and then use it like: { "$ref": "#/definitions/address" }
$ref can also be a relative or absolute URI, so if you prefer to include your definitions in separate files, you can also do that. For example: { "$ref": "definitions.json#/address" }
4.1.2 Tools
reTHINK considers Docson – documentation for your JSON types – to automatically generate well-formatted, human-readable documentation.
4.2 TypeScript
TypeScript is a typed superset of JavaScript that compiles to plain JavaScript. With TypeScript you can use existing JavaScript code and incorporate popular JavaScript libraries, and TypeScript code can be called from other JavaScript code.
4.2.1 Advantages
1. We can use declarative file types *.d.ts for interface definitions (data models and API).
2. Tool support. Since there are variable types, tools are more robust for error detection in compile time. Refactoring is less problematic, and consequently scaling the size of the project.
3. Compatible with all available JavaScript libraries.
4. Compliant with available module systems, e.g. AMD, CommonJS.
4.2.2 Using Typescript for data models
One of the great features of TypeScript is that interface implementation is implicit. In Java or ActionScript you have to state explicitly that a type "implements MyInterface"; in TypeScript, if it fits, it fits. We can use TypeScript to describe data models (Data Transfer Objects). For example:
declare enum Direction {IN, OUT, INOUT}
interface DTO {
name: string;
direction: Direction;
size: number;
active: boolean;
object: {address: string};
from?: string; //optional field
to: string;
}
and use it like:
var dt: DTO = {
name: 'Micael',
direction: Direction.IN,
size: 20,
active: true,
object: {address: "Aveiro", name: 'xpto'/*extra fields are valid*/},
//from
to: 'Pedro',
other: 'xpto' //extra fields are valid
};
This object is equivalent to a JavaScript JSON object. It can be used in the TypeScript API or for message data transfer. Types and mandatory fields are enforced by the compiler.
It should be possible in the future to build tools that convert *.d.ts declaration files to JSON Schema using some sort of reflection API; something similar is in progress in the TypeScript API project.
4.2.4 Runtime Schema Compare
Comparing data objects at runtime (directly in TypeScript) is an issue: the parser tools we would need are not available right now, and there is no way to enforce the types at run-time. JSON Schema might be a better choice if runtime validation is what we need, and automatic tools can be used to translate between the two.
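Since TypeScript types are erased at compilation, one workaround is a hand-rolled check against a minimal JSON-Schema-like description. The toy validator below, which only handles required fields and `typeof`-style types, is a sketch of the idea, not one of the missing parser tools mentioned above:

```typescript
// Minimal runtime validation sketch: check an untyped object against a
// JSON-Schema-like description of required fields and expected types.

interface MiniSchema {
  required: string[];
  types: { [field: string]: string }; // expected `typeof` result per field
}

function validate(data: { [k: string]: unknown },
                  schema: MiniSchema): string[] {
  const errors: string[] = [];
  for (const field of schema.required) {
    if (!(field in data)) errors.push(`missing field: ${field}`);
  }
  for (const field of Object.keys(schema.types)) {
    if (field in data && typeof data[field] !== schema.types[field]) {
      errors.push(`wrong type for ${field}: expected ${schema.types[field]}`);
    }
  }
  return errors; // an empty array means the object conforms
}
```

Note that, unlike the compiler's checks on the DTO example above, this runs against data received at run-time, e.g. messages arriving over the network.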
Extensions to the data model definition (e.g. field validators) could be added via decorators, which are similar to annotations in Java; on top of them we could build runtime validations similar to Java's Bean Validation.
4.3 WSDL
WSDL, the acronym for Web Services Description Language, allows XML-based descriptions of web services. There are two versions, V1.1 [9] and V2.0 [10], of which only the second is approved by W3C.
WSDL provides all the capabilities necessary to describe the various parts of a web service mainly the data manipulated and transmitted by the web service together with the involved operations defining its interface.
Operations are characterized by interaction patterns defining the structure of the associated input/output messages together with their respective cardinality.
Data types are specified with XML schema [11] which allows structured types built with primitive ones [12] and derived ones [13].
A web service is structured as an abstract part which may be bound to various concrete representations. WSDL also provides the capabilities to define these bindings, thereby linking a specific protocol and a concrete data format to the abstract representation of the web service. The main application of WSDL concerns the description of web services operating through the SOAP model [14]. A detailed example of a web service description is given at the following link [15].
4.4 WADL
WADL is the acronym for Web Application Description Language [16]. Its development, initiated by Sun Microsystems, should ideally meet the needs of web companies (Google, Yahoo, Amazon, etc.) providing many free and popular HTTP-based web services. One of the ultimate ambitions of these companies is that these services may natively interact with each other without human intervention. A major obstacle to such automatic interoperability comes from their descriptions, most of which are only human-readable.
In this respect, WADL is an XML-based formalism targeting RESTful HTTP-based web services, for which it aims to provide descriptions allowing completely automated processing. Such a description focuses on the resources deployed by the service together with their structure, the format of their representation, and the methods that can be applied to them. The best way to quickly capture the spirit of this formalism is to consult the examples of WADL schemas associated with REST services [16].
Even if WADL seems more appropriate than WSDL to concisely describe RESTful HTTP-based web services, there are still discussions about the pros and cons of each approach [17,18,19,20,21].
4.5 ASN.1
ASN.1 is the acronym for Abstract Syntax Notation One. It is the formalism promoted by ITU-T [22] and used by traditional telecom players to describe the information exchanged in ITU-T standardized protocols. ASN.1 involves basic data types (boolean, integer, string, real, …) and structured ones built with constructors (sequence, set, choice, …) [23]. It allows high-level operations like subtyping, to restrict the domain of a type, and recursion-based definitions. ASN.1 provides several encoding/decoding rules [24] so that compiled ASN.1 data structure descriptions can be used in protocol implementations. It is worth noting that one such method is dedicated to XML [25].
4.6 Selection
4.6.1 Criteria
The following aspects are considered in order to select an appropriate semantics for data representation:
expressibility: the ability to express primitive types, structured types, and optional/mandatory fields
matching capabilities inside one language: deciding whether two representations are compliant, in the sense that they can interoperate
conciseness: the “length-size” of the description (expressed as an integer varying from 1 = “most concise” to 4 = “least concise”)
genericity / domain agnosticism
extendability: adding, for example, new attributes to an existing model without breaking compatibility with the standard
accessibility: direct description of the access rights, without a policy
clear semantics
4.6.2 Comparison
As a oneM2M-based representation of the data on entities in the architecture can easily be derived from, e.g., a JSON-based description, it does not appear in the comparison.
JSON Schema and TypeScript seem the most interesting notations. Since tools for TypeScript are currently not completely available, and taking into account that JSON Schema can be derived from a TypeScript description, the JSON Schema notation is selected, especially as it also allows a oneM2M-based representation of the data on reTHINK entities, which guarantees compatibility with the M2M use cases considered in the project.
4.7 Generating JSON schemas from UML diagrams
4.7.1 Parsing Plantuml
PlantUML is an open-source tool allowing users to create UML diagrams from a plain-text language, without concerning themselves too much with layout. It uses the Graphviz software to lay out its diagrams. Plain-text languages are convenient targets for parsers and text-to-text transformations, so PlantUML can be translated to other schema definition formats.
4.7.2 The tool
PlantUML is very flexible, but it lacks constraints that are necessary for certain transformations, depending on the target output. These constraints are left for other tools to decide, and that is exactly what plantuml-json-parser does. The tool covers only a subset of the class diagram definitions. It adds constraints that avoid redundant parsing rules, and others that are necessary for compliance with JSON Schema (e.g. types for properties and formats). The tool not only performs the transformation to JSON Schema, it also verifies that everything is correct (references, names, field formats and relations), making it a value-added tool that confirms the consistency of the model.
There are two Eclipse plugins in the folder bin/plugins, but it is also possible to use the command-line parser with the Run class available in puml-parse.jar.
It should however be pointed out that an evaluation will be conducted to determine whether this parser supports the full range of JSON semantics, and whether the produced schemas are standard-compliant.
4.7.2.1 How to use the Tool with Eclipse IDE
Just drop both jars in the Eclipse "dropins" folder; the Xtext plugin is needed. In Eclipse, create a Java project and, in the source folder, create any "*.cduml" file to use the parser/generator. Generation is automatic if no errors are emitted. You may need to add the Xtext nature to the project (normally the Eclipse IDE asks the user when such file extensions are recognized).
4.7.2.2 How to use the Tool from the Command Line
Just run the command "java -jar puml-parse.jar" and it will parse, and generate JSON Schema for, any file in the directory that complies with the parser rules and has the extension "*.cduml".
4.7.2.3 Parser Rules
PlantUML has many open rules that can conflict with, or add complexity to, the JSON Schema generation. To avoid this, some additional constraints added to the PlantUML parser should be followed. Consider this a subset of the PlantUML class diagram language.
When defining relations like "X --> Y" do not use the inverse alternative "Y <-- X".
Define class properties like "<name> [?] : [type]", where the symbol "?" signals a non-mandatory property. Available types are null, string, number, integer, boolean or any other class/enum; the default is null. Constant properties are supported with "<name> = <value>".
Notes are available only in a restricted format: "note <top | bottom | left | right> of <entity> [text] end note" for a note attached to an entity (class or enum), or the unattached alternative "note as <id> [text] end note". Some inline notes are supported, like "note <top | bottom | left | right>: [text]" or "note "[text]" as <note name>".
Relations with notes are also supported.
4.7.3 Examples
A simple example of a PlantUML class diagram and the associated generated JSON Schema is provided below.
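As an illustration of the parser rules above (the class and property names are ours, and the exact envelope produced by plantuml-json-parser may differ), a class with one optional property could be written as:

```plantuml
class Person {
  name : string
  age ? : integer
}
```

A JSON Schema generated from it would plausibly look like the following, where the optional "age" property (marked with "?") is omitted from the required list:

```json
{
  "id": "Person",
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "age": { "type": "integer" }
  },
  "required": ["name"]
}
```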
A Hyperty fundamentally consists of a static and a dynamic part. The former is defined when the Hyperty is provisioned and remains unchanged until the Hyperty is updated or removed. The dynamic part concerns a Hyperty instance, created when a Hyperty is deployed. These entities belong respectively to the Hyperty Catalogue and the Registry, which are the databases containing respectively the published and the instantiated Hyperties. The Catalogue and the Registry may also be seen as two main subtrees logically organizing the reTHINK data model. The figure below presents the global structure of the reTHINK data model (for practical reasons the structure is only partially represented). In this chapter, the order of presentation of the various substructures follows a depth-first exploration of the tree below.
The Catalogue is the functional element that lists the services available for a service domain (namely a service provider). It is defined in D2.1 §5.5.1 [2]. From the code downloaded from the Catalogue it will be possible to instantiate, in the runtime, all the elements necessary to carry out Hyperty functions.
The Catalogue Data Model includes all Objects to be handled by the Catalogue functionality including:
The Hyperty Descriptor Data Object is used to model each Hyperty offered by a certain Service Provider.
To support the Hyperty interoperability concept based on data synchronisation mechanisms, the Hyperty Descriptor is characterised by the schemas that describe the data objects handled by the Hyperty: dataObject. A dataObject contains a HypertyCatalogueURL that links to the data object schema descriptor (described below in section 5.6).
In addition, the Hyperty Descriptor contains the policies that rule Hyperty execution (HypertyPolicy), the required runtime Hyperty capabilities (RuntimeConstraint) and the data needed to configure the Hyperty delivery to users (ConfigurationData).
The Protocol Stub Descriptor is used to model each Protocol Stub that can be used to connect to a certain Service Provider domain. It is characterised by the schemas of the supported messages (messageSchemas), which contain a HypertyCatalogueURL that links to the corresponding Message schema descriptor (see below).
In addition, the Protocol Stub Descriptor contains the required runtime protocol capabilities (RuntimeConstraint) and the data needed to configure the Protocol Stub deployment in the runtime (ConfigurationData).
The Hyperty Runtime Descriptor is used to model the Runtime that can be used to execute Hyperties in a certain device or network server. Hyperty Runtimes are described in terms of supported capabilities to execute Hyperties (RuntimeHypertyCapability) and Protocol Stubs (RuntimeProtocolCapability) and its type includes browser, standalone, server and (IoT/M2M) gateway.
The Protocol Stub Descriptor sourceCode attribute contains the source code of core runtime components that are needed to be deployed in the device.
The Data Object Schema describes data objects used by reTHINK functionalities. Two types are considered:
MessageDataObjectSchema describes messages that are used by Hyperties to communicate to each other
HypertyDataObjectSchema describes the data objects handled by Hyperties and exchanged among them in the Message Body (cf. § 5.11.2). The access to these Objects is ruled by an AccessControlPolicy (see the Reporter - Observer communication pattern rules as example).
Four types of Hyperty Data Objects were considered when this deliverable was published; however, other Data Schemas are likely to be found necessary in the next phases of the project:
a) Communication Data Objects handled by Communicator HypertyType (see Hyperty Descriptor above)
b) Connection Data Objects handled by Communicator HypertyType (see Hyperty Descriptor above).
c) Identity Data Objects handled by Identity HypertyType.
d) Context Data Objects handled by Context (producer or consumer)
The User Identity Data Model is used to model the reTHINK User entity. The Identity data model is handled by the Identity Management functionality. The Identity has a globally unique identifier, UserUUIDURL, which is independent of the IdP and corresponds to the GraphID concept introduced in D2.1 [2]. In addition, an Identity can have multiple identifiers of type UserURL.
An identity is characterised by its type which includes Human, HumanGroup, Physical Space and Physical Object. One Identity may hold one or more credentials (IDToken) used for its authentication. One Identity may hold one or more access tokens (AccessToken) used to manage identity privacy in terms of authorisation policies. AccessToken and IDToken are JSON Web Token objects.
An Identity is characterised by its profile (UserProfile), which may include information associated with the user: profile page URL, username, birthdate, picture, etc. It is proposed to be compliant with the OIDC (OpenID Connect) standard claims [36].
An Identity may also handle Identity Assertions (IdAssertion) to validate some of its identifiers (IdValidation) in certain scopes, e.g. in a communication. IdentityAssertion and IdValidation should be compliant with the WebRTC RTCIdentityAssertionResult and RTCIdentityValidationResult.
One Identity may be associated with one or more User Hyperty Accounts, i.e. Hyperty subscriptions that use one of the user identifiers.
JSON Web Token (JWT) is the token data structure chosen to be used in the reTHINK architecture to exchange ID and Access tokens. JWT is defined in RFC7519 [40], and it has been designed to be used on the Web as a modern and more secure alternative to cookies.
JWT is a compact, URL-safe means of representing claims to be transferred between two parties. A claim is represented as a name/value pair consisting of a Claim Name and a Claim Value. JWT claims can be typically used to pass identity of authenticated users between an identity provider and a service provider (e.g. Registry and Messaging Services in reTHINK architecture), or any other type of claims as required by business processes (expiration date, e-mail, address). The tokens can also be authenticated and encrypted by using two standards: JWS (JSON Web Signature) RFC7515 [41] and JWE (JSON Web Encryption) RFC7516 [42].
Compared to cookies and other token structures, JWT presents several advantages, listed below:
1. JWTs can contain any data structure so they are very flexible to meet any future identity requirement of the reTHINK architecture.
2. JWTs are a standardized container format to encode user- and client-related information in a secure way using "claims" (whereas cookie contents and signing/encryption are not standardized).
3. JWTs are not restricted to present session-like information about the authenticated user itself; they can also be used to delegate access to clients that act on behalf of the user.
4. JWTs allow for a more granular access model than cookies because JWTs can be limited in "scope" (what they allow the client to do) as well as time.
OAuth2 compliance: OAuth2 uses an opaque token that relies on central storage. A stateless JWT can be returned instead, carrying the allowed scopes and the expiration.
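As a hedged illustration of the compact JWT serialization described above (the function names are ours, and a real deployment should use a vetted JWT library and verify signatures), an HS256-signed token can be built and its claims decoded as follows:

```typescript
// Sketch of the JWT compact serialization (RFC 7519) with an HS256 signature
// (JWS, RFC 7515). Function names are illustrative only, not a reTHINK API.
import { createHmac } from "crypto";

// base64url = base64 with the URL-safe alphabet and no padding (RFC 7515 §2)
const b64url = (data: string): string =>
  Buffer.from(data).toString("base64")
    .replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");

// Build the compact form: base64url(header).base64url(claims).base64url(sig)
function makeJwt(claims: Record<string, unknown>, secret: string): string {
  const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const payload = b64url(JSON.stringify(claims));
  const signature = createHmac("sha256", secret)
    .update(`${header}.${payload}`).digest("base64")
    .replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
  return `${header}.${payload}.${signature}`;
}

// Decode the claims part (no signature verification in this sketch)
function decodeClaims(token: string): Record<string, unknown> {
  const [, payload] = token.split(".");
  return JSON.parse(Buffer.from(payload, "base64").toString());
}
```

Because the token is self-contained, a service receiving it can read the claims (and verify the signature) without a round trip to central storage, which is the statelessness advantage mentioned above.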
The Communication Data Model is used to model the reTHINK Communications. It is handled by Messaging Services functionality and also by Communicator type Hyperties.
Only Connection objects are required to use W3C standardized interfaces/classes, including the RTCICECandidate and RTCSessionDescription classes.
The conventional Offer-Answer communication control model proposed by JSEP implies the usage of specific messages to handle the offer and answer of a communication, using messages like Invite, Accept and Bye. This approach is quite mature but strongly constrains the use cases that can be supported.
Another approach is to model communications and connections as a resource tree, as defined below, where peers use generic CRUD operations to be synchronised and to control the communication, e.g. create a Conversation resource and create a Conversation Participant to add a newly joined peer, with access-control authorisation policies to handle who may join the conversation. In principle, this model is more open to innovation, imposing fewer restrictions on adding new features than the conventional one.
The analysis of this pattern to support a Communication setup is provided below in section 5.11.
Local and Remote Connection classes are compliant with the ORTC and WebRTC specifications. The Local object models the endpoint sending the stream, i.e. stream sources are located in the local endpoint. The Remote object models the endpoint receiving the stream, i.e. stream sources are located in the remote endpoint. Remote objects are Observed objects (changes are made remotely) while Local objects are Reported objects (changes are made locally).
The Hyperty Resource Data Model is used by the Communication Data Model to model Resources shared in reTHINK Communications e.g. user audio, user video, files, chat messages, etc. The Hyperty Resource data model is handled by Messaging Services functionality and also by Communicator type Hyperties. This model is compliant with W3C MediaStream API.
Figure 16 : Hyperty Resource UML diagram
5.10 Reporter - Observer communication pattern
5.10.1 Analysis
The Hyperty interoperability at run time concept is based on a common data object model framework where generic operations like CRUD are applied. The usage of data synchronisation models in Web frameworks [3] looks very promising and is becoming very popular. The usage of the emerging Object.observe JavaScript API [4] will make it even more effective.
However, full two-way distributed synchronisation raises serious challenges that are difficult to handle when dealing with concurrency, for example when two different Hyperty instances change the same synchronised object at "the same time", creating inconsistencies between the two copies of the object.
To simplify the problem, a new communication pattern is proposed: the Reporter-Observer pattern. The main principle is to grant writing permissions only to the object owner (creator). This principle requires the Hyperty Runtime to be able to enforce access control to synchronised objects according to the following rules:
There is only one reporter Hyperty instance per synched object instance.
An observer Hyperty is not allowed to change the object.
The reporter Hyperty, creator of the object, is allowed to change the object.
This model is depicted in the figure below. The Reporter-Observer pattern is supported by the exchange of messages between the Reporter Syncher and the Observer Syncher defined in the reTHINK Message Model.
Figure 17 : Reporter - Observer communication pattern
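The access-control rule at the heart of the pattern can be sketched in a few lines of TypeScript (the class and method names are illustrative, not the reTHINK runtime API): only the reporter's writes are accepted, and observers are notified of each accepted change.

```typescript
// Hedged sketch of the Reporter-Observer rule: only the Hyperty instance that
// created a synchronised object (the reporter) may change it; observers are
// notified of changes but any write they attempt is rejected.
type Observer = (change: Record<string, unknown>) => void;

class SyncObject {
  private observers: Observer[] = [];
  constructor(
    public readonly reporter: string,            // URL of the creator instance
    private data: Record<string, unknown> = {},
  ) {}

  observe(cb: Observer): void {
    this.observers.push(cb);
  }

  // Enforce the access-control rule: writes are accepted only from the reporter
  update(from: string, change: Record<string, unknown>): void {
    if (from !== this.reporter) {
      throw new Error(`write denied: ${from} is not the reporter`);
    }
    Object.assign(this.data, change);
    this.observers.forEach((cb) => cb(change));
  }

  read(): Record<string, unknown> {
    return { ...this.data };
  }
}
```

Because all writes flow through a single reporter, the concurrency conflicts described above cannot arise: observers only ever see a sequence of changes ordered by the reporter.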
5.10.2 Communication Setup using Reporter-Observer pattern
An analysis of how to apply the Reporter-Observer pattern to communication setup is performed below.
5.10.2.1 Connection setup start
Alice creates the Connection object as well as its LocalConnectionDescription and LocalICECandidates objects, and reports them to Bob.
Bob receives Alice's request, creating the Connection object and, as its RemoteConnectionDescriptions and RemoteICECandidates, the objects sent by Alice. Bob is set as observer of these objects.
As soon as Bob accepts the request, his LocalConnectionDescription and LocalICECandidates objects are created and added to the Connection object, which is reported back to Alice.
Alice receives the update from Bob, adding the new objects sent by Bob as her RemoteConnectionDescription and RemoteICECandidates. Alice is set as observer of these new objects.
5.10.2.2 ICE Candidates exchange
Alice adds an ICE Candidate to her LocalICECandidates object and reports it to Bob. Bob observes there is a new ICE Candidate in his RemoteICECandidates object.
Figure 19 : ICE Candidates exchanged – I
Bob adds an ICE Candidate to his LocalICECandidates object and reports it to Alice. Alice observes there is a new ICE Candidate in her RemoteICECandidates object.
Figure 20 : ICE Candidates exchanged – II
5.10.2.3 Connection establishment
Alice reports that a remote stream was added in her LocalIceCandidates object and that its status is set to connected. Bob observes that the source stream was added in his RemoteIceCandidates object and that its status was set to connected. The remote stream is added at Alice's Receiver.
Bob reports that a remote stream was added in his LocalIceCandidates object and that its status is set to connected. Alice observes that the source stream was added in her RemoteIceCandidates object and that its status was set to connected.
Figure 22 : Connection establishment – II
Alice reports Connection status is set to Connected. Bob observes Connection status was set to Connected.
Figure 23 : Connection establishment – III
5.11 Message model
The main parts of such a message are the headers, the body, and the various message types, all described below.
5.11.1 Header
Fields needed to route messages.
id To be used to associate Response messages with the initial Request message.
type Message type that will be used to define the Message Body format.
contextId GUID used to identify, e.g., a communication session like a telephony call.
from URL of Hyperty instance or User associated with it.
to One or more URLs of the Message recipients. Depending on the URL scheme, it may be handled in different ways.
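The header fields listed above can be sketched as a TypeScript interface in the style used earlier in this document (the types below are assumptions for illustration; the normative definition is the reTHINK message schema):

```typescript
// Illustrative sketch of the Message header fields listed above.
// Field types and the MessageType values are assumptions, not normative.
type MessageType = "CREATE" | "READ" | "UPDATE" | "DELETE" | "RESPONSE";

interface MessageHeader {
  id: number;            // associates Responses with the initial Request
  type: MessageType;     // determines the Message Body format
  contextId?: string;    // GUID, e.g. a communication session
  from: string;          // URL of the sender Hyperty instance or User
  to: string | string[]; // one or more recipient URLs
}

interface Message extends MessageHeader {
  body?: Record<string, unknown>; // format depends on `type`
}
```

Marking contextId optional and allowing to to be a single URL or a list mirrors the field descriptions above.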
Optionally, all message bodies may contain JWT tokens for Access Control and Identity Assertion purposes.
5.11.2.1 CreateMessageBody
resource The URL resource for the new Object.
value Contains the created object in JSON format.
policy URL from where the access control policy can be downloaded. Examples:
1. reporter-observer, where only the reporter can make changes
2. similar to the previous one, but observers can request the reporter to make changes
5.11.2.2 ResponseMessageBody
code A response code compliant with HTTP response codes (RFC7231 [44]).
description Description of the response code, compliant with HTTP response codes (RFC7231 [44]).
value Contains a data value in JSON format. Applicable to Responses to the READ MessageType.
5.11.2.3 ReadMessageBody
resource The URL resource for the Object to be read.
attribute Identifies the attribute in the Object to be read (optional)
criteriaSyntax Defines the criteria syntax used in the criteria field, for search purposes. Valid criteria syntaxes are: "key-value", "mongodb", "sql"(?), ...
criteria Defines the criteria to be used for search purposes; the syntax used to define the criteria is set in criteriaSyntax.
5.11.3 Procedures
5.11.3.1 Ask a remote Hyperty Instance to create and observe an object
5.12 Address Model
It is proposed to use, as much as possible, the Web URL model defined in the WHATWG standard [5] for the reTHINK addressing model. According to this standard, there is no distinction between URL and URI. The intention is not to depend on the existing DNS-based naming resolution, but to keep this open, as the decision will be taken in the identity management work (mainly WP4 of the project).
A reTHINK URL is used by the Message model (cf. §5.11) to identify the message recipient, the message sender and the resources on which the operations carried by the message will be performed. It is worth noting that, in some situations, there is no need to resolve a URL into an IP address in order to reach the URL endpoint. For example, Hyperty instances served by a messaging service like vertx.io could use a dedicated namespace to manage message routing between Hyperty instances.
reTHINK considers the introduction of new URL schemes for the different types of addresses needed, but re-uses as much as possible existing schemes handled by IANA [6]. The different types of URLs required by reTHINK are defined in the picture below.
5.12.2 Address model UML diagram
Figure 25 : Address model UML diagram
5.12.3 User URL Type
This URL is needed to identify reTHINK users as defined in D2.1 [1] (including individuals, organisations, physical spaces, physical things), modelled as User Identity objects (cf. §5.7). Several examples justify the introduction of this information:
to query information about the user profile
to request to communicate with the user
…
Analysis of existing schemes for users:
acct [7]: is not appropriate, since it is associated with a service provider, whereas this address must be portable between service provider domains.
In case the user identifier is not managed by an IdP but by some other naming management mechanism, like a pure DHT (cf. WP4.1), the URL would not contain the IdP domain but just the user identifier, which would have to be a globally unique identifier corresponding to the GraphID concept introduced in D2.1 [1]. The usage of reference [8] seems to be a good candidate.
user-uuid://<unique-user-identifier>
The impact of using URL without domain name requires further investigation.
Examples:
Individuals:
user://orange.fr/simon
user://twitter.com/pchainho
Physical Places:
user://cm-lisboa.pt/campo-grande-28-building
Assuming the city hall is also playing the IdP role.
Organisations:
user://telecom.pt/meo
Assuming PT is also playing the IdP role.
Global Unique Identifiers (GraphID) without IdP domain:
Where <runtime-provider-domain> identifies the stakeholder that provides and manages the Hyperty Runtime execution environment. This URL type should be compliant with the OMA LWM2M identifier for the endpoint client name defined in §6.2.1 of OMA-TS-LightweightM2M-V1_0-20141126-C. According to the recommendations about URNs to be used in the LWM2M endpoint client name, the following Runtime URLs may be valid:
In case the Hyperty instance is registered at "meo.pt" domain:
hyperty-instance://meo.pt/123456
5.12.5.5 Communication / Conversation Address
The URL Communication address is used to identify a communication data object.
Usage examples:
to identify the resource communication during communication control
to query about recorded communications
to identify communications for billing purposes
It is proposed to use a new scheme, e.g. "comm":
comm://<csp-domain>/<communication-identifier>
For cross-domain communications, the "csp domain" of the communication owner, typically the communication requesting party, is used. Other involved CSPs are free to generate their own communication URLs for their operational purposes or for user purposes, e.g. to keep records of the communication history.
For example, in case the Communication is provided by "telekom.de":
For example, in case the Context data is about the energy context of the "myhouse" domain:
ctxt://myhouse/energy
5.13 Hyperty Registry Data Model
5.13.1 The main entry
The Hyperty Registry (defined in D2.1 §5.5.2 [2]) provides the functionality of a directory service to find users’ addresses and availability.
Registry Data Model includes all Objects to be handled by the Registry functionality including:
Data about Hyperty Instances running in Device runtime.
Data about connected devices with instances of the runtime where Hyperty instances are running.
Optionally, it may contain data about instances of protocol stubs used to connect to the Hyperty Messaging Service domain. This data can be used to manage, and keep logs about, sessions where the user is logged into the domain.
Optionally, it may contain data about (data) object instances handled by Hyperty Instances, e.g. communication data objects. This data can be used to manage, and keep logs about, the object instances handled by Hyperty Instances, e.g. communication / call sessions.
Each RegistryDataObject is associated with exactly one user (via the GraphID/UserUUIDURL). The class diagram above shows the data model of an entry of the Registry. The attributes shared by the refined classes are:
Id: unique identifier in the context of the Registry domain
URL: the address to reach the instance. The URL type depends on the Registry Data Object class, e.g. for a Hyperty Instance it will be a HypertyURL (see below)
descriptor: a link to the Catalogue from where the descriptor of the instance can be retrieved
starting date and last time the instance was modified. The date format must be compliant with ISO 8601
status: the instance status (Created, Live or Dead)
It also contains internal data objects which are defined in the following sections.
The Protocol Stub Instance contains data about instances of protocol stubs used to connect to the Hyperty Messaging Service. This data can be used to manage, and keep logs about, sessions where the user is logged into the domain. It contains the Protocol Stub Descriptor URL, which can be used to consult protocol stub metadata and the runtime instance where the protocol stub is executed; it will also be used to check for possible updates of the Protocol Stub.
The Hyperty Runtime Instance contains data about connected devices featuring Hyperty runtime. It contains the URL of the Device Runtime (HypertyRuntimeURL) which identifies the runtime where the Hyperty instance is running, its descriptor and the list of domains from Hyperties running in this device.
Hyperty Data Object Instance contains data about (data) objects instances handled by Hyperty Instances, e.g., communication data objects.
This data can be used to manage and keep logs about data object instances handled by Hyperty Instances, e.g., communication / call sessions. It identifies the Hyperty Instance that owns (has created) the data object.
Figure 31 : Hyperty Data Object Instance UML diagram
HypertyContextDataObjectInstance and HypertyCommunicationDataObjectInstance are two examples of Hyperty data object instances that are reached through different URL types, namely ContextURL and CommunicationURL addresses.
This chapter gathers and defines the interfaces that the Hyperties will use to interact with the different elements of the reTHINK architecture. The interfaces are defined from a logical point of view; the scope of this chapter is not to provide complete specifications but a high-level specification of the interfaces.
The reTHINK project tries to leave the choice of protocol open to the implementer. However, examples are included of real protocols which can be used to implement each interface at the application level of the TCP/IP stack.
The real-time interface, explained in 7.7, is different from the rest of the interfaces. It follows the WebRTC media framework and the definition included here is just for informational purposes.
6.1.1 Interfaces with Proto-fly support
The Registry, Identity Management and Messaging interfaces are proposed to support Proto-fly. This means that the Hyperty first has to download a Protostub to interact with these elements of the architecture. This gives a lot of flexibility to the solution, at the cost of the extra bandwidth required to download the JavaScript and an initial delay.
6.1.2 CRUD operations and roles
CRUD is an acronym which stands for Create, Read, Update and Delete. It refers to the basic operations that can be performed on a data object, independently of the protocol used to perform them. The interfaces allow data objects on remote Hyperties to be modified, so the operations which can be performed over the interfaces can easily be defined as CRUD operations.
For example, a REST API allows CRUD operations to be performed on a data object stored in a server. The match between the CRUD operations and the HTTP methods used in a REST API is direct: Create is done with a POST method, Read with a GET, Update with a PUT and Delete with a DELETE. There are other protocols which allow CRUD operations to be carried out. For example, LWM2M/CoAP (RFC7252 [34]) is a protocol designed to be used on constrained devices which shares the basic set of request methods with HTTP, and therefore allows the same operations.
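The direct CRUD-to-HTTP mapping just described can be written down as a simple lookup table (a sketch for illustration, not part of any reTHINK API); the same table applies to CoAP, which shares these request methods with HTTP:

```typescript
// CRUD operations mapped to the HTTP methods of a REST API, as stated above.
type CrudOp = "create" | "read" | "update" | "delete";

const crudToHttp: Record<CrudOp, string> = {
  create: "POST",
  read: "GET",
  update: "PUT",
  delete: "DELETE",
};
```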
Not all Hyperty instances should be able to modify every attribute of a data object. That is the reason why different roles were defined for each interface.
6.1.3 HATEOAS principle
HATEOAS, an abbreviation for Hypermedia As The Engine Of Application State, is a constraint of REST application architectures that distinguishes them from most other network application architectures. The principle is that a client interacts with a network application entirely through hypermedia provided dynamically by application servers. As a consequence, the operations which can be performed on a data object attribute (a resource, in REST terminology) change depending on the status of that resource, and the available operations are discovered from the resource representation returned by the server.
This principle was applied in the design of the reTHINK architecture, as the application and the Hyperties dealing with the Catalogue, the Registry and the rest of the services will have to dynamically discover properties which change over time, e.g. the Protocol Stubs supported by a Hyperty.
A resource is an object with a type, associated data, relationships to other resources, and a set of methods that operate on it. It is similar to an object instance in an object-oriented programming language, with the important difference that only a few standard methods are defined for the resource (corresponding to the standard HTTP GET, POST, PUT and DELETE methods), while an object instance typically has many methods.
Resource is the terminology used in REST APIs to designate the elements on which the different actions can be performed. In the reTHINK Data Model, a clear match can be made between the JSON object properties defined in the data model and REST resources. This allows a resource access mechanism based on the REST API approach to be defined: URLs. This makes it possible to define a unique entry point to perform CRUD operations on the JSON objects, and their properties, that contain the data in the different services of the reTHINK architecture.
6.1.4.1 Resource model
Resources can be typically grouped into collections. Each collection is homogeneous so that it contains only one type of resource, and unordered. Resources can also exist outside any collection. In this case, we refer to these resources as singleton resources. Collections are themselves resources as well.
Collections can exist globally, at the top level of an API, but can also be contained inside a single resource. In the latter case, we refer to these collections as sub-collections. Sub-collections are usually used to express some kind of “contained in” relationship.
Figure 33 : REST resource model
For example, in reTHINK architecture all the Hyperty Descriptors contained by the Catalogue service will be a collection of resources over which different operations can be applied.
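The collection/singleton/sub-collection model above can be sketched as a small data structure. This is only an illustrative in-memory model; the names ("Hyperty", "protostubs", "websocket-stub") are invented for the example and not part of the specification.

```javascript
// Minimal in-memory sketch of the REST resource model described above:
// top-level collections hold resources of a single type; a resource may
// contain sub-collections expressing "contained in" relationships.
function makeCollection(type) {
  return { kind: "collection", type, items: new Map() };
}

function makeResource(id, data = {}) {
  return { kind: "resource", id, data, subcollections: new Map() };
}

function addResource(collection, resource) {
  // Collections are homogeneous and unordered: keyed by id, one type only.
  collection.items.set(resource.id, resource);
  return resource;
}

function addSubcollection(resource, type) {
  const sub = makeCollection(type);
  resource.subcollections.set(type, sub);
  return sub;
}

// Example: a "Hyperty" collection containing one resource that itself
// contains a "protostubs" sub-collection.
const hyperties = makeCollection("Hyperty");
const h1 = addResource(hyperties, makeResource("voice-hyperty"));
const stubs = addSubcollection(h1, "protostubs");
addResource(stubs, makeResource("websocket-stub"));
```

Note that sub-collections are themselves full collections, so the same CRUD operations apply at every level of the tree.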
6.1.4.2 Defining the URL schema to be used
The reTHINK resource model needs exactly one entry point, and this entry point is a URL. The URL of the entry point needs to be known by the elements which will interact with the reTHINK data model through APIs, so that they can find and use it. Each collection and resource in the data model will have its own URL. Following industry best practices, the recommended convention for URLs is to use alternating collection/resource path segments, relative to the API entry point. The pattern below is proposed within the reTHINK architecture to access resources:
/rethinkapi/coll1 A top-level collection named coll1
/rethinkapi/coll1/id The resource id inside collection coll1
/rethinkapi/coll1/id/subcoll1 Sub-collection subcoll1 under resource id
/rethinkapi/coll1/id/subcoll1/subid The resource subid inside subcoll1
In cases where resource access is provided via a LWM2M/CoAP interface, the entry point "/.well-known" will be used instead of "/rethinkapi", in compliance with RFC6690 [45] and RFC5785 [46].
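The alternating collection/resource pattern can be captured by a small path-building helper. This helper is a hypothetical convenience, not part of the specification; the segment names mirror the generic pattern above.

```javascript
// Build a resource path following the alternating collection/resource
// segment convention, relative to the API entry point ("rethinkapi" in
// the text, or ".well-known" for constrained-device deployments).
function resourcePath(entryPoint, ...segments) {
  const encoded = segments.map(encodeURIComponent);
  return "/" + [entryPoint, ...encoded].join("/");
}

resourcePath("rethinkapi", "coll1");                   // top-level collection
resourcePath("rethinkapi", "coll1", "id");             // resource inside it
resourcePath("rethinkapi", "coll1", "id", "subcoll1"); // sub-collection
```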
6.1.5 Adapting the REST API to constrained devices.
In order to support M2M use cases, reTHINK must be able to interact with constrained devices. To achieve that, it is necessary to adapt the reTHINK Representational State Transfer (REST) architecture into a form suitable even for the most constrained nodes.
The Constrained Application Protocol (CoAP) is a software protocol intended for very simple electronic devices, allowing them to communicate interactively over the Internet. It has been standardized by the IETF CoRE WG, which is also working on several drafts to enable HTTP/CoAP interoperability so that REST APIs can be used on constrained devices. The draft Guidelines for HTTP-CoAP Mapping Implementations is especially applicable to the reTHINK project, as it defines the guidelines to map HTTP to CoAP; this makes any API accessible over HTTP also accessible over CoAP, opening it to constrained devices. This document also defines the mapping between HTTP and CoAP URIs.
Chapter 7 adapts the Catalogue interface as an example of the process to make other interfaces compatible with OMA LWM2M 1.0, a protocol from the Open Mobile Alliance for M2M/IoT device management used with CoAP.
6.1.6 reTHINK interfaces
The diagram below shows the main interfaces of the reTHINK architecture that will be defined in this chapter.
Although the scope of this section is not to define the real implementation of the interfaces of the reTHINK architecture, it was considered important to include some security considerations which must be followed:
1. All the interfaces will be transported over a secure transport layer. For example, REST APIs will be transported over TLS (HTTPS). Protocols transported over UDP (e.g. LWM2M/CoAP) must use DTLS (RFC4347) [38] when possible.
2. As Hyperties are also going to be executed in browsers, all the security constraints imposed by browsers must be considered. It is especially relevant to avoid cross-domain issues when different contents of a Web Application are loaded from different domains. Other cross-domain issues may result from mixing content loaded over HTTP and HTTPS in the same application. Always using secure interfaces is also a way to avoid this last type of issue.
3. Some APIs exposed by browsers, like Screen-Capture, are only usable when the scripts which use them are loaded over HTTPS. Using secure interfaces makes it possible to take advantage of any browser feature once it is permitted by the End-User.
4. Valid TLS certificates must be used to avoid connectivity issues. This is especially relevant for Websocket connections.
6.2 Registry Interface
6.2.1 Description of Registry Interface
The Hyperty Registry provides the functionality of a directory service to find users' addresses and availability. All Hyperty instances register in the Registry so that they can be located by other Hyperties which need to contact them or get their status. The main operations which a Hyperty needs to perform over the Registry interface are registration and de-registration, publishing context information and getting information about Hyperties. These operations can be translated into operations over Data Objects. Not all the elements of the reTHINK architecture are allowed to perform every operation over a data object. This is why roles were defined to assign different permission sets.
6.2.2 CRUD operations over Registry Data Objects
Hyperties and Runtimes (User Agents) must be able to create, read and update their own Data Object, which represents their registration in the Registry.
Optionally, Hyperties can also create, read and update data in the Registry about (data) objects instances handled by Hyperty Instances e.g. communication data objects.
In a similar way, Runtime (User Agents) can also create, read and update data in the Registry about instances of protocol stubs used to connect the runtime device to domains or other devices.
6.2.2.1 Roles for access policy to Registry Interface
Instance-owner: it is the instance which registers itself in the Registry.
External-Hyperty: it is a Hyperty which consults the Registry to get the instance URL of the Instance it wants to contact.
Registry-manager: this role can check and delete existing Instances for different reasons. This role might be optional for DHT-based Registry.
6.2.2.2 Operations allowed per role in the Registry Interface
A REST API seems to be a suitable choice for this interface. In environments where constrained devices are involved (typically in M2M use cases), other protocols like LWM2M/CoAP should also be supported. However, this interface will be used by a Hyperty already instantiated after getting its code from the Catalogue, so using a ProtoStub to perform the registration process is a viable option. Moreover, this allows the Hyperty to choose the protocol which best fits its characteristics (e.g. a constrained device could choose LWM2M/CoAP over DTLS instead of a REST API over HTTPS).
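If the REST option were chosen, a registration could look like the sketch below. The endpoint path, the field names (url, descriptor, status) and the use of PUT are assumptions made for illustration only; the deliverable deliberately leaves the concrete protocol to the ProtoStub chosen by each Hyperty.

```javascript
// Hedged sketch: build (but do not send) a Hyperty registration request
// for a hypothetical Registry REST API.
function buildRegistration(domain, hypertyInstance) {
  return {
    method: "PUT",
    url: `https://${domain}/rethinkapi/registry/hyperties/${hypertyInstance.id}`,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      url: hypertyInstance.url,               // instance URL other Hyperties resolve
      descriptor: hypertyInstance.descriptor, // link into the Catalogue
      status: "live",
      lastModified: hypertyInstance.lastModified,
    }),
  };
}

const req = buildRegistration("csp.example.com", {
  id: "h-42",
  url: "hyperty://csp.example.com/h-42",
  descriptor: "https://csp.example.com/rethinkapi/catalogue/Hyperty/voice",
  lastModified: "2015-09-06T00:00:00Z",
});
// The request object could then be passed to fetch() or to a ProtoStub
// that translates it to LWM2M/CoAP for a constrained device.
```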
6.3 Catalogue Interface
6.3.1 Description of Catalogue Interface
Through this interface Users will be able to discover all available Hyperties as well as supported Protocol Stubs, Runtime devices and data schemas in a Service Provider domain. In addition the code necessary to create instances of the Hyperty or Runtime components can be retrieved and deployed in devices.
6.3.2 CRUD operations over Catalogue Data Objects
Users and Runtime (User Agents) must be able to read Hyperty Descriptors, Protocol Stub Descriptors, Runtime Descriptors and Data Object Schemas stored in the Catalogue.
Catalogue publishers including Product Managers and Developers, are allowed to create, read and update data objects in the Catalogue.
6.3.2.1 Roles for access policy to Catalogue Interface
Catalogue-User: the entities which can consult data and download source code to be instantiated in the runtime including Hyperties and Protocol Stubs.
Catalogue-Publisher: the entities with this role can publish data objects in the Catalogue including Hyperties and protocol stubs data and associated source code. This role normally will be played by the Service Provider (or Developer).
Catalogue-Manager: Catalogue managers are entities which have total control over the Catalogue. They are allowed to create, read, update and delete data objects in the Catalogue.
A REST API seems to be a suitable choice for this interface. In environments where constrained devices are involved (typically in M2M use cases), other protocols like LWM2M/CoAP should also be supported. This way the data objects would be accessible through a REST API exposed by the Catalogue and also through a LWM2M/CoAP interface. As this interface is the entry point to the reTHINK architecture, the proto-fly mechanism has not been considered a valid option.
6.3.4 Resource access in the Catalogue interface
The URL-based resource access mechanism defined in 6.1.4.2 can be easily applied to the Catalogue interface.
The base URL to create, read, update or delete a Hyperty in the Catalogue is:
https://<catalogue domain>/rethinkapi/catalogue/Hyperty
For environments in which constrained devices have to be supported, reTHINK will be compliant with RFC6690 [45] and RFC5785 [46], and the base URL to access the Catalogue will be "/.well-known/" instead of "/rethinkapi/catalogue".
The first-level resources at the Catalogue will be HypertyDescriptor, RuntimeDescriptor and DataObjectSchema. For example, to access the Hyperty descriptor:
Both policies and constraints are collections of RuntimeConstraints objects and HypertyPolicy objects, so applying a Read operation over them returns a JSON or XML document with a list of all the existing resources.
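A Read over the Catalogue could be sketched as below. The path segments ("Hyperty", "voice", "constraints") and the shape of the returned listing are illustrative assumptions; only the base /rethinkapi/catalogue layout comes from the text.

```javascript
// Hedged sketch of reading from the Catalogue REST interface.
function catalogueUrl(domain, ...segments) {
  return `https://${domain}/rethinkapi/catalogue/` + segments.join("/");
}

// Parse a hypothetical JSON collection listing returned by a Read
// operation over a collection such as constraints or policies.
function listResourceNames(collectionJson) {
  return JSON.parse(collectionJson).map((r) => r.name);
}

const url = catalogueUrl("csp.example.com", "Hyperty", "voice", "constraints");
const names = listResourceNames('[{"name":"memory"},{"name":"battery"}]');
// `url` would be fetched over HTTPS; constrained devices would use the
// equivalent "/.well-known/" CoAP path instead.
```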
To access a specific constraint it will be necessary to use this path:
6.4.1 Description of Identity Management Interface
The Identity Management Interface allows Hyperties to communicate with an IdP. It is used by Hyperties to upload their identity details, to assert their identity before establishing a communication with another identity, and to verify the assertions sent by other Hyperties in the messaging.
This interface allows the Hyperty to get an IdToken which will be used to log into different services with a specific identity. It also allows getting AccessTokens, which are used to access resources owned by servers that can be part of the reTHINK architecture or external servers.
The use of solutions based on OAuth 2.0 and other authentication protocols will require communication between the Client (the service which requires the authentication of an End-User requesting its services) and the Identity Provider. As this interface depends on the chosen protocol, it has been left out of the scope of this document.
6.4.2.1 Roles for access policy to Identity Management Interface
Hyperty-EndUser: it is a Hyperty instance which can access the IdP to modify its identity data.
IdP Manager: it is an entity which has rights to modify data stored in the IdP. It can be the CSP if the IdP service is provided by the CSP.
Authenticating-Hyperty: it is the Hyperty which needs to assert its identity before contacting another Hyperty.
IdP-client: it is the service which needs to validate the identity assertion of a Hyperty instance from which it has received a message. It can also get ID tokens and Access Tokens from the IdP in order to identify and authorize End-Users before offering its services. Note that we are using typical IdP terminology:
a) an End User is a Human participant. In reTHINK the concept can be extended to other non-Human Entities which are things with a separate and distinct existence and that can be identified in a context.
b) the Client is the service which uses the IdP to authenticate End-Users. In WebRTC 1.0 defined in [5] the Client role is played by browsers to validate assertions sent in messages.
It is also relevant to mention that WebRTC 1.0 requires the browser to download JavaScript code from the IdP, so it natively uses a Protocol-on-the-fly approach.
6.4.2.2 Operations allowed per role in the Identity Management Interface
Hyperty-EndUser over its own Identity Data Object:
Data Object Property Permitted operations
GUID RUD
Identifier RUD
UserProfile RUD
ServiceAddress RUD
IdP Manager:
Data Object Property Permitted operations
GUID CRUD
Identifier CRUD
UserProfile CRUD
ServiceAddress CRUD
IdP Client:
Data Object Property Permitted operations
IdToken R
AccessToken R
Identifier R
UserProfile R
ServiceAddress R
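The permission tables above translate directly into a role-based access check. Grouping the tables into a lookup map, as below, is an implementation choice made for illustration, not part of the specification; the role keys and property names follow the tables.

```javascript
// Role -> property -> allowed CRUD operations, derived from the tables.
const PERMISSIONS = {
  "Hyperty-EndUser": {
    GUID: "RUD", Identifier: "RUD", UserProfile: "RUD", ServiceAddress: "RUD",
  },
  "IdP-Manager": {
    GUID: "CRUD", Identifier: "CRUD", UserProfile: "CRUD", ServiceAddress: "CRUD",
  },
  "IdP-Client": {
    IdToken: "R", AccessToken: "R", Identifier: "R",
    UserProfile: "R", ServiceAddress: "R",
  },
};

function isAllowed(role, property, operation /* "C" | "R" | "U" | "D" */) {
  const ops = (PERMISSIONS[role] || {})[property] || "";
  return ops.includes(operation);
}

isAllowed("Hyperty-EndUser", "GUID", "C"); // no Create on the user's own GUID
isAllowed("IdP-Manager", "GUID", "C");     // the manager may Create
```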
Besides ID and Access tokens, which are used to authenticate itself and access services, a Hyperty needs to be able to assert its identity when sending a message, and to validate such assertions when they are received in a message.
The Authenticating-Hyperty is the role played by a Hyperty which needs to add an assertion with its identity to a message, and the IdP-Client needs to validate the assertion included in the message; it would be the Relying Party. These roles are inspired by the Identity section of [35].
The Authenticating-Hyperty will need to create an assertion by calling a method createAssertion(), which receives as parameters information derived from the cryptographic certificates which authenticate it. The IdP-Client will call a method validateAssertion(), which receives as parameter the assertion sent by the Authenticating-Hyperty.
6.4.3 Proposed Protocol for Identity management interface
OpenID Connect 1.0 could be a suitable protocol for this interface. It is an identity layer on top of the OAuth 2.0 protocol. It allows Clients (in reTHINK the Clients would be Registry and Messaging Services) to verify the identity of the End-User based on the authentication performed by an Authorisation Server, as well as to obtain basic profile information about the End-User in an inter-operable and REST-like manner.
OpenID Connect allows clients of all types, including Web-based, mobile, and JavaScript clients, to request and receive information about authenticated sessions and end-users. The specification suite is extensible, allowing participants to use optional features such as encryption of identity data, discovery of OpenID Providers, and session management.
Despite the fact that OpenID Connect is a suitable choice for this interface, there exist other protocols which can be chosen by a CSP for investment protection or for technical or strategic reasons. The Protofly mechanism can be used in the Identity Management Interface, so any authentication protocol could potentially be used in this interface.
As mentioned in 6.4.1, the WebRTC 1.0 API exposed by the browser adopted a similar approach: the browser downloads the IdP's JavaScript code, which is used to interact with the IdP from the browser.
6.5 Messaging interface
6.5.1 Description of the Messaging interface
This interface is used by Hyperty instances to send messages to their CSP in order to either establish real-time media/data sessions or send any type of data. It is also used by Hyperties to send messages to Hyperties registered on other CSPs.
When the Messaging Interface is used to send messages to a Hyperty instance from another CSP, the Proto-fly mechanism is used. This interface can be "transient": after the initial exchange of messages, the Hyperty instances can establish a P2P Datachannel for messaging. The feasibility of using P2P interfaces between Hyperty instances for messaging will be evaluated during the reTHINK project, as it presents some problems, for example for billing management.
Note that a CSP-to-CSP NNI is not needed, since protocol-on-the-fly uses triangular signalling topologies, involving only the signalling server of the called party.
6.5.2 CRUD operations over messaging interface
6.5.2.1 Roles for access policy to Messaging Interface (Communication objects)
Reporter: It is a Hyperty which reports changes over a data object through the Messaging Interface.
Observer: It is a Hyperty which observes the changes in a data object which has taken place through the Messaging Interface.
Hyperty-End-User: It is a Hyperty which can send messages to other Hyperties registered to the same CSP or a different one. This is a generic role which has been defined for the Messaging Interface to support roles not considered at this stage of the project.
6.5.2.2 Operations allowed per role in the Messaging Interface
Hyperty-End-user: it is a Communicator Hyperty which can start and/or receive media sessions.
6.5.3 Proposed Protocol for the messaging interface
A JSON-over-Websocket protocol can be a suitable choice for this interface. Websocket allows establishing a bidirectional connection between a browser and a server, so data can be sent and received asynchronously. Using JSON for messaging is the simplest approach, as it can easily be processed in JavaScript and is well known by JavaScript developers.
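A sketch of such JSON messages is given below. The message field names (type, from, to, body) are assumptions for illustration only; the deliverable fixes JSON over Websocket as the candidate transport but not a message schema.

```javascript
// Hedged sketch of JSON message framing for the Messaging interface.
function encodeMessage(type, from, to, body) {
  return JSON.stringify({ type, from, to, body });
}

function decodeMessage(raw) {
  const msg = JSON.parse(raw);
  if (!msg.type || !msg.from || !msg.to) throw new Error("malformed message");
  return msg;
}

// With the browser WebSocket API, usage would look like (not executed here):
//   const ws = new WebSocket("wss://csp.example.com/messaging");
//   ws.onmessage = (ev) => handle(decodeMessage(ev.data));
//   ws.send(encodeMessage("invite", "hyperty://a", "hyperty://b", { sdp: "..." }));

const wire = encodeMessage("invite", "hyperty://a", "hyperty://b", { x: 1 });
const msg = decodeMessage(wire);
```

Note the wss:// scheme in the usage comment, matching the security consideration that all interfaces run over a secure transport layer.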
6.6 Real Time Interface
6.6.1 Introduction
This interface will transport real-time flows (audio, video and data) between end-users devices and services in a peer-to-peer way.
At the wire level, the real-time interface will use the protocols defined by the IETF RTCWEB WG [1] for WebRTC communications. Although existing protocols such as RTP/RTCP, ICE and DTLS-SRTP are used, the WG defines all the protocols which are mandatory in any WebRTC communication, to meet all the requirements of real-time communications over the Internet. [2] describes the media transport aspects of the WebRTC framework: it specifies how the Real-time Transport Protocol (RTP) is used in the WebRTC context, and gives requirements for which RTP features, profiles, and extensions need to be supported.
The IETF RTCWEB WG forced WebRTC communications to be secure by default, as the framework was designed for data transmission over the Internet and from many types of devices. The encryption of the flows is provided by DTLS-SRTP, based on an adaptation of TLS for use over datagrams (UDP). It implements a TLS handshake over the media flow itself, which allows exchanging the key that is going to be used for symmetric encryption of the traffic. This makes it unnecessary to exchange the key in the signalling, which is not standardized in WebRTC and where encryption would therefore not be guaranteed by default. The Real-time interface will not require any symmetric key exchange at the signalling level to encrypt the real-time flows, and all real-time communications are secure by default.
6.6.3 Connectivity aspects of Real-Time interface
Communication over the Internet is a challenging engineering task, as direct IP connectivity between peers is almost never possible. The massive adoption of NAT to cope with the shortage of IPv4 addresses, and the common presence of firewalls protecting domestic and corporate LANs, make the use of protocols like ICE necessary.
The RTCWeb WG adopted Interactive Connectivity Establishment (ICE) [3] as the mechanism to achieve an optimal communication between peers. ICE is a complex protocol which uses STUN messages to discover the best path between the two peers. ICE uses STUN servers, which allow an endpoint to discover the public IP from which it is accessing the Internet, and TURN servers, which can relay the traffic when both endpoints are behind symmetric NATs or firewalls. ICE was initially designed for offer/answer protocols (namely the Session Description Protocol); however, a new flavour called Trickle ICE [4] was later defined, which allows using ICE beyond offer/answer protocols.
ICE requires STUN and TURN servers for its operation in order to provide connectivity. STUN allows a device to discover the public IP it is using to access the Internet, in order to provide this information to the other peers of the communication.
TURN allows assigning and reserving public IP/port pairs for each device, in order to be able to receive RTP from the other peer when the device is behind a symmetric NAT or a restrictive firewall. When a TURN server is used to transport the real-time flows, the flows are no longer truly peer-to-peer, and the use of the TURN server has an associated cost in terms of bandwidth resources.
ICE tries to find the optimal direct path between peers trying to minimize the use of TURN server to relay the traffic.
The Real-time interface will provide optimal peer-to-peer transmission of the real-time data flows trying to minimize the use of relay servers.
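The preference ICE expresses among candidate paths can be illustrated with a simple sort: direct (host) candidates first, server-reflexive (STUN-discovered) next, relayed (TURN) last. The numeric weights below are illustrative only, not the actual RFC 5245 priority formula.

```javascript
// Illustrative candidate-type preference: higher weight = preferred path.
const TYPE_PREFERENCE = { host: 126, srflx: 100, relay: 0 };

function sortCandidates(candidates) {
  return [...candidates].sort(
    (a, b) => (TYPE_PREFERENCE[b.type] || 0) - (TYPE_PREFERENCE[a.type] || 0)
  );
}

const best = sortCandidates([
  { type: "relay", address: "198.51.100.7" }, // TURN-relayed, costly
  { type: "host", address: "192.168.1.2" },   // direct LAN interface
  { type: "srflx", address: "203.0.113.5" },  // public IP seen by STUN
])[0];
// A relay candidate only wins when no direct or reflexive path connects.
```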
6.6.4 Considerations about Real-Time interface
The WebRTC Media Framework is still under development, but it is already consolidated and fully functional, and it is the result of the work of the main actors in the Telecommunication and IT industries, so the reTHINK team considered it the most natural choice for the project. There are production-ready open source WebRTC libraries with a huge community behind them.
7 Experimenting with JSON schema: Example Design Workflow – from data description and representation, over deployment, to interfaces.
The following provides an example design workflow applying JSON schema to describe data stored at various points in the architecture. The goal of the exercise is to have one means to validate that using JSON schema for describing data is applicable in reTHINK.
The actual names and properties used in the following are for example purposes only and are superseded by the final specification given in Chapters 5 and 6.
7.1 Determining required data sets
The following figure provides an Object View of one end device that may instantiate (at most) one Hyperty called Application B, and an arbitrary number of Hyperty Application A instances. The attributes associated with either of the two Hyperty objects include information that describes the Hyperty instance in general (which might be useful to expose), as well as data that is rather internal to the runtime (i.e. implementation specific, and likely not exchanged among components in the reThink architecture).
Figure 35 : reTHINK object view
7.2 Representation of Hyperties as Objects and Resources
The above example shows that a Hyperty is fully described by attributes that are independent of an instantiation of the Hyperty, plus the Hyperty runtime code. In addition, a Hyperty instantiation adds attributes that are specific to the instance, including states of the Hyperty runtime logic. Thus, Hyperties are well described with an object-resource-based approach, which is along the notation used by OMA [21]. The following figure provides such a resource-based description of the Hyperties from the example above. Therein, the end device may contain two objects, one for the Application A Hyperty and one for the Application B Hyperty. The former may be present (instantiated) multiple times, whereas the latter may only exist once. Every object is again described by resources (attributes); some of them exist only once, while others may exist multiple times, e.g. to represent several descriptors/identifiers of supported protocol stubs.
This object-resource view of the data model of a Hyperty can be formally specified. To avoid the need to align with any standardization body, the following specification of Hyperties as objects assumes a so-called "well-known core", basically meaning that the used object and resource identifiers reside in their own scope/naming scheme. Describing Hyperties as an OMA object with associated resources and storing them within a "well-known" core has several advantages:
The schema – but not the contents – describing Hyperties is standardized and widely used
A JSON based representation of the schema is standardized [22]
Storing the full Hyperty descriptor in a well-known core allows reusing existing protocol implementations to manipulate the data using only CRUD operations, as required by reThink. The operations that are allowed on the resources are listed in the Resource definition tables and described in section 7.4, Operations Description.
The following specification sheets show how to formally describe the previous example.
The choice of making resources of a Hyperty object mandatory or optional takes the reThink architecture into account. For example, a Hyperty / resource entry in the reThink Registry might have only a subset of resource information (and specifically not the actual runtime code), whereas the reThink Catalogue contains every resource information. Thus, it is possible to make only those resources mandatory that have to be available at every component in the reThink architecture, e.g. to discover Hyperties, while the complete Hyperty object / resource information is always pointed to via the Hyperty URI.
URNs are unique identifiers of a type of Object. URLs are identifiers where instances of such Objects could be reached by the bootstrapping module on the end device via Read Operation.
Nb. 1
Application Name: applA
Hyperty Object URN: urn:rethink:Hyperty:applA:11
Hyperty Instance URL: /rd/applA/hyperties/11/0
Hyperty Instance Resource URL (e.g. Operating System, Resource Id 3): /rd/applA/hyperties/11/0/3
Hyperty Instance Resource Instance Value: urn:rethink:os:Android:4.4.1

Nb. 2
Application Name: applA
Hyperty Object URN: urn:rethink:Hyperty:applA:11
Hyperty Instance URL: /rd/applA/hyperties/11/1
Hyperty Instance Resource URL (e.g. Operating System, Resource Id 3): /rd/applA/hyperties/11/1/3
Hyperty Instance Resource Instance Value: urn:rethink:os:iOS:8
An example of an application A Hyperty URN identifier, bound and unique per application would be: urn:rethink:Hyperty:applA:11. Another Hyperty of the same application A would be: urn:rethink:Hyperty:applA:12.
Let us consider that the Catalogue Resource Directory has the root defined by e.g. /rd. An instance of the Hyperty with the URN urn:rethink:Hyperty:applA:11 and the Operating System Resource set to urn:rethink:os:Android:4.4.1 could be stored at the URL: /rd/applA/hyperties/11/0, where 11 is the Object Id in the domain ApplA and 0 is the instance of the Hyperty. For another operating system, e.g. iOS, a new instance would be necessary, located at the URL: /rd/applA/hyperties/11/1 where 11 is the Object Id and with the Operating System Resource set to urn:rethink:os:iOS:8.
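The URN-to-URL layout of this worked example can be captured by a small helper. The function below is a hypothetical convenience reproducing the /rd/<application>/hyperties/<objectId>/<instanceId> layout from the example; it is not defined in the specification.

```javascript
// Map a Hyperty URN like "urn:rethink:Hyperty:applA:11" plus an instance
// number to the Resource Directory URL used in the worked example.
function hypertyInstanceUrl(urn, instanceId, root = "/rd") {
  const parts = urn.split(":"); // ["urn","rethink","Hyperty","applA","11"]
  if (parts[0] !== "urn" || parts[1] !== "rethink" || parts.length !== 5) {
    throw new Error("not a reTHINK Hyperty URN");
  }
  const app = parts[3];      // application/domain, e.g. "applA"
  const objectId = parts[4]; // Object Id within that domain, e.g. "11"
  return `${root}/${app}/hyperties/${objectId}/${instanceId}`;
}

hypertyInstanceUrl("urn:rethink:Hyperty:applA:11", 0); // Android instance
hypertyInstanceUrl("urn:rethink:Hyperty:applA:11", 1); // iOS instance
```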
7.3 Representation of ProtoStubs and Codecs as Objects and Resources
Similar to Hyperties, a model for ProtoStubs and Codecs Objects and Resources can be defined.
For example, a ProtoStub Object for application B would have the URN identifier urn:rethink:hyperty:applB:25, where 25 is the Object Id of the ProtoStub. An instance URL for it would be /rd/applB/25/88, where 88 is an instance of a ProtoStub Object.
For codecs, one could define for codec M of application B the URN identifier urn:rethink:hyperty:applB:44, with an instance being reachable at the URL /rd/applB/44/67, where 67 is the instance Id. When sending a Read request (CoAP GET) on the instance URL, the encapsulation of the instance will be included in the reply.
7.4 Operations Description
On the URLs of the resources stored in the Resource Directory of the Catalogue component of the Management Services, or in the bootstrapping module of the end device, several operations can be performed: Create (C), Read (R), Update (U) and Delete (D), also known as CRUD operations.
Another important operation is Discovery, which is performed on the URL /.well-known/core and retrieves a list of all URLs of the available resources.
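A Discovery response on /.well-known/core is encoded in the CoRE Link Format of RFC6690. The parser below is a simplified sketch (it does not handle commas inside quoted attribute values), and the sample payload is invented for illustration.

```javascript
// Simplified parser for an RFC 6690 CoRE Link Format payload, as returned
// by a Discovery request on /.well-known/core.
function parseLinkFormat(payload) {
  return payload.split(",").map((entry) => {
    const [target, ...attrs] = entry.trim().split(";");
    const link = { url: target.replace(/^<|>$/g, ""), attrs: {} };
    for (const a of attrs) {
      const [k, v] = a.split("=");
      // Attributes may be valueless flags or quoted strings.
      link.attrs[k] = v ? v.replace(/^"|"$/g, "") : true;
    }
    return link;
  });
}

const links = parseLinkFormat(
  '</rd/applA/hyperties/11/0>;rt="hyperty",</rd/applA/hyperties/11/1>;rt="hyperty"'
);
// Each entry carries the resource URL plus its attributes (here the
// hypothetical resource type rt="hyperty").
```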
7.5 JSON Encoding
The above data model of Hyperties, protostubs and codecs as objects & resources directly transfers into a standardized JSON-based representation. The encoding will follow [31] as required by [32].
7.6 Class design for implementation of Hyperties
To complete the workflow from design and specification to implementation, the following illustrates one possible class design directly derived from the previous specification.
7.7 Deployment of data within the reTHINK architecture
The following figures illustrate, based on the previous example specification of two Objects and associated resources, how the information could be partially or completely made available at different components of the reThink architecture. The focus here is only on the Repository (and an end device accessing it), without excluding other architecture components that might also store associated data, in order to illustrate how the data describing e.g. one Hyperty could be distributed in the system. The first figure shows the two nodes and their communication using LWM2M/CoAP. Using the latter protocol has the advantage of natively supporting CRUD operations; e.g. deploying a new Hyperty in the repository is a simple Create operation sending the data object that describes the Hyperty (which includes, as a mandatory associated resource, the Hyperty run-time code). Downloading/retrieving the runtime of a Hyperty is simply achieved by pulling the Hyperty object and associated resources from the server. Discovery of all available Hyperties providing "voice" services could be achieved right away by retrieving the Hyperty Kind 1 Catalogue object.
Figure 39 : Deployment view
The actual objects stored at the nodes are shown in the following figures. Note that e.g. at the Catalogue, more than just the mere Hyperty objects are stored. Those additional objects are dynamically created and updated by the Catalogue itself to facilitate easily searching the Catalogue.
Finally, it should be noted that LWM2M/CoAP-based access to the stored data also has the advantage of granting individual access rights per user/client per stored object.
7.8 Catalogue Interface and data object encoding
The reThink Catalogue stores descriptors for Hyperties, ProtocolStubs, DataObjectSchemas, and Runtimes. To access the Catalogue objects, the server provides a LWM2M/CoAP interface according to OMA-TS-LightweightM2M-V1_0-20140619-D and RFC7252 ([33] and [34]), as shown in the following figure:
A representation of the two main roles impacting the data model can be given through the Hyperty Domain and the User Identity, which may be related by a contract generating a User Hyperty Account.
Figure 42 : Connection between the Service provider and the Consumer
8.2 User Hyperty Account
8.2.1 Description
The User Hyperty Account data object contains data managed by Hyperty Service Providers required to deliver a Hyperty to users. It contains URLs to the Hyperty Descriptors and User Identities that are associated with this account.
A User Hyperty Account contains two main types of data including:
Configuration Data which are of two kinds :
a) Personal data (or settings) used by the user to configure the Hyperty usage
b) Authorisation Data, including AccessTokens used by User Hyperty Authorisation policies. Similar to user identity authorisation data, but now associated with a certain Hyperty.
User Hyperty Policies including Authorisation Policies with personal rules for the Hyperty execution behaviour.
The Hyperty Domain brings together and organizes the service provider's data about the corresponding administrative domain. Beyond the associated identifiers, it is structured around two main building blocks, concerning respectively the governance carried out in this domain and the associated infrastructure subject to this governance.
The infrastructure block includes the main servers supporting the various services required in such a domain:
Hyperty referential: global database containing the data about the defined Hyperties
Hyperty Catalogue: database of the published Hyperties
Hyperty Registry: database of the instantiated Hyperties
QoS servers
STUN/TURN servers
…
The governance block concerns the definition of the data underlying two main life-cycles :
one with an external perspective, concerning the structure of the service provider's partnerships with other service providers
the life-cycle of a Hyperty,
the policies involved in these two life-cycles
We will focus in this work on the definition of the data involved in a Hyperty life-cycle, together with the associated policies concerning management, QoS and AAA.
During the HypertyProvisioning stage of Figure 2, completely defined Hyperties are stored in a repository acting as an internal referential of the service provider. The provisioning ends with the publication of the Hyperty in a Catalogue accessible to end users. An authorized request from a consumer for some Hyperty leads to the deployment of its instance on the consumer device, together with the registration of the instance in a dedicated registry managed by the service provider.
Figure 44 : Working part of a Hyperty life cycle
The Hyperty management involves the following services:
publishing, updating and removing a provisioned Hyperty
registering, monitoring, charging and unregistering a deployed Hyperty instance
Figure 45 : Hyperty governance
8.3.2 Hyperty Domain UML diagram
The Hyperty Domain data model is mainly described by its name (DomainURL), the addresses of the provided management services (Infrastructure) and its Governance. According to the Protocol On-the-fly concept, Network Platforms, Registry and Message Servers are defined through URLs from which protocol stubs are downloadable. It is also possible to have a different protocol stub for inter-domain communications. The usage of well-known URIs to discover Infrastructure data should be considered for implementation purposes.
The domain governance is described by the data schemas used (DomainDataSchemas) as well as by the Business Processes and the Business Policies used to rule the Hyperty Life-cycle and Partnership life-cycle as defined in T1.3.
[25] X.693 : ITU-T Recommendation X.693 (2002) | ISO/IEC 8825-4:2002, Information Technology - ASN.1 Encoding Rules: Encoding Using XML or Basic ASN.1 Value Notation
[37] Media Types for Sensor Markup Language (SENML), C. Jennings, Z. Shelby, J. Arkko & A. Keranen / draft-jennings-core-senml-01 / expires January 2016.
[38] RFC4347 -- Datagram Transport Layer Security https://tools.ietf.org/html/rfc4347