Data Models and Formats
This building block establishes a common format for data model specifications and serialization of data in data exchange payloads. Combined with the Data Exchange APIs building block, this ensures full interoperability among participants.
It facilitates a common format for data model specifications and representation of data.
- Based on InterConnect's reference architecture, we have developed a pair of components: the Knowledge Engine and the Generic Adapter (part of InterConnect's Semantic Interoperability Framework), accounting for the "Data Exchange APIs" building block.
- Our methodology relies on the use of graph dissemination for discovery and data exchange.
- InterConnect relies on the SAREF ontology as the main component accounting for the "Data Models & Formats" building block.
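To make the SAREF-based graph representation concrete, here is a minimal, illustrative sketch of a measurement expressed as JSON-LD using SAREF core terms (`saref:Measurement`, `saref:hasValue`, `saref:isMeasuredIn` are real SAREF core ontology terms; the device and unit URIs are made-up examples, not taken from any InterConnect deployment):

```python
import json

# Illustrative SAREF-style measurement expressed as JSON-LD.
# The saref: terms are from the SAREF core ontology; the @id and the
# unit URI are hypothetical examples.
measurement = {
    "@context": {"saref": "https://saref.etsi.org/core/"},
    "@id": "urn:example:device-42:measurement-1",
    "@type": "saref:Measurement",
    "saref:hasValue": 21.5,
    "saref:isMeasuredIn": {"@id": "urn:example:unit:degreeCelsius"},
}

payload = json.dumps(measurement, indent=2)
print(payload)
```

A graph of such descriptions is what components like the Knowledge Engine would disseminate and match during discovery and data exchange.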
SCSN is structured according to the four-corner model. The SCSN network is a network of networks in which all service providers/brokers are connected to each other. This enables every manufacturing company to communicate with all other manufacturing companies in the SCSN network, irrespective of the service providers to which the manufacturing companies are affiliated. This is made possible by strict technical and commercial agreements between the service providers, which are managed by the independent SCSN Foundation.
1. Deploy batch data to the dataspace.
2. Provide streaming data to the dataspace.
3. Create semantic models for your data.
4. Find data.
5. Create exports.
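The five steps above can be sketched as a client workflow. The `DataspaceClient` class and its method names are hypothetical, meant only to show the order and intent of the steps, not a real SDK:

```python
# Hypothetical client illustrating the five onboarding steps; the class
# and all method names are assumptions, not an actual dataspace SDK.
class DataspaceClient:
    def __init__(self):
        self.log = []

    def deploy_batch(self, dataset):          # 1. deploy batch data
        self.log.append(("batch", dataset))

    def publish_stream(self, topic):          # 2. provide streaming data
        self.log.append(("stream", topic))

    def register_model(self, ontology_iri):   # 3. create a semantic model
        self.log.append(("model", ontology_iri))

    def find(self, query):                    # 4. find data
        return [d for kind, d in self.log if kind == "batch"]

    def export(self, fmt):                    # 5. create exports
        self.log.append(("export", fmt))

client = DataspaceClient()
client.deploy_batch("sensor-readings-2023.csv")
client.publish_stream("factory/line1/telemetry")
client.register_model("https://example.org/ontology/production#")
print(client.find("sensor-readings"))
client.export("csv")
```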
The experiment will facilitate sharing of data using the standard NGSI-LD API to create digital twins. Developing more ambitious CO2-reduction projects for non-residential buildings requires complete and up-to-date data on measured energy consumption in relation to key construction features. With this data, Data Service Consumers can create digital twins of non-residential buildings, modelling the desired energy/CO2 reduction in various renovation scenarios for their clients.
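As a sketch of what such an exchange could look like, here is an NGSI-LD entity for a non-residential building. The entity structure (`id`, `type`, properties with `type: "Property"`) follows the NGSI-LD specification; the attribute names `energyConsumption` and `floorArea` and the building URN are illustrative assumptions, not a fixed schema from the experiment:

```python
import json

# Illustrative NGSI-LD entity for a non-residential building.
# Attribute names are assumptions; unitCode uses UN/CEFACT codes
# (KWH = kilowatt hour, MTK = square metre).
entity = {
    "@context": "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld",
    "id": "urn:ngsi-ld:Building:office-001",
    "type": "Building",
    "energyConsumption": {
        "type": "Property",
        "value": 1840,
        "unitCode": "KWH",
        "observedAt": "2023-01-31T00:00:00Z",
    },
    "floorArea": {"type": "Property", "value": 5200, "unitCode": "MTK"},
}

# In a live setup this payload would be POSTed to a context broker's
# /ngsi-ld/v1/entities endpoint; here we only serialize it.
print(json.dumps(entity, indent=2))
```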
- InterConnect does not rely on any reference implementation of IDSA. Rather, it built all the required components that make up the SIF (Semantic Interoperability Framework). The SIF includes:
- The Knowledge Engine that handles the graph data exchange between parties.
- The Generic Adapter that acts as a gateway to provide interoperability at digital services and devices.
- The Service Store, acting as the repository of interoperable services (analogous to the "Publications and Marketplace" category of the "Data Value" building block).
- Most use cases employ the SIF as the key enabler to unlock interoperability, directly geared by the use of the SAREF ontology; they use it to demonstrate DSF solutions for flexibility exchange and actuation over smart appliances according to demand.
The OGC suite comprises technical-level interoperability standards, both abstract specifications and encodings. They can cover transport, encodings and data models. Combining these based on the specifications is a challenging and error-prone process. In addition, generic standards often require extensions (e.g. specific additional structures) and profiles (subsets and compilations). The ultimate interface or data model definition shall maximally reuse existing data models and support translations between others, while preserving the semantics and provenance of data. To enable semantic representations that allow both logical and ontological model matching, the OGC Definition Server was developed. It maintains ontology models of the OGC standards, and some domain-specific ones, that bridge to the abstract specification models.
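One way such shared definitions anchor semantics is through stable definition URIs. The sketch below builds an OGC definitions URI for a coordinate reference system; the EPSG:4326 (WGS 84) URI is a real OGC definitions identifier, while the local label table is purely illustrative:

```python
# Sketch: definition URIs as shared semantic anchors. The URI pattern
# and the EPSG:4326 example are real OGC definitions identifiers; the
# labels dict is an illustrative local lookup, not a server query.
OGC_DEF = "http://www.opengis.net/def"

def crs_uri(authority: str, code: int) -> str:
    """Build an OGC definition URI for a coordinate reference system."""
    return f"{OGC_DEF}/crs/{authority}/0/{code}"

labels = {crs_uri("EPSG", 4326): "WGS 84 (lat/lon)"}

uri = crs_uri("EPSG", 4326)
print(uri, "->", labels.get(uri, "unknown"))
```

Because every party resolves the same URI, two datasets can be matched on semantics rather than on locally chosen strings.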
- Use W3C stacks and reflect multiple use cases for the energy value chain.
- Have different reference implementations with different technological frameworks.
- A complete pipeline from heterogeneous data sources to semantic data / knowledge graphs that support data-driven services (e.g. ML for forecasting ...)
The Smart Agrifood domain needs a common representation of agronomic data (e.g. crops, sensor data from the field, multispectral imagery from UAVs, geolocation data, fertilisation logs, …). This common data model shall be used for all data exchanged between software components.
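A minimal sketch of what such a common model could look like for a single field observation is below. The type and its field names are assumptions for illustration, not a published agrifood standard:

```python
from dataclasses import dataclass, asdict

# Illustrative common model for one field sensor observation; every
# field name here is an assumption, not a standardised schema.
@dataclass(frozen=True)
class FieldObservation:
    crop: str            # e.g. "winter wheat"
    parameter: str       # e.g. "soil_moisture"
    value: float
    unit: str            # e.g. "percent"
    lat: float           # geolocation of the sensor
    lon: float
    observed_at: str     # ISO 8601 timestamp

obs = FieldObservation("winter wheat", "soil_moisture", 23.4,
                       "percent", 52.1, 5.3, "2023-04-01T06:00:00Z")
print(asdict(obs))
```

The point is that every software component exchanging field data agrees on one such structure, so UAV imagery metadata, sensor readings and fertilisation logs can be joined without per-pair mappings.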
The need to share data is crucial in low-volume industries such as the semiconductor industry. Nowadays, every industry adopts different standards and understandings of the same information, which means that a lot of work has to be done manually and is not digitised. Furthermore, manual actions have a huge impact on the final price and require extra effort. By using data standards and IDS components, the process is digitised, errors are avoided and effort is minimised. Besides, it enables small manufacturing companies to join the digitalisation process; until now, they were sometimes excluded due to the high cost of hiring IT professionals, as they have smaller budgets.
The City Dataspace focuses on the challenge of increasing and enabling the usability of Smart City relevant Open and Urban data using the example of mobility and geodata across municipal boundaries. It relies on the established semantic technologies, combined with new innovative concepts to simplify the use of the required technologies. This interoperability should enable app developers to develop an app once and roll it out to a large number of municipalities using the City Dataspace. Conversely, the City Dataspace enables municipalities to participate in existing apps just by making their data available.
The idea is to test and showcase the Dataspace for Measured Energy Consumption in Non-Residential Buildings. To optimise the cost-effectiveness of CO2-reduction projects, non-residential building owners need to share relevant data more easily, but safely, with their project partners.
The wind and solar description model is the digital backbone that federates all the businesses and their ecosystem around one single source of truth, from design and modification through to refreshment. Being able to share the same abstract representation of data for the wind and solar domain would allow a better understanding of the associated operations (asset management, RCA, structural analysis, visual inspection, monitoring ...) and an obvious improvement of the processes that involve the processing of this information.
It aims to digitalise the energy sector, thus enabling higher levels of operational excellence through the adoption of disruptive technologies. PLATOON will deploy distributed edge processing and data analytics technologies for optimised real-time energy system management in a simple way for the energy domain expert. Moreover, it will contribute to increased renewable energy consumption, smart grids management, increased energy efficiency and optimised energy asset management.
The solutions developed within the scope of InterConnect will allow a digitalisation of homes, buildings and electric grids based on an Internet of Things (IoT) architecture. By including digital technologies (Artificial Intelligence, Blockchain, Cloud and Big Data) based on open standards, such as SAREF, it will guarantee the interoperability between equipment, systems and privacy/cybersecurity of user data.
- An organization (A) asking a data owner for consent to receive and process data from their connected car, which an OEM organization (B) must facilitate. In this setup, organization A would need to use the consent manager to fill a template with the terms of this consent such as the purpose of the consent, actors involved (A and B), or personal data to be processed. After this first step, A can start asking their customers for consent as defined in the template and ask the consent manager to process the response. The consent manager automatically handles the certification, signature, and safe keeping of this consent, while also notifying the rest of the actors affected (in this case B, the OEM), so that they can enforce the consent given by the data owner.
- The data owner is also automatically registered in the manager, so that they can review and revoke any given consent at any point in time, directly from the consent manager itself.
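The consent flow described above can be sketched as follows. The `ConsentManager` class and all of its method names are hypothetical, and the SHA-256 digest stands in for the certification and signing a real consent manager would perform:

```python
import hashlib
import json

# Hypothetical consent manager sketch: template creation, consent
# recording with a signature stand-in, and revocation by the data owner.
class ConsentManager:
    def __init__(self):
        self.records = {}

    def create_template(self, purpose, actors, personal_data):
        # Organization A fills the template with the consent terms.
        return {"purpose": purpose, "actors": actors,
                "personal_data": personal_data}

    def record_consent(self, template, data_owner, granted):
        record = dict(template, data_owner=data_owner, granted=granted)
        # Stand-in for certification/signature of the consent record.
        record["signature"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.records[data_owner] = record
        # A real manager would now notify the other actors (e.g. the OEM)
        # so they can enforce the consent.
        return record

    def revoke(self, data_owner):
        # The data owner can revoke at any time.
        self.records[data_owner]["granted"] = False

cm = ConsentManager()
tpl = cm.create_template("fleet analytics", ["A", "B"], ["location"])
rec = cm.record_consent(tpl, "driver-7", granted=True)
cm.revoke("driver-7")
print(cm.records["driver-7"]["granted"])
```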
- Our approach was to base data exchange on a fully semantic approach, relying on graph representations of data and on ontology engineering to build the needed graphs. If this approach is to be followed in the near future, tools will be required to automate, assist and validate the data representations, as this is not yet a common capability in industry. InterConnect has developed some of these tools, which may be considered in the ecosystem.
- Formal definitions are not always easy to adopt. This is not to say that the gap cannot be filled on the adopter's side, but the threshold is high.
- The governance administration is internal, and it is not clear whether it could be better aligned with the Access & Usage Policies building block.
- Implementation of the Data Usage and Accounting, and Publication & Marketplace Solutions could be helpful.
- Provenance and traceability support is not stable.