Custom Data Aggregation Framework: Ready to Adapt to Your Needs


Oleg Boyko

Introduction

In the dynamic world of modern software development, the importance of data-driven applications cannot be overstated. These systems are the backbone of countless industries, enabling businesses to harness data for strategic decision-making, operational efficiency, and competitive advantage. Aggregated data, in particular, reveals trends, patterns, and insights that would otherwise remain hidden across disparate sources.

However, creating a robust and scalable data-driven application requires more than just advanced algorithms and data processing capabilities—it demands a carefully structured approach to design and implementation. This article explores the critical principles and modular architecture underlying event-based data-driven applications, emphasizing the flexibility, scalability, and reliability necessary to meet evolving project requirements.

What is Data Aggregation?

Data aggregation is the process of gathering and organizing raw data into a form that is easy to analyze and visualize. It involves collecting data from various sources, processing it, and transforming it into a single, unified view. This process is crucial for businesses of all sizes, as it enables them to make informed decisions and improve their operations.

By aggregating data, organizations can identify trends, patterns, and insights that would otherwise be hidden in disparate data sources. Whether it’s a small business looking to optimize its marketing strategy or a large corporation aiming to enhance its operational efficiency, data aggregation provides the foundation for data-driven decision-making.
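To make this concrete, here is a minimal sketch in Python (using pandas) of what aggregation into a unified view can look like; the sources, fields, and figures are purely illustrative.

```python
# Minimal illustration of data aggregation: records from two hypothetical
# sources are combined and summarized into a single, unified view.
import pandas as pd

# Raw records from two imaginary sources (fields and values are made up).
source_a = pd.DataFrame([
    {"region": "EU", "product": "A", "revenue": 120.0},
    {"region": "EU", "product": "B", "revenue": 80.0},
])
source_b = pd.DataFrame([
    {"region": "US", "product": "A", "revenue": 200.0},
    {"region": "EU", "product": "A", "revenue": 40.0},
])

# Unify the sources, then aggregate revenue per region and product.
unified = pd.concat([source_a, source_b], ignore_index=True)
summary = unified.groupby(["region", "product"], as_index=False)["revenue"].sum()
print(summary)
```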

Types of Data Aggregation

There are several approaches to data aggregation, each serving a unique purpose and providing different insights:

Historical Data Aggregation: This approach involves gathering historical data from available sources. By analyzing past data, businesses can uncover trends and patterns, such as seasonal sales fluctuations or long-term customer behavior. For example, a retailer might use historical data aggregation to evaluate monthly sales performance and plan for peak shopping periods.

Scheduled Data Aggregation: In this case, data is collected from predefined sources at regular intervals, such as daily or weekly, to create a consistent data history. This method is particularly useful for building datasets over time when historical data isn’t readily available. For instance, a business might schedule daily data scraping from specific sources to monitor competitive pricing trends.

On-Demand (Real-Time) Data Aggregation: This type provides data aggregation in real-time or upon request, without relying on a predefined schedule. It is essential for scenarios requiring immediate insights, such as monitoring stock market fluctuations or tracking live website traffic. By aggregating data on demand, businesses can make timely, informed decisions in rapidly changing conditions.

By understanding and leveraging these different data aggregation methods, businesses can gain a comprehensive view of their operations and make data-driven decisions that drive success.
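One simplified way to express these three modes in plain Python is sketched below; fetch_records is a hypothetical placeholder for any source-specific collection logic.

```python
# Illustrative sketch, not a production scheduler: the three aggregation
# modes expressed as collection routines around a hypothetical fetcher.
from datetime import date, timedelta
import time

def fetch_records(day: date) -> list[dict]:
    """Hypothetical collector: pull raw records for one day from a source."""
    return [{"day": day.isoformat(), "value": 1}]

def historical_aggregation(start: date, end: date) -> list[dict]:
    """Historical: backfill all available past data in one pass."""
    records, current = [], start
    while current <= end:
        records.extend(fetch_records(current))
        current += timedelta(days=1)
    return records

def scheduled_aggregation(interval_seconds: int, runs: int) -> list[dict]:
    """Scheduled: collect at fixed intervals to build a history over time."""
    records = []
    for _ in range(runs):
        records.extend(fetch_records(date.today()))
        time.sleep(interval_seconds)
    return records

def on_demand_aggregation() -> list[dict]:
    """On-demand: collect immediately when fresh data is requested."""
    return fetch_records(date.today())
```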

What are Event-Based Data-Driven Applications?

Event-Based Data-Driven Applications are software systems designed to collect, process, and distribute data in a structured manner centered around events. These events represent specific actions or changes within the system, serving as triggers for data processing workflows. By employing modular architecture, these applications ensure seamless communication between components, enabling tasks such as data analysis, storage, and integration.

This approach provides the flexibility to adapt to diverse requirements, the scalability to handle growing data volumes, and the reliability necessary for mission-critical operations. As a result, event-based architectures empower organizations to transform raw data into actionable insights efficiently and effectively.
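As a rough illustration of the idea, the snippet below publishes a domain event to a message broker. It assumes a local RabbitMQ instance and the pika client library, and the event name and payload are invented for the example.

```python
# Simplified sketch of emitting a domain event that other modules can react to.
# Assumes a local RabbitMQ broker and the `pika` client library.
import json
import pika

def publish_event(event_type: str, payload: dict) -> None:
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    # A topic exchange lets any interested module subscribe to matching events.
    channel.exchange_declare(exchange="events", exchange_type="topic", durable=True)
    channel.basic_publish(
        exchange="events",
        routing_key=event_type,
        body=json.dumps(payload).encode("utf-8"),
    )
    connection.close()

# Example: a collector announcing that a scraping run has finished.
publish_event("collection.finished", {"source": "example-source", "records": 1250})
```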

Understanding what event-based data-driven applications are is only the first step. To fully leverage their potential, it’s crucial to adhere to core principles that ensure their effectiveness, scalability, and reliability. The following section delves into these foundational principles, offering a roadmap for building robust and efficient data-driven systems.

Core Principles for Building a Custom Data Aggregation System

Building a custom data aggregation system requires careful consideration of several foundational principles that ensure its reliability, scalability, and efficiency. From data collection and transformation to ensuring data quality, security, and scalability, these principles guide the development of systems capable of handling diverse data sources and delivering actionable insights. Below, we explore the key components that make up a robust data aggregation system, highlighting their roles in supporting effective data-driven decision-making.

Data Collection and Transformation (ETL/ELT): Data collection, validation, enrichment, and transformation are core functionalities for applications built around tracking and/or comparing records from different sources. Aggregation functions play a crucial role in this process by performing calculations on sets of values to return a single value, which is essential for summarizing and analyzing data effectively.

Data Storage and Availability: Most data can be collected with varying levels of effort, but proper storage and accessibility are what make an application usable for specific tasks and enable data-driven decision-making.

Data Quality and Integrity: Applications should provide mechanisms to verify and track that data is collected accurately, ensuring no data points are missing from the resulting dataset (a brief sketch of such a check follows this list).

Monitoring and Alerting: Missing data or issues related to the data collection process, or to other processes within the application, can have varying priorities and severities. A flexible system for managing alert triggers and overall performance allows for swift error detection and resolution, reducing the time between identifying and fixing issues.

Security: Even when the collected data is publicly available, each application adds its own value by transforming raw data into a unified format, providing additional analytics, and so on. Reliable security boundaries are therefore crucial for preventing damage from incorrect or improper usage.

Scalability: Data-driven systems tend to grow over time, both in data volume and in the complexity of tasks they need to handle. System design should enable horizontal and vertical scaling, ensuring the architecture remains extendable and does not require complete rewrites to accommodate new challenges.

Documentation and Transparency: Complex systems with numerous dependencies (e.g., external websites used for data collection) and intricate data processing workflows should be clear and transparent for end-users. These systems are built for people benefiting from the insights provided, not just for engineers writing the code.

Documenting architecture principles, design patterns, and source code with industry-standard docstrings and contribution guidelines ensures the application can be maintained or passed to new teams without disruptions.
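Below is a simplified, hypothetical sketch of the kind of completeness check mentioned under the data quality and monitoring principles; the required fields and threshold are examples only, not part of any fixed specification.

```python
# Hypothetical sketch of a data-quality check with a simple alert:
# field names and the completeness threshold are illustrative only.
import logging

logger = logging.getLogger("aggregation.quality")

REQUIRED_FIELDS = ("record_id", "title", "published_at")  # example fields
COMPLETENESS_THRESHOLD = 0.95                              # example threshold

def is_complete(record: dict) -> bool:
    """A record counts as complete when all required fields are present."""
    return all(record.get(field) not in (None, "") for field in REQUIRED_FIELDS)

def check_batch(records: list[dict]) -> float:
    """Return the share of complete records and alert when it drops too low."""
    if not records:
        logger.warning("Empty batch received; possible collection failure")
        return 0.0
    ratio = sum(is_complete(r) for r in records) / len(records)
    if ratio < COMPLETENESS_THRESHOLD:
        logger.error("Data completeness %.1f%% is below the threshold", ratio * 100)
    return ratio
```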

How We Ensure Data Quality

One of the key strengths of our system is its unwavering focus on data completeness and quality. Our team of experienced scraping engineers and QA specialists brings deep domain expertise, ensuring accurate mapping of data from free text or custom website layouts into OCDS fields and extensions.

By leveraging an object-oriented approach in our code, we effectively manage and monitor the complex structure of a release. This approach minimizes errors and incorrect mappings, ensuring reliable, high-quality outputs.

For projects of this nature, we assign a dedicated support team that works full-time to oversee and optimize the data collection process. If this level of service is something you value, we can tailor our support to match your specific needs. Whether it’s about the speed at which issues are resolved or the urgency of problem fixes, the level of service depends on your required SLA. How fast do you need us to act? Let’s ensure our service aligns perfectly with your expectations.

Practical Use Case of Data Aggregation System

A real-world implementation of these principles is GroupBWT’s data aggregation system for the EU tender market. The system aggregates data from various sources, such as TED and national tender platforms, cleans and normalizes it using NLP and AI tools, and ensures real-time updates for tender specifications and deadlines. Additionally, the platform offers interactive analytics and seamless integration with ERP and CRM systems, enabling businesses to make data-driven decisions quickly and efficiently.

Overall Architecture and Modules of Data Aggregation

At the core of our approach is a modular micro-services architecture. This architecture is designed to enable efficient and decoupled communication between modules using a RabbitMQ-based data bus. The architecture is structured around several key modules, each with its specific role and function within the application.

Modules

Event Bus

  • The Event Bus acts as the central system of our application, facilitating the exchange of events between modules.
  • RabbitMQ, a robust and scalable message broker, is employed as the backbone of our event bus, ensuring reliable and real-time event propagation.
  • Contracts are utilized to precisely describe the structure and content of events, allowing for a standardized and predictable format.
  • This standardized approach to event definition enables any service to easily subscribe to specific event types that align with their requirements.
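To show how contracts and subscriptions might look in practice, the hedged sketch below defines a minimal event contract and a consumer bound to a topic pattern; the exchange, queue, and routing-key names are illustrative, and pika plus a local RabbitMQ broker are assumed.

```python
# Hedged sketch: a minimal event contract plus a subscriber bound to a topic
# pattern. Exchange, queue, and routing-key names are illustrative.
import json
from dataclasses import dataclass
import pika

@dataclass
class CollectionFinished:
    """Example contract describing the structure of one event type."""
    source: str
    records: int

def handle(channel, method, properties, body) -> None:
    event = CollectionFinished(**json.loads(body))
    print(f"{method.routing_key}: {event.records} records from {event.source}")

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="events", exchange_type="topic", durable=True)
channel.queue_declare(queue="transform-service", durable=True)
# Subscribe only to the event types this service cares about.
channel.queue_bind(queue="transform-service", exchange="events", routing_key="collection.*")
channel.basic_consume(queue="transform-service", on_message_callback=handle, auto_ack=True)
channel.start_consuming()
```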

API(s)

Management API

  • The control center of our data-driven application. It facilitates the management and coordination of data collection tasks, providing a unified interface to control various aspects of the system
  • Tasks performed by the Management API include scheduling data collection, defining data sources, setting data processing parameters, and gathering overall statistics about the application’s performance
  • This API streamlines the administrative aspects of the application, ensuring efficient data collection and management


Data API 

  • The main role is to provide data in the required format to consumers and applications
  • It serves as a point through which data can be accessed and retrieved, offering a standardized way to request specific data sets, filtered and formatted as needed
  • This module ensures that data is readily available for analytics, reporting, or integration with other systems
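A minimal sketch of what such an endpoint could look like, assuming FastAPI and an in-memory stand-in for real storage; the route, parameters, and sample data are illustrative.

```python
# Hypothetical Data API endpoint: filtered, formatted access to stored records.
# Assumes FastAPI; the route, parameters, and sample data are illustrative.
from fastapi import FastAPI, Query

app = FastAPI(title="Data API (sketch)")

# Stand-in for a real storage backend.
RECORDS = [
    {"id": 1, "source": "site-a", "category": "tenders", "value": 100},
    {"id": 2, "source": "site-b", "category": "prices", "value": 250},
]

@app.get("/records")
def list_records(category: str | None = Query(default=None), limit: int = Query(default=50, le=500)):
    """Return records, optionally filtered by category and capped by limit."""
    items = [r for r in RECORDS if category is None or r["category"] == category]
    return {"count": len(items[:limit]), "items": items[:limit]}
```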


Auth API

  • Introduces roles and permissions to the application
  • Implements the security boundaries 
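Continuing the FastAPI assumption from the Data API sketch, a role check might be expressed as a reusable dependency like the hypothetical one below; the header name, token store, and roles are invented for the example.

```python
# Hypothetical role check expressed as a FastAPI dependency.
# The header name, token lookup, and roles are illustrative only.
from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI(title="Auth API (sketch)")

# Stand-in for a real token store or identity provider.
TOKENS = {"secret-admin-token": "admin", "secret-viewer-token": "viewer"}

def require_role(role: str):
    def dependency(x_api_token: str = Header(default="")) -> str:
        if TOKENS.get(x_api_token) != role:
            raise HTTPException(status_code=403, detail="Insufficient permissions")
        return x_api_token
    return dependency

@app.delete("/sources/{source_id}", dependencies=[Depends(require_role("admin"))])
def delete_source(source_id: int):
    """Only admins may remove a data source."""
    return {"deleted": source_id}
```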


API Gateway

  • Serves as the single entry point through which all other APIs are exposed, simplifying their usage and allowing them to be replaced without changing publicly exposed endpoints


Other APIs

Storage system

  • Responsible for efficiently managing the storage of data collected by the application.
  • Can store data in a single database or multiple databases, depending on the efficiency and scalability requirements of the project
  • Ensures that data is reliably stored, easily retrievable, and can be efficiently queried for analysis and reporting, including the use of aggregated data to summarize large datasets
  • Usually implemented not as a standalone service but as part of one or multiple APIs (each service module has its own storage, chosen and configured for the best performance of the related service)
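As a self-contained illustration (using SQLite from the standard library rather than any specific production database), storage plus an aggregated query might look roughly like this.

```python
# Illustrative storage sketch using SQLite from the standard library;
# a real project would pick a database suited to its volume and query patterns.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (source TEXT, category TEXT, value REAL)")
conn.executemany(
    "INSERT INTO records VALUES (?, ?, ?)",
    [("site-a", "prices", 100.0), ("site-b", "prices", 250.0), ("site-a", "tenders", 40.0)],
)

# Aggregated query: summarize a large table into a compact, reportable view.
for category, total, count in conn.execute(
    "SELECT category, SUM(value), COUNT(*) FROM records GROUP BY category"
):
    print(category, total, count)
```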

UI

  • The module is a Single Page Application (SPA) designed to provide a user-friendly and interactive front-end for our data-driven application
  • Built around the exposed APIs, this SPA offers a seamless and responsive user experience, allowing users to interact with the application effortlessly
  • Offers intuitive data visualization, enabling users to gain insights and make data-driven decisions with ease
  • The SPA’s responsive design ensures that users can access the application from various devices and platforms.

Data Source and Collection

  • Responsible for gathering data from diverse sources
  • Supports data ingestion at scale, ensuring that data is collected efficiently and in real-time, allowing for timely insights and analysis
  • Can be configured to run data collection tasks on demand, at predefined intervals, or in other specific scenarios, providing a flexible and automated data collection process.
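A stripped-down collector might look like the sketch below; the endpoint URL and response shape are placeholders, and the requests library is assumed.

```python
# Simplified collector sketch: fetch raw data from a placeholder endpoint.
# Assumes the `requests` library; the URL and response shape are illustrative.
import requests

def collect(source_url: str) -> list[dict]:
    """Fetch one batch of raw records from a source (URL is a placeholder)."""
    response = requests.get(source_url, timeout=30)
    response.raise_for_status()
    return response.json()  # assumes the source returns a JSON list of records

if __name__ == "__main__":
    records = collect("https://example.com/api/raw-records")  # placeholder URL
    # In the full architecture, the collector would now publish a
    # "collection finished" event to the bus for downstream modules.
    print(f"Collected {len(records)} records")
```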

Data Transform 

  • The module allows you to define specific data transformation rules and processes, providing the flexibility to tailor the data to the project’s requirements
  • Encapsulates the different processes that define the data transformation pipeline:
      • Cleaning the data of noise, irrelevant characters, etc.
      • Enrichment using different tools, including but not limited to NLP processing, AI predictions, cross-source aggregation, etc.
      • Validation based on complex predefined rules and dependencies
      • Comparing data from different sources, searching for duplicates and marking them, etc.
  • Ensures that data is standardized and ready for analysis
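A condensed sketch of such a pipeline is shown below, with each step reduced to a trivial placeholder function; real cleaning, enrichment, and deduplication logic would be far richer.

```python
# Condensed transform-pipeline sketch: clean -> enrich -> validate -> dedupe.
# Each step is a trivial placeholder for much richer project-specific logic.

def clean(record: dict) -> dict:
    """Strip noise such as surrounding whitespace from text fields."""
    return {k: v.strip() if isinstance(v, str) else v for k, v in record.items()}

def enrich(record: dict) -> dict:
    """Placeholder enrichment step (NLP, AI predictions, cross-source joins...)."""
    return {**record, "title_length": len(record.get("title", ""))}

def is_valid(record: dict) -> bool:
    """Validation against simple predefined rules."""
    return bool(record.get("id")) and bool(record.get("title"))

def transform(raw_records: list[dict]) -> list[dict]:
    seen_ids, output = set(), []
    for record in raw_records:
        record = enrich(clean(record))
        if not is_valid(record) or record["id"] in seen_ids:
            continue  # drop invalid records and duplicates
        seen_ids.add(record["id"])
        output.append(record)
    return output

print(transform([{"id": "t-1", "title": " Road works "}, {"id": "t-1", "title": "Road works"}]))
```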

Logging

  • Ensures the collection, analysis, and visualization of system and application logs
  • The service employs powerful tools such as Grafana, Loki, or Elasticsearch to centralize log data and provide real-time visibility into the system’s performance and behavior
  • Grafana is used for creating insightful dashboards, offering a visual representation of system health, performance metrics, and key data points
  • Loki, a log aggregation system, is employed for efficient log storage and querying, ensuring that log data is readily accessible for troubleshooting and analysis
  • Sentry is integrated to capture errors and exceptions in the application, allowing for timely issue detection and resolution.
  • The service can also build custom alerting schemas, allowing you to define conditions and thresholds for generating alerts based on specific events and performance metrics relevant to the project domain
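At the application level, wiring into such a stack can be as small as the hedged sketch below; the Sentry DSN is a placeholder, and the sentry-sdk plus standard logging setup represent one possible configuration rather than a prescribed one.

```python
# One possible application-side setup: structured log lines for the central
# log stack plus Sentry error capture. The DSN is a placeholder value.
import logging
import sentry_sdk

sentry_sdk.init(dsn="https://examplePublicKey@o0.ingest.sentry.io/0")  # placeholder DSN

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
logger = logging.getLogger("collector.site-a")

try:
    logger.info("Collection run started")
    raise RuntimeError("Source layout changed unexpectedly")  # simulated failure
except RuntimeError:
    logger.exception("Collection run failed")  # logged locally; error-level records can also reach Sentry
```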

Deployment

  • More of an approach than a separate module; it is used to deploy and run the other modules and services
  • Utilizes Terraform and Terragrunt to allocate cloud resources, enabling the dynamic provisioning and scaling of infrastructure as needed. These tools provide infrastructure as code (IaC) capabilities, guaranteeing consistent, repeatable, and version-controlled resource allocation
  • Kubernetes serves as the target platform for deploying services and their applications. It provides container orchestration and ensures that application modules run efficiently, scale effectively, and remain highly available
  • The deployment process is orchestrated using a GitOps methodology, where the entire deployment pipeline is version-controlled in a Git repository. GitLab CI/CD plays a crucial role in automating the continuous integration and continuous deployment process
  • ArgoCD complements the GitOps workflow by enabling automated deployment and management of Kubernetes applications. It ensures that application modules are always in the desired state, with changes and updates triggered by changes in the Git repository
  • Multiple environments are used to streamline feature releases and confirm that they are covered by tests
  • This approach streamlines deployment, reduces manual intervention, and enhances reliability by automating the entire process. It promotes infrastructure as code, version-controlled deployments, and seamless updates with minimal downtime

Schemas

In this section, we provide illustrative examples of how the implementation of our “Event-based Data-Driven Application” can take shape. These schemas offer a visual representation of how the architecture and modules we’ve discussed earlier can be structured to build a robust, flexible, and scalable data-driven solution.

Modules, relationships and data flow in the application

Logging service integration and workflow

GitOps workflow diagram

Customizing and Applying

One of the key strengths of our “Event-based Data-Driven Application” approach lies in its inherent flexibility and adaptability. It’s designed to be a versatile framework that can be tailored to meet the unique requirements and objectives of a specific project.

The modularity of our approach enables us to configure individual modules to align with the precise requirements of the project. It is possible to add, remove, or modify modules as needed, ensuring that the architecture serves your project’s objectives effectively.

Simplifying

While the complete architecture concept can cover and address almost any additional workflow introduced into the application, it can also be simplified to some extent, according to current needs, without losing its modular flexibility.

Examples of such simplifications include:

  • Data Transform module redundancy: if the project doesn’t require any additional processing or modification of input data, this service may be omitted or incorporated into Data Collection as a few simple steps.
  • A UI may not be required for the initial stages of the application, and the API could serve as the only available interface and tool for operating it.
  • The Auth and Gateway APIs could be replaced with network access restrictions and a firewall.
  • The Management API and Data API could be merged into a single API or a monolithic dashboard application built on top of existing frameworks.
  • Some deployment processes or environments could be simplified or removed to reduce costs.
  • Custom aggregation methods can be created to cater to specific project requirements, allowing for personalized data analysis and visualization.


While some modules can be adjusted or omitted, it’s important to emphasize that the essence of this approach lies in the fundamental components: events, their transmission via the event bus, and their allocation among the Data Collection, Logging, and Management modules. These core elements are indispensable to fully harness the benefits of the approach.

Implementing

Applying the event-based data-driven application approach to a project involves a series of well-defined steps to ensure a successful and tailored implementation. In this section, we guide you through the essential steps required to adapt the approach to a project’s unique objectives and requirements.

  1. Define project objectives – establish clear and measurable objectives for the project. Understand the specific challenges you aim to address and the desired outcomes.
  2. Analyze data needs – conduct a comprehensive analysis of your data requirements. Identify the types of data your project will generate, collect, and process, and determine the volume, velocity, and variety of data sources.
  3. Module Customization – address each requirement with the corresponding part of the application, identifying the module that should cover it.
  4. Estimation and Roadmap – summarize the requirements and demands at the current stage and prepare a roadmap for implementation.

Conclusion

The future belongs to those who harness the power of data with precision and adaptability. GroupBWT’s expertise in creating custom event-based, data-driven solutions ensures your business can scale, evolve, and succeed in a data-centric world. Our modular, flexible approach is designed to meet your unique goals, turning raw data into strategic opportunities.

Let’s shape your next success story! Contact GroupBWT today to explore how we can build a tailored data aggregation framework for your business.
