🚧 Work in Progress: This page is under construction. Content may be incomplete or subject to change. To contribute, see the contribution guide.

[PAX-EA] 4: Integration Architecture

Version: 1.2
Status: ✅ Active
Updated: 2026-03-24
Author: Leandro Crespo
Supersedes: [PAX-EA] 4. Integration Architecture v1.1 (draft)

Revision History

Version | Date | Author | Comments
1.0 | 23/02/2026 | Leandro Crespo | Document creation (draft)
1.1 | 24/02/2026 | Leandro Crespo | Minor format corrections; fixed incorrect standard references
1.2 | 24/03/2026 | Leandro Crespo | Added §5.0 Platform Selection Guide (five platforms) and §3.7 AI-Augmented pattern; fixed Fluig/ServiceNow distinction; updated UiPath description; aligned with [PAX-EA] 3

1. Introduction

1.1 Purpose

This document establishes the Integration Architecture standards and guidelines for Patria. It defines the principles, patterns, technologies, and governance practices for designing, implementing, and managing integrations across our enterprise ecosystem.

Integration Architecture is a critical component of our overall Enterprise Architecture, enabling seamless connectivity between applications, systems, data sources, and external partners while maintaining security, scalability, and reliability.

This document serves as the authoritative reference for architects, developers, and integration specialists to ensure consistency, interoperability, and alignment with our strategic architectural vision.

1.2 Objectives

The Integration Architecture aims to achieve the following objectives:

  • Strategic Alignment: Ensure integration initiatives support business objectives and digital transformation goals
  • Interoperability: Enable seamless data flow and process orchestration across heterogeneous systems and platforms
  • Scalability and Performance: Design integrations that scale efficiently with business growth and transaction volumes
  • Security by Design: Implement robust security controls to protect data and prevent unauthorized access
  • Standardization: Establish consistent integration patterns, technologies, and practices across the organization
  • Agility: Enable rapid integration development and deployment to support changing business needs
  • Governance and Visibility: Provide comprehensive oversight, monitoring, and management of the integration landscape
  • Cost Optimization: Maximize reuse and minimize integration complexity to reduce total cost of ownership

1.3 Scope

This document covers the following integration aspects:

  • Integration principles aligned with enterprise architecture principles
  • Integration patterns and architectural styles
  • Technology standards for API development, messaging, workflows, and data integration
  • Security standards for authentication, authorization, and data protection
  • API design, governance, and lifecycle management
  • Event-driven architecture principles and standards
  • Data integration standards and best practices
  • Integration monitoring, observability, and operational excellence
  • Reference architectures for common integration scenarios
  • Platform selection criteria for Airflow, Sensedia, N8N, UiPath, and Fluig

This document should be read in conjunction with:

  • [PAX-EA] 1. Enterprise Architecture Overview
  • [PAX-EA] 3. Technology Standards
  • [PAX-EA] 5. Data Architecture
  • CSAF-001 Cybersecurity Architecture Controls Framework (CACF)

2. Integration Principles

Integration principles guide our architectural decisions and ensure alignment with our broader enterprise architecture vision. These principles are derived from and complement the core architectural principles defined in [PAX-EA] 1. Enterprise Architecture Overview.

2.1 Strategic Integration Principles

2.1.1 API-First Approach

All system capabilities should be exposed through well-designed APIs, enabling reusability, composability, and future extensibility. APIs serve as the primary integration interface, promoting loose coupling and standardization.

Rationale:

  • Enables rapid development of new applications and services
  • Facilitates ecosystem expansion through partner integrations
  • Supports microservices and cloud-native architectures
  • Provides flexibility for future technology evolution

2.1.2 Event-Driven by Design

Where appropriate, systems should communicate through asynchronous events to enable loose coupling, scalability, and real-time responsiveness. Event-driven patterns support reactive architectures and improve system resilience.

Rationale:

  • Reduces tight coupling between systems
  • Enables scalable, distributed architectures
  • Supports real-time data synchronization and notifications
  • Improves system resilience through asynchronous communication

2.1.3 Avoid Point-to-Point Integration

Direct point-to-point integrations create tightly coupled architectures that are difficult to maintain and scale. Instead, leverage integration layers, API gateways, and event brokers to mediate system communication.

Rationale:

  • Reduces integration complexity and maintenance burden
  • Enables centralized governance and security enforcement
  • Facilitates monitoring and observability
  • Supports system evolution without breaking existing integrations

2.1.4 Integration as a Product

Treat integrations as products with clear ownership, documentation, SLAs, and lifecycle management. Integration teams should adopt product thinking, focusing on consumer needs, quality, and continuous improvement.

Rationale:

  • Ensures accountability and ownership
  • Improves integration quality and reliability
  • Facilitates discovery and reuse
  • Enables better governance and lifecycle management

2.1.5 Cloud-First Integration

Prioritize cloud-native integration solutions that leverage platform services for scalability, resilience, and operational efficiency. Cloud platforms provide managed services that reduce infrastructure complexity and accelerate delivery.

Rationale:

  • Leverages managed services to reduce operational overhead
  • Provides elastic scalability and high availability
  • Enables global reach and geographic distribution
  • Supports hybrid and multi-cloud strategies

2.2 Technical Integration Principles

2.2.1 Loose Coupling

Integrations should minimize dependencies between systems, enabling independent evolution, deployment, and scaling. Use contracts, schemas, and versioning to manage changes without breaking consumers.

Implementation Guidelines:

  • Use asynchronous communication where possible
  • Implement versioned APIs and event schemas
  • Avoid sharing databases or internal data structures
  • Apply circuit breaker and retry patterns for resilience
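The circuit breaker guideline above can be sketched as a minimal Python class. This is an illustration only (the thresholds and names are not a mandated implementation):

```python
import time

class CircuitBreaker:
    """Open the circuit after `max_failures` consecutive errors,
    rejecting calls until `reset_timeout` seconds have elapsed."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: call rejected")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success resets the failure count
        return result
```

Production services would normally use an established resilience library rather than a hand-rolled breaker; the sketch only shows the state machine the guideline refers to.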

2.2.2 Idempotency and Reliability

All integration endpoints should be designed to handle duplicate requests safely. Implement idempotency keys, deduplication mechanisms, and retry logic to ensure reliable message delivery.

Implementation Guidelines:

  • Design all APIs and event handlers as idempotent operations
  • Use unique identifiers for request tracking and deduplication
  • Implement retry mechanisms with exponential backoff
  • Ensure exactly-once or at-least-once delivery semantics
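A minimal sketch of the idempotency-key and retry guidelines, assuming an in-memory store (a real system would use a shared store such as a database or cache, and a real transport):

```python
import time

# Idempotency-key -> cached response. In production this would be a
# shared, durable store (e.g. a database table or distributed cache).
_processed = {}

def handle_payment(idempotency_key, amount):
    """Process a request at most once per idempotency key;
    duplicate requests return the original response unchanged."""
    if idempotency_key in _processed:
        return _processed[idempotency_key]
    response = {"status": "processed", "amount": amount}  # the actual side effect
    _processed[idempotency_key] = response
    return response

def call_with_retries(func, attempts=4, base_delay=0.5):
    """Retry with exponential backoff: 0.5s, 1s, 2s, ..."""
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Because the handler is idempotent, the retry wrapper can safely re-send the same request after a transient failure without risking a double charge.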

2.2.3 Security in Depth

Apply multiple layers of security controls including authentication, authorization, encryption, input validation, and threat monitoring. Security should be embedded at every integration layer.

Implementation Guidelines:

  • Enforce OAuth 2.0 / OpenID Connect for authentication
  • Implement role-based access control (RBAC) for authorization
  • Encrypt all data in transit using TLS 1.3 or higher
  • Apply rate limiting and DDoS protection
  • Monitor and log all integration activity

2.2.4 Schema and Contract Management

Define explicit contracts for APIs and events using formal schema languages (OpenAPI, JSON Schema, Avro, Protobuf). Version and publish schemas in a centralized registry for discovery and governance.

Implementation Guidelines:

  • Use OpenAPI 3.0+ for REST API specifications
  • Define schemas using JSON Schema for web-based integrations and Apache Avro for high-throughput data events
  • Maintain schema registries for discoverability
  • Implement schema validation in producers and consumers
  • Apply backward and forward compatibility principles
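As an illustration of producer- and consumer-side contract validation, the sketch below hand-rolls a minimal check against a hypothetical "TradeSettled" contract. Real implementations should validate against a formal JSON Schema (or Pydantic model) published in the schema registry; this only shows the principle:

```python
# Hypothetical versioned event contract, for illustration only.
TRADE_SETTLED_V1 = {
    "schema": "TradeSettled",
    "version": 1,
    "required": {"trade_id": str, "amount": float, "currency": str},
}

def validate(event, contract):
    """Return a list of contract violations (an empty list means valid)."""
    errors = []
    for field, expected_type in contract["required"].items():
        if field not in event:
            errors.append(f"missing field: {field}")
        elif not isinstance(event[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors
```

Running this check in both the producer (before publishing) and the consumer (before processing) catches contract drift early, which is the point of the guideline above.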

2.2.5 Observability and Monitoring

All integrations must emit comprehensive telemetry including logs, metrics, and distributed traces. Implement correlation IDs to track requests across system boundaries.

Implementation Guidelines:

  • Implement structured logging with correlation IDs
  • Emit business and technical metrics
  • Use distributed tracing (OpenTelemetry standard)
  • Establish monitoring dashboards and alerting
  • Track SLAs and error rates
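A minimal structured-logging sketch with a correlation ID (the field names and header convention are illustrative, not a mandated format):

```python
import json
import logging
import sys
import uuid

logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s")
logger = logging.getLogger("integration")

def log_event(message, correlation_id, **fields):
    """Emit one structured (JSON) log line carrying the correlation ID,
    so a single request can be traced across integration hops."""
    record = {"message": message, "correlation_id": correlation_id, **fields}
    logger.info(json.dumps(record))
    return record  # returned so callers can enrich or assert on it

# The correlation ID is generated at the edge and propagated downstream,
# typically via an HTTP header (e.g. X-Correlation-ID; name illustrative).
cid = str(uuid.uuid4())
log_event("order.received", cid, order_id="ORD-1", latency_ms=12)
```

Because every line is a single JSON object, log aggregators can index the `correlation_id` field and reconstruct the full request path.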

3. Integration Patterns and Styles

Integration patterns define the architectural approaches for connecting systems and exchanging data. Selecting the appropriate pattern depends on requirements such as latency, consistency, volume, and coupling.

3.1 API-Led Connectivity

API-Led Connectivity organizes integrations into three distinct layers: Experience, Process, and System APIs. This pattern promotes reusability, maintainability, and separation of concerns.

Pattern Description

The API-Led approach structures integration architecture into logical layers, each with specific responsibilities and consumers. This layered approach enables composability and reuse while maintaining clear boundaries.

Layer Definitions

  • Experience APIs: Channel-specific APIs tailored for specific consumer experiences (web, mobile). These APIs aggregate and transform data for optimal consumption.
  • Process APIs: Orchestration and business logic APIs that coordinate across multiple systems to implement business processes. These APIs encapsulate business rules and workflows.
  • System APIs: Technical APIs that expose underlying system capabilities with minimal transformation. These APIs provide standardized access to systems of record.

When to Use

  • Building omnichannel digital experiences
  • Creating reusable integration assets
  • Implementing complex business processes
  • Abstracting system complexity from consumers

Technology Alignment

  • Microservices: FastAPI (Python), Node.js with Express/NestJS (exposed through Sensedia)
  • API Gateway: Sensedia API Management Platform (Standard), Cloud-native API gateways (GCP API Gateway, AWS API Gateway) for specific use cases
  • Documentation: OpenAPI (Swagger)

Anti-Patterns to Avoid

  • Bypassing layers and connecting directly to System APIs from user interfaces
  • Duplicating business logic across multiple layers
  • Creating Experience APIs that don’t serve actual consumer needs

3.2 Event-Driven Integration

Event-Driven Integration enables systems to communicate through asynchronous events, supporting real-time responsiveness, loose coupling, and scalability.

Pattern Description

Systems publish events when state changes occur, and interested consumers subscribe to relevant event types. Event brokers decouple publishers from subscribers, enabling independent evolution and scaling.

Event Patterns

  • Event Notification: Lightweight notifications that signal state changes, prompting consumers to query for details
  • Event-Carried State Transfer: Events contain complete state information, reducing consumer dependencies on source systems
  • Event Sourcing: Events are stored as the primary source of truth, enabling state reconstruction and temporal queries
  • Command Query Responsibility Segregation (CQRS): Separates read and write models, optimizing each for specific use cases

When to Use

  • Real-time data synchronization across systems
  • Implementing reactive and responsive architectures
  • Decoupling microservices communication
  • Building audit trails and temporal analytics
  • Handling high-volume, high-velocity data streams

Technology Alignment

  • Event Platform: Sensedia Event Hub (Standard), GCP Pub/Sub
  • Event Streaming: Sensedia Event Hub, GCP Dataflow
  • Event Processing: Sensedia Integrations, Cloud Functions (GCP)
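The event-carried state transfer pattern can be illustrated with a small event envelope. The field names below are assumptions for illustration, not a Sensedia Event Hub contract:

```python
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class Event:
    """Illustrative envelope for event-carried state transfer: the payload
    carries the complete new state, so consumers do not need to call back
    into the source system."""
    event_type: str
    payload: dict
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    occurred_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    schema_version: int = 1  # versioned so consumers can evolve independently

    def to_json(self):
        return json.dumps(asdict(self))

# Hypothetical example: a fund valuation update published to the broker.
evt = Event("fund.valuation.updated",
            {"fund_id": "F-123", "nav": 101.37, "currency": "USD"})
message = evt.to_json()  # this string is what gets published to the topic
```

The unique `event_id` also supports the idempotency guidance in §2.2.2: consumers can deduplicate on it when the broker delivers at-least-once.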

3.3 Batch Integration

Batch Integration processes large volumes of data in scheduled or triggered batches, optimizing for throughput rather than latency.

Pattern Description

Data is collected, transformed, and transferred in bulk operations, typically on scheduled intervals. Batch processing is efficient for high-volume data movement and complex transformations.

When to Use

  • High-volume data transfers where real-time processing is not required
  • Complex data transformations requiring significant processing time
  • Integration with legacy systems lacking real-time capabilities
  • Optimizing resource utilization during off-peak hours

Technology Alignment

  • Orchestration: Apache Airflow (Cloud Composer for GCP)
  • Processing: Python scripts, Cloud Dataflow, BigQuery
  • Scheduling: Cloud Scheduler, Airflow DAGs
  • File Transfer: Cloud Storage, SFTP
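A minimal Airflow DAG configuration sketch for a scheduled batch pipeline (Airflow 2.x style; the DAG name, schedule, and task bodies are hypothetical, and parameter names vary slightly between Airflow versions):

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    ...  # pull source files into Cloud Storage (hypothetical step)

def transform(**context):
    ...  # validate and reshape the data

def load(**context):
    ...  # load the result into BigQuery

with DAG(
    dag_id="nightly_reconciliation",  # hypothetical pipeline name
    schedule="0 2 * * *",             # run daily at 02:00
    start_date=datetime(2026, 1, 1),
    catchup=False,
    default_args={"retries": 3, "retry_delay": timedelta(minutes=5)},
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load  # dependency chain with per-task retries
```

The `>>` operators declare the task dependency graph, and `default_args` gives every task the retry policy called for above.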

3.4 File-Based Integration

File-Based Integration exchanges data through structured file formats, supporting integration with systems lacking API capabilities.

Supported File Formats

  • CSV: Simple tabular data exchange
  • JSON: Structured data with hierarchical relationships
  • XML: Enterprise data exchange with schema validation
  • Parquet: Optimized columnar format for analytics
  • Excel: Business user-friendly format (with limitations)

When to Use

  • Integrating with legacy systems without API capabilities
  • Exchanging data with external partners and vendors
  • Bulk data import/export operations
  • Regulatory reporting and compliance data submission

Technology Alignment

  • Storage: Google Cloud Storage, SFTP servers
  • Processing: Python scripts, Cloud Functions
  • Orchestration: Airflow for scheduled file processing
  • Validation: Schema validation libraries, data quality tools

3.5 Real-Time Integration

Real-Time Integration provides immediate data synchronization and processing with minimal latency, supporting time-sensitive business processes.

Implementation Approaches

  • Synchronous REST APIs: Direct request/response for immediate feedback
  • WebSockets: Bidirectional streaming for real-time updates
  • Server-Sent Events (SSE): Server-to-client event streaming
  • Change Data Capture (CDC): Database-level change streaming

When to Use

  • User-facing applications requiring immediate feedback
  • Financial transactions and payment processing
  • Real-time fraud detection and risk assessment
  • Live dashboards and monitoring systems
  • Collaborative applications and notifications

Technology Alignment

  • Synchronous APIs: Sensedia API Management (gateway) with FastAPI, Node.js microservices
  • WebSockets: Socket.IO, native WebSocket support
  • CDC: GCP Datastream
  • Streaming: Sensedia Event Hub, Cloud Pub/Sub

See §5.0 Platform Selection Guide for criteria on when to use Sensedia vs. other platforms.

3.6 Hybrid Integration

Hybrid Integration combines multiple patterns to address complex requirements, balancing real-time responsiveness with batch efficiency.

Pattern Description

Different integration patterns are applied to different parts of the system based on specific requirements. This pragmatic approach optimizes for the appropriate trade-offs in each context.

Common Hybrid Scenarios

  • Real-time transactional data with batch analytics processing
  • Event-driven core with batch reconciliation
  • API-led user interactions with scheduled backend synchronization
  • File-based partner integration with real-time internal APIs

3.7 AI-Augmented Integration

AI-Augmented Integration incorporates Large Language Model (LLM) inference, AI agents, and intelligent automation into integration workflows, enabling dynamic decision-making, document interpretation, and human-in-the-loop processing.

Pattern Description

AI-augmented integrations include one or more AI model calls as processing steps within a standard integration flow. The AI component may classify, extract, summarize, route, or generate content, with results driving downstream logic.

When to Use

  • Document interpretation: Parsing unstructured documents (PDFs, emails, reports) and extracting structured data
  • Intelligent routing: Dynamic workflow branching based on AI classification of content or intent
  • Human-in-the-loop: Low-confidence AI outputs escalated for human review before proceeding
  • Conversational integrations: Chatbot-driven or voice-driven system interactions
  • AI agent workflows: Chaining multiple AI tool calls with conditional logic and external data grounding (MCP)

Key Characteristics

  • Non-deterministic outputs: AI responses may vary — implement validation and confidence thresholds
  • Latency: LLM calls add seconds of latency — design flows as asynchronous where possible
  • Cost awareness: Token consumption has direct cost; apply caching and prompt optimization
  • Auditability: Log AI inputs, outputs, model version, and confidence scores for traceability
  • Fallback logic: Always define behavior when AI confidence is below threshold or model is unavailable
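The confidence-threshold, auditability, and fallback characteristics above can be sketched as a single routing step (the threshold value, labels, and queue name are assumptions, not platform specifics):

```python
CONFIDENCE_THRESHOLD = 0.85  # illustrative; tune per use case and risk level

def route_ai_result(classification, confidence, model_version):
    """Return a routing decision plus an audit record for the AI step."""
    audit = {
        "classification": classification,
        "confidence": confidence,
        "model_version": model_version,  # logged for traceability
    }
    if confidence >= CONFIDENCE_THRESHOLD:
        decision = "auto_process"        # high confidence: continue the flow
    else:
        decision = "human_review_queue"  # low confidence: human-in-the-loop
    audit["decision"] = decision
    return decision, audit
```

The same fallback branch should also fire when the model is unavailable, so the workflow degrades to human review rather than failing silently.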

Technology Alignment

  • Orchestration: N8N (primary — visual workflow with native AI nodes)
  • AI Models: Vertex AI (Gemini), OpenAI / Azure OpenAI, Anthropic Claude (via API)
  • Tool Use / MCP: Model Context Protocol servers for grounding AI in internal data
  • Triggers: Webhooks, scheduled jobs, event-driven (Cloud Pub/Sub, Sensedia Event Hub)

Anti-Patterns to Avoid

  • Using AI where deterministic logic suffices (unnecessary cost and latency)
  • Routing AI-generated content directly to production systems without validation
  • Hardcoding prompts without version control — treat prompts as versioned artifacts
  • Skipping human review in high-risk or regulated processes

4. Integration Architecture Layers

Integration architecture is organized into logical layers, each serving specific purposes and consumer types. This layered approach promotes reusability, maintainability, and separation of concerns.

4.1 Experience Layer

Purpose

The Experience Layer provides consumer-optimized APIs tailored for specific channels and user experiences. These APIs aggregate, transform, and present data in formats optimized for web, mobile, IoT, and partner integrations.

Characteristics

  • Channel-specific data formats and structures
  • Optimized payload sizes for performance
  • Consumer-centric naming and design
  • Aggregation of multiple backend services
  • Response caching and optimization

Responsibilities

  • Transform backend data into consumer-friendly formats
  • Implement channel-specific business logic
  • Aggregate data from multiple Process/System APIs
  • Apply consumer-specific security and throttling
  • Provide versioned, stable contracts for consumers

Technology Standards

  • RESTful APIs with JSON responses (managed through Sensedia API Management Platform)
  • GraphQL for flexible data querying (alternative)
  • OpenAPI documentation
  • OAuth 2.0 authentication
  • Rate limiting and quota management (via Sensedia)

4.2 Process Layer

Purpose

The Process Layer orchestrates business processes across multiple systems, implementing business logic, workflows, and complex transformations.

Characteristics

  • Business process orchestration
  • Cross-system coordination
  • Business rule implementation
  • Data transformation and enrichment
  • Transaction management and compensation

Responsibilities

  • Coordinate multi-step business processes
  • Implement business rules and validation logic
  • Transform data between system formats
  • Handle error compensation and rollback
  • Maintain process state and audit trails

Technology Standards

  • RESTful APIs for process invocation
  • Integration Platform: Sensedia iPaaS (Standard), N8N (AI automations), Power Automate (limited to Microsoft-ecosystem flows)
  • Approval Workflows / BPM: Fluig (multi-step business approvals, document routing)
  • IT Service Management: ServiceNow (incident ticketing, change requests, ITSM)
  • Business rule engines where applicable
  • Event-driven orchestration: Sensedia Event Hub for async processes
  • Saga pattern for distributed transactions

Common Use Cases

  • Order processing and fulfillment
  • Customer onboarding workflows
  • Approval and review processes
  • Multi-system data synchronization
  • Composite business operations

4.3 System Layer

Purpose

The System Layer provides standardized, technical APIs that expose underlying system capabilities with minimal transformation. These APIs abstract system complexity and provide stable contracts for upstream consumers.

Characteristics

  • One-to-one mapping to backend systems
  • Minimal business logic or transformation
  • Technical naming reflecting system capabilities
  • Standardized error handling and response formats
  • Performance optimization (caching, batching)

Responsibilities

  • Expose system of record capabilities via APIs
  • Implement system-specific authentication and authorization
  • Standardize error responses and status codes
  • Apply connection pooling and resource management
  • Provide technical documentation

Technology Standards

  • RESTful APIs following OpenAPI standards
  • FastAPI (Python) or Node.js for microservice implementation (exposed via Sensedia API Management)
  • System-specific authentication (database, legacy systems)
  • Connection pooling and resource optimization
  • Comprehensive error handling and logging

4.4 Data Integration Layer

Purpose

The Data Integration Layer handles bulk data movement, transformation, and synchronization between systems, supporting analytics, reporting, and data warehousing.

Characteristics

  • Batch and real-time data pipelines
  • Extract, Transform, Load (ETL) / Extract, Load, Transform (ELT) processes
  • Data quality validation and cleansing
  • Schema mapping and transformation
  • Medallion architecture alignment (Bronze, Silver, Gold)

Responsibilities

  • Extract data from source systems
  • Transform and cleanse data according to business rules
  • Load data into target systems (data lake, data warehouse)
  • Maintain data lineage and audit trails
  • Monitor data quality and pipeline health

Technology Standards

  • Apache Airflow (Cloud Composer) for orchestration
  • BigQuery for data warehousing
  • Cloud Storage for data lake
  • Python for data transformation
  • Dataplex for data governance

Integration with Data Architecture

  • Bronze Layer: Raw data ingestion with minimal transformation
  • Silver Layer: Cleaned, validated, and standardized data
  • Gold Layer: Business-ready, aggregated datasets
  • Datamart Layer: Curated data for specific consumption

See [PAX-EA] 5. Data Architecture for detailed data integration standards.


5. Integration Technologies and Platforms

This section defines the approved technologies for implementing integration solutions, aligned with [PAX-EA] 3. Technology Standards.

Sensedia serves as the central integration platform for Patria, providing:

  • Sensedia iPaaS: Integration Platform as a Service for internal integrations (scheduled jobs, stored procedures, internal system connectivity)
  • Sensedia API Management Platform: Primary API gateway and management for both internal and external APIs
  • Sensedia Event Hub: Event streaming and messaging platform
  • Deployment: Cloud-hosted by Sensedia (IaaS on AWS)
  • Connectivity: Native integration with BigQuery and Cloud Pub/Sub

Other technologies (FastAPI, Cloud Functions, N8N, UiPath, Fluig) serve as complementary tools for specific use cases, each with defined scope described in §5.0 below.

Note on Technology Standards alignment: [PAX-EA] 3 Technology Standards should be updated to formally include Sensedia (API Management, iPaaS, Event Hub) and Apache Airflow (Cloud Composer) as approved integration platforms. Until that update, this document takes precedence for integration platform decisions.

5.0 Integration Platform Selection Guide

Selecting the right integration platform is critical for maintainability, performance, and cost efficiency. This section provides decision criteria for the five integration platforms used at Patria.

For detailed use cases and worked examples, refer to the [PAX-EA] Integration Use Case Catalog.

Decision Matrix

Criteria | Airflow (Cloud Composer) | Sensedia | N8N | UiPath | Fluig
Primary use case | Data pipelines & batch orchestration | API-led connectivity & microservice integration | Automations & AI workflows | Robotic Process Automation (RPA) | Approval workflows & BPM
Data volume | High (GB–TB) | Low–medium (request/response) | Low–medium | Low (per-transaction) | Low (form/document)
Latency | Minutes to hours (batch) | Milliseconds to seconds | Seconds to minutes | Minutes (attended/unattended) | Hours to days (human steps)
Integration style | Batch, ELT/ETL, file-based | Synchronous REST, event-driven | Event-triggered, AI-augmented | Screen/UI interaction | BPM, form-driven
Managed by | Data / Platform Engineering | Integration team | IT / Business Automation | IT / RPA team | Business / IT

When to Use Airflow (Cloud Composer)

Use Airflow when the integration involves:

  • High-volume data movement between systems (financial transactions, market data, regulatory filings)
  • Multi-step data pipelines with dependency management between tasks (Bronze → Silver → Gold)
  • Scheduled batch processing with complex retry policies and SLA requirements
  • ETL/ELT workloads requiring Python-based transformation, validation, or enrichment logic

Do not use Airflow for: real-time synchronous API calls, simple webhooks, UI automation, or human-in-the-loop workflows.

Examples at Patria: XBRL financial filing ingestion from CMF into BigQuery; nightly reconciliation between Geneva and the data warehouse; scheduled extraction of market data feeds.

When to Use Sensedia

Use Sensedia (iPaaS + API Management Platform) when the integration involves:

  • API-led connectivity: exposing or consuming microservices via REST, with routing, transformation, or security enforcement
  • Simple integration flows: protocol transformation, payload mapping, header enrichment, scheduled jobs, stored procedures
  • API security enforcement: OAuth 2.0 token validation, rate limiting, API key management
  • Event streaming: publishing or consuming events via Sensedia Event Hub

Do not use Sensedia for: heavy data transformation workloads, AI-augmented workflows, or long-running batch jobs.

Examples at Patria: Exposing Geneva portfolio data as a REST API; price revision flows between systems (Camel-based flows); event-driven notifications on fund valuation updates.

When to Use N8N

Use N8N when the integration involves:

  • AI-augmented automation: LLM-based document interpretation, MCP tool orchestration, Copilot integrations
  • Business process automation with human-in-the-loop steps or conditional branching (non-approval)
  • Cross-platform automation: connecting systems that lack APIs (web scraping, file downloads, email parsing)
  • Rapid prototyping and low-code workflows connecting SaaS tools

Do not use N8N for: production-critical API gateway functions (use Sensedia), high-volume data pipelines (use Airflow), or formal approval workflows with audit trails (use Fluig).

Examples at Patria: Downloading XBRL files from CMF and uploading to GCS; AI-assisted document classification; automated Slack/email notifications triggered by business events.

When to Use UiPath

Use UiPath when the integration involves:

  • Automating repetitive interactions with systems that lack APIs: legacy application data entry, screen scraping, cross-application data transfer
  • Back-office process automation: report generation, file manipulation in desktop applications, attended and unattended bots
  • Bridging legacy systems: acting as a human-substitute to extract or input data in systems that cannot be integrated via API

Do not use UiPath for: systems that expose APIs (use Sensedia or Airflow instead), real-time integrations, AI-augmented decision flows (use N8N), or approval workflows (use Fluig).

UiPath integrations should be treated as a transitional solution. Where possible, the preferred long-term approach is to expose the target system via API and replace the RPA bot with a Sensedia integration.

When to Use Fluig

Use Fluig when the integration involves:

  • Multi-step approval workflows with human decision points (purchase orders, contracts, onboarding, expense approval)
  • Document routing and review processes requiring signatures, comments, or conditional routing
  • BPM processes with audit trail requirements: who approved, when, with what justification
  • Form-driven business processes integrated with document management

Do not use Fluig for: system-to-system data integrations (use Sensedia), batch data processing (use Airflow), or IT change management and incident ticketing (use ServiceNow).

Fluig vs ServiceNow: Fluig manages business-level approval workflows (purchasing, contracts, HR). ServiceNow manages IT service management processes (incidents, change requests, ticketing). These are complementary, not competing.

Platform Selection Flowchart

Is it a data pipeline moving or transforming large volumes of data?
├── YES → Airflow (Cloud Composer)
└── NO
    Does it require automating a UI or interacting with a system that has no API?
    ├── YES → UiPath
    └── NO
        Is it a multi-step approval or document workflow with human decision points?
        ├── YES → Fluig
        └── NO
            Is it AI-augmented, an automation, or involves human-in-the-loop (non-approval)?
            ├── YES → N8N
            └── NO → Sensedia (iPaaS or API Management)
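The flowchart can be codified as a small helper whose boolean flags mirror the four questions, evaluated top to bottom:

```python
def select_platform(data_pipeline=False, ui_only_system=False,
                    approval_workflow=False, ai_or_automation=False):
    """Apply the platform selection flowchart in order.

    Flags correspond to the four questions: large-volume data pipeline,
    UI-only system (no API), human approval workflow, AI/automation.
    """
    if data_pipeline:
        return "Airflow (Cloud Composer)"
    if ui_only_system:
        return "UiPath"
    if approval_workflow:
        return "Fluig"
    if ai_or_automation:
        return "N8N"
    return "Sensedia (iPaaS or API Management)"

# Example: an AI-augmented document workflow with no bulk data movement.
platform = select_platform(ai_or_automation=True)  # -> "N8N"
```

The order of the checks matters: a data pipeline that also has an AI step still lands on Airflow first, matching the flowchart's top-down evaluation.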

Hybrid scenarios: Platforms can be combined when each contributes its strength. For example:

  • N8N downloads a file and places it in GCS → Airflow picks it up for pipeline processing
  • UiPath extracts data from a legacy system → Sensedia exposes it via API
  • Fluig approval triggers a Sensedia integration to provision access

5.1 API Development and Management

API Development Frameworks

Component | Standard Technology | Version | Alternative | Version
Backend Framework | FastAPI (Python) | Latest | Node.js (Express/NestJS) | Latest LTS
API Documentation | OpenAPI (Swagger) | 3.0+ | — | —
API Testing | Pytest + Requests | Latest | Postman, Newman | Latest
Schema Validation | Pydantic (Python) | Latest | Joi (Node.js) | Latest

API Gateway and Management

Component | Standard Technology | Alternative
API Gateway | Sensedia API Management Platform | GCP API Gateway, Cloud Endpoints (specific use cases)
Rate Limiting | Sensedia API Management (built-in) | Gateway-specific capabilities
API Analytics | Sensedia Analytics | Cloud Monitoring, custom dashboards
Developer Portal | Sensedia Developer Portal | Custom developer portal

Authentication and Authorization

Component | Standard Technology | Notes
Identity Provider | Microsoft Entra ID (formerly Azure AD) | Centralized identity management
API Authentication | OAuth 2.0 / OpenID Connect | Industry standard
Service-to-Service | Service account tokens, API keys | Managed through Secret Manager
Token Management | JWT (JSON Web Tokens) | Short-lived, signed tokens

5.2 Workflow and Process Automation

Process Orchestration

Component | Standard Technology | Alternative | Use Case
Integration Platform | Sensedia iPaaS | N8N (AI/automations) | Internal integrations, scheduled jobs, stored procedures
Approval Workflows / BPM | Fluig | Power Automate (limited) | Multi-step approvals, document routing, BPM (purchase orders, contracts, onboarding)
IT Service Management (ITSM) | ServiceNow | — | Incident management, change requests, ticketing
RPA | UiPath | Power Automate Desktop | Automates repetitive interactions with systems that lack APIs (legacy data entry, screen scraping, cross-application data transfer, report generation)
AI Automations | N8N | Custom scripts | AI-augmented workflows, document interpretation via MCP, AI integrations

5.3 Event Streaming and Messaging

Message Brokers and Event Platforms

| Component | Standard Technology | Alternative | Use Case |
|---|---|---|---|
| Event Streaming | Sensedia Event Hub | — | Primary event platform, async messaging |
| Message Queuing | Sensedia Event Hub | Cloud Pub/Sub | Task queues, decoupling, integrates with Cloud Pub/Sub |
| Event Processing | Cloud Functions, Dataflow | Azure Functions | Event-driven processing |

Event Platform Requirements:

  • At-least-once or exactly-once delivery guarantees
  • Message ordering where required
  • Dead letter queue support
  • Message retention and replay capabilities
  • Schema registry integration

5.4 Data Integration

Data Integration Platforms

| Component | Standard Technology | Alternative | Use Case |
|---|---|---|---|
| Orchestration | Cloud Composer (Apache Airflow) | — | Batch data pipelines |
| ETL Processing | Python, BigQuery | Dataflow | Data transformation |
| Data Quality | Python validation libraries, dbt | Great Expectations | Data validation |
| Change Data Capture | GCP Datastream | Debezium | Real-time data sync |

File Transfer and Storage

| Component | Standard Technology | Alternative |
|---|---|---|
| File Storage | Google Cloud Storage | Azure Blob Storage, AWS S3 |
| Secure File Transfer | Cloud Storage APIs, SFTP | Managed Transfer Service |
| File Parsing | Python (Pandas, PyArrow) | Cloud Functions |

See [PAX-EA] 5. Data Architecture for comprehensive data integration standards.

5.5 Integration Monitoring and Observability

Monitoring and Observability Stack

| Component | Standard Technology | Alternative |
|---|---|---|
| Centralized Logging | Cloud Logging | ELK Stack, Splunk |
| Metrics and Monitoring | Cloud Monitoring | Datadog, Prometheus + Grafana |
| Distributed Tracing | Cloud Trace | Jaeger, Zipkin |
| Application Performance | Cloud Profiler | Datadog APM |
| Alerting | Cloud Monitoring Alerts | PagerDuty integration |

Observability Requirements:

  • Correlation IDs across all integration calls
  • Structured logging (JSON format)
  • Integration-specific metrics and dashboards
  • SLA monitoring and violation alerts
  • Error rate tracking and anomaly detection

6. API Standards and Governance

APIs are the primary integration mechanism in our architecture. This section defines comprehensive standards for API design, development, documentation, and lifecycle management.

6.1 API Design Principles

6.1.1 Design for the Consumer

APIs should be designed from the consumer’s perspective, prioritizing ease of use, clarity, and consistency. API naming, structure, and behavior should align with consumer mental models and use cases.

Guidelines:

  • Use consumer-friendly, business-oriented naming
  • Provide predictable, consistent patterns across APIs
  • Minimize the number of API calls needed for common scenarios
  • Design intuitive request/response structures
  • Provide helpful error messages and guidance

6.1.2 Platform Independence

APIs should be platform and technology agnostic, avoiding assumptions about consumer implementation. Use standard protocols, formats, and conventions that work across diverse platforms.

Guidelines:

  • Use standard HTTP methods and status codes
  • Return JSON as the default response format
  • Support standard authentication mechanisms (OAuth 2.0)
  • Avoid technology-specific assumptions or requirements

6.1.3 Stability and Backward Compatibility

Published APIs represent contracts that consumers depend on. Maintain backward compatibility and provide clear migration paths when changes are necessary.

Guidelines:

  • Never break existing API contracts without versioning
  • Implement graceful deprecation with advance notice
  • Support old versions for a defined transition period
  • Provide migration guides and tooling

6.1.4 Self-Service and Discoverability

APIs should be easily discoverable and usable without extensive support. Comprehensive documentation, examples, and developer tools enable self-service integration.

Guidelines:

  • Publish comprehensive OpenAPI specifications
  • Provide interactive API documentation (Swagger UI)
  • Include code examples in multiple languages
  • Offer sandbox environments for testing
  • Maintain up-to-date documentation

6.2 API Design Standards

6.2.1 RESTful API Standards

All REST APIs must follow these standards:

Resource Naming

  • Use plural nouns for resource collections: /customers, /orders
  • Use lowercase with hyphens for multi-word resources: /purchase-orders
  • Use nested resources for relationships: /customers/{id}/orders
  • Avoid verbs in URLs (use HTTP methods instead)

HTTP Methods

  • GET: Retrieve resources (must be idempotent and safe)
  • POST: Create new resources or execute actions
  • PUT: Replace entire resource (idempotent)
  • PATCH: Partial update of resource (not necessarily idempotent)
  • DELETE: Remove resource (idempotent)

HTTP Status Codes

| Code | Meaning | When to Use |
|---|---|---|
| 200 OK | Success | Successful GET, PUT, PATCH requests |
| 201 Created | Resource created | Successful POST creating a resource |
| 204 No Content | No body | Successful DELETE or update with no response body |
| 400 Bad Request | Invalid request | Invalid request syntax or parameters |
| 401 Unauthorized | Auth missing/invalid | Missing or invalid authentication |
| 403 Forbidden | Insufficient permissions | Valid authentication but insufficient permissions |
| 404 Not Found | Resource missing | Resource does not exist |
| 409 Conflict | State conflict | Request conflicts with current state |
| 422 Unprocessable Entity | Semantic error | Semantic validation errors |
| 429 Too Many Requests | Rate limited | Rate limit exceeded |
| 500 Internal Server Error | Server failure | Unexpected server error |
| 503 Service Unavailable | Temp unavailable | Service temporarily unavailable |

Example Error Response

{
  "error": {
    "code": "INVALID_CUSTOMER_ID",
    "message": "Customer ID must be a valid UUID",
    "details": [
      {
        "field": "customer_id",
        "issue": "invalid_format"
      }
    ],
    "trace_id": "550e8400-e29b-41d4-a716-446655440000"
  }
}
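
A small helper can guarantee that every error follows the envelope above. This is a minimal sketch, not a prescribed implementation; the `error_response` name is illustrative, and in a real service the `trace_id` would come from the request's correlation context rather than a fresh UUID.

```python
import uuid

def error_response(code, message, details=None):
    """Build the standard error envelope: code, message, details, trace_id."""
    return {
        "error": {
            "code": code,
            "message": message,
            "details": details or [],
            # Stand-in: real services propagate the request's correlation ID.
            "trace_id": str(uuid.uuid4()),
        }
    }
```

Centralizing the envelope in one place (for example, a FastAPI exception handler) keeps error structures consistent across endpoints, which §6.1.1 requires.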

6.2.2 Pagination Standards

For collection resources, implement pagination to manage large datasets.

Cursor-Based Pagination (Preferred)

GET /orders?limit=50&cursor=eyJpZCI6MTIzfQ==
 
Response Headers:
X-Next-Cursor: eyJpZCI6MTczfQ==
X-Has-More: true
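
The cursor is opaque to the consumer; one common implementation (assumed here, not mandated) is a base64-encoded JSON position marker, which is exactly what the example cursor eyJpZCI6MTIzfQ== decodes to. A minimal sketch over in-memory rows:

```python
import base64
import json

def encode_cursor(position):
    # Opaque position marker: base64-encoded compact JSON.
    return base64.b64encode(json.dumps(position, separators=(",", ":")).encode()).decode()

def decode_cursor(cursor):
    return json.loads(base64.b64decode(cursor))

def page(rows, limit, cursor=None):
    """Return one page of rows ordered by id, plus the next cursor (or None)."""
    last_id = decode_cursor(cursor)["id"] if cursor else 0
    selected = [r for r in rows if r["id"] > last_id][:limit]
    has_more = bool(selected) and selected[-1]["id"] < max(r["id"] for r in rows)
    next_cursor = encode_cursor({"id": selected[-1]["id"]}) if has_more else None
    return selected, next_cursor
```

Because the cursor pins a position rather than an offset, pages stay stable even when rows are inserted or deleted between requests, which is why cursor-based pagination is preferred.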

Offset-Based Pagination (Alternative)

GET /orders?limit=50&offset=100
 
Response Headers:
X-Total-Count: 1523
X-Page: 3
X-Page-Size: 50

6.2.3 Filtering, Sorting, and Searching

Support flexible querying capabilities:

Filtering

GET /products?category=electronics&price_min=100&price_max=500

Sorting

GET /orders?sort=created_at:desc,amount:asc

Field Selection

GET /customers?fields=id,name,email
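
Parsing these query parameters defensively is part of the contract. A sketch of the sort and field-selection grammars shown above, with whitelist rejection of unknown fields (function names are illustrative):

```python
def parse_sort(sort_param):
    """Parse 'created_at:desc,amount:asc' into (field, direction) pairs."""
    pairs = []
    for part in sort_param.split(","):
        field, _, direction = part.partition(":")
        direction = direction or "asc"  # direction defaults to ascending
        if direction not in ("asc", "desc"):
            raise ValueError(f"invalid sort direction: {direction}")
        pairs.append((field, direction))
    return pairs

def parse_fields(fields_param, allowed):
    """Parse 'id,name,email', rejecting anything outside the whitelist."""
    fields = [f.strip() for f in fields_param.split(",") if f.strip()]
    unknown = set(fields) - set(allowed)
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    return fields
```

Rejecting unknown sort or selection fields up front (rather than silently ignoring them) gives consumers the "helpful error messages" §6.1.1 calls for and prevents accidental exposure of internal column names.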

6.2.4 Asynchronous Operations

For long-running operations, implement asynchronous patterns:

  1. Accept request and return 202 Accepted with operation tracking URL
  2. Provide status endpoint to check operation progress
  3. Support optional webhooks for completion notification

POST /reports
Response: 202 Accepted
{
  "operation_id": "op_123456",
  "status": "processing",
  "status_url": "/operations/op_123456",
  "estimated_completion": "2025-02-23T15:30:00Z"
}
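
The three steps above can be sketched as a pair of handlers. This is a minimal in-memory illustration, assuming hypothetical function names and an operation store that a real service would persist (database, Firestore, etc.):

```python
import uuid
from datetime import datetime, timezone

# In-memory stand-in for a durable operation store.
_operations = {}

def submit_report_request():
    """Step 1: accept the request and return 202 with a tracking payload."""
    op_id = f"op_{uuid.uuid4().hex[:6]}"
    _operations[op_id] = {
        "status": "processing",
        "submitted_at": datetime.now(timezone.utc).isoformat(),
    }
    body = {
        "operation_id": op_id,
        "status": "processing",
        "status_url": f"/operations/{op_id}",
    }
    return 202, body

def get_operation_status(op_id):
    """Step 2: the status endpoint consumers poll until completion."""
    op = _operations.get(op_id)
    return (404, None) if op is None else (200, {"operation_id": op_id, **op})
```

A worker would later flip the stored status to "completed" and, per step 3, optionally call a consumer-registered webhook.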

6.3 API Versioning Strategy

6.3.1 Versioning Approach

Use URI path versioning as the primary versioning mechanism:

/v1/customers
/v2/customers

Version Lifecycle

  • v1: Initial release
  • v2: New major version with breaking changes
  • v1 enters deprecation phase when v2 is released
  • Support overlapping versions during transition period (minimum 6 months)
  • Communicate deprecation timeline clearly

6.3.2 What Constitutes a Breaking Change

Breaking changes require a new major version:

  • Removing or renaming fields
  • Changing field data types
  • Modifying error response structures
  • Changing authentication mechanisms
  • Altering required request parameters
  • Modifying behavior of existing operations

Non-Breaking Changes (No Version Change Required):

  • Adding new optional fields
  • Adding new endpoints
  • Adding new optional query parameters
  • Expanding enum values (with proper default handling)
  • Making required fields optional

6.3.3 Deprecation Process

  1. Announce deprecation with minimum 6-month notice
  2. Update API documentation with deprecation warnings
  3. Include deprecation headers in API responses:

Sunset: Sat, 31 Aug 2025 23:59:59 GMT
Deprecation: true
Link: </docs/migration-guide>; rel="deprecation"

  4. Monitor usage of deprecated APIs
  5. Communicate with remaining consumers before sunset
  6. Decommission after sunset date

6.4 API Documentation Requirements

All APIs must provide comprehensive documentation:

OpenAPI Specification

  • Complete OpenAPI 3.0+ specification
  • Detailed descriptions for all operations
  • Request/response examples
  • Schema definitions with validation rules
  • Authentication requirements
  • Error response documentation

Developer Portal Content

  • Getting started guide
  • Authentication and authorization guide
  • Common use case tutorials
  • Code examples (Python, JavaScript minimum)
  • Interactive API explorer (Swagger UI)
  • Changelog and migration guides
  • Support and contact information

6.5 API Lifecycle Management

6.5.1 API Development Lifecycle

  1. Design Phase: Create OpenAPI specification, review with stakeholders
  2. Development Phase: Implement according to specification, create tests
  3. Testing Phase: Deploy to dev, conduct integration/security/performance testing
  4. Release Phase: Deploy to production, publish documentation
  5. Operations Phase: Monitor, support, collect feedback
  6. Retirement Phase: Announce deprecation, support migration, decommission

6.5.2 API Governance and Review

All APIs must undergo architectural review before implementation.

Review Checklist:

  • Alignment with integration principles and patterns
  • Compliance with API design standards
  • Security controls and authentication mechanisms
  • Performance and scalability considerations
  • Documentation completeness
  • Monitoring and observability implementation
  • Disaster recovery and business continuity planning

7. Integration Security

Security is paramount in integration architecture. This section defines the security controls and standards applicable to integrations at Patria.

Authoritative reference: For the complete cybersecurity architecture framework, controls catalog, and implementation specifications, see CSAF-001 Cybersecurity Architecture Controls Framework (CACF). The following summarizes integration-specific security requirements.

7.1 Authentication and Authorization

7.1.1 API Authentication Standards

All APIs must implement robust authentication.

OAuth 2.0 / OpenID Connect (Primary)

  • Use Entra ID (Microsoft AD) as the central identity provider
  • Implement OAuth 2.0 authorization code flow for user-facing applications
  • Use client credentials flow for service-to-service authentication
  • Enforce token expiration and refresh token rotation
  • Validate tokens on every request

Token Format and Validation

  • Use JWT (JSON Web Tokens) for access tokens
  • Validate token signature, expiration, issuer, and audience
  • Implement token revocation capability
  • Use short-lived access tokens (recommended: 1 hour maximum)
  • Store refresh tokens securely and rotate regularly

API Keys (Limited Use Cases)

  • Use only for server-to-server integrations where OAuth is not feasible
  • Generate cryptographically strong, unique keys
  • Rotate keys regularly (recommended: every 90 days)
  • Store keys in Secret Manager, never in code or configuration files
  • Implement key rotation without service disruption

7.1.2 Authorization and Access Control

Role-Based Access Control (RBAC)

  • Define granular roles aligned with business functions
  • Implement least privilege principle
  • Separate read and write permissions
  • Apply role hierarchies where appropriate
  • Audit and review permissions regularly

Authorization Implementation

  • Enforce authorization at the API gateway and application layer
  • Validate permissions for every operation
  • Implement resource-level authorization where applicable
  • Log all authorization decisions
  • Deny by default (fail closed)

7.1.3 Multi-Factor Authentication (MFA)

Requirements:

  • Enforce MFA for all user-facing applications
  • Support TOTP (Time-based One-Time Password) and push notifications
  • Implement adaptive authentication based on risk signals
  • Provide account recovery mechanisms
  • Log all MFA events

7.2 API Security Controls

7.2.1 Input Validation and Sanitization

All API inputs must be validated and sanitized.

Validation Requirements:

  • Validate all request parameters, headers, and body content
  • Enforce strict schema validation using OpenAPI or JSON Schema
  • Reject requests with unexpected or malformed data
  • Validate data types, formats, ranges, and patterns
  • Implement whitelist validation where possible
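
Pydantic is the standard validation tool (§5.1); to illustrate the whitelist principle independently of any framework, here is a minimal dependency-free sketch. The schema contents and function name are illustrative assumptions, not a Patria standard.

```python
import re

# Hypothetical whitelist schema: allowed fields and their format patterns.
CUSTOMER_SCHEMA = {
    "customer_id": re.compile(
        r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$"),
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
}

def validate_payload(payload, schema=CUSTOMER_SCHEMA, max_length=256):
    """Whitelist validation: unknown keys, wrong types, overlong values and
    bad formats are all rejected rather than passed through."""
    errors = []
    for key in payload:
        if key not in schema:
            errors.append({"field": key, "issue": "unexpected_field"})
    for key, pattern in schema.items():
        value = payload.get(key)
        if not isinstance(value, str):
            errors.append({"field": key, "issue": "missing_or_wrong_type"})
        elif len(value) > max_length:
            errors.append({"field": key, "issue": "too_long"})
        elif not pattern.match(value):
            errors.append({"field": key, "issue": "invalid_format"})
    return errors
```

Note the posture: everything not explicitly allowed is an error, mirroring the "reject requests with unexpected or malformed data" requirement.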

Injection Prevention:

  • Use parameterized queries for database access
  • Escape output for rendering contexts
  • Implement Content Security Policy (CSP)
  • Validate and sanitize file uploads
  • Apply input length limits

7.2.2 API Rate Limiting and Throttling

Implement rate limiting to prevent abuse and ensure fair usage.

Rate Limiting Strategy:

  • Apply rate limits at API gateway level
  • Implement tiered limits based on consumer type or subscription
  • Use token bucket or sliding window algorithms
  • Return 429 Too Many Requests with retry guidance

Rate Limit Headers:

X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 742
X-RateLimit-Reset: 1708704000
Retry-After: 3600
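
Rate limiting is enforced at the Sensedia gateway (§5.1), so application code rarely implements it directly; the sketch below only illustrates the token bucket algorithm named above and how the X-RateLimit-* header values are derived. Class and function names are illustrative.

```python
import time

class TokenBucket:
    """Token bucket: steady refill rate plus a burst capacity."""

    def __init__(self, rate_per_hour, burst, clock=time.monotonic):
        self.capacity = burst
        self.tokens = float(burst)
        self.refill_per_sec = rate_per_hour / 3600.0
        self.clock = clock  # injectable for testing
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill proportionally to elapsed time, capped at burst capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # caller should respond 429 with Retry-After

def rate_limit_headers(bucket, limit):
    # Values for the X-RateLimit-* response headers shown above.
    return {"X-RateLimit-Limit": str(limit),
            "X-RateLimit-Remaining": str(int(bucket.tokens))}
```

The burst capacity is what allows the "2× rate for short periods" allowance while the long-run rate stays bounded by the refill rate.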

Recommended Limits:

| Consumer Type | Limit |
|---|---|
| Authenticated users | 1,000 requests/hour |
| Service accounts | 5,000 requests/hour |
| Public endpoints | 100 requests/hour per IP |
| Burst allowance | 2× rate for short periods |

7.2.3 API Security Headers

Implement security headers on all API responses:

Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
Content-Security-Policy: default-src 'none'
X-XSS-Protection: 1; mode=block
Referrer-Policy: no-referrer
Permissions-Policy: geolocation=(), microphone=(), camera=()

7.2.4 CORS (Cross-Origin Resource Sharing)

Configure CORS policies restrictively:

  • Whitelist specific origins (avoid using *)
  • Restrict allowed methods to required HTTP verbs
  • Limit exposed headers
  • Set appropriate max-age for preflight caching
  • Require credentials only when necessary

Access-Control-Allow-Origin: https://app.patria.com
Access-Control-Allow-Methods: GET, POST, PUT, DELETE
Access-Control-Allow-Headers: Authorization, Content-Type
Access-Control-Max-Age: 86400

7.3 Data Protection in Transit

7.3.1 Transport Layer Security (TLS)

All data transmission must be encrypted.

TLS Requirements:

  • Use TLS 1.3 (minimum TLS 1.2)
  • Disable outdated protocols (SSL, TLS 1.0, TLS 1.1)
  • Use strong cipher suites (AES-256, ChaCha20)
  • Implement perfect forward secrecy
  • Use valid certificates from trusted CAs
  • Implement certificate pinning for mobile applications

Certificate Management:

  • Automate certificate issuance and renewal
  • Monitor certificate expiration
  • Implement certificate revocation checking
  • Use wildcard certificates sparingly
  • Maintain certificate inventory

7.3.2 Data Encryption Standards

  • Encrypt all sensitive data in transit (TLS 1.3)
  • Encrypt sensitive data at rest (AES-256)
  • Use field-level encryption for highly sensitive data (PII, financial)
  • Implement key rotation policies
  • Never transmit credentials or tokens in URLs

7.4 Secrets Management

7.4.1 Secret Storage and Access

All secrets must be managed through approved secret management solutions.

Secret Manager Standards:

  • Use GCP Secret Manager as the primary secret store
  • Never store secrets in code, configuration files, or version control
  • Use service account authentication to access secrets
  • Implement least privilege access to secrets
  • Enable secret versioning and rotation
  • Audit all secret access

Secret Types:

  • API keys and tokens
  • Database credentials
  • Encryption keys
  • OAuth client secrets
  • Certificate private keys

7.4.2 Secret Rotation

Implement automated secret rotation.

Rotation Schedule:

| Secret Type | Rotation Frequency |
|---|---|
| Database passwords | Every 90 days |
| API keys | Every 90 days |
| OAuth client secrets | Every 180 days |
| Encryption keys | Annually or upon compromise |
| Certificates | Automated renewal 30 days before expiration |

Rotation Process:

  • Support dual secrets during rotation window
  • Test new secrets before revoking old secrets
  • Implement zero-downtime rotation
  • Monitor for authentication failures during rotation
  • Audit and log all rotation events

7.5 Security Monitoring and Threat Detection

7.5.1 Security Event Logging

Log all security-relevant events.

Required Security Logs:

  • Authentication attempts (success and failure)
  • Authorization decisions (grants and denials)
  • API access patterns and anomalies
  • Rate limit violations
  • Input validation failures
  • Sensitive data access
  • Configuration changes
  • Secret access

Log Format:

  • Use structured logging (JSON)
  • Include correlation IDs
  • Timestamp in ISO 8601 format (UTC)
  • Include user/service identity
  • Log severity levels appropriately
  • Redact sensitive data from logs

7.5.2 Security Monitoring and Alerting

Implement continuous security monitoring.

Monitoring Requirements:

  • Integrate logs with SIEM system
  • Implement anomaly detection rules
  • Monitor failed authentication attempts
  • Track unusual API usage patterns
  • Alert on suspicious activities
  • Correlate events across systems

Alert Thresholds:

| Condition | Threshold |
|---|---|
| Failed authentication | > 5 attempts in 5 minutes |
| Rate limit violations | > 100 violations per hour |
| Authorization denials | > 20 denials in 10 minutes |
| Unusual geographic access | Any access outside expected regions |
| Off-hours API access | Access outside business hours (if applicable) |

7.5.3 Vulnerability Management

Continuous vulnerability scanning:

  • Scan all APIs for OWASP Top 10 vulnerabilities
  • Conduct quarterly penetration testing
  • Implement automated dependency scanning
  • Monitor security advisories for used components
  • Apply security patches within SLA timeframes
  • Conduct annual security audits

See CSAF-001 Cybersecurity Architecture Controls Framework for comprehensive security standards.


8. Data Integration Standards

Data integration standards ensure reliable, consistent, and high-quality data movement across systems.

Authoritative reference: For complete data architecture standards including Medallion architecture, data governance, BigQuery conventions, and data quality frameworks, see [PAX-EA] 5. Data Architecture. This section covers integration-specific data standards only.

8.1 Data Integration Patterns

8.1.1 Real-Time Data Integration

Use real-time integration for time-sensitive data synchronization.

Implementation Approaches:

  • Change Data Capture (CDC): Database-level change streaming using GCP Datastream or Debezium
  • Event-Driven Sync: Publish events on data changes for downstream consumption
  • API Polling: Scheduled polling for systems without push capabilities (last resort)

When to Use:

  • Operational data requiring immediate visibility
  • Critical business processes dependent on current data
  • Real-time analytics and dashboards
  • Event-driven application workflows

Technology Alignment:

  • CDC: GCP Datastream
  • Event Streaming: Sensedia Event Hub, Cloud Pub/Sub
  • Stream Processing: Cloud Dataflow

8.1.2 Batch Data Integration

Use batch integration for high-volume, periodic data synchronization.

Batch Processing Patterns:

  • Full Load: Complete dataset extraction and load
  • Incremental Load: Only changed records since last extraction
  • Delta Load: Changes within a time window
  • Snapshot Load: Point-in-time data capture
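
An incremental load is typically driven by a watermark recording the last extracted change. The sketch below illustrates the pattern over in-memory rows; in an Airflow task the filter would be pushed down to the source as a WHERE clause (e.g., updated_at > :watermark), and the watermark persisted between runs. Names are illustrative.

```python
def incremental_extract(rows, watermark):
    """Return rows changed since the watermark, plus the new watermark.

    `rows` stands in for a source query result; ISO 8601 UTC timestamps
    compare correctly as strings, so no parsing is needed here.
    """
    changed = [r for r in rows if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in changed), default=watermark)
    return changed, new_watermark
```

Because the watermark only advances on successful extraction, re-running a failed task re-extracts the same window, which keeps the load idempotent.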

When to Use:

  • Large volume data transfers
  • Reporting and analytics data warehousing
  • Non-time-critical data synchronization
  • Resource-intensive transformations

Technology Alignment:

  • Orchestration: Apache Airflow (Cloud Composer)
  • Processing: Python, BigQuery, Cloud Dataflow
  • Storage: Cloud Storage, BigQuery
  • Scheduling: Airflow DAGs, Cloud Scheduler

See [PAX-EA] 5. Data Architecture for detailed batch integration standards and Medallion architecture alignment.

8.1.3 Hybrid Data Integration

Combine real-time and batch patterns for optimal results.

Hybrid Scenarios:

  • Real-time operational data with nightly reconciliation
  • Event-driven updates with batch analytics processing
  • Incremental real-time loads with periodic full refreshes
  • Streaming ingestion with batch transformation

Example Architecture:

  • Bronze Layer: Real-time streaming ingestion (CDC, events)
  • Silver Layer: Batch transformation and quality validation
  • Gold Layer: Incremental updates from silver, optimized for consumption

8.2 Data Transformation Standards

8.2.1 Transformation Principles

  • Separation of Concerns: Extract, transform, and load are distinct phases
  • Idempotency: Transformations produce same results when re-executed
  • Lineage: Maintain complete data lineage from source to target
  • Validation: Validate data quality at each transformation stage
  • Documentation: Document transformation logic and business rules

8.2.2 Transformation Layers (Medallion Architecture)

Bronze Layer (Raw Data)

  • Minimal transformation, preserve source format
  • Add ingestion metadata (timestamp, source, batch ID)
  • Store in original data types
  • No data quality validation (accept as-is)
  • Immutable once written

Silver Layer (Curated Data)

  • Standardize data formats and types
  • Cleanse and deduplicate records
  • Apply business rules and validation
  • Enrich with reference data
  • Implement slowly changing dimensions (SCD)

Gold Layer (Analytics-Ready Data)

  • Aggregate and denormalize for consumption
  • Apply complex business logic
  • Optimize for query performance
  • Create subject-area-specific datasets
  • Implement calculated metrics and KPIs

8.2.3 Common Transformations

Data Type Conversions:

  • Standardize date/time formats (ISO 8601)
  • Convert numeric precision consistently
  • Normalize string encoding (UTF-8)
  • Handle null values consistently

Data Cleansing:

  • Trim whitespace
  • Remove duplicate records
  • Standardize casing
  • Validate against reference data
  • Correct common data quality issues

Data Enrichment:

  • Add derived fields and calculations
  • Join with reference data
  • Geocoding and address standardization
  • Hierarchy and taxonomy mapping

8.3 Data Quality and Validation

8.3.1 Data Quality Dimensions

Monitor and enforce data quality across dimensions:

| Dimension | Description |
|---|---|
| Completeness | Required fields must not be null; record counts match expected volumes; no missing critical data elements |
| Accuracy | Values fall within expected ranges; referential integrity is maintained; calculated fields match source systems |
| Consistency | Data conforms to standard formats; cross-system data reconciles; no conflicting values |
| Timeliness | Data arrives within SLA windows; timestamps are current and accurate; freshness meets business requirements |
| Validity | Data conforms to defined schemas; values match allowed enumerations; formats match specifications |

8.3.2 Data Validation Implementation

Implement validation at multiple stages:

Source Validation:

  • Validate schema and data types on ingestion
  • Check file completeness and integrity
  • Verify expected record counts
  • Detect duplicate submissions

Transformation Validation:

  • Validate business rules application
  • Check data quality metrics
  • Verify referential integrity
  • Detect anomalies and outliers

Target Validation:

  • Reconcile source and target record counts
  • Validate aggregations and calculations
  • Check for data loss or corruption
  • Verify target schema compliance

8.3.3 Data Quality Monitoring

Implement continuous data quality monitoring.

Monitoring Approach:

  • Define data quality metrics and thresholds
  • Implement automated quality checks
  • Track quality trends over time
  • Alert on quality degradation
  • Maintain data quality dashboards

Quality Metrics:

| Metric | Description |
|---|---|
| Null rate | Percentage of null values in required fields |
| Duplicate rate | Percentage of duplicate records |
| Error rate | Percentage of records failing validation |
| Freshness | Age of most recent data |
| Completeness | Percentage of expected records received |
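
The first two metrics can be computed with a few lines; in practice these checks run as dbt tests or Python validation steps (§5.4), so the sketch below, with its illustrative function name, only pins down the definitions:

```python
def quality_metrics(records, required_fields, key_field):
    """Compute null rate over required fields and duplicate rate by key."""
    total = len(records)
    if total == 0:
        return {"null_rate": 0.0, "duplicate_rate": 0.0}
    # Null rate: null cells as a share of all required cells.
    null_cells = sum(1 for r in records for f in required_fields
                     if r.get(f) is None)
    null_rate = null_cells / (total * len(required_fields))
    # Duplicate rate: records beyond the first occurrence of each key.
    keys = [r.get(key_field) for r in records]
    duplicate_rate = (total - len(set(keys))) / total
    return {"null_rate": round(null_rate, 4),
            "duplicate_rate": round(duplicate_rate, 4)}
```

Tracking these values per run, against thresholds, is what turns the table above into actionable alerts.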

8.4 Integration with Data Architecture

Data integration must align with the Data Architecture standards.

Medallion Architecture Compliance

  • All data flows through Bronze → Silver → Gold layers
  • Maintain immutability of Bronze layer
  • Apply transformations progressively through layers
  • Optimize Gold layer for consumption patterns

Dataplex Governance

  • Register all datasets in Dataplex
  • Apply data classification tags
  • Implement access controls at zone and asset level
  • Maintain data lineage in Dataplex

BigQuery Standards

  • Follow naming conventions for datasets and tables
  • Implement partitioning and clustering for performance
  • Apply column-level security and masking
  • Document all tables and fields

Airflow Orchestration

  • Follow DAG naming conventions
  • Implement task groups for logical separation
  • Use templating and variables for reusability
  • Implement comprehensive error handling and retries

See [PAX-EA] 5. Data Architecture for complete data architecture standards.


9. Event-Driven Architecture Standards

Event-Driven Architecture (EDA) enables loosely coupled, scalable, and reactive systems. This section defines Patria-specific standards for implementing event-driven integrations.

Standards basis: Event-driven patterns follow the CloudEvents specification and industry patterns (Hohpe & Woolf — Enterprise Integration Patterns). Patria-specific standards are defined below; for generic EDA theory, refer to those references.

9.1 Event Design Principles

9.1.1 Event Granularity

Design events at the appropriate level of granularity.

Fine-Grained Events:

  • Represent single state changes
  • Easier to process and understand
  • Support flexible consumption patterns
  • May result in higher event volumes

Coarse-Grained Events:

  • Represent aggregated or composite changes
  • Reduce event volumes
  • May require more processing by consumers
  • Better for performance-sensitive scenarios

Recommendation: Default to fine-grained events unless performance requirements dictate otherwise.

9.1.2 Event Naming and Semantics

Use clear, consistent event naming.

Event Naming Convention:

{domain}.{entity}.{action}

Examples:

customer.created
order.status.changed
payment.completed
inventory.stock.depleted

Event Types:

  • Entity Events: Represent changes to domain entities (customer.created)
  • Action Events: Represent business actions or commands (order.submitted)
  • Notification Events: Inform of system occurrences (report.generated)

9.1.3 Event Payload Design

Design event payloads for clarity and efficiency.

Event Payload Principles:

  • Include sufficient context for processing without additional queries
  • Use consistent field naming across events
  • Include metadata (event ID, timestamp, version, correlation ID)
  • Balance payload size with consumer needs
  • Version event schemas

9.2 Event Schema Standards

9.2.1 Event Structure

All events must follow a consistent structure.

Standard Event Format:

{
  "event_id": "evt_550e8400-e29b-41d4-a716-446655440000",
  "event_type": "customer.created",
  "event_version": "1.0",
  "event_time": "2025-02-23T10:30:00Z",
  "source": "customer-service",
  "correlation_id": "corr_123456",
  "data": {
    "customer_id": "cust_789012",
    "name": "Example Customer",
    "email": "customer@example.com",
    "created_at": "2025-02-23T10:30:00Z"
  },
  "metadata": {
    "user_id": "usr_345678",
    "ip_address": "192.0.2.1"
  }
}

Required Fields:

| Field | Description |
|---|---|
| event_id | Unique identifier for the event (UUID) |
| event_type | Type/category of the event |
| event_version | Schema version |
| event_time | When the event occurred (ISO 8601 UTC) |
| source | System or service that generated the event |
| data | Event-specific payload |

Optional but Recommended Fields:

| Field | Description |
|---|---|
| correlation_id | ID linking related events across systems |
| causation_id | ID of the event that caused this event |
| metadata | Additional context (user, session, etc.) |
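
Producers can guarantee the required fields with a small envelope builder; the sketch below (illustrative names, not a shared library) constructs and checks the structure defined above:

```python
import uuid
from datetime import datetime, timezone

REQUIRED_FIELDS = ("event_id", "event_type", "event_version",
                   "event_time", "source", "data")

def make_event(event_type, source, data, correlation_id=None, version="1.0"):
    """Build an event envelope containing every required field."""
    event = {
        "event_id": f"evt_{uuid.uuid4()}",
        "event_type": event_type,
        "event_version": version,
        "event_time": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "source": source,
        "data": data,
    }
    if correlation_id:
        event["correlation_id"] = correlation_id
    return event

def validate_envelope(event):
    """Return the list of required fields missing from an event."""
    return [f for f in REQUIRED_FIELDS if f not in event]
```

Running `validate_envelope` at publish time (or at the schema registry boundary) catches malformed events before any consumer sees them.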

9.2.2 Schema Versioning

Implement versioning for event schemas.

Versioning Strategy:

  • Include event_version in all events
  • Use semantic versioning (major.minor.patch)
  • Increment major version for breaking changes
  • Support multiple schema versions simultaneously
  • Document schema changes in schema registry

Schema Evolution Rules:

| Change Type | Version Impact |
|---|---|
| Adding optional fields | Minor version increment |
| Deprecating fields | Minor version with deprecation notice |
| Removing or renaming fields | Major version increment |
| Changing field types | Major version increment |

Schema Registry:

  • Store all event schemas in a centralized registry
  • Validate events against schemas
  • Provide schema discovery for consumers
  • Track schema usage and dependencies

9.2.3 Schema Definition Format

Use JSON Schema for event schema definition.

Example JSON Schema:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "CustomerCreatedEvent",
  "type": "object",
  "required": ["event_id", "event_type", "event_version", "event_time", "source", "data"],
  "properties": {
    "event_id": {
      "type": "string",
      "format": "uuid"
    },
    "event_type": {
      "type": "string",
      "const": "customer.created"
    },
    "event_version": {
      "type": "string",
      "pattern": "^\\d+\\.\\d+$"
    },
    "event_time": {
      "type": "string",
      "format": "date-time"
    },
    "data": {
      "type": "object",
      "required": ["customer_id", "name", "email"],
      "properties": {
        "customer_id": {"type": "string"},
        "name": {"type": "string"},
        "email": {"type": "string", "format": "email"}
      }
    }
  }
}

9.3 Event Sourcing and CQRS

Event Sourcing (storing state as an append-only event log) and CQRS (separating read/write models) are advanced patterns applicable when audit history, temporal queries, or significantly divergent read/write workloads are required.

Apply these patterns only when justified by specific business requirements — they introduce significant operational complexity. Consult the Integration team before adopting either pattern.

Technology alignment when used:

  • Event Store: Sensedia Event Hub, Cloud Pub/Sub
  • Write Model: Operational databases (SQL Server, MongoDB)
  • Read Models: BigQuery, denormalized tables

9.4 Event Processing Patterns

9.4.1 Event Processing Topologies

Simple Event Processing:

  • One-to-one event transformation
  • Filtering and routing
  • Data enrichment
  • Technology: Cloud Functions, lightweight processors

Complex Event Processing (CEP):

  • Pattern detection across event streams
  • Aggregation and windowing
  • Correlation and causality analysis
  • Technology: Cloud Dataflow

9.4.2 Error Handling and Retries

Implement robust error handling.

Retry Strategy:

  • Implement exponential backoff for transient failures
  • Configure maximum retry attempts
  • Use dead letter queues for failed messages
  • Log all processing errors with context
  • Alert on high error rates

Dead Letter Queue (DLQ):

  • Route failed events to DLQ after max retries
  • Implement DLQ monitoring and alerting
  • Provide DLQ replay mechanism
  • Analyze DLQ patterns to improve reliability
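
The retry and DLQ requirements above combine into one handler wrapper. A minimal sketch with illustrative names; the sleep function is injectable so the backoff schedule can be tested without waiting, and `dead_letter` stands in for publishing to the broker's DLQ topic:

```python
import random

def process_with_retries(handler, event, max_attempts=5, base_delay=0.5,
                         sleep=lambda s: None, dead_letter=None):
    """Retry with exponential backoff and jitter; route to DLQ on exhaustion."""
    for attempt in range(1, max_attempts + 1):
        try:
            return handler(event)
        except Exception as exc:
            if attempt == max_attempts:
                # Max retries reached: hand the event to the dead letter queue.
                if dead_letter:
                    dead_letter(event, exc)
                raise
            # Exponential backoff: base * 2^(attempt-1), plus up to 10% jitter.
            delay = base_delay * (2 ** (attempt - 1))
            sleep(delay + random.uniform(0, delay * 0.1))
```

Logging the exception and attempt count inside the except block (omitted here for brevity) satisfies the "log all processing errors with context" requirement.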

9.4.3 Event Ordering and Idempotency

Event Ordering:

  • Use partition keys for ordered delivery (when required)
  • Design for at-least-once delivery semantics
  • Avoid assumptions about global event order
  • Use event timestamps for temporal ordering

Idempotency:

  • Design all event handlers as idempotent
  • Use event_id for deduplication
  • Store processed event IDs to detect duplicates
  • Implement idempotent database operations
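
Deduplication by event_id can be sketched as a thin wrapper around the real handler. The in-memory set below is an illustrative stand-in; a shared deployment would use a durable store (a database table or Redis, typically with a TTL) for the processed-ID record:

```python
class IdempotentConsumer:
    """Skip events whose event_id has already been processed successfully."""

    def __init__(self, handler):
        self.handler = handler
        self.processed = set()  # stand-in for a durable dedup store

    def handle(self, event):
        event_id = event["event_id"]
        if event_id in self.processed:
            return "duplicate_skipped"   # safe under at-least-once delivery
        result = self.handler(event)
        self.processed.add(event_id)     # record only after success
        return result
```

Recording the ID only after the handler succeeds means a crash mid-processing leads to a retry rather than a silently dropped event; the handler itself must therefore tolerate partial prior effects (idempotent database operations).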

10. Integration Governance

Integration governance ensures consistency, quality, and alignment with architectural standards across all integrations.

10.1 Integration Catalog and Discovery

10.1.1 Integration Inventory

Maintain a comprehensive catalog of all integrations.

Required Metadata:

  • Integration name and description
  • Source and target systems
  • Integration pattern (API, event, batch, file)
  • Data classification and sensitivity
  • Ownership and contacts
  • SLA and performance requirements
  • Dependencies and consumers
  • Documentation links

Catalog Platform:

  • Use centralized repository (API catalog, documentation portal)
  • Implement searchable, browsable interface
  • Keep catalog synchronized with actual implementations
  • Provide APIs for programmatic access

10.1.2 Integration Discovery

Enable self-service integration discovery.

Discovery Mechanisms:

  • Centralized API catalog with search
  • Interactive API documentation (Swagger UI)
  • Event catalog with schema browser
  • Code examples and quickstart guides
  • Sandbox environments for testing

Documentation Requirements:

  • Purpose and use cases
  • Authentication and authorization
  • Request/response examples
  • Error codes and troubleshooting
  • SLAs and support contacts
  • Migration and versioning guides

10.2 Change Management

10.2.1 Change Control Process

All integration changes must follow the change management process.

Change Categories:

  • Standard Changes: Pre-approved, low-risk (e.g., adding optional fields)
  • Normal Changes: Require review and approval
  • Emergency Changes: Expedited process for critical issues

Change Request Contents:

  • Description of change and rationale
  • Impact analysis (affected systems and consumers)
  • Risk assessment and mitigation plan
  • Testing plan and results
  • Rollback procedure
  • Communication plan

Approval Requirements:

  • Architecture review for design changes
  • Security review for authentication/authorization changes
  • Business approval for functional changes
  • Consumer notification for breaking changes

10.2.2 Versioning and Deprecation

Implement consistent versioning across integrations.

API Versioning:

  • URI path versioning (/v1/, /v2/)
  • Maintain backward compatibility within major versions
  • Support multiple versions during transition period
  • Communicate deprecation timeline (minimum 6 months notice)

Event Versioning:

  • Include version in event schema
  • Support multiple event versions simultaneously
  • Provide migration guides for consumers
  • Deprecate old versions after transition period

Deprecation Timeline:

  1. Announcement: Communicate deprecation and timeline
  2. Warning Period: Include deprecation headers/metadata
  3. Support Period: Maintain old version while consumers migrate
  4. Sunset: Remove deprecated version after notice period
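
During the warning period, deprecation can be signaled in-band. A sketch of the response headers for a deprecated API version, using the `Sunset` header (RFC 8594) and the IETF draft `Deprecation` header; the `/v2/customers` successor path is hypothetical.

```python
from datetime import datetime, timezone
from email.utils import format_datetime

def deprecation_headers(sunset: datetime) -> dict[str, str]:
    """Headers attached to every response of a deprecated API version."""
    return {
        "Deprecation": "true",
        "Sunset": format_datetime(sunset, usegmt=True),  # HTTP-date format
        "Link": '</v2/customers>; rel="successor-version"',  # hypothetical path
    }

hdrs = deprecation_headers(datetime(2026, 9, 30, tzinfo=timezone.utc))
print(hdrs["Sunset"])  # Wed, 30 Sep 2026 00:00:00 GMT
```

Gateways can also inject these headers centrally, which keeps deprecation signaling consistent across services without code changes in each one.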

10.3 Integration Testing Standards

10.3.1 Testing Levels

Implement comprehensive testing at multiple levels.

Unit Testing:

  • Test individual integration components
  • Mock external dependencies
  • Validate transformation logic
  • Achieve minimum 80% code coverage

Integration Testing:

  • Test interactions between components
  • Validate end-to-end data flow
  • Test error handling and edge cases
  • Use test environments with realistic data

Contract Testing:

  • Validate API contracts (OpenAPI specs)
  • Test event schema compliance
  • Verify backward compatibility
  • Implement consumer-driven contract tests
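
A toy consumer-driven contract check to illustrate the idea; real setups would typically use dedicated tooling (e.g., Pact, or schema validation against the OpenAPI spec). The contract and response shapes below are hypothetical.

```python
# The consumer declares only what it depends on; extra provider fields
# must never break the check (that is the backward-compatibility property).
CONTRACT = {
    "required": ["customer_id", "email", "created_at"],
    "types": {"customer_id": str, "email": str, "created_at": str},
}

def satisfies_contract(response: dict, contract: dict = CONTRACT) -> bool:
    for field in contract["required"]:
        if field not in response:
            return False
    return all(isinstance(response[f], t)
               for f, t in contract["types"].items() if f in response)

resp = {"customer_id": "cust_789012", "email": "cu@example.com",
        "created_at": "2026-03-24T10:00:00Z", "extra_field": 1}
assert satisfies_contract(resp)  # unknown extra fields are tolerated
```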

Performance Testing:

  • Load testing to validate throughput
  • Stress testing for breaking points
  • Endurance testing for stability
  • Scalability testing for growth scenarios

Security Testing:

  • Vulnerability scanning (OWASP Top 10)
  • Penetration testing for critical integrations
  • Authentication and authorization testing
  • Input validation and injection testing

10.3.2 Test Automation

Automate testing in CI/CD pipelines.

Automated Test Requirements:

  • Run unit tests on every commit
  • Run integration tests on pull requests
  • Run contract tests before deployment
  • Run performance tests on staging
  • Run security scans continuously

Test Data Management:

  • Use synthetic test data (avoid production data)
  • Implement test data generation
  • Maintain test data privacy and security
  • Reset test environments regularly
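
A sketch of seeded synthetic data generation: records contain no production data, use the reserved `example.com` domain, and are reproducible across test runs via the seed. The field names are illustrative.

```python
import random
import string
import uuid

def synthetic_customer(rng: random.Random) -> dict:
    """Generate a privacy-safe fake customer record (never production data)."""
    name = "".join(rng.choices(string.ascii_lowercase, k=8))
    return {
        "customer_id": f"cust_{uuid.UUID(int=rng.getrandbits(128))}",
        "email": f"{name}@example.com",  # reserved documentation domain
        "segment": rng.choice(["retail", "corporate", "partner"]),
    }

rng = random.Random(42)  # fixed seed: identical data on every test run
batch = [synthetic_customer(rng) for _ in range(100)]
```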

10.3.3 Testing Environments

Provide appropriate testing environments.

  • Development: Rapid iteration, frequently updated
  • Testing/QA: Stable for integration testing
  • Staging: Production-like for final validation
  • Production: Live environment with monitoring

Environment Requirements:

  • Mirror production architecture and configuration
  • Use production-equivalent data volumes (staging)
  • Implement environment isolation
  • Provide self-service environment provisioning
  • Maintain environment parity

10.4 Performance and SLA Management

10.4.1 Performance Requirements

Define and monitor performance requirements.

Key Performance Indicators (KPIs):

  • Response Time: P50, P95, P99 latency
  • Throughput: Requests or messages per second
  • Availability: Uptime percentage (99.9%, 99.99%)
  • Error Rate: Percentage of failed requests
  • Data Freshness: Time from source to target

Performance Targets:

  • Synchronous APIs: P95 < 500ms, P99 < 1,000ms
  • Asynchronous Events: Processing within 1 minute
  • Batch Jobs: Complete within SLA window
  • Data Freshness: Within business requirements

10.4.2 Service Level Agreements (SLAs)

Define clear SLAs for all integrations.

SLA Components:

  • Availability target (e.g., 99.9% uptime)
  • Performance targets (response time, throughput)
  • Support response times
  • Scheduled maintenance windows
  • Consequences for SLA violations

SLA Monitoring:

  • Real-time SLA tracking dashboards
  • Automated alerting on SLA violations
  • Regular SLA reporting to stakeholders
  • SLA breach analysis and remediation

10.4.3 Capacity Planning

Plan for growth and scalability.

Capacity Planning Activities:

  • Monitor current usage and growth trends
  • Forecast future capacity requirements
  • Identify bottlenecks and constraints
  • Plan infrastructure scaling
  • Test scalability limits

Scalability Strategies:

  • Horizontal scaling (add instances)
  • Vertical scaling (increase instance size)
  • Caching and optimization
  • Asynchronous processing
  • Load balancing and distribution

11. Monitoring and Observability

Comprehensive monitoring and observability are essential for operating reliable integrations. This section defines monitoring requirements and standards.

11.1 Integration Monitoring Requirements

11.1.1 Monitoring Pillars

Implement the three pillars of observability.

Logs:

  • Structured logging (JSON format)
  • Include correlation IDs for request tracing
  • Log levels: DEBUG, INFO, WARNING, ERROR, CRITICAL
  • Centralize logs in Cloud Logging
  • Retain logs according to compliance requirements

Metrics:

  • Business metrics (transaction volumes, success rates)
  • Technical metrics (latency, throughput, error rates)
  • Infrastructure metrics (CPU, memory, network)
  • Custom application metrics
  • Store metrics in Cloud Monitoring

Traces:

  • Distributed tracing across service boundaries
  • Trace all integration calls with correlation IDs
  • Implement OpenTelemetry standard
  • Visualize traces in Cloud Trace
  • Analyze latency and bottlenecks

11.1.2 Key Metrics to Monitor

Monitor these critical integration metrics.

  • Availability: Uptime percentage, service health status, dependency availability
  • Performance: Request latency (P50, P95, P99), throughput (req/sec), queue depth and processing lag, batch job duration
  • Errors: Error rate (% of failed requests), error types and distribution, timeout frequency, retry and circuit breaker activations
  • Business: Transaction volumes by type, data freshness and lag, SLA compliance, consumer adoption and usage

11.2 Logging Standards

11.2.1 Structured Logging

Use structured logging for all integrations.

Log Format (JSON):

{
  "timestamp": "2025-02-23T10:30:00.123Z",
  "level": "INFO",
  "service": "customer-api",
  "correlation_id": "corr_550e8400-e29b-41d4-a716-446655440000",
  "user_id": "usr_123456",
  "method": "POST",
  "endpoint": "/v1/customers",
  "status_code": 201,
  "duration_ms": 145,
  "message": "Customer created successfully",
  "metadata": {
    "customer_id": "cust_789012",
    "ip_address": "192.0.2.1"
  }
}
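
A sketch of producing this format with the standard library, emitting one JSON object per log line (abbreviated to the required fields; the service name is the hypothetical `customer-api` from the example above).

```python
import json
import logging
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line with the required fields."""
    def format(self, record: logging.LogRecord) -> str:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(timespec="milliseconds"),
            "level": record.levelname,
            "service": "customer-api",  # hypothetical service name
            "correlation_id": getattr(record, "correlation_id", None),
            "message": record.getMessage(),
        }
        return json.dumps(entry)

logger = logging.getLogger("customer-api")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("Customer created successfully",
            extra={"correlation_id": "corr_550e8400-e29b-41d4-a716-446655440000"})
```

In GCP, Cloud Logging ingests such single-line JSON from stdout as structured log entries, so no agent-side parsing configuration is needed.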

Required Log Fields:

  • timestamp: ISO 8601 format in UTC
  • level: Log severity level
  • service: Service or component name
  • correlation_id: Request correlation identifier
  • message: Human-readable description

Context-Specific Fields:

  • user_id, session_id: User context
  • method, endpoint, status_code: API requests
  • duration_ms: Operation duration
  • error, stack_trace: Error information

11.2.2 Log Levels and Usage

Use appropriate log levels.

  • DEBUG: Detailed diagnostic information (disabled in production)
  • INFO: General informational messages (successful operations)
  • WARNING: Warning messages for unexpected but handled situations
  • ERROR: Error messages for failures requiring attention
  • CRITICAL: Critical failures requiring immediate action

Log Volume Management:

  • Implement sampling for high-volume logs
  • Use appropriate log levels to control volume
  • Configure log retention policies
  • Archive old logs to cost-effective storage

11.2.3 Sensitive Data in Logs

Protect sensitive data in logs.

Data Redaction Requirements:

  • Never log passwords or secrets
  • Redact PII (email, phone, address)
  • Mask financial data (credit cards, account numbers)
  • Redact authentication tokens
  • Apply redaction automatically via logging library

Redaction Example:

{
  "email": "cu*****@example.com",
  "credit_card": "****-****-****-1234",
  "token": "[REDACTED]"
}
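
The redaction rules can be applied automatically in the logging layer, for example as a regex pass over the message before emit. The patterns below are deliberately simplified sketches (real card-number detection must also handle unseparated digit runs and apply a checksum test).

```python
import re

# Simplified redaction patterns; a production library would be more thorough.
PATTERNS = [
    (re.compile(r"\b(\w{2})\w*@([\w.-]+)"), r"\1*****@\2"),          # email
    (re.compile(r"\b(?:\d{4}-){3}(\d{4})\b"), r"****-****-****-\1"), # card number
]

def redact(text: str) -> str:
    for pattern, repl in PATTERNS:
        text = pattern.sub(repl, text)
    return text

print(redact("customer@example.com paid with 4111-1111-1111-1234"))
# cu*****@example.com paid with ****-****-****-1234
```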

11.3 Alerting and Incident Response

11.3.1 Alert Configuration

Configure alerts for critical conditions.

Alert Types:

  • Threshold Alerts: Metric exceeds threshold (error rate > 5%)
  • Anomaly Alerts: Metric deviates from baseline
  • Absence Alerts: Expected metric or event missing
  • SLA Alerts: SLA violation detected

Alert Routing:

  • Critical: Page on-call engineer immediately
  • High: Notify team channel and create incident
  • Medium: Log and review during business hours
  • Low: Aggregate and review weekly

Alert Best Practices:

  • Ensure alerts are actionable
  • Avoid alert fatigue (tune thresholds)
  • Include runbook links in alerts
  • Test alerts regularly
  • Review and optimize alert rules

11.3.2 Incident Response

Establish incident response procedures.

Incident Severity Levels:

  • Severity 1 (Critical): Complete service outage, data loss
  • Severity 2 (High): Significant degradation, major functionality unavailable
  • Severity 3 (Medium): Partial functionality impacted
  • Severity 4 (Low): Minor issues, workaround available

Incident Response Process:

  1. Detect: Alert triggers or user report
  2. Acknowledge: On-call engineer acknowledges
  3. Investigate: Diagnose root cause using logs, metrics, traces
  4. Mitigate: Implement fix or workaround
  5. Resolve: Verify resolution and close incident
  6. Review: Conduct post-incident review

Post-Incident Review:

  • Document timeline and impact
  • Identify root cause and contributing factors
  • Define action items to prevent recurrence
  • Share learnings with team
  • Track remediation items to completion

11.4 Integration Metrics and KPIs

11.4.1 Integration Health Dashboard

Create dashboards for integration health.

Dashboard Components:

  • Service availability and uptime
  • Request volume and throughput
  • Error rate trends
  • Latency percentiles (P50, P95, P99)
  • SLA compliance status
  • Recent alerts and incidents

Dashboard Best Practices:

  • Organize by integration or domain
  • Highlight critical metrics prominently
  • Use consistent color schemes (red/yellow/green)
  • Include trend visualization
  • Enable drill-down for details

11.4.2 Business Metrics

Track business-relevant metrics.

Example Business Metrics:

  • Transaction volumes by type
  • Customer onboarding completions
  • Payment processing success rate
  • Data synchronization lag
  • Partner integration usage

Business Value:

  • Demonstrate integration ROI
  • Identify optimization opportunities
  • Support capacity planning
  • Inform business decisions

12. Anti-Patterns and Constraints

This section identifies integration anti-patterns to avoid and architectural constraints that must be respected.

12.1 Prohibited Integration Patterns

12.1.1 Point-to-Point Integration

Anti-Pattern:

  • Direct integration between every pair of systems
  • Creates N×(N-1)/2 connections for N systems
  • Tightly coupled, brittle architecture

Why Prohibited:

  • Exponential complexity growth
  • Difficult to maintain and evolve
  • No centralized governance or monitoring
  • Security and access control challenges

Correct Approach:

  • Use API gateway or integration layer
  • Implement event-driven architecture with message broker
  • Apply standard integration patterns

12.1.2 Database Sharing

Anti-Pattern:

  • Multiple applications directly accessing the same database
  • Tight coupling at data layer
  • Bypasses application logic and security

Why Prohibited:

  • Breaks encapsulation and modularity
  • Schema changes impact all consumers
  • No access control or audit trail
  • Performance and locking issues
  • Prevents independent scaling

Correct Approach:

  • Expose data through APIs
  • Implement data replication via events or batch
  • Use data virtualization where appropriate

12.1.3 Synchronous Chaining

Anti-Pattern:

  • Long chains of synchronous API calls (A → B → C → D)
  • Each call waits for the next to complete
  • Latency compounds across chain

Why Prohibited:

  • High latency and poor user experience
  • Reduced availability (all services must be up)
  • Difficult to trace and debug
  • Poor error handling

Correct Approach:

  • Use asynchronous event-driven patterns
  • Implement choreography instead of orchestration
  • Apply circuit breaker and timeout patterns
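
The circuit breaker mentioned above can be sketched minimally: after a run of consecutive failures the breaker opens and rejects calls immediately, instead of letting latency compound down the chain; after a cooldown it allows one trial call (half-open) to probe recovery.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open after `threshold` consecutive failures,
    fail fast while open, allow a trial call after `reset_timeout` seconds."""

    def __init__(self, threshold: int = 3, reset_timeout: float = 30.0):
        self.threshold = threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at: float | None = None

    def call(self, operation):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: permit one trial call
        try:
            result = operation()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

Production code would normally use an established resilience library rather than a hand-rolled breaker, but the state machine is the same.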

12.1.4 Shared Integration Logic

Anti-Pattern:

  • Embedding integration logic in multiple consumers
  • Duplicated transformation and mapping code
  • Inconsistent implementations

Why Prohibited:

  • Code duplication and maintenance burden
  • Inconsistent behavior across consumers
  • Difficult to update and evolve
  • No single source of truth

Correct Approach:

  • Centralize integration logic in System APIs
  • Provide reusable libraries and SDKs
  • Use API gateway for common transformations

12.1.5 File Sharing via Network Drives

Anti-Pattern:

  • Using network file shares for integration
  • Polling for new files
  • No transactional guarantees

Why Prohibited:

  • Unreliable and difficult to monitor
  • No audit trail or security
  • Scalability limitations
  • Race conditions and locking issues

Correct Approach:

  • Use cloud storage with event notifications (Cloud Storage + Pub/Sub)
  • Implement file-based integration via APIs
  • Apply proper access controls and encryption

12.2 Architectural Constraints

12.2.1 Security Constraints

Mandatory Security Controls:

  • All APIs must use OAuth 2.0 or API key authentication
  • All communication must use TLS 1.3 (minimum TLS 1.2)
  • All secrets must be stored in Secret Manager
  • All sensitive data must be encrypted at rest
  • All integrations must implement rate limiting
  • All security events must be logged

12.2.2 Technology Constraints

Approved Technologies Only:

  • Use only approved technologies from [PAX-EA] 3. Technology Standards
  • New technologies require architecture review and approval
  • Open-source components require security review
  • Maintain supported versions (no end-of-life components)

12.2.3 Data Constraints

Data Governance Requirements:

  • All data integrations must align with [PAX-EA] 5. Data Architecture
  • All datasets must be registered in Dataplex
  • All PII must be classified and protected
  • All data transfers must comply with data residency requirements
  • All data quality issues must be monitored and reported

12.2.4 Operational Constraints

Operational Requirements:

  • All integrations must implement comprehensive logging
  • All integrations must emit metrics and traces
  • All integrations must have documented runbooks
  • All integrations must support zero-downtime deployment
  • All integrations must implement graceful degradation

13. Integration Reference Architectures

This section provides reference architectures for common integration scenarios.

13.1 Microservices Integration

Reference Architecture

Microservices communicate through well-defined APIs and asynchronous events, avoiding direct dependencies.

Components

  • API Gateway: Sensedia API Management Platform — centralized entry point, authentication, rate limiting, developer portal
  • Service Mesh (Optional): Service-to-service communication, observability
  • Event Broker: Sensedia Event Hub for asynchronous event distribution (integrates with Cloud Pub/Sub)
  • Service Registry: Service discovery and health checking

Communication Patterns

  • Synchronous: REST APIs for request/response
  • Asynchronous: Events for state changes and notifications
  • Query: CQRS with separate read models

Technology Stack

  • API Gateway: Sensedia API Management Platform (cloud-hosted on AWS)
  • Services: Python (FastAPI), Node.js
  • Event Broker: Sensedia Event Hub (integrates with Cloud Pub/Sub)
  • Service Discovery: Cloud Run service discovery, Kubernetes DNS
  • Observability: Cloud Logging, Cloud Monitoring, Cloud Trace

Best Practices

  • Design services around business capabilities
  • Implement circuit breakers for resilience
  • Use correlation IDs for distributed tracing
  • Version APIs and events independently
  • Deploy services independently

13.2 Cloud Integration

Reference Architecture

Cloud-native integration leveraging managed platform services for scalability and operational efficiency.

Components

  • Cloud Functions: Event-driven serverless integration
  • Cloud Run: Containerized API services
  • Cloud Pub/Sub: Event streaming and messaging
  • Cloud Scheduler: Scheduled integration jobs
  • Cloud Storage: File-based integration staging
  • Cloud Workflows: Orchestration of multi-step processes

Integration Patterns

  • API Integration: Microservices exposed via Sensedia API Management Platform
  • Event Integration: Sensedia Event Hub with Cloud Functions / Cloud Pub/Sub
  • Batch Integration: Cloud Scheduler with Cloud Run Jobs
  • File Integration: Cloud Storage with event triggers

Technology Stack

  • Compute: Cloud Run, Cloud Functions
  • Storage: Cloud Storage, BigQuery
  • Messaging: Sensedia Event Hub, Cloud Pub/Sub
  • Orchestration: Cloud Workflows, Cloud Composer
  • Monitoring: Cloud Operations Suite

Best Practices

  • Leverage managed services to reduce operational overhead
  • Implement auto-scaling for variable workloads
  • Use regional redundancy for high availability
  • Apply infrastructure as code (Terraform)
  • Optimize for cost with rightsizing and autoscaling

13.3 Hybrid Cloud Integration

Reference Architecture

Integration spanning on-premises and cloud environments.

Components

  • VPN/Interconnect: Secure network connectivity
  • API Management: Sensedia API Management Platform — unified API management across environments
  • Event Platform: Sensedia Event Hub — consistent messaging across environments
  • Identity Federation: Unified authentication (Entra ID)
  • Data Replication: Bidirectional data synchronization

Integration Scenarios

  • Cloud-to-On-Prem: Cloud applications accessing on-prem systems
  • On-Prem-to-Cloud: On-prem applications accessing cloud services
  • Data Synchronization: Replicating data between environments
  • Disaster Recovery: Cloud as backup for on-prem systems

Technology Stack

  • Network: Cloud VPN, Cloud Interconnect
  • API Management: Sensedia API Management Platform
  • Messaging: Sensedia Event Hub (connects to Cloud Pub/Sub)
  • Identity: Entra ID with federation
  • Data Sync: Cloud Composer, custom replication

Best Practices

  • Implement secure, redundant network connectivity
  • Use API gateway to abstract environment location
  • Implement caching to minimize cross-environment calls
  • Plan for network latency and failures
  • Maintain consistency across environments

13.4 Third-Party Integration

Reference Architecture

Integration with external partners, vendors, and SaaS applications.

Components

  • Partner API Proxy: Sensedia API Management Platform — standardized interface to external APIs
  • Authentication Gateway: Centralized credential management
  • Rate Limit Manager: Respect partner rate limits
  • Data Transformation Layer: Map between internal and external formats
  • Error Handling and Retry: Resilient external communication

Integration Patterns

  • API Integration: RESTful APIs, webhooks
  • File Integration: SFTP, cloud storage exchange
  • EDI Integration: Electronic Data Interchange for B2B
  • SaaS Integration: Native connectors, SDKs

Security Considerations

  • Store partner credentials in Secret Manager
  • Implement IP whitelisting where required
  • Validate all inbound data from external sources
  • Apply outbound rate limiting so partner quotas are not exceeded
  • Monitor and alert on integration failures

Technology Stack

  • API Proxy: Sensedia API Management Platform
  • File Transfer: Cloud Storage, SFTP
  • Automation: Cloud Functions, Cloud Run
  • Secrets: Secret Manager
  • Monitoring: Cloud Monitoring, alerting

Best Practices

  • Implement circuit breakers for partner API calls
  • Cache responses to minimize external calls
  • Design for eventual consistency
  • Maintain partner integration documentation
  • Test against partner sandbox environments
  • Plan for partner API versioning and changes

14. References

Industry Standards

  • RESTful API Design: REST architectural constraints (Roy Fielding)
  • OpenAPI Specification 3.0+: API documentation standard
  • OAuth 2.0 (RFC 6749): Authorization framework
  • OpenID Connect: Identity layer on OAuth 2.0
  • JSON Schema: JSON data validation
  • CloudEvents: Event data specification
  • OpenTelemetry: Observability framework
  • Semantic Versioning: Version numbering standard

Security Standards

  • OWASP API Security Top 10: API security best practices
  • OWASP Application Security Verification Standard (ASVS)
  • NIST Cybersecurity Framework: Security controls framework
  • NIST SP 800-53: Security and privacy controls
  • ISO 27001/27002: Information security standards

Enterprise Architecture

  • TOGAF: Enterprise architecture framework
  • API-Led Connectivity: MuleSoft integration pattern
  • Microservices Patterns: Chris Richardson’s patterns catalog
  • Enterprise Integration Patterns: Hohpe and Woolf patterns
  • [PAX-EA] 1. Enterprise Architecture Overview
  • [PAX-EA] 3. Technology Standards
  • [PAX-EA] 5. Data Architecture
  • CSAF-001 Cybersecurity Architecture Controls Framework (CACF)

Annex A — Integration Use Case Catalog

This document is supplemented by a companion reference catalog that maps concrete Patria integration scenarios to the recommended platform or combination of platforms, based on the decision matrix defined in §5.0 Integration Platform Selection Guide.

Reference document: [PAX-EA] Integration Use Case Catalog

The catalog covers the following scenario categories:

  • Quantitative criteria for platform tiebreaking
  • Data exposure and consumption via API
  • System-to-system integration
  • Data pipelines (high volume / batch)
  • UI automation / legacy systems without API (RPA)
  • User task automation / AI-augmented workflows
  • Approval workflows / BPM
  • Typical hybrid scenarios at Patria
  • Anti-patterns to avoid
  • Integration platform decision tree