Introduction
ZEDEDA Edge Intelligence Platform provides a comprehensive foundation for deploying, orchestrating, and managing edge computing infrastructure and AI workloads directly where your business operates: at the edge.
Manufacturers monitor production lines. Energy companies manage remote assets. Retailers track inventory in real time. Logistics operators run global fleets. The decisions that drive revenue, cut costs, and protect workers don't happen in the cloud.
AI deployments at the edge face challenges not found in the cloud:
- Intermittent network connectivity
- Hostile environments – cybersecurity threats compounded by physical security risks
- No on-site IT staff, demanding more autonomous operations
- Legacy systems that often feed telemetry to Edge AI models
ZEDEDA Edge Intelligence Platform enables enterprises to securely run autonomous agents, curated AI models, and legacy applications on their own edge hardware, turning localized data into immediate action.
How It Works
The platform unites three core layers of edge computing: infrastructure orchestration, AI model inference, and autonomous agents.
The platform and its services are organized as follows:
- Edge Intelligence Platform
  - Edge Intelligence and Inference Services
    - Edge Intelligence Service
    - Edge Inference Service
  - Edge Infrastructure Services (previously ZEDEDA Cloud)
    - Edge Kubernetes App Flows
    - Edge Kubernetes Service
    - Edge Virtualization Service (Edge Nodes)
    - Edge Virtualization Engine (EVE-OS)
ZEDEDA Edge Infrastructure Services forms the foundational layer, comprising the ZEDEDA Edge Virtualization Engine (EVE-OS), the Edge Virtualization Service (Edge Nodes), and the ZEDEDA Edge Kubernetes Service and App Flows. This layer lets you orchestrate and manage thousands of edge nodes, running legacy applications alongside modern containerized workloads, with GitOps-driven governance.
The ZEDEDA Edge Intelligence and Inference Services comprise the ZEDEDA Edge Intelligence Service and the ZEDEDA Edge Inference Service. Together, these services provide the complete pipeline to build, test, and deploy AI models directly to the edge. You can import models from external providers, benchmark their performance on specific edge hardware, and package them for deployment. Finally, you can deploy autonomous agents that analyze data and take immediate action on-site. The platform uses prevalidated agentic solution templates that bundle your model, runtime, and application logic into declarative Helm charts, accelerating your overall deployment process.
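The pipeline described above moves a model through four stages: import, benchmark, package, deploy. The following sketch illustrates that ordering with stand-in functions and data structures; every name here is an illustrative assumption, not ZEDEDA's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class ModelArtifact:
    """Stand-in for a model moving through the edge-deployment pipeline."""
    name: str
    source: str                       # e.g. "huggingface", "ngc", "s3"
    stages: list = field(default_factory=list)

def import_model(artifact: ModelArtifact) -> ModelArtifact:
    # In the real service this is an asynchronous pull from the provider.
    artifact.stages.append("imported")
    return artifact

def benchmark(artifact: ModelArtifact, device: str) -> dict:
    artifact.stages.append(f"benchmarked:{device}")
    # Placeholder metrics; the platform measures these on real silicon.
    return {"device": device, "latency_ms": 12.5, "throughput_fps": 80}

def package(artifact: ModelArtifact, engine: str) -> str:
    # The platform bundles model, runtime, and app logic into a Helm chart.
    artifact.stages.append(f"packaged:{engine}")
    return f"{artifact.name}-{engine}-chart"

def deploy(chart: str, cluster: str) -> str:
    # Deployment is GitOps-driven in the real platform.
    return f"{chart} deployed to {cluster}"

model = ModelArtifact(name="defect-detector", source="huggingface")
import_model(model)
metrics = benchmark(model, device="jetson-orin")
chart = package(model, engine="triton")
status = deploy(chart, cluster="factory-east")
print(status)  # defect-detector-triton-chart deployed to factory-east
```

The point of the sketch is the stage ordering and the artifact that carries state between stages; the real platform performs each stage as a managed, asynchronous service.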
Key Capabilities
The following diagram illustrates the operational capabilities of the ZEDEDA Edge Intelligence Platform, emphasizing its role in streamlining AI deployment and management.
The platform offers automated orchestration and lifecycle management, allowing you to efficiently scale models from the cloud to diverse edge hardware. Key technical features include hardware-specific optimization, container orchestration, and inference engine packaging to ensure high performance on low-cost silicon.
Additionally, the system prioritizes security through zero-trust architecture and multi-tenant organizational models. By providing centralized observability and pre-configured templates, the platform reduces the complexity and cost of maintaining distributed intelligence networks.
Overall, these tools represent a comprehensive ecosystem designed to bridge the gap between advanced machine learning and practical edge computing applications.
| Capability | Description | Business Value |
|---|---|---|
| Agentic Solution Templates | Declarative Helm charts that bundle model, runtime, preprocessing, and application logic. | Accelerate Edge AI deployments with validated solution blueprints. |
| Comprehensive Model Lifecycle Management | Model versioning, import, and promotion. Ingest models via Jupyter Notebook Integration, external providers, or manual upload. Asynchronous import from NVIDIA NGC, Hugging Face, AWS S3, SageMaker, Azure ML, Azure Blob Storage, and MLflow, with integrated credential management. Deploy directly to ZEDEDA Edge Kubernetes clusters via GitOps. | Guarantees traceability, reproducibility, and rapid iteration across the model lifecycle. |
| Inference Engine Packaging | Inference engine packaging for OpenVINO, NVIDIA Triton, vLLM, and Ollama, delivered as Helm charts. | Reduces engineering effort, promotes reuse, and enforces best-practice configurations. |
| Model Optimization | Hardware-specific model optimization for edge silicon. | Train on high-end cloud silicon, then optimize and run on cost-effective edge devices. |
| Model Benchmarking | Model benchmarking on NVIDIA and Intel silicon. Inference-speed, latency, throughput, and stress testing on a curated device pool including NVIDIA Jetson and Intel NUC. | Delivers data-driven sizing, cost optimization, and SLA assurance before production deployment. |
| Cloud-Managed Edge Node and Cluster Orchestration | GitOps integration for governance of agents, models, apps, and other workloads. Lifecycle management of edge nodes. | Zero-Touch Provisioning of full stack, from agents to operating system, without onsite IT staff. Automated, reliable rollouts and rollbacks across edge fleets. |
| Virtualization and Container Services | Hypervisor and container runtime management on diverse hardware profiles. | Run legacy brownfield workloads and new, containerized applications, models, and agents on the same device, reducing hardware capital expenditure. |
| Dashboards, Observability, and APIs | Central UI shows model inventory, organization health, benchmark results, deployment status, and user activity. Alerting, events, resource utilization, and analytics. Comprehensive REST APIs. | Visibility, reports, status, and performance of hardware, apps, models, and agents. Gives leadership immediate visibility into Edge AI adoption, operational health, and performance across the fleet. |
| Multi‑tenant Organization Model | Isolates models, users, and permissions per organization, team, project, or environment. | Enables secure collaboration, clear ownership, and a clean staging-to-production handoff. |
| Zero-Trust Security and Access Control | Secure by design. Hardware root of trust via Trusted Platform Module (TPM). Measured boot and remote attestation. Cryptographic device identification. Data encryption at rest and in transit. Distributed firewall for every app. Physical security via port deactivation and isolation. Role-based access control (RBAC). | Protects edge assets, including PII from video feeds, model weights, and application code, against both cyber and physical threats. |
| App and Model Marketplace and Catalog | Partner-provided and ZEDEDA-curated solutions available in a unified catalog. | Simplifies discovery and deployment of validated Edge AI solutions. |
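The benchmarking capability above boils down to measuring per-call latency and derived throughput under repeated load. A minimal, generic sketch of that measurement follows, using a stand-in inference function; the platform performs the equivalent on real NVIDIA and Intel hardware.

```python
import time
import statistics

def fake_infer(batch):
    """Stand-in for a model's inference call."""
    time.sleep(0.001)          # simulate ~1 ms of compute
    return [x * 2 for x in batch]

def measure(infer, batch, iterations=100):
    """Time repeated inference calls and derive median latency and throughput."""
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        infer(batch)
        latencies.append(time.perf_counter() - start)
    p50 = statistics.median(latencies)
    return {
        "p50_latency_ms": p50 * 1000,
        "throughput_items_per_s": len(batch) / p50,
    }

results = measure(fake_infer, batch=[0.0] * 8)
print(results)
```

Real benchmarking also sweeps batch sizes, warms up the runtime, and stresses the device thermally, which is why running it on the actual target silicon, as the service does, matters.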
Benefits
- Accelerate time to value with curated AI models, validated hardware configurations, and guided deployment workflows.
- Protect sensitive data and valuable model weights against cyber and physical threats using hardware-rooted security, encrypted agents, and remote attestation.
- Consolidate hardware capital expenditure by running legacy workloads and modern AI applications on the same physical device using comprehensive virtualization services.
- Ensure automated and reliable rollouts across your edge fleet with cloud-managed edge node orchestration and GitOps integration.
- Scale operations without requiring on-site IT staff through automated deployment and centralized observability dashboards.
Next Steps
Come back at the end of March to see what's next.