Join Us at KubeCon + CloudNativeCon 2024 in Salt Lake City

By Nikki McDonald

Experience the vibrant energy of autumn near the stunning mountains as Intel takes on KubeCon + CloudNativeCon North America 2024 in Salt Lake City, Utah, from November 12–15. Stop by booth #G5 to meet our team, grab exclusive swag, and check out one of our demos. Whether you're finalizing your conference agenda or just planning your next move, you won't want to miss what we’ve lined up for this year’s event. 

Tuesday, November 12

Start your Tuesday morning off right with Intel Open Source Security Evangelist Katherine Druckman and CNCF Ambassador Lori Lorusso as they guide you through the CNCF landscape in their Welcome and Introduction: A Hitchhiker's Guide to the CNCF Landscape. With the landscape spanning more than 190 innovative projects, this introductory session is designed to help you navigate KubeCon with ease and find exactly what you're looking for.

If you're not a morning person, the fun continues after lunch with the Project Overview: A Hitchhiker's Guide to the CNCF Landscape, as Katherine and Lori continue their tour of the open ecosystem. If you're interested in getting involved in open source but aren't sure where to start, this is the session for you.

EnvoyCon

Using APIs to configure and manage network functions within containers, Intel Principal Engineer Mrittika Ganguli and Cloud Software Architect Jeff Shaw present a framework in their session, Dynamic Configuration and Scaling of VPN Concentrator and Envoy SASE Proxy in Multi-Tenant Edge. Alongside Aryaka CTO Srinivasa Addepalli and Distinguished Engineer Ritu Sood, they demonstrate how this setup scales seamlessly as needs grow, adding more VPN concentrators, IPsec tunnels, and proxies as required.
 

Istio Day

Kick off Istio Day by joining program co-chairs Intel Cloud Software Architect Iris Ding and Microsoft Senior Software Engineering Lead Keith Mattix for Welcome and Opening Remarks. Continue your day with Iris as she joins Solo.io Head of Open Source Lin Sun to Unlock the Full Potential of Generative AI via Microservices and Istio Service Mesh. By addressing challenges like selecting LLMs, using embedding models, and deploying a robust vector database, Iris and Lin explain how to use Kubernetes strategies to build scalable GenAI applications. 

Iris, Lin, and Keith sit down later in the day with Solo.io CTO Louis Ryan and Senior Architect John Howard, as well as Microsoft Principal Engineer Mitch Connors, to discuss their perspectives on the most recent Gartner hype cycle report on service mesh in the Panel: Navigating the Trough of Disillusionment. Hear why they feel service mesh has its best days ahead before Iris and Keith wrap things up with their Closing Remarks.

Wednesday, November 13

The KubeCon + CloudNativeCon main event goes into full swing on Wednesday with a keynote from Intel Director of Software Ecosystem Strategy Shirley Bailes on The Future of GenAI: Cloud Native Blueprints with OPEA. The Open Platform for Enterprise AI (OPEA), an LF AI & Data Foundation project, offers a modular framework of microservices to streamline and supercharge the deployment of cloud native GenAI systems. In her keynote, Shirley explains how to use OPEA to easily launch GenAI applications on a Kubernetes cluster using a flexible microservices architecture.   

Wednesday is packed with a slew of great talks from Intel experts: 
 

  • Unlock the secrets of modern hardware architecture with Intel Principal Engineer Alexander Kanevskiy as he explains Architecting Tomorrow: The Heterogeneous Compute Resources for New Types of Workloads. You’ll gain the insights you need to make smarter decisions for optimizing your infrastructure's hardware resources. 
     
  • Looking to optimize your platform’s performance? Learn valuable, real-world insights from Intel Software Engineer Antti Kervinen and Google Software Engineer Dixita Narang as they take a systematic approach to Platform Performance Optimization for AI - a Resource Management Perspective. This talk is filled with practical takeaways, discoveries, and tips you can start using right away. 
     
  • Intel Principal Engineer Patrick Ohly joins Google’s Senior Staff Software Engineer John Belamaric and NVIDIA’s Distinguished Engineer Kevin Klues to discuss the current focus of the Kubernetes WG Device Management - Advancing K8s Support for GPUs. This working group aims to streamline the configuration and sharing of accelerators like GPUs and TPUs, focusing on APIs and features for effective hardware use in batch and inference workloads. Join this talk to discover updates in Kubernetes 1.31 and 1.32 and learn how you can help shape the future of Kubernetes for accelerated workloads. 

Thursday, November 14

Looking to lunch & learn? Join Intel's sponsored DEI Lunch + Workshop, "An Equitable Approach to Higher Team Performance," which introduces Lift Up, a leadership approach that emphasizes diversity, equity, and inclusion to achieve better outcomes for teams and organizations. Participants will engage in discussions and explore tools to reflect on their own leadership styles and transition toward a more equitable Lift Up approach.

Friday, November 15

AI/ML workloads on Kubernetes demand top performance, especially in multi-GPU setups that rely on network communication. That's why Intel Principal Engineer Patrick Ohly and Google Senior Staff Software Engineer John Belamaric have joined forces again in Better Together! GPU, TPU and NIC Topological Alignment with DRA. Join this session to learn how the new Dynamic Resource Allocation (DRA) API enhances device management by letting you allocate specific GPUs, NICs, and TPUs for optimal performance.

Visit us at Intel Booth #G5

Don’t miss your chance to meet our team and experience our demos firsthand at Intel booth #G5. While you're here, be sure to grab some swag—we have a variety of fun giveaways that you won't want to pass up. Plus, you can enter for a chance to win prizes!  

Booth Hours

Wednesday, November 13: 10:45 a.m.–8:00 p.m. (MT)

Thursday, November 14: 10:30 a.m.–5:00 p.m. (MT)

Friday, November 15: 10:30 a.m.–2:30 p.m. (MT)

OPEA with GMC

Stop by to see our OPEA demo featuring the GMC (GenAI Microservices Connector), an open-source, cloud native project that runs on Kubernetes. We have several examples designed to facilitate the adoption of GenAI—all running on affordable Intel Xeon and Intel Gaudi hardware. 
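If you'd like a feel for what talking to a GMC-managed pipeline looks like from the client side, here is a minimal Python sketch. It assumes a ChatQnA-style question-answering pipeline is already running in the cluster and reachable over HTTP; the endpoint address, /v1/chatqna route, and payload shape are placeholders modeled on OPEA's public examples and may differ from the booth demo.

```python
# Minimal client-side sketch for querying a GenAI pipeline deployed with GMC.
# The endpoint URL, route, and payload shape below are assumptions based on
# OPEA's ChatQnA example; substitute the values from your own deployment.
import requests

ENDPOINT = "http://chatqna.example.local:8888/v1/chatqna"  # hypothetical address

def ask(question: str, timeout: int = 60) -> str:
    """POST a question to the pipeline's router service and return the raw answer."""
    resp = requests.post(ENDPOINT, json={"messages": question}, timeout=timeout)
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    print(ask("Which Intel hardware is this pipeline running on?"))
```

In the OPEA examples, GMC wires the individual microservices (embedding, retrieval, LLM serving) together behind a single router service, so a client typically only interacts with one endpoint like this.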

Supercharge AI Vision and Large Language Model + Retrieval-Augmented Generation Workflows with OpenVINO™

Discover how to create a smart shopping assistant using AI and multi-modal GenAI capabilities, powered by the OpenVINO™ toolkit. In this demo, we'll show how this technology enables seamless model development and memory-efficient deployment, enhancing in-store interactions for retailers while reducing time to value for developers and domain experts.
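As a taste of the developer workflow behind a demo like this, the sketch below shows the basic OpenVINO™ inference loop in Python: read a model, compile it for a target device, and run a forward pass. The model path and input data are placeholders; it assumes a model in OpenVINO IR format and the 2023.x-or-later Python API (import openvino as ov).

```python
# Minimal OpenVINO inference sketch: load, compile, and run a model.
# "model.xml" is a placeholder for any model in OpenVINO IR format.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")         # read the IR model
compiled = core.compile_model(model, "CPU")  # compile for CPU ("GPU" or "AUTO" also work)

# Create a dummy input matching the model's (static) input shape and run inference.
input_shape = compiled.input(0).shape
dummy_input = np.random.rand(*input_shape).astype(np.float32)
result = compiled(dummy_input)[compiled.output(0)]
print("Output shape:", result.shape)
```

A real application would swap the dummy tensor for camera frames or text embeddings, but the load-compile-infer pattern stays the same.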

Infrastructure to AI Ops in Minutes

Experience the ease of handling compute, storage, and networking all in one streamlined process. Rakuten Cloud-Native Orchestrator on the Intel Platform can provision and manage analytics workloads in record time, offering you unparalleled flexibility and efficiency. We’ll show you how it works in this short demo. 

Offloading Network Functions onto Intel IPU in Red Hat OpenShift

In this demo, we’ll show how automated provisioning of an IPU on a Dell R760 server within a Red Hat OpenShift cluster creates a streamlined setup process with minimal manual input. Experience how OpenShift’s DPU-Operator efficiently manages the deployment and operation of the NGINX container. With enforced authentication and authorization, see first-hand how only verified users or systems will have access to the OpenVINO workload. Finally, you’ll see the significant compute and memory savings achieved by offloading specific tasks, like network functions, and balancing network traffic across multiple servers. 

About the Author

Nikki McDonald, Content Manager, Intel Open Ecosystem 

Focused on educating and inspiring developers for over a decade, Nikki leads the strategy and execution for open source-related content at Intel. Her mission is to empower our open source community to grow their skills, stay informed, and exchange ideas. An avid reader, she's never without her Kindle. Connect with her on LinkedIn and X.