
DigitalOcean and UPenn on the future of AI infrastructure—in and out of the classroom

Explore how DigitalOcean and the University of Pennsylvania are scaling AI infrastructure—and what they look for in the right partner. Real-world insights from the classroom to the data center.

06/03/2025
4-minute read
SOAI FlexTalk Blog

Building AI infrastructure that delivers: Lessons from the front lines

As organizations double down on AI investment, the pressure to scale infrastructure quickly and intelligently is reaching new heights. In our FlexTalk webinar, Benchmarking AI infrastructure readiness: Engineering resilience and competitive advantage, we welcomed Adam Knapp, VP of Engineering at DigitalOcean, and Kenneth Chaney, Associate Director of AI and Technology at the Penn Advanced Research Computing Center (PARCC), to discuss what it really takes to deliver scalable, resilient, and sustainable AI infrastructure.

Their experiences—one grounded in hyperscale cloud services, the other in a leading research university—paint a clear picture: successful AI deployment isn’t just about GPU clusters and data pipelines. It’s about people, partnerships, and purposeful planning.

State of AI Infrastructure Report

Just released: The insights discussed in this webinar are backed by our new 2025 State of AI Infrastructure Report. Explore key industry benchmarks, infrastructure challenges, and investment trends shaping the next generation of AI strategy.

Get the Report

AI momentum is real—but so are the challenges

AI adoption is accelerating fast, with 90% of organizations either deploying or planning to deploy generative AI. Yet 44% say infrastructure constraints are the #1 barrier to progress.

DigitalOcean’s solution? Strategic investment in high-performance infrastructure—including a major expansion into the Flexential Atlanta data center—and a tightly integrated AI stack that can scale quickly from prototype to production.

For PARCC, the mission is different but no less ambitious: support a wide range of researchers with centralized, powerful infrastructure that simplifies experimentation at scale. Their deployment of an Nvidia DGX SuperPOD at Flexential Collegeville allows UPenn to consolidate compute and storage in one reliable, secure location.

What to look for in an AI infrastructure partner

Both speakers emphasized that selecting the right colocation or infrastructure partner is foundational to success. Their advice? Look for three things:

  • Power and density: Modern GPUs are power-hungry. You need a facility that can handle high density and has room to grow.
  • Networking capabilities: Low-latency East-West traffic between GPUs and strong ingress/egress are critical, especially for training AI models.
  • A true partner mentality: Infrastructure rollouts rarely go perfectly. You need a provider that works alongside you, especially when things don’t go as planned.

“You don’t want to be worrying about basics like power and cooling,” Chaney noted. “You want a partner that has a real plan for growth, because your needs will grow faster than you think.”

Planning Infrastructure Investments?
Get real-world insights and stats on AI infrastructure requirements in our 2025 State of AI Infrastructure Report. It’s your go-to guide for what IT leaders are prioritizing in the year ahead.

Higher education: A key driver of AI innovation

The University of Pennsylvania’s role in shaping AI infrastructure is not just academic. As a research powerhouse, UPenn is enabling groundbreaking experiments in model training, genomics, social science, and more.

But what makes their use case especially compelling is their commitment to democratizing access. By creating shared infrastructure that supports both seasoned AI researchers and those new to the field, PARCC is lowering the barrier to entry and accelerating innovation across the university.

As Chaney explained, “Supporting cutting-edge researchers actually helps simplify tools for everyone. As those tools become easier to use, more departments adopt AI—and that drives demand for infrastructure.”

Internal enablement: Culture, talent, and continuous learning

For DigitalOcean, infrastructure isn’t the only focus—internal enablement is just as critical. Weekly operational reviews assess team throughput and customer value delivery. There’s also a strong emphasis on creating a learning culture, where teams adapt fast and don’t repeat mistakes.

From a talent perspective, both organizations take a blended approach: investing in existing teams, acquiring new talent (DigitalOcean’s acquisition of Paperspace, for example), and partnering with OEMs and experts to bring in specialized knowledge.

Sustainable AI at scale

Sustainability remains a top priority. DigitalOcean shared how its infrastructure choices—like rack-level density and power-efficient data centers—maximize performance per watt. They’re also using AI internally to optimize incident response, improve forecasting, and reduce waste. At UPenn, sustainability is about maximizing the long-term impact of their investments and ensuring good science is done with the resources available.

Final advice: Don’t go it alone

When asked what advice they’d give organizations just beginning their AI infrastructure journey, both speakers echoed the same sentiment: don’t go it alone.

“Build for flexibility from day one,” Knapp advised. “What you design today might look completely different six months from now. Choose infrastructure and partners that can evolve with you.”

Watch on-demand

You can watch the full session on-demand here to learn more about their strategies, tech stacks, and growth playbooks.

Watch on Demand

Accelerate your hybrid IT journey, reduce spend, and gain a trusted partner

Reach out with a question, business challenge, or infrastructure goal. We’ll provide a customized FlexAnywhere® solution blueprint.