The race to adopt artificial intelligence is reshaping industries, but unlocking AI’s full potential requires more than just algorithms—it demands the right infrastructure. Jason Hardy, Chief Technology Officer for AI at Hitachi Vantara, argues that hybrid cloud is the game-changer businesses need. In this engaging conversation, Hardy reveals how hybrid cloud bridges the gap between innovation and scalability, empowers enterprises to overcome data and infrastructure challenges, and paves the way for truly transformative AI solutions.
Why Hybrid Cloud Holds the Key to Transforming AI Adoption
Where does the hybrid cloud stand in the current technology landscape, particularly in the context of AI?
What should be the key motivators for CIOs to adopt hybrid cloud, especially with AI capabilities?
When adopting hybrid cloud solutions, the first priority should be understanding the type of data being used and the outcomes you aim to achieve. For example, sensitive data like regulatory or compliance information, which cannot or should not be stored in the cloud, is best managed on-premises. At the same time, businesses can take advantage of large language models (LLMs) from platforms like Hugging Face by bringing them on-premises, fine-tuning them, and avoiding the need to build their own foundational models. This approach provides the benefits of advanced AI capabilities in a secure, controlled environment.
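To make that "bring the model to the data" pattern concrete, here is a minimal sketch of pulling an open model from Hugging Face and fine-tuning it entirely on local infrastructure with the transformers and datasets libraries. The model name, file paths, and training settings are illustrative assumptions rather than a specific Hitachi Vantara configuration.

```python
# Minimal sketch: fine-tune an open Hugging Face model on-premises.
# Model name, paths, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "mistralai/Mistral-7B-v0.1"  # any open model, downloaded once, then kept on-prem
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # causal LMs often ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Proprietary text never leaves the data center: it is read from local storage.
dataset = load_dataset("json", data_files="/secure/corpus/train.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="/secure/models/finetuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Because both the model weights and the training corpus sit on local storage, sensitive data never has to leave the controlled environment.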
However, there are cases, even in sensitive industries like banking, where cloud services can be used effectively. Tools such as GitHub Copilot, which rely on cloud integration, are excellent for tasks like code creation—provided they aren’t used for building sensitive applications. This highlights the hybrid cloud’s flexibility, allowing organizations to combine on-premises security with cloud-powered innovation.
What principles guide Hitachi Vantara’s hybrid cloud and AI strategy?
We operate as a truly hybrid organization, blending the best of cloud and on-premises solutions. For example, while we leverage tools like Copilot and partner closely with Microsoft for their cutting-edge technologies, we also prioritize building our own on-premises platforms. Whether it’s in rail networks, energy, or finance, our approach is intentionally hybrid: part cloud, part on-prem.
This dual design allows us to adapt seamlessly to customer requirements. If a customer prefers a cloud-based solution for scalability, we deliver that. If they need the same solution on-premises for control or compliance, we can provide that too. Our focus is on meeting customers where they are while embodying the flexibility and innovation of a hybrid organization ourselves.
Which workloads are best suited for hybrid cloud environments?
The decision about where a workload runs, on-premises or in the cloud, hinges on performance sensitivity and operational requirements. For tasks that aren't performance-sensitive, where latency isn't critical (like edge processing that can afford round trips to the cloud), cloud deployment is often a viable option.
The cloud's value extends beyond just GPU or AI capabilities; it plays a pivotal role in data storage and workflow management. For instance, data might be captured and processed locally, then sent to the cloud for fine-tuning, adjustments, or integrating with data center workflows. This hybrid model supports diverse needs, from data storage to specialized services.
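As a simple illustration of that split, the sketch below keeps the latency-sensitive processing on local infrastructure and hands only the curated output to cloud object storage for downstream fine-tuning or analytics. The bucket name, file layout, and quality filter are hypothetical; the upload uses the standard boto3 S3 client.

```python
# Sketch of the "process locally, hand off to the cloud" flow.
# Bucket, key, and the filtering rule are illustrative assumptions.
import json
import boto3

def preprocess(raw_records):
    """Run the latency-sensitive work on-premises: drop low-quality readings."""
    return [r for r in raw_records if r.get("quality", 0.0) >= 0.9]

def hand_off_to_cloud(records, bucket="example-curated-telemetry", key="batch-0001.json"):
    """Ship only the curated output to cloud storage for fine-tuning or analytics."""
    s3 = boto3.client("s3")
    s3.put_object(Bucket=bucket, Key=key, Body=json.dumps(records).encode("utf-8"))

if __name__ == "__main__":
    raw = [{"sensor": "press-7", "quality": 0.95, "temp_c": 71.2},
           {"sensor": "press-7", "quality": 0.42, "temp_c": -999.0}]  # bad reading dropped locally
    hand_off_to_cloud(preprocess(raw))
```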
Use cases vary significantly. In manufacturing, where real-time decisions are essential—such as microchip production—cloud dependency isn't feasible due to latency constraints. Conversely, financial services, like non-sensitive risk analysis, can leverage cloud environments where immediate results aren't as critical. The choice ultimately depends on the business's specific needs, operational sensitivities, and data considerations.
How can businesses manage hybrid cloud costs, and what unexpected expenses should they prepare for?
Cloud costs can be unpredictable, largely because they depend on user behavior and workloads. For example, a poorly optimized codebase or unexpected data influx can lead to skyrocketing expenses.
Businesses should recognize that the cloud is primarily about flexibility and scalability, not cost savings. For predictable costs, on-prem solutions may be better suited, as they provide fixed pricing regardless of usage.
Why do many AI projects fail to move from proof of concept (POC) to production?
The challenges with AI adoption are multifaceted. One major hurdle is unmet expectations—where AI provides outputs that, when validated, fall short of quality standards, lack explainability, or fail to meet desired outcomes. This often stems from the immaturity of the technology or gaps in its ability to deliver valuable insights.
Another significant issue is the absence of proper data. Without well-structured, tagged, and vetted data, achieving meaningful AI outcomes becomes nearly impossible. Many projects stall as organizations scramble to resolve foundational data issues before even starting the AI process.
Beyond technical barriers, there's growing fatigue among businesses. After numerous POCs and pilots that fail to yield tangible results, teams often feel drained by the endless cycle of experimentation. This "innovation fatigue" can hinder enthusiasm and momentum.
That said, while a majority of projects may fail, the few that succeed—around 5–10%—often have transformative effects on the business. These high-impact use cases justify the investment, demonstrating that despite the hurdles, the potential rewards of AI adoption can far outweigh the failures. It's about finding the right balance between persistence and realistic expectations.
How is Hitachi Vantara addressing these challenges to help clients succeed with AI?
As a data-centric company, we understand the intricacies of data and AI. We've developed a comprehensive design service to help customers identify practical, achievable outcomes.
We encourage organizations to start with smaller, pragmatic AI projects, building maturity over time. Our approach includes:
- Assisting with data preparation and infrastructure.
- Designing tailored AI solutions.
- Training employees to understand and leverage AI technologies effectively.
By focusing on incremental success and practical applications, we guide our customers toward impactful AI outcomes.
What metrics should be used to evaluate AI project results?
Measuring ROI on AI projects is challenging because the field is still evolving, and standardized metrics are yet to be established. At this stage, the key indicators should focus on whether the technology is improving processes or helping people perform better. For instance, are employees benefiting from actionable insights? Are operational efficiencies increasing? While traditional ROI remains difficult to quantify, assessing tangible improvements in workflows and outcomes is the best approach for now.
Can you share some use cases of how Hitachi Vantara is helping different industries?
We’re working across our focus markets, such as rail, energy, manufacturing, and mining, to drive meaningful improvements. In the rail sector, for instance, we leverage AI to enhance safety systems by analyzing telemetry data to monitor the health of rail systems and track networks. This enables early detection of potential issues, improving both the safety and quality of rail operations.
In manufacturing, we’re using AI to optimize production processes, increase yields, and promote sustainability. By providing real-time data and better connectivity, we help manufacturers gain deeper insights into their operations, enabling informed decision-making.
In mining, we assist operators by analyzing equipment telemetry to enable predictive maintenance. This approach allows operators to address potential failures before they occur, extending the lifespan of machinery and ensuring operational efficiency.
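As a toy illustration of how telemetry can surface early warnings, the sketch below flags readings that deviate sharply from a rolling baseline. The signal, window size, and threshold are invented for the example and are not Hitachi Vantara's actual analytics.

```python
# Toy example: flag telemetry readings that stray far from the recent baseline.
# Signal, window, and threshold are illustrative assumptions.
import numpy as np

def flag_anomalies(readings, window=50, z_threshold=3.0):
    """Return indices of samples that deviate sharply from the rolling baseline."""
    readings = np.asarray(readings, dtype=float)
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean, std = baseline.mean(), baseline.std()
        if std > 0 and abs(readings[i] - mean) / std > z_threshold:
            flagged.append(i)  # reading that warrants a maintenance check
    return flagged

# Example: a stable vibration signal with one sudden spike at index 180.
signal = np.random.normal(1.0, 0.05, 200)
signal[180] = 2.5
print(flag_anomalies(signal))  # the spike at index 180 is flagged
```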
Across all these sectors, our focus is on improving visibility and connectivity, empowering organizations to better understand their ecosystems and optimize their operations effectively.
Can you elaborate on Hitachi Vantara's collaboration with NVIDIA and how it enhances AI-driven capabilities?
The collaboration with NVIDIA spans multiple projects, such as the HMAX program, which involves Hitachi Rail, NVIDIA, and other Hitachi divisions. In this project, we combined NVIDIA’s real-time processing and compute expertise with our rail systems knowledge to create a more intelligent and efficient rail ecosystem. Beyond specific use cases, we’re working with NVIDIA to optimize their GPUs for mutual customers.
This includes infrastructure engineering, certification processes, and ensuring that customers across industries benefit from seamlessly integrated solutions. Together, we’re advancing AI capabilities across rail, energy, and other sectors.
AI should be handled responsibly. What is your view?
AI is a transformative platform with the potential to revolutionize daily operations and generate new ideas. Generative AI, in particular, offers immense opportunities to create innovative solutions. However, implementing AI irresponsibly poses risks like generating false data or producing hallucinations—outputs that are inaccurate or misleading. Responsible AI use requires robust governance, transparent processes, and mechanisms to ensure accuracy and accountability. It’s vital to strike a balance between innovation and ethics to realize AI’s full potential without compromising trust or integrity.