Use Cases AI Hub

Bonisiwe Shabane

Generative AI continues to reshape how businesses approach innovation and problem-solving. Customers are moving from experimentation to scaling generative AI use cases across their organizations, with more businesses fully integrating these technologies into their core processes. This evolution spans across lines of business (LOBs), teams, and software as a service (SaaS) providers. Although many AWS customers typically started with a single AWS account for running generative AI proof of concept use cases, the growing adoption and transition to production environments have introduced new challenges. These challenges include effectively managing and scaling implementations, as well as abstracting and reusing common concerns such as multi-tenancy, isolation, authentication, authorization, secure networking, rate limiting, and caching. To address these challenges effectively, a multi-account architecture proves beneficial, particularly for SaaS providers serving multiple enterprise customers, large enterprises with distinct divisions, and organizations with strict compliance requirements.

This multi-account approach helps maintain a well-architected system by providing better organization, security, and scalability for your AWS environment. It also enables you to more efficiently manage these common concerns across your expanding generative AI implementations. In this two-part series, we discuss a hub and spoke architecture pattern for building a multi-tenant and multi-account architecture. This pattern supports abstractions for shared services across use cases and teams, helping create secure, scalable, and reliable generative AI systems. In Part 1, we present a centralized hub for generative AI service abstractions and tenant-specific spokes, using AWS Transit Gateway for cross-account interoperability. The hub account serves as the entry point for end-user requests, centralizing shared functions such as authentication, authorization, model access, and routing decisions.
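To make the hub account's role more concrete, here is a minimal sketch of what a request-routing function behind the hub's entry point might look like, assuming a Lambda-style handler written with boto3. The tenant header, the tenant-to-model mapping, and the authorize helper are illustrative assumptions for this sketch, not details from the published architecture.

```python
import json
import boto3

# Runtime client for Amazon Bedrock, created in the hub account.
bedrock_runtime = boto3.client("bedrock-runtime")

# Hypothetical mapping: each tenant/spoke is allowed a specific model.
TENANT_MODEL_MAP = {
    "tenant-a": "anthropic.claude-3-haiku-20240307-v1:0",
    "tenant-b": "amazon.titan-text-express-v1",
}

def authorize(event):
    """Placeholder for the hub's centralized authentication/authorization.

    In practice this is where the hub would validate the caller's identity
    (for example, a JWT issued by a shared identity provider) and resolve
    the tenant. Here we simply read an illustrative request header.
    """
    tenant_id = event.get("headers", {}).get("x-tenant-id")
    if tenant_id not in TENANT_MODEL_MAP:
        raise PermissionError(f"Unknown or unauthorized tenant: {tenant_id}")
    return tenant_id

def handler(event, context):
    tenant_id = authorize(event)
    model_id = TENANT_MODEL_MAP[tenant_id]  # routing decision made centrally in the hub
    body = json.loads(event["body"])

    # Invoke the tenant's allowed model through Amazon Bedrock from the hub account.
    response = bedrock_runtime.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": body["prompt"]}]}],
    )
    answer = response["output"]["message"]["content"][0]["text"]
    return {"statusCode": 200, "body": json.dumps({"tenant": tenant_id, "answer": answer})}
```

With this kind of handler in place, spoke accounts never need their own authentication or model-routing logic; they only need network reachability to the hub's entry point.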

This approach alleviates the need to implement these functions separately in each spoke account. Where applicable, we use virtual private cloud (VPC) endpoints for accessing AWS services. In Part 2, we discuss a variation of this architecture that uses AWS PrivateLink to securely share the centralized endpoint in the hub account with teams within your organization or with external partners. The focus in both posts is on centralizing authentication, authorization, model access, and multi-account secure networking for onboarding and scaling generative AI use cases with Amazon Bedrock. We don’t discuss other system capabilities such as a prompt catalog, prompt caching, versioning, a model registry, and cost tracking; however, those could be added as extensions of this architecture.
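For the private-networking piece, the following is a minimal sketch of how the hub account's interface VPC endpoint for the Amazon Bedrock runtime could be provisioned with boto3, assuming a hub VPC already exists; the VPC, subnet, security group, and load balancer identifiers are placeholders, and the endpoint-service call at the end only gestures at the AWS PrivateLink variation covered in Part 2.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface VPC endpoint so hub-account calls to the Amazon Bedrock runtime stay
# on the AWS network. The IDs below are placeholders for the hub VPC's resources.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.bedrock-runtime",
    SubnetIds=["subnet-0aaaaaaaaaaaaaaaa", "subnet-0bbbbbbbbbbbbbbbb"],
    SecurityGroupIds=["sg-0cccccccccccccccc"],
    PrivateDnsEnabled=True,  # SDK calls to bedrock-runtime resolve to this endpoint
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])

# Part 2 variation (sketch only): expose the hub's own entry point as an endpoint
# service behind a Network Load Balancer, so other accounts can consume it over
# AWS PrivateLink instead of reaching it through AWS Transit Gateway.
service = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/hub-nlb/0123456789abcdef"
    ],
    AcceptanceRequired=True,
)
print(service["ServiceConfiguration"]["ServiceName"])
```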

Written by the President and Chief Revenue Officer, Google Cloud. Published April 12, 2024; last updated October 9, 2025. A year and a half ago, during Google Cloud Next '24, we published this list for the first time. It numbered 101 entries. It felt like a lot at the time, and it served as a showcase of how much momentum both Google and the industry were seeing around generative AI adoption.

In the brief period then of gen AI being widely available, organizations of all sizes had begun experimenting with it and putting it into production across their work and across the world, doing so... Example use cases include:

- A chat interface that can answer user queries with relevant documents, suggested follow-up questions, and citations, based on your own data.
- Analyze a conversational transcript extracted from your call center and interact with it.
- Analyze your image using GPT-4 and Azure Vision Services.
- Analyze your brand's internet reputation.
- Analyze and chat with your documents using GPT-4 and Azure Document Intelligence.

This 2024 Federal Agency Artificial Intelligence (AI) Use Case Inventory repository is a centralized consolidation of AI use case inventories from across U.S. Federal agencies, consistent with Section 5 of Executive Order (EO) 13960, “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government,” and pursuant to the Advancing American AI Act and OMB Memorandum M-24-10,... Federal agencies, with limited exceptions, are required to conduct annual inventories of their AI use cases and to make this information publicly available. Each agency is to post a machine-readable CSV of all publicly releasable use cases on its AI website. For more information on AI use case reporting instructions, refer to the Guidance for 2024 Agency Artificial Intelligence Reporting in the Additional Resources section. A summary of the consolidated Federal dataset, current as of January 23, 2025, is maintained in this repository.
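As a hedged illustration of working with the machine-readable inventories described above, the snippet below loads a consolidated CSV with pandas and counts use cases per agency; the file name and the Agency column are assumptions and may differ from the repository's actual schema.

```python
import pandas as pd

# Hypothetical file and column names; check the repository for the actual
# consolidated CSV and its headers before relying on this sketch.
df = pd.read_csv("consolidated_ai_use_case_inventory_2024.csv")

print(f"Total publicly releasable use cases: {len(df)}")
print(df["Agency"].value_counts().head(10))  # top 10 agencies by use case count
```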

This data collection and repository is maintained by Morgan Zimmerman and Varoon Mathur. For questions about this repository, please open an issue or contact the maintainers at OFCIO_AI@omb.eop.gov. For agency-specific inquiries, please contact the relevant agency Chief Artificial Intelligence Officer.
