Turning personality reports into an AI coaching companion
A global people-development provider wanted to give managers and leaders more value from their psychometric reports after the workshop ended. Working with Steamhaus, they built an AI coaching assistant on AWS that sits alongside their existing digital products and gives managers instant, context-aware guidance based on their own personality data.
- Turned static personality reports into an interactive AI coaching assistant powered by Amazon Bedrock
- Built a production-grade, multi-AZ architecture on AWS using Amazon ECS Fargate, Amazon OpenSearch Serverless, Amazon S3 and Bedrock Knowledge Bases
- Reduced manager preparation time for key conversations by an estimated 50%, while increasing repeat usage of digital reports
- Created a reusable GenAI blueprint the customer can now apply to future products
Scaling personalised coaching without scaling headcount
The customer’s products are used by organisations worldwide to develop leaders, improve team dynamics and support culture change. Historically, the core value was delivered through a one-off digital personality report plus time with a coach.

As their customer base grew, they faced a familiar challenge: how to keep the experience personal and high-quality without relying solely on human coaching capacity. Customers wanted something “always on” that could live inside existing apps and reports, speak in the brand’s voice, and only draw on approved content - not the public internet.

They also needed an architecture that would meet enterprise expectations on security, performance and availability, and that could be operated by their existing engineering team.
From discovery workshop to production-ready design
Steamhaus ran a short discovery and design engagement, working with product, coaching and engineering teams to:
- Map the most valuable user journeys - e.g. “help me prepare for a difficult conversation with a colleague” or “what should I focus on over the next 90 days?”
- Review existing AWS usage and CI/CD patterns so the GenAI work would fit naturally into current ways of working
- Evaluate candidate foundation models on Amazon Bedrock against real example prompts and reports
- Agree the boundaries: which content could be used, how answers should be grounded, and what “safe” behaviour looked like
The output was a set of user stories, non-functional requirements and a target architecture that the customer’s team could help own and evolve.
A secure, RAG-powered assistant embedded in existing products
The production implementation uses a fully AWS-native, containerised design:
- Application tier on Amazon ECS Fargate - A backend service and lightweight chat UI run in Fargate tasks across multiple Availability Zones inside a dedicated VPC. An Application Load Balancer fronts the service and handles TLS termination.
- Content storage on Amazon S3 - Personality reports, workbooks and coaching materials are stored in S3 buckets, encrypted with AWS KMS. Updates to content trigger ingestion workflows.
- RAG using Amazon Bedrock Knowledge Bases and Amazon OpenSearch Serverless - Approved content is chunked and embedded using Amazon Titan Text Embeddings, then stored in an OpenSearch Serverless collection managed via Bedrock Knowledge Bases. This means the assistant can cite specific sections from the customer’s own IP when answering questions.
- Inference via Amazon Bedrock - The assistant uses an Anthropic Claude Sonnet model on Bedrock, called from the Fargate backend over a VPC interface endpoint so prompts and responses never traverse the public internet.
- Observability and guardrails - Amazon CloudWatch collects application logs, metrics and traces; alarms notify the customer’s support team via their existing incident tooling. Bedrock guardrails enforce basic safety policies and block disallowed topics.
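The retrieval-and-grounding step described above can be sketched as follows. This is an illustrative outline, not the customer’s actual code: the knowledge base ID, the four-result limit and the prompt wording are assumptions, while the `retrieve` call itself follows the standard `bedrock-agent-runtime` API shape.

```python
def build_grounded_prompt(question: str, passages: list[dict]) -> str:
    """Assemble a prompt that grounds the model in retrieved report content.

    Each passage carries 'text' and 'source' keys, mirroring the chunks
    returned by a Bedrock Knowledge Bases retrieve call, so the model can
    cite specific sections of the customer's own IP.
    """
    context = "\n\n".join(
        f"[{i + 1}] ({p['source']})\n{p['text']}" for i, p in enumerate(passages)
    )
    return (
        "Answer using ONLY the numbered excerpts below, and cite them "
        "by number so the user can verify the guidance.\n\n"
        f"{context}\n\nQuestion: {question}"
    )


def retrieve_passages(question: str, kb_id: str) -> list[dict]:
    """Fetch relevant chunks from a Bedrock knowledge base (IDs are hypothetical)."""
    import boto3  # deferred so the pure helper above has no AWS dependency

    client = boto3.client("bedrock-agent-runtime")
    resp = client.retrieve(
        knowledgeBaseId=kb_id,
        retrievalQuery={"text": question},
        retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 4}},
    )
    # Flatten each result down to the text and its S3 source for citation.
    return [
        {
            "text": r["content"]["text"],
            "source": r.get("location", {}).get("s3Location", {}).get("uri", "unknown"),
        }
        for r in resp["retrievalResults"]
    ]
```

Keeping prompt assembly separate from retrieval means the grounding logic can be unit-tested without any AWS credentials in the loop.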
When a manager opens their report, the front-end passes a short summary of the user’s profile plus their question to the backend. The service enriches this with relevant report pages and coaching content via the knowledge base before calling the model. Responses are streamed back to the UI with references to source material, so users can trust and verify the guidance.
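Streaming the response back to the UI might look like the sketch below. The event shapes mirror Bedrock’s ConverseStream output; the function name and the `push_to_client` handler are illustrative, not taken from the customer’s implementation.

```python
from typing import Iterable, Iterator


def stream_text_deltas(events: Iterable[dict]) -> Iterator[str]:
    """Yield text fragments from Bedrock ConverseStream-style events.

    Events arrive as dicts keyed by type; only contentBlockDelta events
    carry text for the chat UI, so everything else is skipped.
    """
    for event in events:
        delta = event.get("contentBlockDelta", {}).get("delta", {})
        if "text" in delta:
            yield delta["text"]


# In the live service the events would come from something like:
#   resp = bedrock_runtime.converse_stream(modelId=..., messages=[...])
#   for chunk in stream_text_deltas(resp["stream"]):
#       push_to_client(chunk)   # hypothetical SSE/WebSocket push
```

Filtering in one place keeps the UI code indifferent to the other event types (message start/stop, metadata) that the stream interleaves with the text.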
Extending the pattern across the product portfolio
The initial launch focused on a single flagship personality product. Early analytics show increased repeat traffic to digital reports and strong qualitative feedback from managers and coaches.
Next, the customer plans to:
- Extend the assistant to additional assessments and languages
- Expose a secure API so partner organisations can embed the assistant in their own portals
- Use the same Bedrock + ECS + OpenSearch pattern for internal tools that help practitioners design programmes and workshops faster
Because the assistant is built using repeatable Terraform modules and standard AWS services, the customer can roll out these next steps with confidence that security, observability and operations will remain consistent.