SolvPath: Scalable Gen AI-Powered Platform for E-Commerce Query Resolution
Category: E-commerce
Services: Gen AI Development, Cloud Architecture Design and Review, Managed Engineering Teams
- 95% or more accuracy in understanding user queries by leveraging Amazon Bedrock
- 90% accuracy and contextually relevant responses to user queries using RAG
- 99.9% uptime and real-time user interaction using Amazon OpenSearch
- 85% enhancement in the chatbot’s semantic understanding by integrating Amazon Titan
- 60% reduction in manual customer support ticket resolution
About SolvPath
SolvPath is an advanced AI-driven customer support interface that helps e-commerce platforms deliver accurate, real-time answers to customer queries. It uses large language model (LLM) capabilities for seamless natural language understanding, improving the customer experience through contextual, accurate responses.
Challenges
- Accurately understanding diverse customer queries, including those with spelling errors or complex sentence structures, and providing contextually correct answers.
- Ensuring the assistant is available at all times for real-time interaction without compromising performance.
- Enhancing the assistant’s semantic abilities to understand the deeper intent behind queries and deliver more personalized answers.
- Designing a solution that scales efficiently to handle increasing user queries without downtime or performance degradation.
- Ensuring the secure storage and handling of sensitive customer data while processing vast amounts of user information.
Solutions
- Query processing integration: We integrated Amazon Bedrock to enhance SolvPath’s ability to interpret and respond to complex and varied customer queries with high accuracy (a minimal invocation sketch follows this list).
- Real-time response system: We set up Amazon OpenSearch to support seamless, low-latency customer interactions by processing data in real time.
- Retrieval-augmented generation setup: We incorporated Retrieval-Augmented Generation (RAG) with Amazon OpenSearch and Bedrock to generate accurate, reference-backed responses drawn from a vast knowledge base (see the retrieval sketch after this list).
- Semantic understanding enhancement: We configured Amazon Titan to improve the platform’s ability to understand the intent behind user queries, allowing for more personalized responses.
- Model deployment and management structure: We leveraged Amazon Bedrock to streamline the deployment and management of large language models, ensuring smooth scaling as needed.
- Secure data storage implementation: We established a scalable, secure storage solution using Amazon S3 to manage large volumes of data and user information effectively.
- Scalable infrastructure setup: We built the infrastructure on Amazon ECS and AWS Fargate so it scales efficiently with increasing user demand while maintaining high performance (a deployment sketch follows this list).
- Monitoring and optimization system: We integrated LangSmith to track performance metrics and enable continuous optimization of the platform (a tracing sketch follows this list).
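
To illustrate the query processing integration, a minimal sketch of calling Claude 2 through Amazon Bedrock with boto3 is shown below. The region, model parameters, prompt wrapping, and the example question are assumptions for illustration, not SolvPath’s production code.

```python
# Minimal sketch: send a customer query to Claude 2 on Amazon Bedrock.
# Region, parameters, and the example question are illustrative only.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def answer_query(question: str) -> str:
    """Return Claude 2's completion for a single customer question."""
    body = json.dumps({
        "prompt": f"\n\nHuman: {question}\n\nAssistant:",
        "max_tokens_to_sample": 512,
        "temperature": 0.2,
    })
    response = bedrock.invoke_model(
        modelId="anthropic.claude-v2",
        contentType="application/json",
        accept="application/json",
        body=body,
    )
    return json.loads(response["body"].read())["completion"]

print(answer_query("How do I return an item I ordered last week?"))
```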
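The retrieval-augmented generation flow could look roughly like the sketch below, assuming Amazon Titan Text Embeddings as the embedding model and an OpenSearch index named faq-index with a knn_vector field called embedding and a text field called content. The endpoint, index, and field names are placeholders.

```python
# Hedged RAG sketch: embed the query with Titan, retrieve similar FAQ passages
# from OpenSearch via k-NN search, and ground the Claude 2 answer in them.
# Hostnames, index and field names, and auth handling are placeholders.
import json

import boto3
from opensearchpy import OpenSearch

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
search = OpenSearch(
    hosts=[{"host": "example-domain.us-east-1.es.amazonaws.com", "port": 443}],
    use_ssl=True,  # authentication details omitted for brevity
)

def embed(text: str) -> list:
    """Create a Titan embedding vector for the query text."""
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(resp["body"].read())["embedding"]

def retrieve(question: str, k: int = 3) -> list:
    """k-NN search over the FAQ index using the query embedding."""
    result = search.search(
        index="faq-index",
        body={"size": k,
              "query": {"knn": {"embedding": {"vector": embed(question), "k": k}}}},
    )
    return [hit["_source"]["content"] for hit in result["hits"]["hits"]]

def answer_with_context(question: str) -> str:
    """Ask Claude 2 to answer using the retrieved passages as context."""
    context = "\n\n".join(retrieve(question))
    prompt = (
        f"\n\nHuman: Answer the customer question using the support articles below.\n\n"
        f"{context}\n\nQuestion: {question}\n\nAssistant:"
    )
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-v2",
        body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 512}),
    )
    return json.loads(resp["body"].read())["completion"]
```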
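The scalable infrastructure setup can be sketched with boto3 calls that register a Fargate task definition for the chatbot container and create an ECS service. Account IDs, role ARNs, image names, and subnets are placeholders, not the actual deployment.

```python
# Hedged deployment sketch: run the chatbot container on ECS with Fargate.
# Cluster, image, role ARN, and subnet values are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

task = ecs.register_task_definition(
    family="chatbot",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::111111111111:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "chatbot",
        "image": "111111111111.dkr.ecr.us-east-1.amazonaws.com/chatbot:latest",
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "essential": True,
    }],
)

ecs.create_service(
    cluster="chatbot-cluster",
    serviceName="chatbot-service",
    taskDefinition=task["taskDefinition"]["taskDefinitionArn"],
    desiredCount=2,  # scale out by raising desiredCount or attaching auto scaling
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-placeholder"],
        "assignPublicIp": "ENABLED",
    }},
)
```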
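For the monitoring item, a minimal LangSmith tracing sketch might wrap the answering function with the traceable decorator so each call is recorded as a run. It assumes the LangSmith API key and tracing flag are configured through environment variables; the run name is a hypothetical label, not from the case study.

```python
# Minimal tracing sketch: record inputs, outputs, and latency of each chatbot
# call as a run in LangSmith. Assumes the LANGCHAIN_TRACING_V2 and
# LANGCHAIN_API_KEY environment variables are set for the LangSmith project.
import json

import boto3
from langsmith import traceable

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

@traceable(name="solvpath_answer")  # hypothetical run name
def traced_answer(question: str) -> str:
    """Call Claude 2 on Bedrock; LangSmith records the question, answer, and timing."""
    body = json.dumps({"prompt": f"\n\nHuman: {question}\n\nAssistant:",
                       "max_tokens_to_sample": 512})
    resp = bedrock.invoke_model(modelId="anthropic.claude-v2", body=body)
    return json.loads(resp["body"].read())["completion"]
```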
Outcome
- 95% accuracy: Achieved high accuracy in understanding and processing complex customer queries.
- 90% contextual relevance: Responses provided were contextually relevant, improving customer satisfaction.
- 60% reduction: Reduced manual resolution of customer tickets by 60%, improving operational efficiency.
- 85% improved understanding: Enhanced semantic understanding, improving the assistant’s ability to deliver contextually rich answers.
- 99.9% uptime: Ensured continuous availability of the AI-powered interface, supporting real-time customer interactions.
Architecture Diagram
AWS Services
- Amazon Bedrock: We used Amazon Bedrock as the foundational framework for building and deploying the AI-powered application. As a managed service, it provides the reliability, scalability, and performance the models need and integrates seamlessly with other AWS services.
- Claude 2 on Amazon Bedrock: Our experts used Anthropic’s Claude 2, served through Bedrock, as the core language model for the chatbot so it generates contextually relevant responses to user queries. It understands nuances in how customers phrase their questions and provides accurate information.
- Amazon Titan: We used Amazon Titan to enhance the chatbot’s semantic understanding and generate more context-aware responses. It also helps the chatbot understand the intent behind user queries, leading to accurate and relevant responses.
- Amazon S3: We used Amazon S3 to provide a scalable and secure storage solution for vital chatbot data such as FAQ databases and training data.
- Amazon OpenSearch: Our developers used Amazon OpenSearch as the search and retrieval layer for the RAG setup, indexing the knowledge base so relevant passages can be retrieved quickly. We also used OpenSearch to enable real-time user interaction and provide timely and accurate responses to users.
- Amazon EC2: Our AWS experts used EC2 to host the chatbot app and provide computing power to process user queries and generate relevant responses.
- AWS Fargate: We used AWS Fargate to run and deploy the chatbot application as a container to ensure scalability and efficiency in resource allocation.
- Amazon ECS: Our developers used Amazon ECS to manage and scale containerized components of the AI-powered chatbot ecosystem.
- Amazon RDS (PostgreSQL): We used RDS to store structured data related to the chatbot, such as user queries, interactions, logs, and feedback, which is used to monitor and improve its performance (a logging sketch follows below).
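
As a closing illustration of the RDS (PostgreSQL) item, interaction logging could be written with psycopg2 roughly as below; the hostname, credentials, and the interactions table schema are assumptions for illustration.

```python
# Hedged sketch: persist each query/response pair and optional feedback in the
# RDS PostgreSQL instance. Connection details and table schema are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="chatbot-db.example.us-east-1.rds.amazonaws.com",
    dbname="chatbot",
    user="app_user",
    password="example-password",
)

def log_interaction(user_query: str, bot_response: str, feedback: str = None) -> None:
    """Insert one interaction row; the transaction commits when the block exits."""
    with conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO interactions (user_query, bot_response, feedback) "
            "VALUES (%s, %s, %s)",
            (user_query, bot_response, feedback),
        )
```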