Pharmaceutical
BMS ChatGPT & Generative AI Storefront Usability Testing
Conducted moderated usability testing with 20 users, delivering 8 key recommendations that improved AI assistant visibility and bug reporting.

Problem Statement
Bristol Myers Squibb launched an internal Generative AI Storefront that included BMS ChatGPT and other AI assistants. However, as a new platform, it was unclear whether users could intuitively navigate the interface, locate key features, and trust the system’s responses. Specific usability concerns included the clarity of the navigation structure, the discoverability of features like file upload and the “Report a Bug” function, and how users perceived the quality of AI-generated answers. These uncertainties risked hindering adoption and undermining user confidence in the platform.
Team Structure
Product Owner: Kandee Wenrich
UX/UI Designer: Eileen Sanchez
UX Researcher: Eileen Calub
Timeline
January to February 2025
Gather Project Requirements: 1 week
Participant Recruitment: 2 weeks
Usability Testing: 2 weeks
Analysis: 1 week
Methodology
Conducted a stakeholder interview with the product owner to identify key focus areas and project goals
Recruited 20 participants from HR, IT, R&D, and other areas across BMS, with a range of experience with AI tools
2 had never used the Gen AI Storefront
10 had only used BMS ChatGPT on the Storefront
8 had used multiple AI assistants on the Storefront
Scheduled and moderated 30-minute usability testing sessions on Microsoft Teams
Conducted thematic analysis of data to identify top user pain points and actionable insights
Presented improvement recommendations to stakeholders, designers, and developers
Compensated research participants with Bravo points, awarded through Bravo, a virtual recognition platform at BMS
Project Goals
Gather user feedback on the clarity and usability of the landing page navigation and overall layout
Assess the performance, speed, and reliability of the file upload feature in BMS ChatGPT during typical user tasks
Evaluate the relevance, accuracy, and helpfulness of responses generated by the AI assistants, including BMS ChatGPT, the BMS Policies Assistant, and IT Support Assistant
Determine the visibility and ease of access to the Report a Bug button and FAQs within the user interface

BMS ChatGPT interface

Pin feature in BMS ChatGPT

Document upload feature

Report a Bug button location
Findings
Analyzed all 20 usability testing session recordings
Identified 13 actionable insights:
Users found the navigation and layout confusing, expressing frustration with hidden tools, unclear interface behavior, and a desire for all AI assistants to be visible on a single, easily accessible page.
Users found the assistant descriptions too vague, expressing a need for clearer explanations of each tool’s purpose and intended audience to better understand their relevance and functionality.
Users experienced frustration with slow file upload and processing times, often needing to refresh the page multiple times to get their files to load successfully.
Users were frustrated by the narrow chat window and fixed side panels, expressing a desire for adjustable panes to maximize screen space and improve readability during interactions.
Users struggled to locate the pin feature, indicating that its placement was unintuitive and not easily discoverable within the interface.
Users expressed frustration that the Generative AI Storefront is not integrated into Microsoft Teams, noting that the lack of direct access adds unnecessary steps compared to the convenience of using BMS ChatGPT within Teams.
Users reported that the IT Support Assistant sometimes provides inaccurate or unhelpful responses, including incorrect knowledge base articles for common issues.
Users were confused about the IT Support Assistant’s capabilities, expecting it to handle ticket-related tasks but instead receiving only knowledge articles, which led to mismatched expectations and frustration.
Users showed a clear preference for using myBMS over the IT Support Assistant, associating IT support tasks more strongly with myBMS and questioning the relevance of accessing support through the Generative AI Storefront.
Users noted that while the BMS Policies Assistant often provides generally correct information, the responses can lack important context or specificity—such as location-based details or actionable next steps—leading to incomplete answers.
Users appreciated that the BMS Policies Assistant included linked sources and references to official documents, finding them helpful for verifying information and accessing supporting materials like SOPs.
Users found the “About” tab misleading, expecting version or background information rather than FAQs and contact details, and suggested renaming or reorganizing it to better align with common user expectations. Only 25% of users successfully located the FAQs.
Users found the “Report a Bug” link difficult to discover within the gear icon, associating the icon with system settings or admin functions rather than user feedback, and recommended making the option more visible and intuitive. Only 15% of users successfully located the link.

Affinity map for analyzing and organizing usability testing results in Miro
Recommendations
Provided 8 high-priority change recommendations to stakeholders, designers, and developers:
Expand visibility of assistants on homepage.
Provide detailed but concise descriptions for assistants.
Improve file processing speed so users don’t need to hit Refresh.
Allow users to adjust panel widths when chatting with an assistant.
Feature the Gen AI Storefront in MS Teams for ease of access.
Provide guidance on how the IT Support Assistant differs from BMS's other support avenues.
Change “About” tab to “Help” or “Contact.”
Make the "Report a Bug" link more visible (e.g., as a standalone button or an entry in the "Contact" tab).
After the insights were presented to stakeholders and followed up with the developers, all 8 recommendations were implemented as improvements to BMS ChatGPT and the Generative AI Storefront.