AI Agents in Trailblazer Community
Addressing knowledge discovery and knowledge sharing challenges in the Trailblazer Community through AI-driven features that streamline workflows, boost community interactions, and improve the overall user experience.
Context
Salesforce’s Trailhead is a dynamic online learning platform designed to teach users Salesforce’s tools and related technologies. The Trailblazer Community, a key part of this ecosystem, facilitates user-driven knowledge sharing. However, challenges such as unanswered questions, content moderation inefficiencies, and low visibility of user queries hindered the community’s potential. With AI reshaping how products are built, we used this project to research its human aspects in depth.
Project Structure
This project was part of my HCI Master’s capstone. I collaborated closely with Salesforce’s design team, AI specialists, and community managers. The team comprised three engineers, a product manager, and designers.
Gathering signal through user research
The Trailblazer Community’s poor satisfaction rate and increasing churn required multiple rounds of field and user research. We conducted 10+ interviews with expert users, novice users, and moderators to learn where and why they were facing such poor UX. As users walked us through their workflows, a few issues came up repeatedly:
Too many user-generated questions
User-generated questions account for 80–90% of the content, creating a crowded environment where many inquiries go unnoticed.
Users struggle to clearly articulate their questions, which impedes productive discussions and reduces overall engagement within the community.
Moderation is manually intensive and time-consuming, as redundant and poorly framed questions reduce the value of the platform.
Product goal: Enable users to effectively ask the right questions and discover accurate, relevant answers.
Design for engagement balance
Solutions need to account for the needs of every type of user, from novices to active contributors.
Solve the underlying challenge
Identify patterns in poorly articulated user queries and establish mechanisms to address the underlying challenges.
Improve content accessibility
Identify inconsistencies and ensure better organization and accessibility for users seeking answers.
AI agents as a powerful means to an end
→ Solving the ‘text generation’ challenge
We recognized that the root issue wasn’t merely the high volume of content, but rather how that content was generated and managed. Large Language Models (LLMs) excel at parsing, rephrasing, and refining text.
→ Feature Prioritization
To prioritize the features that would create the most value in the shortest amount of time, we worked with the PM and product ops to plan the MVP release. We ultimately decided to tackle the three workflows that dominate community activity: framing questions, answering them, and starting discussions.
Design principles from research
→ Human aspect holds the experience
Humanizing the interaction and tone. Users put more weight on human interactions and repeatedly said that AI is a catalyst, not a replacement. Making even AI-generated content more human in its interaction and tone will go a long way.
→ Avoid being over prescriptive
The goal of the community is peer-to-peer learning, with the eventual onus on the user. Even with AI, we must emphasize the non-prescriptive nature of its responses, offering guidance without enforcing it.
→ Clarity and transparency
Users need clear information about AI suggestions or decisions. Transparent labeling—such as “Suggested by AI” or “AI Flagged as Duplicate”—helps users understand why a recommendation appears, fostering trust and reducing confusion.
How we implemented it

Concise natural language description
Expressing suggested changes in straightforward sentences makes it easy for users to understand the edits without needing to interpret complex data or technical jargon.

Emphasis on change detection and tracking
The design of the modal employs generous amounts of spacing to bring visual clarity and reduce cognitive load.

Helpful visual status indicators
Colored tags are assigned to each change, providing immediate visual cues that help users identify the type of change at a glance.
Design decisions 1: AI-Assisted Question Framing
The idea
An LLM suggests improvements as a user types their question.
Users can freely accept, reject, or modify these AI recommendations.
This guidance clarifies context, ensures specificity, and aligns questions with community standards.
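The accept-or-reject flow above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the shipped implementation: the `suggest_improvement` heuristic stands in for the real LLM call, and all names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Suggestion:
    original: str
    improved: str
    reason: str


def suggest_improvement(draft: str) -> Optional[Suggestion]:
    """Stand-in for an LLM call that refines a vague draft question.

    A real implementation would prompt a model with community guidelines;
    here a toy heuristic flags drafts that are not phrased as a question.
    """
    if not draft.strip().endswith("?"):
        return Suggestion(
            original=draft,
            improved=draft.rstrip(".") + "?",
            reason="Phrase your post as a question so answerers know what you need.",
        )
    return None  # draft already looks fine


def apply_if_accepted(draft: str, accepted: bool) -> str:
    """Users stay in control: a suggestion applies only on explicit accept."""
    suggestion = suggest_improvement(draft)
    if suggestion and accepted:
        return suggestion.improved
    return draft
```

The key design point the sketch preserves is that the AI never rewrites the draft silently; the user explicitly accepts, rejects, or edits each suggestion.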
Low fidelity mockups
The goal of the low-fidelity mockups was to understand how we could design the relationship between the AI agent and the question-framing widget.
Final design
Keyword-driven question discovery
By surfacing answered questions based on typed keywords, we guide users toward existing, relevant content before they post, reducing redundancy.
Gradual, transparent AI suggestions
By offering suggestions gradually, and by allowing users to accept or reject them, we maintain user control and encourage self-improvement in question framing.
Contextual tagging
By recommending appropriate hashtags and groups, the AI ensures questions reach the right audience, enhancing answer accuracy.
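Keyword-driven discovery can be approximated with simple lexical overlap. The sketch below uses a plain word-overlap score in place of the real search backend; the function names and the 0.3 threshold are illustrative assumptions, not product values.

```python
def keyword_score(query: str, candidate: str) -> float:
    """Jaccard overlap of lowercase word sets: a stand-in for real search."""
    q, c = set(query.lower().split()), set(candidate.lower().split())
    return len(q & c) / len(q | c) if q | c else 0.0


def surface_existing(query: str, answered: list, threshold: float = 0.3) -> list:
    """Return already-answered questions similar enough to show before posting."""
    ranked = sorted(answered, key=lambda c: keyword_score(query, c), reverse=True)
    return [c for c in ranked if keyword_score(query, c) >= threshold]
```

For example, a draft like "flow not firing" would surface an existing thread titled "Why is my flow not firing" before the user posts a duplicate.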
Design decisions 2: AI agents in answer generation
The idea
The AI initially offers context-specific hints and prompts.
It intentionally defers providing a full solution, encouraging human contributors to share insights first.
If no human answers after some time, the AI gives a comprehensive response.
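The human-first timing rule above lends itself to a small state check. A minimal sketch, assuming a 24-hour grace period; that window is an illustrative value, not one from the actual product.

```python
from datetime import datetime, timedelta

# Assumed grace period before the AI posts a full answer (illustrative only).
HUMAN_FIRST_WINDOW = timedelta(hours=24)


def ai_response_mode(posted_at: datetime, now: datetime, human_answers: int) -> str:
    """Decide what the AI agent should contribute at this moment.

    The AI starts with hints so human contributors answer first, and
    escalates to a full answer only if the grace period passes unanswered.
    """
    if human_answers > 0:
        return "hints_only"  # humans are engaged; AI stays secondary
    if now - posted_at >= HUMAN_FIRST_WINDOW:
        return "full_answer"  # question went unanswered; AI steps in
    return "hints_only"
```

Keeping this as a pure function of post age and answer count makes the human-first policy easy to audit and to tune per community.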
Low fidelity mockups
The goal of the low-fidelity mockups was to understand how we could balance human agency and system assistance.
Final design
AI as a secondary problem solver
By deferring full AI answers until human contributors respond, we maintain a human-first dynamic while ensuring unanswered questions are resolved.
Tagging SMEs for targeted responses
By leveraging AI to tag known experts, we streamline engagement, connecting questions to the right community users and enhancing response efficiency.
Private answer ratings
By enabling users to privately rate human or AI answers, we foster honest, constructive assessments without publicly discouraging contributors.
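SME tagging can be sketched as matching a question’s tags against each expert’s known expertise areas. This is a hypothetical, simplified ranking; real matching would also weigh activity and answer-quality signals.

```python
def match_experts(question_tags: set, experts: dict, limit: int = 3) -> list:
    """Rank known SMEs by overlap between their expertise and the question's tags.

    `experts` maps an expert's name to the set of topic tags they are known for.
    Ties break alphabetically so results are deterministic.
    """
    scored = [
        (len(tags & question_tags), name)
        for name, tags in experts.items()
        if tags & question_tags  # skip experts with no relevant expertise
    ]
    scored.sort(key=lambda pair: (-pair[0], pair[1]))
    return [name for _, name in scored[:limit]]
```

For a question tagged `flows`, this would tag the experts whose profiles mention `flows`, connecting the question to the right community members.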
Design decisions 3: AI-generated discussion topics
Encouraging healthy discussions
A small, unobtrusive widget that appears alongside discussion prompts, offering quick tips for constructive and respectful interaction.
A section for curated topics
Users know exactly where to go for fresh ideas, trending themes, and timely discussion points.
Encouraging like-minded connections
We designed subtle indicators—like hashtags, tags, or user badges—that highlight shared interests and thereby help users connect with similar people.
Redesigning IA for a post
Clear visual hierarchy
A quick and easy fix was to define segments for each information type (question, body text, image, hashtags) and add generous amounts of white space.
Focus on the question itself
The new design ensures the user’s query takes center stage. The old one distracts from the main question with a barrage of supporting details.
Reflection
The experience of crafting these solutions underscored that technical sophistication alone does not guarantee value; clarity, trust, and genuine user empowerment matter equally, if not more. As AI becomes more ingrained in our products, this project helped me understand that humans still see AI as a means to an end, and only products that understand this achieve better adoption.
Next steps
The Salesforce team has planned a small-scale pilot with select user groups to validate the designs and measure how effectively AI can improve question clarity and reduce duplicate posts.
This marked the start of a broader AI-driven improvement initiative in Salesforce Trailhead. Documenting these emerging patterns in a dedicated design system ensures a consistent framework to scale and evolve future innovations.