Product Design · Product Strategy

LLM-Based Experience Design

2021.04 - 2023.11
Researcher & Designer
20 mins

01
TLDR (Too Long; Didn't Read)

AlphaPrime’s technology is powered by Microsoft Azure, which provides the startup with the infrastructure and tools it needs to develop and deploy its products quickly and efficiently. As a member of the Microsoft for Startups Pegasus Program, AlphaPrime also gains access to Microsoft’s vast ecosystem of partners and resources, which helps it stay ahead of the curve in the rapidly evolving clinical trials industry.


We believe that in our two years of practice in the service industry, we have accumulated a substantial amount of know-how. We are the people who tinker with LLMs the most, and in a way we understand them better than the people who built them, because we know what they are good at and how to interact with them.

02
Understanding of LLM
03
Product Strategy - How We Construct Our AI Feature Set
3.1 Clinical LLM Trend

The application of large language models in medicine spans many areas. They are widely used for knowledge retrieval, research support, automation of clinical workflows, and diagnostic assistance. These models give healthcare professionals timely access to medical information, including disease progression, treatment methods, and drug interactions. Acting as medical educators, they provide real-time information and customized consultation services for activities such as online medical consultations. Moreover, large language models help medical researchers manage vast amounts of medical literature, accelerate clinical study design, and advance disease research. In clinical workflows, these models can automate patient information recording and clinical document writing, providing data-driven recommendations that support doctors in formulating treatment plans and making diagnostic decisions.

Reference: Large Language Models Illuminate a Progressive Pathway to Artificial Healthcare Assistant: A Review (3 Nov 2023)

3.2 Why We Use Vertical Slicing

We decided to use vertical slicing. The goal of the AI features embedded in our product systems is to deliver the most value with as little effort as possible. This implies building products and services in short iterations in order to learn from the outcomes together with stakeholders and to improve them further. Therefore, we plan to excavate AI scenarios within each product line, creating closed-loop scenario paths that drive the flywheel of AI product growth.

3.3 The Trade-offs between UX and Technology Selection

As is well known, there are various methods to enhance the effectiveness and utility of an LLM (Large Language Model). The three main approaches are prompt engineering, fine-tuning, and training custom models. Given the company's technological constraints, only prompt engineering is currently employed. The user experience is therefore built on this technological foundation, creatively applying feasible prompt-engineering techniques to deliver the optimal user experience.

Prompt engineering only: focus on improving the model output purely by engineering the prompts used with the models, and by potentially selecting different vendors for different prompts. In practice this means employing a strategy of using different prompts for different tasks, providing diverse interaction support so users obtain better answers; a minimal sketch of this routing idea follows.
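To make the idea concrete, here is a minimal sketch (not our production code) of routing each task type to its own prompt template and vendor. The task names, templates, and the call_llm stub are invented for illustration.

```python
# Minimal sketch of a prompt-engineering-only strategy: route each task type
# to its own prompt template and (potentially) a different model vendor.
# Task names, templates, and the call_llm stub are illustrative only.

from dataclasses import dataclass

@dataclass
class PromptRoute:
    vendor: str          # which LLM vendor/model to call for this task
    template: str        # prompt template with a {payload} placeholder

ROUTES = {
    "summarize_protocol": PromptRoute(
        vendor="vendor-a",
        template="You are a clinical trial assistant. Summarize the protocol below.\n\n{payload}",
    ),
    "explain_term": PromptRoute(
        vendor="vendor-b",
        template="Explain the clinical term '{payload}' in plain language for site staff.",
    ),
}

def call_llm(vendor: str, prompt: str) -> str:
    """Placeholder for the actual vendor SDK call."""
    return f"[{vendor} response to: {prompt[:40]}...]"

def run_task(task: str, payload: str) -> str:
    route = ROUTES[task]                          # pick template + vendor per task
    prompt = route.template.format(payload=payload)
    return call_llm(route.vendor, prompt)

if __name__ == "__main__":
    print(run_task("explain_term", "adverse event"))
```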

3.4 Tasks That Large Language Models (LLMs) Excel At

We are aware that LLMs can perform useful and more advanced tasks, as listed below:

04
LLM-Based General Design Considerations and Design Tactics
1. Artificial Intelligence Hallucinations

The unpredictability and non-determinism of LLMs pose challenges for user experience. In healthcare applications, several key strategies can be employed to mitigate LLM hallucinations.

Algorithm-wise

1. Human-in-the-Loop (HITL): involving individuals with domain expertise in the model development process plays a crucial role in reducing hallucinations.

2. Algorithmic corrections: traditional machine learning techniques (such as regularization and loss-function penalties) can improve the generalization ability of the LLM and reduce the occurrence of hallucinations.

3. Fine-tuning: adapting the LLM to specific tasks or domains lowers the risk of the model producing false information.

4. Improved prompts: when the model's output is uncertain, prompts can instruct it to respond with "I don't know" rather than guessing; a small sketch of this tactic follows the list.
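As an illustration of the last point, here is a minimal sketch, with a hypothetical call_llm stub, of a prompt that tells the model to reply "I don't know" and of a fallback when it does.

```python
# Sketch of the "improved prompt" tactic: instruct the model to say
# "I don't know" instead of guessing, and treat that answer as a signal
# to fall back to a human or a curated knowledge base. call_llm is a stub.

SYSTEM_PROMPT = (
    "You are a clinical trial assistant. Answer only from the provided context. "
    "If the context does not contain the answer, reply exactly: I don't know."
)

def call_llm(system: str, user: str) -> str:
    """Placeholder for the actual model call."""
    return "I don't know"

def answer_with_fallback(question: str, context: str) -> str:
    user = f"Context:\n{context}\n\nQuestion: {question}"
    reply = call_llm(SYSTEM_PROMPT, user).strip()
    if reply.lower().startswith("i don't know"):
        # Hallucination guard: hand off instead of presenting a guess as fact.
        return "No confident answer found. Escalating to a human reviewer."
    return reply

print(answer_with_fallback("What is the dosing schedule?", "Protocol excerpt..."))
```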

UX-wise

1. Transparency and Interpretability: Provide transparent information about the model's workings, data sources, and uncertainties. Present the model's reasoning process to users, enabling them to understand how the model arrives at specific conclusions.

2. Feedback Mechanism: Offer timely feedback to users, especially when the model's outputs are uncertain or may lead to misconceptions. This can be achieved through pop-up messages, labeling uncertain areas, or providing additional hints. Encourage users to provide feedback, particularly in cases where the model's output is questionable. User feedback is crucial for improving the model and reducing misconceptions.

3. User Education: Include concise and clear elements of user education in the interface, explaining the limitations of the model, explicitly pointing out potential misconceptions the model may generate, and reminding users to exercise caution with the output results.

4. Confidence Indicator: Display confidence indicators for the model's output results. If the model has low confidence in its output, users can more easily recognize the uncertainty of the results and take appropriate action; a minimal sketch of such an indicator follows.
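A minimal sketch of such a confidence indicator is shown below; the thresholds and labels are illustrative placeholders rather than validated values.

```python
# Sketch of a confidence indicator: map a model-reported confidence score
# (however it is derived, e.g. from self-rating or log-probabilities) to a
# UI badge so users can calibrate how much to trust the answer.
# The thresholds below are illustrative, not validated values.

def confidence_badge(score: float) -> str:
    if score >= 0.85:
        return "High confidence"
    if score >= 0.6:
        return "Medium confidence - please verify key facts"
    return "Low confidence - treat as a starting point only"

for s in (0.92, 0.7, 0.3):
    print(s, "->", confidence_badge(s))
```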

2. Another key fact to consider is that LLMs are not perfect. It is necessary to devise interaction patterns that keep users from noticing this 'imperfection'; a small sketch follows the list below.

1. Avoiding inefficient Prompt utilization.

2. Assisting users with auto-fill suggestions by predicting what they may want based on historical data

3. Providing users with predefined options through templates based on commonly used prompts
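The sketch below illustrates the template and auto-fill tactics; the template names and history entries are invented for illustration.

```python
# Sketch of the template + auto-fill tactics: offer predefined prompt templates
# for common requests, and suggest completions based on the user's own history.
# Template names and the history list are illustrative.

PROMPT_TEMPLATES = {
    "Summarize visit data": "Summarize the key findings in the selected visit data.",
    "Explain a query": "Explain why the following data query was raised: {query_text}",
    "Draft a narrative": "Draft a patient safety narrative from the listed adverse events.",
}

def suggest_from_history(prefix: str, history: list[str], limit: int = 3) -> list[str]:
    """Return previous prompts that start with what the user has typed so far."""
    prefix = prefix.lower()
    return [h for h in history if h.lower().startswith(prefix)][:limit]

history = [
    "Summarize enrollment by site",
    "Summarize protocol deviations this month",
    "Explain the discrepancy in visit 3 labs",
]
print(suggest_from_history("Summ", history))
print(list(PROMPT_TEMPLATES))
```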

3. Accuracy is more important than responsiveness

1. Interactive Feedback: In the event of delays, provide feedback to users through animations, loading indicators, etc., letting them know that the system is processing their request. This prevents users from feeling stuck or frustrated; a minimal sketch follows.
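Below is a minimal sketch of this feedback pattern; the fake_llm_call delay stands in for a real model request.

```python
# Sketch of interactive feedback during delays: keep the user informed while a
# slow LLM request is in flight instead of leaving the screen frozen.
# The 3-second fake_llm_call stands in for the real request.

import asyncio

async def fake_llm_call() -> str:
    await asyncio.sleep(3)           # simulated slow model response
    return "Here is your analysis."

async def show_progress(task: asyncio.Task) -> None:
    dots = 0
    while not task.done():
        print(f"\rGenerating{'.' * (dots % 4):<3}", end="", flush=True)
        dots += 1
        await asyncio.sleep(0.5)
    print()

async def main() -> None:
    task = asyncio.create_task(fake_llm_call())
    await show_progress(task)        # visible feedback the whole time
    print(task.result())

asyncio.run(main())
```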

05
Clinical Implementation Scenario
5.1 AI Use Cases

Both structured-text and tabular (clinical raw data) LLM-based scenarios are provided in the Aurora product line.

5.2 Who Is the Main Audience?
5.3 Clinical ChatBot Design
5.3.1 Personality

The initial goals for the Chatbot on the Aurora platform were to achieve a moderate level of clinical trial consultation and system-guided functionalities. Therefore, the Chatbot's personality should strike a balance between professionalism and humanization to provide effective, trustworthy, and comfortable consultation services. The main focus includes:

1. Professionalism and Credibility: The Chatbot's personality should project a professional and credible image to establish user trust in its expertise in the medical field.

2. Clarity and Simplicity: The language of the personality should be clear and simple, avoiding overly technical or obscure medical terms to ensure easy understanding by users.

3. Privacy Emphasis: When dealing with personal health information, the personality should demonstrate respect and protection for user privacy, clearly stating the security measures in place for information handling.

4. Interactivity: The Chatbot's personality can enhance the user experience through friendly and approachable interactions, making users feel more comfortable and reassured.

5. Empathy and Care: Considering the sensitive nature of medical matters involving users' health and emotions, the Chatbot's personality should exhibit empathy and care to provide a more humanized user experience. A sketch of how these traits could be encoded as a system prompt follows this list.
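The sketch below shows one way these traits could be encoded as a system prompt; the wording is illustrative, not the production prompt used on Aurora.

```python
# Sketch of how the personality goals above could be encoded as a system prompt.
# The wording is illustrative, not the production Aurora prompt.

PERSONA_SYSTEM_PROMPT = """\
You are the Aurora clinical trial assistant.
- Be professional and state the basis of any medical statement you make.
- Use clear, simple language; avoid unexplained technical terms.
- Never ask for or repeat personal health information beyond what the task needs,
  and remind users that their data is handled under the platform's privacy policy.
- Keep a friendly, approachable tone.
- Acknowledge users' concerns with empathy before answering.
- If you are unsure, say so and suggest consulting the study team.
"""

def build_messages(user_question: str) -> list[dict]:
    """Assemble the chat payload in the common system/user message format."""
    return [
        {"role": "system", "content": PERSONA_SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

print(build_messages("What should I do if I miss a dose?")[0]["content"][:80])
```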

5.4 AI Functionality Design

The design incorporates distinctive identifiers for each product line.

5.5 Compute
5.5.1 Empowering better data and analysis with help from AI

Security considerations are incorporated into the design to assure customers that data is not leaving the system. Role-playing is also employed in the prompt engineering to enhance the accuracy of data requests.

In the interaction with ChatGPT, we prefer not to send data directly to it and request immediate results, because the data may be substantial and there is a risk of leakage. The more ideal approach is to have ChatGPT return data-processing methods, and then execute these methods on our end to generate the desired results. This addresses a key concern businesses have in their day-to-day communications; a minimal sketch of the pattern follows.
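A minimal sketch of this "return the method, not the result" pattern is shown below, assuming a hypothetical call_llm stub that returns a small JSON processing spec; only column names, never the raw records, are sent to the model.

```python
# Sketch of the "return the method, not the result" pattern: the model only
# sees the column names and the request, and returns a small processing spec.
# We run the spec locally with pandas, so raw clinical data never leaves the
# system. The spec format and call_llm stub are illustrative.

import json
import pandas as pd

def call_llm(prompt: str) -> str:
    """Placeholder for the model call; returns a processing spec as JSON."""
    return json.dumps({"group_by": "site", "column": "age", "aggregate": "mean"})

def plan_analysis(question: str, columns: list[str]) -> dict:
    prompt = (
        f"Columns available: {columns}. "
        f"Return JSON with group_by, column, aggregate for: {question}"
    )
    return json.loads(call_llm(prompt))

def run_locally(df: pd.DataFrame, spec: dict) -> pd.Series:
    # The data itself is only touched here, inside our own environment.
    return df.groupby(spec["group_by"])[spec["column"]].agg(spec["aggregate"])

df = pd.DataFrame({"site": ["A", "A", "B"], "age": [34, 40, 52]})
spec = plan_analysis("average age per site", list(df.columns))
print(run_locally(df, spec))
```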

Prompt Plot Demo

Few-shot prompting is used to enhance performance: we show the model a few examples so that it emulates the organization's writing style and adjusts the narrative style accordingly; a sketch follows.
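Below is a hedged sketch of this few-shot styling idea; the example pairs are invented for illustration.

```python
# Sketch of the few-shot tactic: show the model a couple of examples written in
# the organization's own style so its output matches that narrative style.
# The example pairs below are invented for illustration.

FEW_SHOT_EXAMPLES = [
    ("raw finding: 3 subjects missed visit 4 at site 012",
     "Site 012 reported three missed Visit 4 assessments; follow-up is scheduled."),
    ("raw finding: SAE reported for subject 1104, resolved",
     "One serious adverse event was reported for Subject 1104 and has resolved."),
]

def build_style_prompt(raw_finding: str) -> str:
    parts = ["Rewrite the finding in our standard reporting style.\n"]
    for raw, styled in FEW_SHOT_EXAMPLES:
        parts.append(f"Input: {raw}\nOutput: {styled}\n")
    parts.append(f"Input: {raw_finding}\nOutput:")
    return "\n".join(parts)

print(build_style_prompt("raw finding: enrollment at site 007 ahead of plan"))
```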

5.6 Construct

In Construct, end users care most about the speed and quality of database construction. For the clinical data management (DM) process of generating Mock CRFs, we provide a more intelligent workflow that generates Mock CRFs automatically.

Overview of NER Demo
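A minimal sketch of how an NER-style extraction step could feed Mock CRF field generation is shown below; the extract_entities stub and entity list are illustrative, not the production pipeline.

```python
# Sketch of feeding Mock CRF generation from NER-style extraction: the model
# (or an NER component) extracts assessment entities from protocol text, and
# each entity becomes a candidate CRF field for a human to confirm.
# The extract_entities stub and its output are illustrative only.

def extract_entities(protocol_text: str) -> list[dict]:
    """Placeholder for the NER / LLM extraction call."""
    return [
        {"text": "systolic blood pressure", "type": "VitalSign", "unit": "mmHg"},
        {"text": "hemoglobin", "type": "LabTest", "unit": "g/dL"},
    ]

def entities_to_crf_fields(entities: list[dict]) -> list[dict]:
    fields = []
    for ent in entities:
        fields.append({
            "label": ent["text"].title(),
            "data_type": "number",
            "unit": ent.get("unit", ""),
            "source": ent["type"],     # keep provenance for reviewer confirmation
        })
    return fields

protocol = "Vital signs including systolic blood pressure; labs include hemoglobin."
for field in entities_to_crf_fields(extract_entities(protocol)):
    print(field)
```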

A custom function generator is offered to reduce coding barriers; a small sketch follows.
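The sketch below illustrates the idea: the model drafts a small edit-check function from a plain-language rule, and a human reviews it before activation. The generated code is hard-coded here for illustration.

```python
# Sketch of a custom function generator: the model drafts a small edit-check
# function from a plain-language rule, and a human reviews it before it is
# saved. The "generated" code below is hard-coded for illustration.

def generate_function(rule_description: str) -> str:
    """Placeholder for the model call that drafts code from the rule."""
    return (
        "def check(record):\n"
        "    # Flag systolic blood pressure outside a plausible range\n"
        "    return 60 <= record['sbp'] <= 250\n"
    )

draft = generate_function("Flag implausible systolic blood pressure values")
print(draft)                 # shown to the user for review before activation

namespace = {}
exec(draft, namespace)       # in production this would run in a sandbox
print(namespace["check"]({"sbp": 300}))   # -> False, record gets flagged
```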

06
How We Evaluate Our AI Experience

We draw inspiration from the following guidelines to create a seamless user experience.

These guidelines describe how AI systems should behave during interaction; we use them to guide our AI product planning.

For applications using natural language processing, identify common failures so you can plan for mitigating them.

07
What I Think Is the AI Future
7.1 AI-Native Applications & Agents

1. AI Native refers to products with AI embedded into their core. In other words, if AI wasn’t a part of the product, the product wouldn’t exist.

2. AI-based refers to existing products that implement AI to offer new features to users. It’s basically an add-on.

7.2 Multi-Modal

Multi-Modal Approach: Today, the only means of communicating with a large model is through prompts. For internal corporate data, prompts serve as a temporary communication method, but the goal is ultimately to reduce the cost of human interaction through UI and multi-modal interaction methods. We anticipate that one day we will no longer rely on a prompt-based approach to communicate with large models.
