Delayed Knowledge in Neurocritical Care
Last month, a new Forbes Advisor report found that 65% of consumers plan to use ChatGPT instead of traditional search engines when looking for information or answers, yet over 75% are concerned about misinformation from AI. ChatGPT and other large language models are not search engines. Their knowledge is limited to a fixed training data set that is only a subset of what is found online and in print. There are also known biases in how they present that knowledge, such as when answering politically controversial questions.
Data set curation is the most important step in model development. Who is in charge of curating these data? How do we know when someone has manipulated a data set? What biases are present in these sources? Even if information is included in a model, how do we control how it is found? Modern search engines allow developers and website designers to influence how information is found online through Search Engine Optimization. Will OpenAI and other companies create “AI Search Optimization”? These are questions we need to ask before adopting AI models for everyday use.
The most pressing issue with these statistics, however, is that these models capture only a snapshot of knowledge, intermittently updated when a new version is released. This means that more than half of consumers are planning to rely on corporations to decide when knowledge is updated for consumer use. Those decisions will be driven by the goals of the company, whether profit, data harvesting, or something else; we can be sure those goals will not be transparency and openness. Moreover, knowledge changes in real time and needs to be accessed in real time. The adoption of ChatGPT-style models, at least in their current design, creates a delay in knowledge dissemination.
Leading companies are aware of delayed knowledge and are working toward solutions. In 2022, OpenAI explored WebGPT, a model that allows browser-assisted search. Unfortunately, that model has not made its way to consumer use. Even more unfortunate is the fact that major updates to GPT models come at a premium. In March 2023, OpenAI released GPT-4 with many improvements over ChatGPT; however, it is accessible only to those willing to subscribe for $20/month. If there is a solution to these problems, it will not be free.
Delayed knowledge is not a new phenomenon. Although we may not want to admit it, we know it quite well in neurocritical care. Updating guidelines currently requires significant resources to capture new knowledge and then release it to the public. Just as with training an AI model, there are significant barriers to gathering all information in one place.
In our field, we rely on consensus conferences, not data sets, to do this. The most recent was the Seattle International Severe Traumatic Brain Injury Consensus Conference (SIBICC), held in April 2019 in Seattle, Washington. We do not have generally accessible large language models for retrieving this information the way we do with ChatGPT, but we do have publications, webinars, and advocates in the field who are working toward adoption of the updated guidelines. Despite these differences, there is one critical similarity: updating knowledge is a slow process.
By carefully watching AI companies, we can learn from their successes and stay conscious of their failures. In the meantime, we should work toward creating adaptable guidelines that require less activation energy to translate into routine use. Otherwise, the stagnant adoption of infrequently updated guidelines reinforces an out-of-date snapshot of the field, withholding revised information that would improve care.