A new white paper from Langer Research Associates details the results of a pilot test using artificial intelligence to code open-ended survey responses into quantitative categories. Our aim was to assess whether AI could simplify the time-consuming task of coding open-ends without sacrificing data quality.
Compared with human coding, the AI approach produced poor category creation, misclassification, and an inability to detect nuance or valence. While our investigation was limited in scope, the results suggest the need for caution in using AI for open-end coding when data quality is a priority.
Further advances and additional testing may produce better results. We will continue to monitor and test this and other potential applications of AI in our research practice.
