Track: VoiceTech
Building Multiple Natural Language Processing Models to Work In Concert Together
1.5 billion messages are sent in Slack every week. At Zoom's peak, 300 million participants joined meetings on its platform daily. Facebook hosts 260 million conversations on any given day. The amount of information exchanged on platforms like Facebook, TikTok, and ChatGPT is almost incomprehensible. These conversations are transforming social networks into conversation-data brokers used to identify trends, associations, and changes in the world. To collect this data, we must first build Natural Language Processing (NLP) models that break down these conversations and classify what is being said in order to understand its context. This session focuses on creating and collecting datasets, using those datasets to train machine learning models, and strategies for leveraging multiple machine learning models for data mining. We will cover how to obtain and process conversation data from multiple audio and video input sources, and how to use the NLP models created in this session to extract information or metadata (e.g., sentence classification, entity recognition). The session includes live demos, and code and resources are provided for everything covered.
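The pipeline the abstract describes, several NLP models working in concert over a conversation transcript, can be sketched roughly as below. This is a toy illustration, not the session's actual code: the keyword-based classifier and the regex entity extractor are stand-ins for real trained models (e.g., a transformer sentence classifier and an NER tagger), and all function names here are invented for the example.

```python
import re

def classify_sentence(sentence: str) -> str:
    """Stand-in sentence classifier: labels a sentence as a question,
    action item, or statement using simple surface cues."""
    if sentence.rstrip().endswith("?"):
        return "question"
    if re.search(r"\b(will|should|need to|let's)\b", sentence, re.IGNORECASE):
        return "action_item"
    return "statement"

def extract_entities(sentence: str) -> list[str]:
    """Stand-in entity recognizer: treats capitalized tokens as candidate
    named entities (a real system would use a trained NER model, and this
    naive rule will also catch sentence-initial words)."""
    return re.findall(r"\b[A-Z][a-z]+\b", sentence)

def process_transcript(transcript: str) -> list[dict]:
    """Run both models over each sentence and merge their outputs into the
    kind of per-sentence metadata the talk mines from conversations."""
    results = []
    for sentence in re.split(r"(?<=[.?!])\s+", transcript.strip()):
        if not sentence:
            continue
        results.append({
            "sentence": sentence,
            "label": classify_sentence(sentence),
            "entities": extract_entities(sentence),
        })
    return results

if __name__ == "__main__":
    meta = process_transcript("Should Alice send the report? Bob will review it Friday.")
    for item in meta:
        print(item)
```

The key design point is that each model contributes one layer of metadata and a coordinating step merges them, so individual models can be swapped for stronger ones (or fed from an audio/video transcription source) without changing the overall pipeline.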
|
Presentation Video |
Presentation Notes |
vonThenen-Building_Multiple_Natural_Language_Processing_Models_to_Work_In_Concert_Together.pdf |