Disaster tweet text and image analysis using deep learning approaches

dc.contributor.author: Li, Xukun
dc.date.accessioned: 2020-08-12T21:22:33Z
dc.date.available: 2020-08-12T21:22:33Z
dc.date.graduationmonth: August
dc.date.issued: 2020-08-01
dc.description.abstract: Fast analysis of damage information after a disaster can inform responders and aid agencies, accelerate real-time response, and guide the allocation of resources. Once damage information has been collected by a response center, rescue resources can be assigned more efficiently according to the needs of different areas. The challenge of information collection is that traditional communication lines can be damaged or unavailable at the beginning of a disaster. With the fast growth of social media platforms, situational awareness and damage data can instead be collected from the information that affected people post during a disaster. Analyzing social media disaster data poses several challenges. One challenge comes from the nature of the data itself, which is generally large but noisy, and comes in various formats, such as text or images. This challenge can be addressed with deep learning approaches, which have achieved good performance on image processing and natural language processing tasks. Another challenge is that disaster-related social media data needs to be analyzed in real time because of the urgent need for damage and situational awareness information, yet it is not feasible to train supervised classifiers at the beginning of a disaster given the lack of labeled data from the disaster of interest. Domain adaptation can address this challenge: a model learned from pre-labeled data of a prior source disaster is adapted to the current, ongoing disaster. In this dissertation, I propose deep learning approaches to analyze disaster-related tweet text and image data. First, domain adaptation approaches are proposed to identify informative text and informative images, respectively. Second, a multimodal approach is proposed to further improve performance by utilizing information from both images and text. Third, an approach for localizing and qualifying damage in the informative images is proposed. Experimental results show that the proposed approaches can efficiently identify informative tweets and images, and that the type and area of damage can be localized effectively.
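
As an illustration of the multimodal idea described in the abstract, the sketch below shows one way a late-fusion classifier could combine text and image features to label a tweet as informative or not informative. It is a minimal sketch in PyTorch, not the dissertation's actual architecture: the class name MultimodalTweetClassifier, the LSTM text encoder, the use of pre-extracted CNN image features, and all dimensions are assumptions made for illustration.

# Minimal late-fusion sketch (illustrative only): encode text and image
# separately, concatenate the feature vectors, and classify.
import torch
import torch.nn as nn


class MultimodalTweetClassifier(nn.Module):
    def __init__(self, vocab_size=20000, text_dim=128, image_dim=512, num_classes=2):
        super().__init__()
        # Text branch: embedding + bidirectional LSTM (a common choice;
        # a CNN or transformer encoder could be used instead).
        self.embedding = nn.Embedding(vocab_size, text_dim, padding_idx=0)
        self.lstm = nn.LSTM(text_dim, text_dim, batch_first=True, bidirectional=True)
        # Image branch: assumes pre-extracted CNN features (e.g., from a
        # pretrained backbone) of size image_dim.
        self.image_fc = nn.Linear(image_dim, text_dim)
        # Fusion head over concatenated text (2*text_dim) and image (text_dim) features.
        self.classifier = nn.Sequential(
            nn.Linear(2 * text_dim + text_dim, 128),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(128, num_classes),
        )

    def forward(self, token_ids, image_features):
        # token_ids: (batch, seq_len) integer tokens; image_features: (batch, image_dim)
        embedded = self.embedding(token_ids)
        _, (hidden, _) = self.lstm(embedded)
        text_feat = torch.cat([hidden[0], hidden[1]], dim=1)   # (batch, 2*text_dim)
        img_feat = torch.relu(self.image_fc(image_features))   # (batch, text_dim)
        fused = torch.cat([text_feat, img_feat], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    model = MultimodalTweetClassifier()
    tokens = torch.randint(1, 20000, (4, 30))   # 4 tweets, 30 tokens each
    images = torch.randn(4, 512)                # 4 pre-extracted image feature vectors
    logits = model(tokens, images)              # (4, 2): informative vs. not informative
    print(logits.shape)

Late fusion (concatenating separately encoded text and image features) is only one way to combine modalities; attention-based or early-fusion designs would fit behind the same interface.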
dc.description.advisor: Doina Caragea
dc.description.degree: Doctor of Philosophy
dc.description.department: Department of Computer Science
dc.description.level: Doctoral
dc.identifier.uri: https://hdl.handle.net/2097/40815
dc.language.iso: en_US
dc.publisher: Kansas State University
dc.rights: © the author. This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Deep Learning
dc.subject: Domain Adaptation
dc.subject: Tweet Classification
dc.subject: Disaster Data Analysis
dc.subject: Damage Localization
dc.title: Disaster tweet text and image analysis using deep learning approaches
dc.type: Dissertation

Files

Original bundle

Name: XukunLi2020.pdf
Size: 33.27 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.62 KB
Format: Item-specific license agreed upon to submission