TY - GEN
T1 - Understanding and discovering deliberate self-harm content in social media
AU - Wang, Yilin
AU - Tang, Jiliang
AU - Li, Jundong
AU - Li, Baoxin
AU - Wan, Yali
AU - Mellina, Clayton
AU - O’Hare, Neil
AU - Chang, Yi
N1 - Funding Information: Yilin Wang and Baoxin Li were supported in part by an ARO grant (#W911NF1410371) and an ONR grant (#N00014-15-1-2722). Any opinions expressed in this material are those of the authors and do not necessarily reflect the views of ARO or ONR. Publisher Copyright: © 2017 International World Wide Web Conference Committee (IW3C2).
PY - 2017
Y1 - 2017
N2 - Studies suggest that self-harm users find it easier to discuss self-harm-related thoughts and behaviors on social media than in the physical world. Given the enormous and increasing volume of social media data, online self-harm content is likely to be buried rapidly by other normal content. To enable the voices of self-harm users to be heard, it is important to distinguish self-harm content from other types of content. In this paper, we aim to understand self-harm content and provide automatic approaches to its detection. We first perform a comprehensive analysis of self-harm social media using different input cues. Our analysis, the first of its kind at large scale, reveals a number of important findings. We then propose frameworks that incorporate these findings to discover self-harm content in both supervised and unsupervised settings. Our experimental results on a large social media dataset from Flickr demonstrate the effectiveness of the proposed frameworks and the importance of our findings in discovering self-harm content.
AB - Studies suggest that self-harm users find it easier to discuss self-harm-related thoughts and behaviors on social media than in the physical world. Given the enormous and increasing volume of social media data, online self-harm content is likely to be buried rapidly by other normal content. To enable the voices of self-harm users to be heard, it is important to distinguish self-harm content from other types of content. In this paper, we aim to understand self-harm content and provide automatic approaches to its detection. We first perform a comprehensive analysis of self-harm social media using different input cues. Our analysis, the first of its kind at large scale, reveals a number of important findings. We then propose frameworks that incorporate these findings to discover self-harm content in both supervised and unsupervised settings. Our experimental results on a large social media dataset from Flickr demonstrate the effectiveness of the proposed frameworks and the importance of our findings in discovering self-harm content.
KW - Computational health
KW - Mental health
KW - Multimodal data mining
KW - Social media mining
KW - User modeling
UR - http://www.scopus.com/inward/record.url?scp=85034738355&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85034738355&partnerID=8YFLogxK
U2 - 10.1145/3038912.3052555
DO - 10.1145/3038912.3052555
M3 - Conference contribution
SN - 9781450349130
T3 - 26th International World Wide Web Conference, WWW 2017
SP - 93
EP - 102
BT - 26th International World Wide Web Conference, WWW 2017
PB - International World Wide Web Conferences Steering Committee
T2 - 26th International World Wide Web Conference, WWW 2017
Y2 - 3 April 2017 through 7 April 2017
ER -