dc.contributor.author: Singh, Mukhbir
dc.date.accessioned: 2016-03-28T19:08:32Z
dc.date.available: 2016-03-28T19:08:32Z
dc.date.issued: 2005
dc.identifier.isbn: 9780542117732
dc.identifier.isbn: 0542117738
dc.identifier.other: 305383647
dc.identifier.uri: http://hdl.handle.net/10477/45138
dc.description.abstract: With the advent of computers in most phases of communication research, there has been a steady move toward using software programs of varying sophistication for content analysis. This study examines issues related to the validity of using such tools for content analysis research and tests one such program, CATPAC, against its functionality claims. The software was tested on different kinds of texts, and its functional ability was compared with that of humans. The results, analyzed and presented here, are inconsistent: the program appears to work in some cases and not in others. This raises questions about the validity of this research method, specifically how to know when it works and when it does not.
dc.language: English
dc.source: Dissertations & Theses @ SUNY Buffalo, ProQuest Dissertations & Theses Global
dc.subject: Communication and the arts
dc.subject: Validating
dc.subject: Computer
dc.subject: Content analysis
dc.subject: CATPAC
dc.subject: Software
dc.title: Validating computer content analysis: A case study of CATPAC
dc.type: Dissertation/Thesis

