An empirical approach to modeling uncertainty in intrusion analysis

dc.contributor.author: Sakthivelmurugan, Sakthiyuvaraja
dc.date.accessioned: 2009-12-18T19:59:13Z
dc.date.available: 2009-12-18T19:59:13Z
dc.date.graduationmonth: December
dc.date.issued: 2009-12-18T19:59:13Z
dc.date.published: 2009
dc.description.abstract: A well-known problem with current intrusion detection tools is that they generate too many low-level alerts, and system administrators find it hard to cope with the volume. The complexity increases dramatically when multiple sources of information must be combined to confirm an attack. Attackers use sophisticated techniques to evade detection, and current system monitoring tools can observe only the symptoms or effects of malicious activities. When these are mingled with similar effects from normal or non-malicious behavior, intrusion analysis yields conclusions of varying confidence and high false-positive/negative rates. In this thesis we present an empirical approach to the problem of modeling uncertainty, in which the inferred security implications of low-level observations are captured in a simple logical language augmented with uncertainty tags. We have designed an automated reasoning process that combines multiple sources of system monitoring data and extracts highly confident attack traces from the numerous possible interpretations of low-level observations. We developed our model empirically: the starting point was a true intrusion that happened on a campus network, which we studied to capture the essence of the human reasoning process that led to conclusions about the attack. We then used a Datalog-like language to encode the model and a Prolog system to carry out the reasoning process. Our model and reasoning system reached the same conclusions as the human administrator on the question of which machines were certainly compromised. We then automatically generated the reasoning model needed for handling Snort alerts from the natural-language descriptions in the Snort rule repository, and developed a Snort add-on to analyze Snort alerts. Keeping the reasoning model unchanged, we applied our reasoning system to two third-party data sets and one production network. Our results show that the reasoning model is effective on these data sets as well. We believe such an empirical approach has the potential to codify the seemingly ad hoc human reasoning about uncertain events, and can yield useful tools for automated intrusion analysis.
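The uncertainty-tagged reasoning the abstract describes can be sketched roughly as follows. This is a minimal illustrative Python sketch, not the thesis's actual Datalog/Prolog encoding: the confidence levels, the weakest-link propagation rule, the corroboration-based strengthening rule, and all fact names are assumptions made for illustration.

```python
# Sketch of uncertainty-tagged inference, loosely modeled on the approach
# described in the abstract.  All predicates, levels, and rules here are
# hypothetical; the thesis encodes its model in a Datalog-like language
# and runs it under a Prolog system.

LEVELS = {"possible": 0, "likely": 1, "certain": 2}

def infer(observations, rules):
    """Forward-chain one step: each rule maps an observation to an internal
    condition, tagged with the *weaker* of the observation's and the rule's
    confidence levels.  Keeps the strongest tag derived per conclusion."""
    derived = {}
    for obs, obs_level in observations:
        for premise, concl, rule_level in rules:
            if premise == obs:
                level = min(obs_level, rule_level, key=LEVELS.get)
                if concl not in derived or LEVELS[level] > LEVELS[derived[concl]]:
                    derived[concl] = level
    return derived

def strengthen(derived, corroborations):
    """Assumed proof-strengthening rule: if at least two independent
    'likely' derivations support the same condition, promote it to
    'certain'."""
    out = dict(derived)
    for concl, count in corroborations.items():
        if out.get(concl) == "likely" and count >= 2:
            out[concl] = "certain"
    return out

# Hypothetical low-level observations (e.g. an IDS alert and a traffic anomaly):
observations = [
    ("snort_alert(host1, shellcode)", "likely"),
    ("anomalous_traffic(host1)", "likely"),
]
rules = [
    ("snort_alert(host1, shellcode)", "compromised(host1)", "likely"),
    ("anomalous_traffic(host1)", "compromised(host1)", "likely"),
]

derived = infer(observations, rules)
# Two independent "likely" derivations corroborate the same conclusion:
result = strengthen(derived, {"compromised(host1)": 2})
print(result)  # {'compromised(host1)': 'certain'}
```

The design choice sketched here is the one the abstract implies: individual observations rarely justify a confident conclusion on their own, but independent corroborating derivations from different monitoring sources can jointly raise confidence to "certain".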
dc.description.advisor: Xinming (Simon) Ou
dc.description.degree: Master of Science
dc.description.department: Department of Computing and Information Sciences
dc.description.level: Masters
dc.description.sponsorship: U.S. National Science Foundation under Grant No. 0716665. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.
dc.identifier.uri: http://hdl.handle.net/2097/2337
dc.language.iso: en_US
dc.publisher: Kansas State University
dc.subject: Intrusion Detection
dc.subject: Uncertainty
dc.subject: Logic
dc.subject.umi: Computer Science (0984)
dc.title: An empirical approach to modeling uncertainty in intrusion analysis
dc.type: Thesis

Files

Original bundle
Name: SakthiyuvarajaSakthivelmurugan2009.pdf
Size: 593.59 KB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.71 KB
Description: Item-specific license agreed to upon submission