Nessus Parser v20a
November 14th, 2013
I fired my QA department… wait, that is me… ok… here is a good version of the Nessus Parser v20.
Give yourself a raise! I can now parse all of my 5.2.4 v2 files. Keep up the good work!
Another great release. Thanks, mate.
Just downloaded and ran the script. This is my first time using Perl and Nessus, but I noticed a few things that may require attention:
1) The ‘FQDN’ column on the ‘ScanInfo’ tab seems to be populating with the IP Address rather than the FQDNs. FQDN columns on other tabs appear to contain the correct info.
2) The ‘User Account Summary’ table on the ‘Summary Report Data’ tab does not contain any info for me.
These may be user error, but I thought I’d leave a comment just to check. Thanks!
Great stuff, thanks!
Hey Cody,
Great parser! I really like the output format it provides. I have some old .nessus v1 files that I converted to .nessus v2 using the web GUI. When I try to parse them, however, I get an error that says “Can’t use string (“”) as a HASH ref while “strict refs” in use at parse_nessus_xml.v20.pl line 812.”
Any way to work around this? I know the v1 files don’t have as much data as the v2 files, and it looks like the conversion to v2 isn’t exactly populating the “missing” data in a way that can be parsed.
Thanks much!
I think we addressed this via email; if we did not, post back here and I will see if I can help.
It is working as designed; I have a few tests that try to pick out the best data to put in that column.
I have to say that I really like how you put this together. It runs great on the smaller .nessus files that I have. One question, though: when I try to run it on a .nessus file that is quite large (~250 MiB), it throws an error about running out of memory. Do you know what this might be from, or a way to fix it?
I suspect you are running this on Windows; that is the only OS where I have really seen this issue. I have run this on OS X and several Linux versions with over 1 GB of data, and while it took a while to parse, it still completed fine.
Thanks for a very helpful tool! I saw this from the SANS post. Pretty cool!
Hello Bro
I got an error when I run the script. It says something like this:
Can’t locate XML/TreePP.pm in @INC …..
Can I know what is wrong? Thanks.
This is caused by the Perl modules not being installed, or not being installed in a location where Perl is looking for them. I would load up CPAN and reinstall the modules.
In this post https://secure.bluehost.com/~melcarac/archives/161, I list the modules that you need.
Also, you can open the script with a text editor and you will see the “use foo::bar;” lines; these are the modules you need to install.
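For anyone hitting the same “Can’t locate … in @INC” error, a quick way to see which modules are missing before reinstalling everything is a short check script like the one below. This is only a sketch; the module list is an assumption (only XML::TreePP is confirmed by the error above), so copy the real list from the “use” lines in your copy of the parser.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Assumed module list -- replace it with the modules named in the
# "use" lines of parse_nessus_xml.v20.pl; only XML::TreePP is confirmed
# by the error message above.
my @modules = ( 'XML::TreePP', 'Excel::Writer::XLSX' );

for my $mod (@modules) {
    if ( eval "require $mod; 1" ) {
        print "found    $mod\n";
    }
    else {
        print "MISSING  $mod  (install it from CPAN)\n";
    }
}
```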
Thanks, mate. The solution worked like a charm, and your script is the best parser I have seen 🙂 Keep up the good work, mate! 🙂
Love love love this tool. It answers everyone’s questions, and has enough pretty pics to keep the management types happy.
However, there is one issue with the overall summary function that makes management grumpy. It seems that the uniqueness calculation is coupled to the individual Nessus files. For instance, we produce our Nessus results by subnet, and therefore can produce upwards of 14 files. When uniqueness was calculated for our last run, it tallied 71 unique critical findings for the 7 files we had; i.e., uniqueness was computed per file, and the per-file unique counts were summed. When I filtered the results on plugin ID alone, I calculated only 19 unique critical findings.
Feature request: please add the plugin publication date. That gives an indication of how far behind we are.
Thanks for the fabulous work!
Thanks for the kind words; hopefully you’ll like what is coming next :)
You are correct, this is calculated per file; as a consultant I would have several different scans and want to be able to sort them differently. But what you can do is create a pivot table, and it will give you the data you are looking for (a rough sketch of the same calculation follows below). I can’t put pivot tables into the script, otherwise I would have done that page in that manner.
As far as the feature request to add the publication date… consider that done :)
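If a pivot table is not an option, the number the commenter worked out by hand (critical findings unique by plugin ID across every file, rather than per file) can be approximated with a few lines of Perl outside the parser. This is only a sketch under some assumptions: the files are .nessus v2, the “scans/*.nessus” glob is a placeholder path, and severity 4 is treated as critical.

```perl
#!/usr/bin/perl
# Sketch: count critical findings that are unique by plugin ID across
# several .nessus v2 files ("scans/*.nessus" is a placeholder path).
use strict;
use warnings;
use XML::TreePP;

my $tpp = XML::TreePP->new( force_array => [qw( ReportHost ReportItem )] );
my %unique;

for my $file ( glob 'scans/*.nessus' ) {
    my $tree  = $tpp->parsefile($file);
    my $hosts = $tree->{NessusClientData_v2}{Report}{ReportHost} || [];
    for my $host (@$hosts) {
        for my $item ( @{ $host->{ReportItem} || [] } ) {
            # XML::TreePP prefixes attributes with "-"; severity 4 is critical.
            $unique{ $item->{'-pluginID'} }++ if $item->{'-severity'} == 4;
        }
    }
}

printf "%d unique critical plugin IDs across all files\n", scalar keys %unique;
```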
Thanks for your effort and the great tool you gave the world.
Can’t wait for the new version! Can you give us a date?
Hi, is there any update to this script? I got “Unmatched [ in regex; marked by <-- HERE in m/Go[ <-- HERE ./ at ../parse_nessus_xml.v20.pl line 1411” while parsing 2 Active Directory servers.
Hello!
Can someone make an .exe for Windows?
Like the last comment at https://secure.bluehost.com/~melcarac/archives/161.
I’m receiving the same message on a Windows 7 64-bit system with 8 GB of memory. Any chance you know the culprit? I plan on running the script in a Linux distro, as I’m on a deadline.
Hi!
First of all, my name is Kevin and I have worked with Nessus since 2010… I’m gonna tell you, this is the best parser I’ve ever seen. Thanks so much!
I have one question… is there a way to include the IP address on the Critical, High, Medium, Low, and Information sheets?
Thanks again!
Hi Cody,
Thanks for doing all this work. I finally got the script to run, but got the following message:
Creating Spreadsheet Data
Preparing Hosts Data
Finished Parsing XML Data
Create General Vulnerability Data
Creating Policy Compliance Data
Creating Nessus Report Spreadsheet
Can’t call method “add_worksheet” on an undefined value at C:\perl_tests\parse_nessus_xml.v20.pl line 1443
Thanks!
I have seen this when the user names contain a “.” or something else that is part of regex syntax. I don’t have a fix for it yet.
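For reference, this is the same failure mode as the “Unmatched [ in regex … m/Go[ ./” error reported earlier in the thread: a value taken from the scan data contains regex metacharacters and is interpolated straight into a pattern. Below is a minimal illustration of the problem and of the \Q…\E (quotemeta) escaping that avoids it; the variable names and sample strings are made up for the example.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A value pulled from scan data that happens to contain regex metacharacters,
# like the "Go[." fragment quoted earlier in the thread.
my $value = 'Go[.';
my $line  = 'account name: Go[. (local)';

# Interpolating the raw value -- m/$value/ -- dies with "Unmatched [ in regex".
# Wrapping it in \Q...\E (quotemeta) matches the value literally instead.
if ( $line =~ /\Q$value\E/ ) {
    print "matched the value as a literal string\n";
}
```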
Contact the author of this blog – http://www.rmccurdy.com
I have seen this happen several times when using Windows; try Linux or OS X.
This is added in v0.21, but is on a separate tab.
Try the new version and let me know if you still have the issue.
I’m on v0.21 and still getting the same problem.
Create General Vulnerability Data
Creating Policy Compliance Data
Creating Nessus Report Spreadsheet
Can’t call method “add_worksheet” on an undefined value at parse_nessus_xml.v21.pl line 1528.
Hi Cody, My team and I use your parser dozens of times each month. This is a wonderful resource to the community.
I’m gathering data from the Critical/High/Medium/Low/Info tabs back to the host_scan_data tab with Excel formulas; specifically, I’m pulling back the patch publication date. On the Low worksheet there are no dates in columns W–Z. In the case of the ‘Low’ worksheet, it looks like the date data may be in the CVSS columns, Solution and Synopsis may be in the Metasploit columns, and data for a few other columns may be in unexpected places on this worksheet.
In the case of the ‘Information’ worksheet, some of the dates in columns W–Z are bumped into adjoining columns, apparently based upon commas present in the ‘Synopsis’ text (e.g. plugin IDs 20301, 29217, 31422, 57041, 70544). I also noted dates being nudged over on the Medium sheet (plugin ID 73922), but I did not see this on the Critical or High worksheets.
Are these easy fixes for a future release? Thank you!
I will take a look at this.