OVAL Interpreter -- Benchmark Creation and Usage


OVAL Interpreter -- Benchmark Creation and Usage

John Garrett

Greetings all,

It would seem that there are different “interpretations” of what certain OVAL “attributes” should be capable of doing. I could be way off on this, but if my understanding of previous conversations is correct, then posting this and comparing the output from other interpreters is a step in the right direction.

The other benefit is that anyone who wants to hop on the interpreter-creation bandwagon can, if this concept catches on, simply test their interpreter against known-good content and instructions. If things work out right, they will see that their interpreter either IS or IS NOT behaving in the “community approved” manner. Likewise, current interpreter developers can run this kind of content against each new release they make. Call it a “test your tester” compilation.

First up, I would like to start with symbolic links and how the different interpreters handle them. Since symbolic links gave rise to this whole concept, it is only fitting to begin with them.
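
To make concrete the kind of construct I am talking about, here is a minimal sketch of an OVAL unix file_object whose behaviors element exercises symlink handling. The IDs, paths, and comment below are placeholders for illustration, not taken from the attached content; the recurse attribute is the sort of thing I have seen interpreters treat differently:

  <unix-def:file_object xmlns:unix-def="http://oval.mitre.org/XMLSchema/oval-definitions-5#unix"
      id="oval:example.symlink:obj:1" version="1" comment="placeholder object for illustration">
    <!-- recurse controls whether directories, symlinks, or both are followed
         when walking the file system; its handling is exactly the kind of
         behavior this benchmark is meant to pin down -->
    <unix-def:behaviors max_depth="-1" recurse="symlinks and directories"
        recurse_direction="down" recurse_file_system="all"/>
    <unix-def:path>/tmp/symlink_bench</unix-def:path>
    <unix-def:filename operation="pattern match">.*</unix-def:filename>
  </unix-def:file_object>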

 

Attached are 8 files:

- The Excel spreadsheet, showing which V-ID corresponds to which type of “benchmark”, the expected outcome, and the actual results
- The instructional text file; read it and apply what it says to your system prior to running the content
- The four XML files that make up a “SCAP” content piece (XCCDF, OVAL, CPE dictionary, and CPE OVAL); see the sketch after this list for how they tie together
- The HTML results file (easy to read)
- The OVAL Results file (this should be used for standardization across the board)
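
For anyone unfamiliar with how the four XML files hang together: the XCCDF benchmark references the OVAL file by name, and each Rule points at the OVAL definition that checks it. A stripped-down sketch of that linkage, with placeholder Rule and definition IDs rather than the actual IDs from the attached benchmark:

  <Rule id="xccdf_example_rule_symlink-recurse" selected="true">
    <title>Placeholder rule for illustration</title>
    <!-- href names the companion OVAL definitions file shipped alongside the
         XCCDF, and name selects the single definition that scores this Rule -->
    <check system="http://oval.mitre.org/XMLSchema/oval-definitions-5">
      <check-content-ref href="symlink_bench-oval.xml" name="oval:example.symlink:def:1"/>
    </check>
  </Rule>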

Hopefully the community will see value in what I have provided. I am definitely hoping to hear feedback on this. Thank you.

V/r,

John W. Garrett

IA Engineer, GRSi

2457 Aviation Ave., Suite 102

North Charleston, SC 29406

843-566-1340

 


Attachments:
- symlink_bench-cpe-dictionary.xml (736 bytes)
- symlink_bench-cpe-oval.xml (4K)
- symlink_bench-oval.xml (13K)
- symlink_bench-xccdf.xml (11K)
- Symlink_bench-Directions.txt (1K)
- Findings_Symlinks.xlsx (14K)
- LOCALHOST_All-Settings_symlink_bench.htm (42K)
- LOCALHOST_OVAL-Results_symlink_bench.xml (37K)

Re: OVAL Interpreter -- Benchmark Creation and Usage

David Ries
Hi John,

I wanted to make sure you were aware of NIST’s SCAP Validation Program. It’s designed to verify an interpreter’s compliance with the specification, and it provides a ton of test content in its Validation Test Suite Bundle: the most recent bundle has ~4 MB of OVAL tests mapped to specific configurations and expected results.

I think that, at a high level, the program addresses your “test your tester” concerns. Or, at least, it’s a good starting point.

Best,
David Ries


--
David E. Ries
Partner
Farnam Hall Ventures 

