So far, we have always generated system input, i.e., data that the program as a whole obtains via its input channels. However, we can also generate inputs that go directly into individual functions, gaining flexibility and speed in the process. In this chapter, we explore the use of grammars to synthesize code for function calls, which allows us to generate program code that invokes functions directly and very efficiently.
Let us start with our first problem: How do we fuzz a given function? For an interpreted language like Python, this is straightforward. All we need to do is to generate calls to the function(s) we want to test. This is something we can easily do with a grammar.
As an example, consider the urlparse() function from the Python library. urlparse() takes a URL and decomposes it into its individual components.
from urllib.parse import urlparse

urlparse('https://www.fuzzingbook.com/html/APIFuzzer.html')
ParseResult(scheme='https', netloc='www.fuzzingbook.com', path='/html/APIFuzzer.html', params='', query='', fragment='')
You see how the individual elements of the URL – the scheme ("https"), the network location ("www.fuzzingbook.com"), and the path ("/html/APIFuzzer.html") – are all properly identified. Other elements (like params, query, or fragment) are empty, because they were not part of our input.
To test urlparse(), we'd want to feed it a large set of different URLs. We can obtain these from the URL grammar we had defined in the "Grammars" chapter.
url_fuzzer = GrammarFuzzer(URL_GRAMMAR)
for i in range(10):
    url = url_fuzzer.fuzz()
    print(urlparse(url))
ParseResult(scheme='https', netloc='user:password@cispa.saarland:8080', path='/', params='', query='', fragment='')
ParseResult(scheme='http', netloc='cispa.saarland:1', path='/', params='', query='', fragment='')
ParseResult(scheme='https', netloc='fuzzingbook.com:7', path='', params='', query='', fragment='')
ParseResult(scheme='https', netloc='user:password@cispa.saarland:80', path='', params='', query='', fragment='')
ParseResult(scheme='ftps', netloc='user:password@fuzzingbook.com', path='', params='', query='', fragment='')
ParseResult(scheme='ftp', netloc='fuzzingbook.com', path='/abc', params='', query='abc=x31&def=x20', fragment='')
ParseResult(scheme='ftp', netloc='user:password@fuzzingbook.com', path='', params='', query='', fragment='')
ParseResult(scheme='https', netloc='www.google.com:80', path='/', params='', query='', fragment='')
ParseResult(scheme='http', netloc='fuzzingbook.com:52', path='/', params='', query='', fragment='')
ParseResult(scheme='ftps', netloc='user:password@cispa.saarland', path='', params='', query='', fragment='')
This way, we can easily test any Python function – by setting up a scaffold that runs it. How would we proceed, though, if we wanted to have a test that can be re-run again and again, without having to generate new calls every time?
The "scaffolding" method, as sketched above, has an important downside: It couples test generation and test execution into a single unit, disallowing running both at different times, or for different languages. To decouple the two, we take another approach: Rather than generating inputs and immediately feeding this input into a function, we synthesize code instead that invokes functions with a given input.
For instance, if we generate the string
call = "urlparse('http://www.cispa.de/')"
we can execute this string as a whole (and thus run the test) at any time:
eval(call)
ParseResult(scheme='http', netloc='www.cispa.de', path='/', params='', query='', fragment='')
To systematically generate such calls, we can again use a grammar:
URLPARSE_GRAMMAR: Grammar = {
"<call>":
['urlparse("<url>")']
}
# Import definitions from URL_GRAMMAR
URLPARSE_GRAMMAR.update(URL_GRAMMAR)
URLPARSE_GRAMMAR["<start>"] = ["<call>"]
assert is_valid_grammar(URLPARSE_GRAMMAR)
This grammar creates calls in the form urlparse(<url>), where <url> comes from the "imported" URL grammar. The idea is to create many of these calls and to feed them into the Python interpreter.
URLPARSE_GRAMMAR
{'<call>': ['urlparse("<url>")'], '<start>': ['<call>'], '<url>': ['<scheme>://<authority><path><query>'], '<scheme>': ['http', 'https', 'ftp', 'ftps'], '<authority>': ['<host>', '<host>:<port>', '<userinfo>@<host>', '<userinfo>@<host>:<port>'], '<host>': ['cispa.saarland', 'www.google.com', 'fuzzingbook.com'], '<port>': ['80', '8080', '<nat>'], '<nat>': ['<digit>', '<digit><digit>'], '<digit>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], '<userinfo>': ['user:password'], '<path>': ['', '/', '/<id>'], '<id>': ['abc', 'def', 'x<digit><digit>'], '<query>': ['', '?<params>'], '<params>': ['<param>', '<param>&<params>'], '<param>': ['<id>=<id>', '<id>=<nat>']}
We can now use this grammar for fuzzing and synthesizing calls to urlparse():
urlparse_fuzzer = GrammarFuzzer(URLPARSE_GRAMMAR)
urlparse_fuzzer.fuzz()
'urlparse("http://user:password@fuzzingbook.com:8080?abc=x29")'
Just as above, we can immediately execute these calls. To better see what is happening, we define a small helper function:
# Evaluate the given call string, printing the call and its result
def do_call(call_string):
    print(call_string)
    result = eval(call_string)
    print("\t= " + repr(result))
    return result
call = urlparse_fuzzer.fuzz()
do_call(call)
urlparse("http://www.google.com?abc=def") = ParseResult(scheme='http', netloc='www.google.com', path='', params='', query='abc=def', fragment='')
ParseResult(scheme='http', netloc='www.google.com', path='', params='', query='abc=def', fragment='')
If urlparse() were a C function, for instance, we could embed its call into some (also generated) C function:
URLPARSE_C_GRAMMAR: Grammar = {
"<cfile>": ["<cheader><cfunction>"],
"<cheader>": ['#include "urlparse.h"\n\n'],
"<cfunction>": ["void test() {\n<calls>}\n"],
"<calls>": ["<call>", "<calls><call>"],
"<call>": [' urlparse("<url>");\n']
}
URLPARSE_C_GRAMMAR.update(URL_GRAMMAR)
URLPARSE_C_GRAMMAR["<start>"] = ["<cfile>"]
assert is_valid_grammar(URLPARSE_C_GRAMMAR)
urlparse_fuzzer = GrammarFuzzer(URLPARSE_C_GRAMMAR)
print(urlparse_fuzzer.fuzz())
#include "urlparse.h" void test() { urlparse("http://user:password@cispa.saarland:99/x69?x57=abc"); }
In our urlparse() example, both the Python and the C variant only check for generic errors in urlparse(); that is, they only detect fatal errors and exceptions. For a full test, we also need to set up a specific oracle that checks whether the result is valid.
Our plan is to check whether specific parts of the URL reappear in the result – that is, if the scheme is http:, then the ParseResult returned should also contain a http: scheme. As discussed in the chapter on fuzzing with generators, equalities of strings such as http: across two symbols cannot be expressed in a context-free grammar. We can, however, use a generator function (also introduced in the chapter on fuzzing with generators) to automatically enforce such equalities.
Here is an example. Invoking geturl() on a urlparse() result should return the URL originally passed to urlparse().
URLPARSE_ORACLE_GRAMMAR: Grammar = extend_grammar(URLPARSE_GRAMMAR,
    {
        "<call>": [("assert urlparse('<url>').geturl() == '<url>'",
                    opts(post=lambda url_1, url_2: [None, url_1]))]
    })
urlparse_oracle_fuzzer = GeneratorGrammarFuzzer(URLPARSE_ORACLE_GRAMMAR)
test = urlparse_oracle_fuzzer.fuzz()
print(test)
assert urlparse('https://user:password@cispa.saarland/abc?abc=abc').geturl() == 'https://user:password@cispa.saarland/abc?abc=abc'
exec(test)
In a similar way, we can also check individual components of the result:
URLPARSE_ORACLE_GRAMMAR: Grammar = extend_grammar(URLPARSE_GRAMMAR,
    {
        "<call>": [("result = urlparse('<scheme>://<host><path>?<params>')\n"
                    # + "print(result)\n"
                    + "assert result.scheme == '<scheme>'\n"
                    + "assert result.netloc == '<host>'\n"
                    + "assert result.path == '<path>'\n"
                    + "assert result.query == '<params>'",
                    opts(post=lambda scheme_1, authority_1, path_1, params_1,
                         scheme_2, authority_2, path_2, params_2:
                         [None, None, None, None,
                          scheme_1, authority_1, path_1, params_1]))]
    })
# Get rid of unused symbols
del URLPARSE_ORACLE_GRAMMAR["<url>"]
del URLPARSE_ORACLE_GRAMMAR["<query>"]
del URLPARSE_ORACLE_GRAMMAR["<authority>"]
del URLPARSE_ORACLE_GRAMMAR["<userinfo>"]
del URLPARSE_ORACLE_GRAMMAR["<port>"]
urlparse_oracle_fuzzer = GeneratorGrammarFuzzer(URLPARSE_ORACLE_GRAMMAR)
test = urlparse_oracle_fuzzer.fuzz()
print(test)
result = urlparse('https://www.google.com/?def=18&abc=abc')
assert result.scheme == 'https'
assert result.netloc == 'www.google.com'
assert result.path == '/'
assert result.query == 'def=18&abc=abc'
exec(test)
The use of generator functions may feel a bit cumbersome. Indeed, if we stick to Python, we can also create a unit test that directly invokes the fuzzer to generate individual parts:
def fuzzed_url_element(symbol):
    return GrammarFuzzer(URLPARSE_GRAMMAR, start_symbol=symbol).fuzz()
scheme = fuzzed_url_element("<scheme>")
authority = fuzzed_url_element("<authority>")
path = fuzzed_url_element("<path>")
query = fuzzed_url_element("<params>")
url = "%s://%s%s?%s" % (scheme, authority, path, query)
result = urlparse(url)
# print(result)
assert result.geturl() == url
assert result.scheme == scheme
assert result.path == path
assert result.query == query
Using such a unit test makes it easier to express oracles. However, we lose the ability to systematically cover individual URL elements and alternatives as with GrammarCoverageFuzzer, as well as the ability to guide generation towards specific elements as with ProbabilisticGrammarFuzzer. Furthermore, a grammar allows us to generate tests for arbitrary programming languages and APIs.
For urlparse(), we have used a very specific grammar for creating a very specific argument. Many functions take basic data types as (some) arguments, though; we therefore define grammars that generate precisely those arguments. Even better, we can define functions that generate grammars tailored towards our specific needs, returning, for instance, values in a particular range.
We introduce a simple grammar to produce integers.
INT_EBNF_GRAMMAR: Grammar = {
"<start>": ["<int>"],
"<int>": ["<_int>"],
"<_int>": ["(-)?<leaddigit><digit>*", "0"],
"<leaddigit>": crange('1', '9'),
"<digit>": crange('0', '9')
}
assert is_valid_grammar(INT_EBNF_GRAMMAR)
INT_GRAMMAR = convert_ebnf_grammar(INT_EBNF_GRAMMAR)
INT_GRAMMAR
{'<start>': ['<int>'], '<int>': ['<_int>'], '<_int>': ['<symbol-1><leaddigit><digit-1>', '0'], '<leaddigit>': ['1', '2', '3', '4', '5', '6', '7', '8', '9'], '<digit>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], '<symbol>': ['-'], '<symbol-1>': ['', '<symbol>'], '<digit-1>': ['', '<digit><digit-1>']}
int_fuzzer = GrammarFuzzer(INT_GRAMMAR)
print([int_fuzzer.fuzz() for i in range(10)])
['699', '-44', '321', '-7', '-6', '67', '0', '0', '57', '0']
If we need integers in a specific range, we can add a generator function that does just that:
import random

def int_grammar_with_range(start, end):
    int_grammar = extend_grammar(INT_GRAMMAR)
    set_opts(int_grammar, "<int>", "<_int>",
             opts(pre=lambda: random.randint(start, end)))
    return int_grammar
int_fuzzer = GeneratorGrammarFuzzer(int_grammar_with_range(900, 1000))
[int_fuzzer.fuzz() for i in range(10)]
['942', '955', '997', '967', '939', '923', '984', '914', '991', '982']
The grammar for floating-point values closely resembles the integer grammar.
FLOAT_EBNF_GRAMMAR: Grammar = {
"<start>": ["<float>"],
"<float>": [("<_float>", opts(prob=0.9)), "inf", "NaN"],
"<_float>": ["<int>(.<digit>+)?<exp>?"],
"<exp>": ["e<int>"]
}
FLOAT_EBNF_GRAMMAR.update(INT_EBNF_GRAMMAR)
FLOAT_EBNF_GRAMMAR["<start>"] = ["<float>"]
assert is_valid_grammar(FLOAT_EBNF_GRAMMAR)
FLOAT_GRAMMAR = convert_ebnf_grammar(FLOAT_EBNF_GRAMMAR)
FLOAT_GRAMMAR
{'<start>': ['<float>'], '<float>': [('<_float>', {'prob': 0.9}), 'inf', 'NaN'], '<_float>': ['<int><symbol-2><exp-1>'], '<exp>': ['e<int>'], '<int>': ['<_int>'], '<_int>': ['<symbol-1-1><leaddigit><digit-1>', '0'], '<leaddigit>': ['1', '2', '3', '4', '5', '6', '7', '8', '9'], '<digit>': ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], '<symbol>': ['.<digit-2>'], '<symbol-1>': ['-'], '<symbol-2>': ['', '<symbol>'], '<exp-1>': ['', '<exp>'], '<symbol-1-1>': ['', '<symbol-1>'], '<digit-1>': ['', '<digit><digit-1>'], '<digit-2>': ['<digit>', '<digit><digit-2>']}
float_fuzzer = ProbabilisticGrammarFuzzer(FLOAT_GRAMMAR)
print([float_fuzzer.fuzz() for i in range(10)])
['0', '-4e0', '-3.3', '0.55e0', '0e2', '0.2', '-48.6e0', '0.216', '-4.844', '-6.100']
def float_grammar_with_range(start, end):
    float_grammar = extend_grammar(FLOAT_GRAMMAR)
    set_opts(float_grammar, "<float>", "<_float>", opts(
        pre=lambda: start + random.random() * (end - start)))
    return float_grammar
float_fuzzer = ProbabilisticGeneratorGrammarFuzzer(
float_grammar_with_range(900.0, 900.9))
[float_fuzzer.fuzz() for i in range(10)]
['900.1695968039919', '900.3273891873373', '900.225192820568', '900.3231805358258', '900.4963527393471', 'inf', 'inf', '900.6037658059212', '900.6212350658716', '900.3877831415683']
Finally, we introduce a grammar for producing strings.
ASCII_STRING_EBNF_GRAMMAR: Grammar = {
"<start>": ["<ascii-string>"],
"<ascii-string>": ['"<ascii-chars>"'],
"<ascii-chars>": [
("", opts(prob=0.05)),
"<ascii-chars><ascii-char>"
],
"<ascii-char>": crange(" ", "!") + [r'\"'] + crange("#", "~")
}
assert is_valid_grammar(ASCII_STRING_EBNF_GRAMMAR)
ASCII_STRING_GRAMMAR = convert_ebnf_grammar(ASCII_STRING_EBNF_GRAMMAR)
string_fuzzer = ProbabilisticGrammarFuzzer(ASCII_STRING_GRAMMAR)
print([string_fuzzer.fuzz() for i in range(10)])
['"BgY)"', '"j[-64Big65wso(f:wg|}w&*D9JthLX}0@PT^]mr[`69Cq8H713ITYx<#jpml)\\""', '"{);XWZJ@d`\'[h#F{1)C9M?%C`="', '"Y"', '"C4gh`?uzJzD~$\\\\"=|j)jj=SrBLIJ@0IbYiwIvNf5#pT4QUR}[g,35?Wg4i?3TdIsR0|eq3r;ZKuyI\'<\\"[p/x$<$B!\\"_"', '"J0HG33+E(p8JQtKW.;G7 ^?."', '"7r^B:Jf*J.@sqfED|M)3,eJ&OD"', '"c3Hcx^&*~3\\"Jvac}cX"', '"\'IHBQ:N+U:w(OAFn0pHLzX"', '"x4agH>H-2{Q|\\kpYF"']
From basic data, as discussed above, we can also produce composite data in data structures such as sets or lists. We illustrate such generation on lists.
LIST_EBNF_GRAMMAR: Grammar = {
"<start>": ["<list>"],
"<list>": [
("[]", opts(prob=0.05)),
"[<list-objects>]"
],
"<list-objects>": [
("<list-object>", opts(prob=0.2)),
"<list-object>, <list-objects>"
],
"<list-object>": ["0"],
}
assert is_valid_grammar(LIST_EBNF_GRAMMAR)
LIST_GRAMMAR = convert_ebnf_grammar(LIST_EBNF_GRAMMAR)
Our list generator takes a grammar that produces objects; it then instantiates a list grammar with objects from that grammar.
def list_grammar(object_grammar, list_object_symbol=None):
    obj_list_grammar = extend_grammar(LIST_GRAMMAR)
    if list_object_symbol is None:
        # Default: Use the first expansion of <start> as list symbol
        list_object_symbol = object_grammar[START_SYMBOL][0]

    obj_list_grammar.update(object_grammar)
    obj_list_grammar[START_SYMBOL] = ["<list>"]
    obj_list_grammar["<list-object>"] = [list_object_symbol]

    assert is_valid_grammar(obj_list_grammar)
    return obj_list_grammar
int_list_fuzzer = ProbabilisticGrammarFuzzer(list_grammar(INT_GRAMMAR))
[int_list_fuzzer.fuzz() for i in range(10)]
['[0, -4, 23, 0, 0, 9, 0, -6067681]', '[-1, -1, 0, -7]', '[-5, 0]', '[1, 0, -628088, -6, -811, 0, 99, 0]', '[-35, -10, 0, 67]', '[-3, 0, -2, 0, 0]', '[0, -267, -78, -733, 0, 0, 0, 0]', '[0, -6, 71, -9]', '[-72, 76, 0, 2]', '[0, 9, 0, 0, -572, 29, 8, 8, 0]']
string_list_fuzzer = ProbabilisticGrammarFuzzer(
list_grammar(ASCII_STRING_GRAMMAR))
[string_list_fuzzer.fuzz() for i in range(10)]
['["gn-A$j>", "SPX;", "", "", ""]', '["_", "Qp"]', '["M", "5\\"`X744", "b+5fyM!", "gR`"]', '["^h", "8$u", "", "", ""]', '["6X;", "", "T1wp%\'t"]', '["-?Kk", "@B", "}", "", ""]', '["FD<mqK", ")Y4NI3M.&@1/2.p", "]C#c1}z#+5{7ERA[|", "EOFM])BEMFcGM.~k&RMj*,:m8^!5*:vv%ci"]', '["", "*B.pKI\\"L", "O)#<Y", "\\", "", "", ""]', '["g"]', '["", "\\JS;~t", "h)", "k", "", ""]']
float_list_fuzzer = ProbabilisticGeneratorGrammarFuzzer(list_grammar(
float_grammar_with_range(900.0, 900.9)))
[float_list_fuzzer.fuzz() for i in range(10)]
['[900.558064701869, 900.6079527708223, 900.1985188111297, 900.5159940886509, 900.1881413629061, 900.4074809145482, 900.8279453113845, 900.1531931708976, 900.2651056125504, inf, 900.828295978669]', '[900.4956935906264, 900.8166792417645, 900.2044872129637]', '[900.6177668624133, 900.793129850367, 900.5024769009476, 900.5874531663001, inf, 900.3476216137291, 900.5680329060473, 900.1524624203945, 900.1157565249836, 900.0943774301732, 900.1589468212459, 900.8563415304703, 900.2871041191156, 900.2469765832253, 900.408183791468]', '[NaN, 900.1152482126347, 900.1139109179966, NaN, 900.0634308730662, 900.1918596242257]', '[900.49418992478]', '[900.6566851795975, NaN, 900.5585085641878, 900.8678799526169, 900.5580757140183]', '[900.6265067760952]', '[900.5271187218734, 900.3413004135587, 900.0362652510535, 900.2938223153569, 900.6584186055829, 900.5394909707123, 900.5119630230411, 900.2024669591465]', '[900.5068304562362, 900.5173419618334, 900.5268996804168, 900.5247314889621, 900.1082421801126, 900.761200730868, 900.100950598924, 900.1424140649187, inf, inf, 900.4546924838603, 900.7025508468811, 900.5147250716594, 900.4943696257178, 900.814107878577, 900.3540228715348, 900.6165673939341, 900.121833279104, 900.8337503512706, 900.0607374037857, 900.2746253938637, 900.2491844866619, 900.7325728031923]', '[900.6962790125643, 900.6055198052603, 900.0950691946015, 900.6283670716376, NaN, 900.112869956762]']
Generators for dictionaries, sets, etc. can be defined in a similar fashion. By plugging together grammar generators, we can produce data structures with arbitrary elements.
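As an illustration, here is a minimal sketch of a dictionary grammar in the style of LIST_EBNF_GRAMMAR above; the symbol names (<dict>, <dict-pairs>, and so on) are our own choice. As with list_grammar(), a constructor would replace <dict-key> and <dict-value> with symbols from object grammars:
DICT_EBNF_GRAMMAR: Grammar = {
    "<start>": ["<dict>"],
    "<dict>": [
        ("{}", opts(prob=0.05)),
        "{<dict-pairs>}"
    ],
    "<dict-pairs>": [
        ("<dict-pair>", opts(prob=0.2)),
        "<dict-pair>, <dict-pairs>"
    ],
    "<dict-pair>": ["<dict-key>: <dict-value>"],
    # Placeholders, to be replaced with object grammar symbols
    "<dict-key>": ["0"],
    "<dict-value>": ["0"]
}
assert is_valid_grammar(DICT_EBNF_GRAMMAR)
DICT_GRAMMAR = convert_ebnf_grammar(DICT_EBNF_GRAMMAR)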
This chapter provides grammar constructors that are useful for generating function calls. The grammars are probabilistic and make use of generators, so use ProbabilisticGeneratorGrammarFuzzer as a producer.
INT_GRAMMAR, FLOAT_GRAMMAR, and ASCII_STRING_GRAMMAR produce integers, floats, and strings, respectively:
fuzzer = ProbabilisticGeneratorGrammarFuzzer(INT_GRAMMAR)
[fuzzer.fuzz() for i in range(10)]
['-51', '9', '0', '0', '0', '0', '32', '0', '0', '0']
fuzzer = ProbabilisticGeneratorGrammarFuzzer(FLOAT_GRAMMAR)
[fuzzer.fuzz() for i in range(10)]
['0e0', '-9.43e34', '-7.3282e0', '-9.5e-9', '0', '-30.840386e-5', '3', '-4.1e0', '-9.7', '413']
fuzzer = ProbabilisticGeneratorGrammarFuzzer(ASCII_STRING_GRAMMAR)
[fuzzer.fuzz() for i in range(10)]
['"#vYV*t@I%KNTT[q~}&-v+[zAzj[X-z|RzC$(g$Br]1tC\':5<F-"', '""', '"^S/"', '"y)QDs_9"', '")dY~?WYqMh,bwn3\\"A!02Pk`gx"', '"01n|(dd$-d.sx\\"83\\"h/]qx)d9LPNdrk$}$4t3zhC.%3VY@AZZ0wCs2 N"', '"D\\6\\xgw#TQ}$\'3"', '"LaM{"', '"\\"ux\'1H!=%;2T$.=l"', '"=vkiV~w.Ypt,?JwcEr}Moc>!5<U+DdYAup\\"N 0V?h3x~jFN3"']
int_grammar_with_range(start, end) produces an integer grammar with values N such that start <= N <= end:
int_grammar = int_grammar_with_range(100, 200)
fuzzer = ProbabilisticGeneratorGrammarFuzzer(int_grammar)
[fuzzer.fuzz() for i in range(10)]
['154', '149', '185', '117', '182', '154', '131', '194', '147', '192']
float_grammar_with_range(start, end) produces a floating-point grammar with values N such that start <= N <= end:
float_grammar = float_grammar_with_range(100, 200)
fuzzer = ProbabilisticGeneratorGrammarFuzzer(float_grammar)
[fuzzer.fuzz() for i in range(10)]
['121.8092479227325', '187.18037169119634', '127.9576486784452', '125.47768739781723', '151.8091820472274', '117.864410860742', '187.50918008379483', '119.29335112884749', '149.2637029583114', '126.61818995939146']
All such values can be immediately used for testing function calls:
from math import sqrt

fuzzer = ProbabilisticGeneratorGrammarFuzzer(int_grammar)
call = "sqrt(" + fuzzer.fuzz() + ")"
call
'sqrt(143)'
eval(call)
11.958260743101398
These grammars can also be composed to form more complex grammars. list_grammar(object_grammar) returns a grammar that produces lists of objects as defined by object_grammar.
int_list_grammar = list_grammar(int_grammar)
fuzzer = ProbabilisticGeneratorGrammarFuzzer(int_list_grammar)
[fuzzer.fuzz() for i in range(5)]
['[118, 111, 188, 137, 129]', '[170, 172]', '[171, 161, 117, 191, 175, 183, 164]', '[189]', '[129, 110, 178]']
some_list = eval(fuzzer.fuzz())
some_list
[172, 120, 106, 192, 124, 191, 161, 100, 117]
len(some_list)
9
In a similar vein, we can construct arbitrary further data types for testing individual functions programmatically.
This chapter was all about manually writing tests and controlling which data gets generated. In the next chapter, we will introduce a much higher level of automation, with techniques that automatically obtain grammars that already invoke functions in application contexts, making our work of specifying them much easier.
The idea of using generator functions to generate input structures was first explored in QuickCheck \cite{Claessen2000}. A very nice implementation for Python is the hypothesis package, which allows writing and combining data structure generators for testing APIs.
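To give a flavor, here is a minimal hypothesis-based counterpart to the urlparse() oracle above; the particular strategies and the sampled values are our own choice, not prescribed by the package:
from urllib.parse import urlparse
from hypothesis import given
import hypothesis.strategies as st

@given(st.sampled_from(["http", "https", "ftp", "ftps"]),
       st.sampled_from(["cispa.saarland", "www.google.com", "fuzzingbook.com"]))
def test_urlparse_scheme_and_host(scheme, host):
    # hypothesis generates (scheme, host) combinations for us
    url = scheme + "://" + host + "/"
    result = urlparse(url)
    assert result.scheme == scheme
    assert result.netloc == host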
The exercises for this chapter combine the above techniques with fuzzing techniques introduced earlier.
In the example generating oracles for urlparse(), important elements such as authority or port are not checked. Enrich URLPARSE_ORACLE_GRAMMAR with post-expansion functions that store the generated elements in a symbol table, such that they can be accessed when generating the assertions.
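Here is a minimal sketch of the symbol-table idea, assuming the post-expansion conventions of GeneratorGrammarFuzzer used above (a post-expansion function receives the expanded values of the nonterminals in its expansion; returning None keeps the expansion unchanged). The table and helper names are our own:
SYMBOL_TABLE = {}

def record(name):
    # Hypothetical helper: a post-expansion function that stores
    # the expanded values under `name` in the symbol table
    def post(*values):
        SYMBOL_TABLE[name] = values
        return None  # keep the expansion unchanged
    return post

URLPARSE_ORACLE_GRAMMAR_EX = extend_grammar(URLPARSE_GRAMMAR, {
    "<authority>": [("<userinfo>@<host>:<port>",
                     opts(post=record("<authority>")))]
})
Assertions over authority or port can then consult SYMBOL_TABLE once the call has been generated.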
In the chapter on configuration testing, we also discussed combinatorial testing – that is, systematic coverage of sets of configuration elements. Implement a scheme that, by changing the grammar, allows all pairs of argument values to be covered; a possible starting point is sketched below.
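As a sketch under the assumption that we simply enumerate explicit expansions, we can compute all pairs of alternatives for two symbols and add one expansion per pair, so that covering every expansion (for instance with GrammarCoverageFuzzer) covers every pair by construction. The helper name is our own:
import itertools

def pairwise_expansions(grammar, symbol_1, symbol_2, template):
    # One explicit expansion per pair of alternatives of the two symbols
    return [template.replace(symbol_1, v1).replace(symbol_2, v2)
            for v1, v2 in itertools.product(grammar[symbol_1], grammar[symbol_2])]

PAIRWISE_URLPARSE_GRAMMAR = extend_grammar(URLPARSE_GRAMMAR, {
    "<call>": pairwise_expansions(URLPARSE_GRAMMAR, "<scheme>", "<host>",
                                  'urlparse("<scheme>://<host>/")')
})
# As above, now-unused symbols would have to be deleted
# before the grammar passes is_valid_grammar()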
To widen the range of arguments to be used during testing, apply the mutation schemes introduced in the chapter on mutation fuzzing – for instance, flip individual bytes or delete characters from strings. Apply this either during grammar inference or as a separate step when invoking functions, as sketched below.
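As a sketch of the second variant – mutation as a separate step when invoking functions – one could flip a random character of a generated URL before making the call. The helper below is our own, modeled after the mutators from the chapter on mutation fuzzing:
import random

def flip_random_character(s):
    # Replace one character in s by a random printable ASCII character
    if s == "":
        return s
    pos = random.randrange(len(s))
    return s[:pos] + chr(random.randrange(32, 127)) + s[pos + 1:]

url = url_fuzzer.fuzz()
do_call('urlparse(%s)' % repr(flip_random_character(url)))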